Statistics for Quality - Massey University



1 Statistics for Quality

Objectives

• To understand the term “quality” in general and its relationship with statistics.

• How to achieve quality improvement by reducing variation, and following systematic quality improvement methods such as the Shewhart-Deming cycle, the Six Sigma cycle, etc.

• To understand the importance of statistical thinking and the role of several simple statistical tools for use at the shop floor to engage everyone in an organisation to improve quality.

• To understand the distinction between common and special causes of variation, formation of rational subgroups, and avoidance of process tampering.

• To appreciate the role of Design of Experiments (DOE) for reducing common cause variation and measurement of process capability.

• To understand the methodology of Shewhart control charting for process monitoring.

• To implement X̄, R and S variables control charts for Phase I and II, and understand their construction.

• To implement p and c control charts, and understand their construction.

• To comprehend the role of sampling inspection for product assurance, Operating Characteristic curves and quality levels.


2 Role of Statistics in Quality

Quality and its management have played a crucial role in human history. Managing quality was important even for ancient civilisations. Standardisation was recognised as the first step towards quality. In ancient Rome, a uniform measurement system was introduced for manufacturing bricks and pipes, and building regulations were in force. Water clocks and sundials were used in ancient Egypt and Babylon (15th century BC) even though they were not terribly accurate. The Chinese Song Dynasty (10th century) even mandated the control of shape, size, length, and other quality factors of products in handicrafts using measurement tools such as carpenter's squares.

The industrial revolution began in the United Kingdom during the 18th century and then extended to the US and other countries. Quality became harder to manage due to mass production, which was made achievable by the division of labour and the use of machinery. In such a production line, workers performed repetitive tasks in a cooperative way using machinery. This resulted in huge productivity gains. But the factors and variables affecting the quality of a product in a mass production line were numerous when compared to the production of a single item by an artisan who did all the work from start to end. Division of labour for mass production also took away the pride of workmanship. Hence quality suffered in the production line and quality monitoring became an essential activity. Due to mass manufacture, engineers were forced to look beyond standardised measurements. The causes of quality variation were numerous, and hence statistical methods were needed for quality monitoring and assurance.

Prof Walter Shewhart and Harold Dodge implemented statistical methods for quality in the mid-1920s in the USA. The Second World War was the main catalyst for the extensive use of statistical quality control methods for improving America's wartime production. Certain statistical methods were even classified as military secrets. Dr. Kaoru Ishikawa, a well-known Japanese quality philosopher, speculated that the Second World War was won by quality control and by the utilisation of statistical methods. The western industries could not sustain their achievements in quality, mainly due to the failure of management. The Japanese success on the quality front in the latter half of the last century can be partly attributed to the wider use of some simple statistical tools together with more advanced ones such as experimental designs. A word of caution! Quality problems can only be partly solved by statistical methods. For achieving excellence in quality, company-wide participation, customer focus, good management etc are important. In the last three decades, many companies in both developed and developing countries embraced the concept of total quality management, which evolved from a humble beginning, namely the use of simple statistical tools at the shop floor.


2.1 What is quality?

The term quality is understood in different ways in different contexts. For some, quality means excellence in all aspects. But we cannot ignore the price or affordability aspect, and this may be a major limiting factor for achieving excellence. For example, an expensive luxury model car having several ‘extra’ features cannot be compared to a more affordable car while ignoring the price factor. It is inappropriate to declare the cheaper car to be of ‘poor’ quality compared to the luxury car when the prices are not in the same range.

Some define quality to mean ‘fitness-for-use’. For some products, the performance or use characteristics are important. For example, we may compare the cleaning performance of two laundry detergents. How safe is a brand of detergent to the users or environment? This leads to a dimension of safety associated with the detergent quality. Does the detergent affect the life of our garments? Here we think of the durability aspect of the fabric quality. Does the detergent affect the colour of the fabric? Here we think of the visual appeal or aesthetic dimension of quality. ATM machines make cash available 24 hours a day - a dimension of availability. Banks are concerned about the functioning of the ATM machine without failure or repair - a dimension of reliability associated with the ATM machine quality. Is it easy to service the machine in the event of malfunction, or to perform routine maintenance? Here we consider the serviceability and maintainability aspects of the ATM machine. Thus we see that quality is more a latent concept that involves (i) performance, (ii) reliability, (iii) durability, (iv) serviceability, (v) aesthetics and (vi) features etc. Sometimes quality is simply understood through the brand image of the product (perceived quality), or how far it conforms to Standards or customer specifications. We require an operational definition for quality rather than simply conceptualising it from a long list of its descriptors.

Many published national and international Standards define quality as the totality of features and characteristics of a product or service that bear on its ability to satisfy given or implied needs. The features and characteristics referred to in the above definition may be of several types. Physical characteristics are continuous variables such as length, weight, voltage, viscosity, etc. The feature variables are usually attributes, such as sensory ones like taste, appearance, colour and smell. Certain quality characteristics such as reliability, maintainability and serviceability are time dependent. The phrase given needs in the definition requires a clear list of (customer) needs to be identified. It is obvious that all the needs of the customer of a product or service, including safety, design, aesthetics, etc, should be listed in the set of given needs. The price or affordability of the product or service should also receive major consideration. The performance characteristics should receive priority if price is not a limiting factor. Societal needs are also to be considered in addition to customer needs in certain cases. How far does this set of characteristics and features meet the given needs? That is, the ability to satisfy the (customer) needs is the quality built into the product or service through the features and characteristics. This is shown in Figure 1.


[‘given needs’ and the ‘totality of features and characteristics’ together determine Quality]

Figure 1: A View of Quality

Suppose that we would like to understand the ‘quality’ of a brand of blackcurrant nectar bottled by a company. Let us suppose that the following features and characteristics are identified:

Features Colour, appearance, smell, flavour, packaging and labelling etc

Characteristics Relative density at 20˚C, content of wine acid, alcohol, acetic acid, vitamin C, microbial attributes etc

How to measure the above? Let the analytical measures of the characteristicsand features, called quality measures, be as given below:

1. Colour: rating scale 1 to 5 (5 being excellent)

2. Appearance: rating scale 1 to 5 (5 being excellent)

3. Smell: rating scale 1 to 5 (5 being excellent)

4. Flavour: rating scale 1 to 5 (5 being excellent)

5. Density: measurement by laboratory methods

6. Acid content: measurement by laboratory methods

7. Alcohol: measurement by laboratory methods

8. Vitamin C: measurement by laboratory methods

9. Packaging etc: visual inspection

The ‘totality’ of the above features and characteristics is expected to satisfy the ‘needs’. Hence a list of customer needs should be identified by surveying the customers. Certain characteristics, such as the microbiological characteristics, partly represent ‘societal’ needs. How far the product features and characteristics meet these needs determines the quality of the blackcurrant nectar.


In practice, we often use the actual measurement of the characteristic(s) of interest as a quality measure. Statistical methods can be employed to identify the important characteristics that are highly correlated with the latent variable quality. The fraction of nonconforming product is also adopted as an (inverse) measure of quality. Such fraction nonconforming measures will also be called quality levels. It is also possible to develop a demerit quality score by rating methods, etc. Basically we will employ one or more quality measures whose major function is to provide analytical information on the ‘quality’ of a product or service.
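The fraction nonconforming idea above can be sketched in a few lines. This is a minimal illustration with made-up fill volumes and an assumed lower specification limit; it is not data from the text.

```python
# Hypothetical example: fraction nonconforming as an (inverse) quality level.
# The measurements and the limit below are illustrative, not real data.
fill_volumes = [998, 1002, 989, 1001, 995, 1004, 990, 999, 1003, 996]  # ml
LSL = 991  # assumed lower specification limit (ml)

# A unit is nonconforming if its fill volume falls below the LSL.
nonconforming = sum(1 for v in fill_volumes if v < LSL)
quality_level = nonconforming / len(fill_volumes)
print(f"fraction nonconforming: {quality_level:.2f}")  # 2 of 10 fall below 991 ml
```

The smaller this fraction, the better the quality level, which is why it serves as an inverse quality measure.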

2.2 Understanding and reducing variation

Quality philosophers such as Dr. Genichi Taguchi define quality as the loss a product causes to society after being shipped, other than any losses caused by its intrinsic function. Variation from the target of a quality characteristic may be caused by uncontrollable factors known as noise, such as

• Outer noise due to humidity, temperature, vibration, dust etc.

• Inner noise due to wear and deterioration.

• In-between noise due to the material, worker etc.

The strength of these noises largely determines the amount of variability from the target, and it directly impacts the controllable process factors or parameters, such as increasing or decreasing speed, temperature etc. Variability can be defined and understood only in statistical terms. Hence the use of statistical methods becomes important for reducing variability, or equivalently, improving quality. Montgomery (1996, p.4) defines quality as inversely proportional to variability. In other words, ‘quality improvement’ requires reduction of variability in processes and products.

2.2.1 Process Optimisation and Robustness

Experimental design is a broad area of statistics having its roots in agricultural experiments. Many of the key terms of experimental design, such as plot, treatment, etc, still have an agricultural connotation. Nevertheless, the application of this branch of statistics is not just limited to agriculture.

The design of experiments (DOE) is extremely useful in identifying and managing the key variables affecting the quality characteristic(s). In industrial experiments, the controllable input factors are systematically varied and the effect of these factors on the response variables is observed. The variation in the response quality characteristic(s) is understood using statistical models, and the conclusions provide an optimum strategy for the actual mass manufacture. Hence the design of experiments is an off-line quality improvement tool.
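A minimal sketch of the idea of systematically varying controllable factors: a hypothetical 2-factor, 2-level (2^2) factorial experiment with made-up factor names and response values. The main effect of a factor is the mean response at its high level minus the mean at its low level.

```python
# Hypothetical 2^2 factorial experiment; factor levels coded -1/+1.
# Factors (temperature, speed) and responses are illustrative assumptions.
runs = [  # (temperature, speed, response)
    (-1, -1, 45.0),
    (+1, -1, 52.0),
    (-1, +1, 47.0),
    (+1, +1, 56.0),
]

def main_effect(runs, factor):
    """Mean response at the high level minus mean at the low level."""
    hi = [y for *x, y in runs if x[factor] == +1]
    lo = [y for *x, y in runs if x[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print("temperature effect:", main_effect(runs, 0))  # (52+56)/2 - (45+47)/2 = 8.0
print("speed effect:", main_effect(runs, 1))        # (47+56)/2 - (45+52)/2 = 3.0
```

In a real industrial experiment these effect estimates, together with interaction effects and an error estimate, would guide the choice of operating conditions for mass manufacture.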

A product is called robust if it performs equally well in good and bad conditions of use. For example, a robust detergent powder is expected to achieve good results whether the water is hard or soft, and cold or hot. Robust experimental designs identify the optimum mix of controllable factor levels which produces a response robust to external noise factors.

2.2.2 Statistical Process Control

Statistical process control (SPC) is the methodology for monitoring and optimizing the process output, mainly in terms of variability, and for judging when changes (engineering actions) are required to bring the process back to a “state of control”. This strategy of control differs from engineering process control (EPC), where the process is allowed to adapt via automatic control devices etc. In other words, SPC techniques aim to monitor the production process while EPC is used to adjust the production process.

2.2.3 Sampling Inspection

Sampling inspection or acceptance sampling is a quality assurance technique where decisions to accept or reject manufactured products, raw materials, services etc are taken on the basis of sampling inspection. This method provides only an indirect means for quality improvement. Calibration of instruments to tackle measurement errors is done using statistical methods which aim to provide measurement quality assurance.

3 Shewhart-Deming cycle of process improvement

Prof Walter Shewhart (1931), who invented the control chart technique and is regarded as the father of SPC, proposed the following three postulates from an engineering viewpoint:

1. “All chance systems of causes are not alike in the sense that they enable us to predict the future in terms of the past.”

2. “Systems of chance causes do exist in nature such that we can predict the future in terms of the past even though the causes be unknown. Such a system of chance is termed constant.”

3. “It is physically possible to find and eliminate chance causes of variation not belonging to a constant system.”

The above three postulates may appear unclear on first reading. The following paragraphs explain them, and then show how they lead to SPC and other procedures.

A production process is always subject to a certain amount of inherent or natural variability caused by a number of process and input variables. This stable system of chance causes, known as common causes, belongs to the process.


The application of industrial experimentation, proper screening actions to use the appropriate inputs, and other pre-production planning activities are to ensure that the variability in the quality characteristic caused by the process and input variables is minimal. In other words, the variability expected in the actual production process should be largely ‘error’ or natural or inherent variability. Whether the rate of this ‘error’ variability is maintained constant over time, across various machines etc, should be monitored. There are always common factors that are inherently ‘designed’ into the process, and they continuously affect the process. They produce roughly the same amount of variation over time, and the variation is hence predictable. The variation produced by common causes is sometimes referred to as noise, because there is no ‘real’ change in process performance. Noise cannot be traced to a specific cause, and therefore, although predictable, it is either unexplainable or uncontrollable.

If the variability in the process is due to identifiable sources such as low-yield raw material, improper machine function, wrong methods, etc, the process will be operating at an unsatisfactory level. Such sources of variability, which are preventable, are called assignable or special causes of variation. Assignable causes lie outside the process, and they contribute significantly to the total variation observed in performance measures. The variation created by assignable causes is usually unpredictable, but it is explainable after the causes have been observed.

The nature of common cause variation lends itself to the application of statistics, while the variation produced by assignable causes does not. So we characterize the inherent process variation and develop tools to ‘predict’ the presence of special or assignable causes on statistical grounds.

SPC includes many statistical tools to warn of the presence of special cause(s) of variability in a production process. The actual elimination or correction of the process variable causing extra variability is basically an engineering function. Hence it is necessary for the technical person to be familiar with the statistical techniques and for the statistician to have some knowledge of the production processes. It is also important that it should be economically feasible to eliminate any special cause of variation.

3.1 State of Statistical Control

Although both common and assignable causes create variation, common causes contribute ‘controlled’ variation while assignable causes contribute ‘uncontrolled’ variation. Shewhart explained the term controlled variation as follows. “A phenomenon will be said to be controlled when, through the use of past experience, we can predict, at least within limits, how the phenomenon may be expected to vary in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within given limits.” In other words, a general probability law will apply when the process is subject to only common causes. In particular, we can state that the current state-of-the-art production involves a constant amount of variability due to common causes.


A production process is said to be in a state of statistical control if the only variability present is due to common causes, and all special causes are absent, i.e. a constant common cause system represents the process. The special or assignable causes will increase the variability beyond the level permitted by the common or chance causes. Such an increase in variability due to special causes can be detected using the probability laws governing the stable state of control. This is explained below:

Variations due to special causes are bound to occur over time due to changes in raw material or operatives, sudden machine failures, etc. Therefore the process is usually sampled over time, at either fixed or variable intervals. The presence of special causes can be monitored by considering a control statistic such as the mean X̄ of a sample of units taken at any given time. The approximate probability distribution of the control statistic can then be used to define a range for the inevitable (and hence allowable) common cause variation, known as control limits. This allowable variation can also result in false alarms. That is, even when no special causes are present, we may be forced to look for the presence of special causes. This false alarm probability is usually kept low by using a suitable signal rule for special causes.
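The control-limit idea above can be sketched numerically. This is a minimal illustration with an assumed in-control mean, standard deviation and subgroup size; the conventional 3-sigma limits for X̄ give a false alarm probability of about 0.0027 per sample under normality.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Assumed in-control process: mean 100, standard deviation 4, samples of n = 5.
mu, sigma, n = 100.0, 4.0, 5
sigma_xbar = sigma / math.sqrt(n)    # standard error of the sample mean
LCL = mu - 3 * sigma_xbar            # 3-sigma control limits for the X-bar chart
UCL = mu + 3 * sigma_xbar

# False alarm probability: chance an in-control sample mean plots outside the limits.
false_alarm = 2 * (1 - phi(3))
print(f"LCL = {LCL:.2f}, UCL = {UCL:.2f}")
print(f"false alarm probability = {false_alarm:.4f}")  # about 0.0027
```

The 3-sigma convention trades off false alarms against the speed of detecting genuine special causes; tighter limits signal sooner but cry wolf more often.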

3.2 Process Tampering

Common cause variability is often the result of uncontrollable variables representing the current state of the art of production. Without understanding the nature of the variability permitted by the common causes, if the production process is ‘tampered’ with by unnecessary interventions, then the variability in the quality characteristic will actually increase. It is important that unnecessary process interventions, such as tool changes etc, should not become another source of ‘special cause’. Deming used to demonstrate this concept using a demonstration called the ‘Funnel Experiment’, which is briefly described below:

As shown in Figure 2, a funnel is mounted on a stand and the spout is adjusted towards a target. A marble is then dropped through the funnel and the final resting position is noted. The distance between the target and the final resting place represents the random variation. Let us suppose that we do not adjust the funnel position and simply drop the marble several times, noting the resting positions representing the random variation. Let us also consider certain additional rules or strategies which represent process intervention or adjustment actions. The set of rules used by Deming for his funnel adjustment, including the strategy of not adjusting the funnel (called Rule 1), is summarised below:

Rule 1 The funnel remains fixed, aimed at the target.

Rule 2 Move the funnel from its previous position a distance equal to the current error (location of drop), in the opposite direction.

Rule 3 Move the funnel to a position that is exactly opposite the point where the last marble dropped, relative to the target.


Rule 4 Move the funnel to the position where the last marble dropped.

Figure 3 shows the variability involved with the four rules using 400 simulated standard normal random variables (X, Y) targeted at (0, 0). It is clear from Deming's funnel demonstration that interventions in a production process should not be made unnecessarily if the process is already stable or in control. The process stability must be monitored statistically (based on probability laws). If this is done, then the variation in the process output is reduced by avoiding unnecessary process interventions or process tampering.

Figure 2: Deming’s Funnel Demonstration
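The four funnel rules can be simulated in a few lines. This is a sketch in one dimension for simplicity (the original demonstration, as in Figure 3, is two-dimensional); the seed and drop count are arbitrary choices.

```python
import random
random.seed(1)

def simulate(rule, drops=400):
    """Simulate one axis of Deming's funnel experiment under a given rule."""
    funnel, positions = 0.0, []
    for _ in range(drops):
        drop = funnel + random.gauss(0, 1)   # marble lands with random error
        positions.append(drop)
        error = drop - 0.0                   # the target is at 0
        if rule == 2:    # compensate: shift funnel by the error, opposite way
            funnel -= error
        elif rule == 3:  # set funnel opposite the drop, relative to the target
            funnel = -drop
        elif rule == 4:  # chase: move funnel to where the marble landed
            funnel = drop
    return positions

for rule in (1, 2, 3, 4):
    pts = simulate(rule)
    var = sum(p * p for p in pts) / len(pts)  # mean squared deviation from target
    print(f"Rule {rule}: variance about target = {var:.1f}")
```

Rule 1 keeps the variance near 1, Rule 2 roughly doubles it, and Rules 3 and 4 let it grow without bound (the bow tie and random walk patterns of Figure 3), which is exactly the tampering penalty described above.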

3.3 Shewhart-Deming cycle

The plan-do-study-act (PDSA) cycle of quality improvement, also known as the plan-do-check-act (PDCA) cycle, is largely the contribution of Shewhart and Deming for quality problem solving and learning. The four phases of this Shewhart-Deming cycle of quality improvement are the following:

Plan This phase involves developing a hypothesis and determining what resources and changes are needed to solve the problem in hand.

Do This phase largely involves experimental action on a small scale to implement what is planned or hypothesised in Phase 1.

Study/Check After the conduct of the experiment, the outcomes are analyzed. The main focus here is on learning from the experimental results.


[Figure 3 panels: Rule 1 - funnel remains fixed; Rule 2 - funnel moved the distance of the current error in the opposite direction; Rule 3 - bow tie effect, funnel moved opposite the last marble point relative to the target; Rule 4 - random walk, funnel moved to the last marble position. Each panel plots Y against X.]

Figure 3: Simulated Results of Deming’s Funnel Experiment


Act Based on the learning in the earlier phase, action is taken to implement the improvement strategy on a large scale. No improvement is final in nature and hence the cycle restarts at the first (Plan) phase.

[PLAN → DO → CHECK → ACT, arranged as a repeating cycle]

Figure 4: PDCA cycle

Quality problem solving is rarely a one-shot exercise, and hence multiple cycles of PDSA will be needed (see Figure 4). For scientific problem solving, the well-known deduction-induction cycle is continuously applied for discovering new truths. Hence the PDCA cycle is viewed as a modified way of using the scientific method for solving quality problems.

4 Statistical Thinking for Quality

It is not uncommon to act on information based on past experience, perceptions or anecdotal evidence. For example, assume that a friend of yours bought a house, made a 15% capital gain within one year, and tells you that it is a good idea to buy a property based on his past experience. Are we supposed to act on his advice? The answer lies in statistical thinking and data analysis. Most financial markets react to several micro and macro events, but the individual reactions to such events are not all the same, and often the scale of the market reactions can be excessive. For example, “noise trading” is common in stock markets. The lack of statistical thinking exists in all walks of life.

The American Society for Quality (ASQ, www.asq.org) Statistics Division (2004) publication Glossary and Tables for Statistical Quality Control suggests the following definition:

“Statistical Thinking is a philosophy of learning and action based on the following fundamental principles:

• All work occurs in a system of interconnected processes,

• Variation exists in all processes, and

• Understanding and reducing variation are keys to success.”


The above definition is explained briefly in the following paragraphs.

We must recognise that any response or output is caused by variables involved in an interconnected process. The factors or variables causing output variations often interact and cannot be thought to be independent of each other (see Figure 5). Hence the first task must be to understand the structure of the interconnected process.

[Inputs (from suppliers) → Interconnected Process → Outputs (for customers)]

Figure 5: Process flow and interconnectivity

Everything (manufacturing or non-manufacturing) must be regarded as a process, and there are always variations. In a manufacturing process, variation is caused by machines, materials, methods, measurements, people, and the physical and organizational environment. In non-manufacturing or business processes, people contribute a lot to the total variation, in addition to methods, measurement and environment.

Data should guide decisions. That is, statistical models based on data help us to understand the nature of the variation. The reduction of variation is done by actions such as eliminating special causes or designing a newly improved system with smaller variability.

Reverting to the house price example, we first recognise that the change in house prices is not an unconnected or isolated event but an outcome realized from an interconnected process. The process leading to house price variation involves a number of interconnected variables such as the mortgage rate, inflation, wages and employment, migration etc. If we build a suitable statistical model, then we may recognise that a 15% increase in house prices in a short period is special rather than common. In order to reduce the house price variation, special cause factors (such as the dominance of short-term speculative investors or sudden uncontrolled migration etc) must be addressed appropriately. Reducing the common cause variability in house prices requires planned housing development to meet the needs of the population.

4.1 Six Sigma Methodology

In the mid-1980s, Motorola Corporation faced stiff competition from competitors whose products were of improved quality. Hence a resolution was made to improve the quality level to 3.4 DPMO (defects per million opportunities) or below. That is, the resolution was to keep the process standard deviation at one-sixth of the distance between the target and the upper or lower specification limit. This low defect level was achieved at Motorola by the Six Sigma process management model. The Six Sigma model can be viewed as an improvement over the Shewhart-Deming cycle. This management methodology is highly data driven and involves the following steps, summarised by the acronym DMAIC (define, measure, analyze, improve, control).
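The 3.4 DPMO figure can be checked with the normal distribution. The conventional Six Sigma calculation allows the process mean to drift by up to 1.5 sigma from the target, so a "six sigma" process has its nearer specification limit 4.5 sigma away in the worst case; the tail probability beyond 4.5 sigma gives the defect rate.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Conventional Six Sigma assumption: the mean may shift 1.5 sigma off target,
# leaving the nearer specification limit 6 - 1.5 = 4.5 sigma away.
shift = 1.5
sigma_level = 6.0
dpmo = norm_cdf(-(sigma_level - shift)) * 1_000_000
print(f"defects per million opportunities = {dpmo:.1f}")  # about 3.4
```

Without the 1.5-sigma shift allowance, a true six sigma tail would give about 0.001 DPMO per side, which shows how much of the quoted figure comes from the drift assumption.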

Define Identify the process or product that needs improvement. Benchmark with key product or process characteristics of other market leaders.

Measure Select product characteristics (dependent variables); map the processes; make the necessary measurements and estimate the process capability. A methodology known as Quality Function Deployment (QFD) is used for selecting critical product characteristics.

Analyse Analyze and benchmark the important product/process performance measures. Identify the common factors for successful performance. For analyzing the product/process performance, various statistical and basic quality tools will be used.

Improve Select the performance characteristics which should be improved. Identify the process variables causing the variation, and perform statistically designed experiments to set the improved conditions for the key process variables.

Control Implement statistical process control methods. Reassess the process capability and revisit one or more of the preceding phases, if necessary.

Statistical thinking and tools play an important role in all the DMAIC phases. It is important to note that the variables affecting quality must be identified and experimented on, so that improvements are achieved and held.

Quality is the business of everyone in an organisation. Hence employee training in the use of technical tools and problem solving is an integral part of the Six Sigma quality management model. Trained employees were even given martial arts titles “black belts” and “master black belts” depending on their skill levels and experience! You may be surprised to know that Motorola saved several billion dollars using the Six Sigma methodology. From the early nineties, the Six Sigma methodology has been adopted by many multinational companies for achieving quality and hence profitability.

To encourage statistical thinking on the shop floor, and to train staff in management methodologies such as Six Sigma, several EDA tools are used. Simple EDA tools such as histograms, scatter plots, boxplots etc are extremely useful for understanding a production process.


4.1.1 Histogram for Process stability

Assume that a certain machining operation is done on a six-head Bullard. Each head acts as a separate machine, and tools are set in slides which can be moved in or out when adjustment is required. Process adjustments, such as changing a tool, are done by operators. For a certain dimensional characteristic, assume that the histogram shown in Figure 6 was drawn from extensive past data. The bimodal shape in the histogram suggests that the six heads are not the same. So actions can be taken to train the two operators to standardise the machine adjustment actions.

[Histogram of dimension: density against dimension (1.90 to 1.94), showing a bimodal shape]

Figure 6: Histograms for Judging Process Stability

Specification limits are defined to represent the extreme permissible values of a quality characteristic for conformance of an individual unit of product. For example, the minimum and maximum for the dimensional characteristic may be externally fixed as 1.91 cm and 1.93 cm respectively. We call the minimum value the lower specification limit (LSL) and the maximum value the upper specification limit (USL). A quality characteristic may have only a single specification limit (LSL or USL) or both. Specification limits are fixed on technical grounds, and the actual production should be well contained within the specification limits to prevent production of nonconforming or defective items. Histograms are therefore useful to graphically assess whether the production process is capable of meeting the specifications. The above histogram shows that the process is not meeting the LSL and USL conditions, and a good fraction of the production must be nonconforming. This high level of nonconformance calls for variability reduction. Separate histograms for each head, with overlaid specification limits, may be drawn. They will be useful to graphically assess the process capability of each head.
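The comparison of process spread against the specification width is often summarised by the capability indices Cp and Cpk. A minimal sketch follows: the specification limits match the example above, but the process mean and standard deviation are assumed values for illustration.

```python
# Specification limits from the text; mean and standard deviation are
# assumed summary statistics for the machined dimension (cm).
LSL, USL = 1.91, 1.93
mean, sd = 1.921, 0.006

Cp = (USL - LSL) / (6 * sd)                   # potential capability (spread only)
Cpk = min(USL - mean, mean - LSL) / (3 * sd)  # actual capability (allows off-centre mean)
print(f"Cp = {Cp:.2f}, Cpk = {Cpk:.2f}")
```

Values of Cp and Cpk well below 1, as here, indicate that the 6-sigma process spread exceeds the specification width, so a substantial fraction of production will be nonconforming - the situation the histogram reveals graphically.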

4.1.2 Check Sheet

A check sheet is a simple device for data collection, summarising all (historical) defect data on the quality characteristic(s) against time, machine, operatives, etc. It helps one identify trends or meaningful patterns, in addition to its role in proper record keeping. While designing a check sheet, it is necessary to clearly specify the following:

1. Quality measure

2. Part or operation number

3. Date

4. Name of analyst/sampler

5. Brief method of sampling/data collection

6. All information useful for diagnosing poor quality

The design of a check sheet depends on the requirements of data collection. For example, the check sheet shown as Figure 7 includes the time of sampling and the specification limits for a can filling operation.

There is no fixed format for a check sheet, and it can be designed in various ways dictated by our requirements. It can also be designed to record several factors and responses for industrial experimentation purposes.

4.1.3 Defect Concentration Diagram

A defect concentration diagram, also known as a location check sheet, is a picture of a unit, showing all relevant views and the various types of (apparent) defects. It helps to analyse whether the location of the defects on the unit (such as the position of oil leakage in a container) gives any useful information about the causes of the defects. When defect data are recorded on a defect location check sheet over a large number of units, patterns such as clustering may be observed. This is helpful for identifying and removing the source of defects. Figure 8 provides a defect location check sheet for a painting operation.


XYZ Industries PQRS
Can Filler Machine: AAAA    Date: ______    Shift (circle): I II III
Operator: ______________    Specification: 991 ml minimum    Target: 1000 ml

Volume (ml)   | Hour 1 | Hour 2 | Hour 3 | Hour 4 | Hour 5 | Hour 6 | Hour 7 | Hour 8 | Total
under 980     |
980 - 985     |
986 - 990     |
991 - 995     |
996 - 1000    |
1001 - 1005   |
1006 - 1010   |
over 1010     |

Steps: 1. Check five cans per hour for 8 hours.
       2. Place a tally mark in the proper box after measurement.
Remarks:

Figure 7: A typical check sheet


(Figure: front view of a painted unit with defect locations marked; the defect types recorded are scratches, dirt, thick, thin and bubble.)

Figure 8: A Location Check Sheet

Ideally we would expect the defects to be randomly (uniformly) distributed. The distance between defects of the same type, as well as the distance between defects of different types, may be used as a numerical measure of clustering and dependence. If the location check sheet does not clearly indicate assignable patterns, then the relevant data can be analysed by comparing the real data with simulated uniform random numbers.

4.1.4 Pareto diagram

This graphical tool derives its name from the Italian economist Pareto, and was introduced for quality control by Dr. Juran, a famous quality guru. Juran found that a vital few causes lead to a large number of quality problems, while the trivial many causes lead to only a few problems.

First the causes are ordered in descending order of the characteristic of interest, which may be the percentage of nonconforming items attributable to each cause, economic losses, etc. The ranked causes are then shown as bars, where the bar height represents the characteristic of interest. The cumulative percentages are also shown in the diagram. A sample Pareto diagram is shown in Figure 9. The Pareto diagram is similar to a bar diagram: in both tools, bars represent frequencies. In a bar chart the bars are not arranged in descending order, while in a Pareto diagram they are.
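The ranking and cumulative percentages just described are straightforward to compute. In the sketch below, the defect causes and counts are invented purely for illustration:

```python
from collections import Counter

# Hypothetical defect-cause frequencies (names and counts are illustrative)
causes = Counter({"supplier code": 94, "part num.": 64, "price code": 31,
                  "contact num.": 12, "schedule date": 8})

# Rank causes in descending order of frequency (the Pareto ordering)
ranked = causes.most_common()
total = sum(causes.values())

# Cumulative percentages, plotted as the rising curve on a Pareto chart
cum = []
running = 0
for cause, freq in ranked:
    running += freq
    cum.append(round(100 * running / total, 1))

print(ranked)
print(cum)  # the final value is always 100.0
```

In this invented example the first two causes already account for about three quarters of the errors: the "vital few".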

The Pareto chart is also successfully used in non-manufacturing applications such as software quality assurance. A software package is usually tested for errors before being released commercially. A software package will consist of several modules and subroutines written by several persons and upgraded over time. If the software fails on a test, it is possible to track which of the modules actually caused the failure. Repeated test results, when displayed in the form of


a Pareto chart, will identify those modules which are to be simplified, by breaking complex subroutines into smaller ones, etc.

(Figure: Pareto chart of error frequency (0 to about 300) by cause, for causes such as supplier code, part num., price code, contact num. and schedule date, with bars in descending order of frequency and a cumulative percentage curve rising from 0% to 100%.)

Figure 9: Pareto Diagram

4.1.5 Cause and Effect Diagram

This diagram, also called a fish-bone diagram, was introduced by the Japanese professor Dr. Kaoru Ishikawa (hence it is also known as an Ishikawa diagram). The cause and effect diagram provides a graphical representation of the relationship between the probable causes leading to an effect. The effects are often the vital few noted in a Pareto chart. The causes are generally due to machines, materials, methods, measurement, people and environment. The diagram can also be drawn to represent the flow of the production process and the associated quality related problems during and between each stage of production. A sample cause and effect diagram, relating to the causes of printed circuit board surface flaws, is shown in Figure 10.

Very often, it may be necessary to establish a quantitative relationship between a characteristic of interest and a major cause. If this is done using experimental designs, the cause may be shown enclosed in a box. If only empirical evidence exists, not fully supported by data, then the cause may be underlined. The main advantage of a cause and effect diagram is that it leads to early detection of a quality problem. Often, a brainstorming session is required to identify


the sub-causes. The disadvantage of the cause and effect diagram is its inability to show the interactions between the problem causes or factors.

A cause and effect diagram is often used after a brainstorming session. Another diagram used to organise the ideas, issues, concerns, etc. of a brainstorming session is the affinity diagram. This diagram groups the information based on the natural relationships between the ideas, issues, etc. A tree diagram is one which breaks down a subject into its basic elements in a hierarchical way; it can be derived from an affinity diagram. The basic objective of these diagrams is to organise the relationships in a logical and sequential manner.

(Figure: cause and effect diagram for the effect "Surface Flaws", with main branches and sub-causes: Measurements (micrometers, microscopes, inspectors), Environment (condensation, moisture), Materials (alloys, lubricants, suppliers), Methods (brake, engager, angle, speed), Personnel (shifts, supervisors, training, operators) and Machines (lathes, bits, sockets).)

Figure 10: Cause and Effect diagram

4.1.6 Multi-Vari Chart

This chart is used to graphically display the variability due to various factors. Multi-vari charting is a quick method of analysing the variation, and can be viewed as an EDA tool used prior to an advanced (nested) ANOVA model of the various factors.

The simplest form of a multi-vari chart displays the variation over a short span and a long span of time. For example, five consecutive items are taken from a grinding operation every half hour, and the diameter of the items sampled is measured. The time taken to produce the five items is the short span of time. Let us use the range, the difference between the largest and smallest observed


value, as a measure of variability in this short span of time. These range values can then be shown over the longer period of the study.

If a multi-vari chart indicates instability in either the short or the longer term, the factors causing the instability must be listed and analysed further. Note that all production conditions, such as changes of raw materials, and process interventions, such as tool adjustments, will be noted on the check sheets. A cause and effect diagram or a brainstorming session will help to list the factors presumably causing such instability.
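The short-span and long-span summaries can be sketched as follows (the diameter readings are invented for illustration):

```python
import numpy as np

# Hypothetical diameters: five consecutive items sampled every half hour
subgroups = np.array([
    [10.01, 10.03, 10.02, 10.00, 10.02],   # 9:00
    [10.02, 10.04, 10.03, 10.03, 10.05],   # 9:30
    [10.00, 10.01, 10.02, 10.01, 10.03],   # 10:00
])

# Short-span variability: the range (max - min) within each subgroup
short_span_ranges = subgroups.max(axis=1) - subgroups.min(axis=1)

# Long-span variability: movement of the subgroup means over time
subgroup_means = subgroups.mean(axis=1)

print(np.round(short_span_ranges, 3))
print(np.round(subgroup_means, 3))
```

A multi-vari chart simply displays these two summaries together against time.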

4.1.7 Run Chart

A run chart is a particular form of a scatter plot with all the plotted points connected in some way. This chart usually shows runs of points above and below the mean or median. Run charts are mainly used as exploratory tools to understand the process variation. For instance, the stability of a production process can be crudely judged by plotting the quality measure or the quality characteristic against time, machine order, etc., and then checking for any patterns or non-random behaviour (such as trends, clustering, oscillation, etc.) in the production process.

5 Common Causes and Process Capability

Common cause variations belong to the system. The third postulate of Shewhart relates to the removal of common causes (hence reducing the common cause variation) by changing the basic system itself. Experimental designs, discussed in Chapter 10, are extremely important for achieving a reduction in product variation. This off-line activity of designing a product commonly employs statistical designs known as fractional factorial designs, response surface designs, etc. These topics are beyond the scope of a first year course. Hence we will briefly present a case study reported by Sohn and Park (2006), which appeared in the journal Quality Engineering. A large number of case studies involving factorial designs for reducing variation are available in quality related journals such as Quality Engineering.

Automotive disc brake systems emit two kinds of noise: one is the low-frequency "groan" noise, and the other is the high-frequency "squeal" noise. Brake discs are produced through the sequential steps of cutting, grinding, etc. After experimentation, Sohn and Park (2006) reported that brake discs and the compressibility of brake pads were identified as the most important output responses related to brake noises. Table 1 shows some of the experimental results reported by Sohn and Park (2006) for improving the compressibility of brake pads (measured in µm under 160 bar pressure). The following factors were known to affect compressibility:

1. No. of cycles of gas emission (Factor A)

2. Temperature of hot forming process, measured in °C (Factor B)


3. Pressure of hot forming process, measured in N/µm² (Factor C)

These three factors were crossed to obtain eight different treatment combinations. Compressibility measurements were made for four replications. The old settings were found not to be optimal after comparing the means and standard deviations of compressibility. Using statistical modelling techniques, the experimental data were analysed, and the authors found that the gas emission level must be set at 21 cycles, the temperature at 145°C and the pressure at 35.3 N/µm² in order to improve the compressibility as well as reduce its variability. Note that these predicted optimum levels were not part of the treatments applied during the experimentation, but were estimated using the appropriate statistical model. So additional confirmatory experimental runs were made, and these runs confirmed that the model predictions were indeed correct.

Table 1: Experimental Summary for Reducing Compressibility

Treatment      Factor A   Factor B   Factor C   Mean Compressibility   SD of Compressibility
1                 18        145        35             147.3                   4.2
2                 22        145        35             156.0                   3.8
3                 18        155        35             153.6                   5.9
4                 22        155        35             150.3                   4.7
5                 18        145        45             154.5                   3.6
6                 22        145        45             160.0                   2.8
7                 18        155        45             155.8                   5.3
8                 22        155        45             167.3                   5.8
Old settings      20        150        40             157.2                   5.1
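From the treatment means in Table 1, factor effects can be estimated directly. The sketch below computes the standard two-level main-effect contrast for Factor A (the usual factorial calculation, not necessarily the exact model fitted by Sohn and Park):

```python
# Mean compressibility from Table 1, keyed by (A, B, C) treatment levels
means = {
    (18, 145, 35): 147.3, (22, 145, 35): 156.0,
    (18, 155, 35): 153.6, (22, 155, 35): 150.3,
    (18, 145, 45): 154.5, (22, 145, 45): 160.0,
    (18, 155, 45): 155.8, (22, 155, 45): 167.3,
}

# Main effect of Factor A: average response at A = 22 minus average at A = 18
a_high = sum(v for (a, _, _), v in means.items() if a == 22) / 4
a_low = sum(v for (a, _, _), v in means.items() if a == 18) / 4
effect_a = a_high - a_low
print(round(effect_a, 1))  # increasing gas-emission cycles raises mean compressibility
```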

Variability in quality characteristics affects quality, and results in nonconforming units when specifications are not met. In other words, the production process becomes incapable of meeting the specifications. In order to assess whether a process is capable of meeting the specifications, process capability indices are defined.

5.1 Cp index

This index is defined as

Cp = (USL − LSL) / (6σ)

where USL and LSL are respectively the upper and lower specification limits, and σ is the standard deviation of the process characteristic. The six sigma spread of the process is the basic definition of process capability when the quality characteristic follows a normal distribution. If Cp = 1, then the process is just capable of meeting the specifications; see Figure 11. In reality the process standard deviation σ is estimated, and the true distribution may depart from normal. Hence


in order to allow for the sampling variability and other assumption violations, the desired value for the estimated Cp is set at 1.33 for existing processes and 1.5 for new processes. If the estimated index is lower than 1.33, it implies that the process variability is high, and actions must be taken to reduce it.
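A minimal sketch of the Cp calculation (the σ value here is an assumption for illustration; the limits echo the YARNCOUNT specifications 39.8 and 40.2 used later in this chapter):

```python
def cp_index(usl, lsl, sigma):
    """Potential process capability: Cp = (USL - LSL) / (6 * sigma)."""
    return (usl - lsl) / (6 * sigma)

# Illustrative values: specifications 39.8-40.2 with an assumed sigma of 0.05
cp = cp_index(usl=40.2, lsl=39.8, sigma=0.05)
print(round(cp, 3))  # about 1.333, the recommended minimum for existing processes
```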

(Figure: a normal curve centred at µ, with LSL and USL each located 3σ from the mean.)

Figure 11: Cp = 1 scenario

If the quality characteristic has only one specification limit, either on the lower or the upper side, then the following indices are used.

• CpL = (µ − LSL) / (3σ)   (lower specification)

• CpU = (USL − µ) / (3σ)   (upper specification)

If the process is centred within the specification limits (i.e. µ is at the midpoint of LSL and USL), then one has Cp = CpL = CpU. For a better idea of process centring, the index defined next will be used.

5.2 Cpk index

The index Cpk is defined as

Cpk = min{CpL, CpU}.

If Cpk < Cp, it means that the process is not centred. (It can never happen that Cpk > Cp.) If Cp is high but Cpk is not, it means that the process needs centring. Hence Cp is said to measure the potential capability of the process, whereas Cpk measures the actual capability. Recommended values for CpL, CpU

and Cpk are the same as for Cp. The six sigma methodology aims to achieve a Cpk of 1.5 to ensure 3.4 DPMO.
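The definition can be sketched directly; the numbers below are those of the two processes compared in the next subsection:

```python
def cpk_index(usl, lsl, mu, sigma):
    """Actual process capability: Cpk = min(CpL, CpU)."""
    cpl = (mu - lsl) / (3 * sigma)
    cpu = (usl - mu) / (3 * sigma)
    return min(cpl, cpu)

# A centred process: Cpk equals Cp
print(cpk_index(usl=65, lsl=35, mu=50.0, sigma=5.0))    # 1.0

# An off-centre process with a smaller spread: Cp = 2 but Cpk is still only 1
print(cpk_index(usl=65, lsl=35, mu=57.5, sigma=2.5))    # 1.0
```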


5.3 Cpm index

Consider the following specification requirements, for which two production processes are available.

LSL = 35, USL = 65 and target = T = 50 units.

Let the process parameters and the associated process capability indices be:

Process I:   µ = 50.0   σ = 5.0   Cp = 1.0   Cpk = 1.0
Process II:  µ = 57.5   σ = 2.5   Cp = 2.0   Cpk = 1.0

The two processes have the same Cpk values, but obviously the second process is not on target. For Process II, it can be observed that the index Cp is not equal to Cpk. To have a better indicator of process centring at the desired target, the following index is used:

Cpm = (USL − LSL) / (6τ)

where τ is the square root of the expected squared deviation from the target T, namely

τ² = E(X − T)² = E(X − µ)² + (µ − T)² = σ² + (µ − T)²

The relationship between the Cp and Cpm indices is as follows:

Cpm = (USL − LSL) / (6√(σ² + (µ − T)²)) = Cp / √(1 + δ²)

where δ = (µ − T)/σ.
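A sketch of the Cpm computation for Processes I and II above, also checking the identity Cpm = Cp/√(1 + δ²):

```python
import math

def cpm_index(usl, lsl, mu, sigma, target):
    """Cpm = (USL - LSL) / (6 * tau), where tau^2 = sigma^2 + (mu - T)^2."""
    tau = math.sqrt(sigma**2 + (mu - target)**2)
    return (usl - lsl) / (6 * tau)

# Processes I and II from the text: LSL = 35, USL = 65, target T = 50
cpm_1 = cpm_index(65, 35, mu=50.0, sigma=5.0, target=50)
cpm_2 = cpm_index(65, 35, mu=57.5, sigma=2.5, target=50)
print(round(cpm_1, 3), round(cpm_2, 3))  # 1.0 0.632

# Check against Cpm = Cp / sqrt(1 + delta^2), with delta = (mu - T) / sigma
delta = (57.5 - 50) / 2.5
assert abs(cpm_2 - 2.0 / math.sqrt(1 + delta**2)) < 1e-12
```

Unlike Cpk, which is 1.0 for both processes, Cpm penalises Process II for being off target even though its spread is smaller.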

The process capability indices are computed in two ways. The first approach is to obtain the index using an estimate of sigma for the shorter term (for example, after experimentation). The other approach is to obtain a long term estimate of sigma, which is often known after the implementation of the SPC methods discussed in later sections.

The interpretation of process capability indices becomes difficult in the following situations:

1. When the process is not in a state of statistical control

2. A non-normal process

3. A correlated process, and

4. Inevitable extra variation between different production periods.

Hence caution must be exercised in interpreting the process capability indices.


6 Process Monitoring With Control Charts

A control chart graphically displays a summary statistic of the characteristic(s) of interest for testing the existence of a state of statistical control in the process. Figure 12 shows a typical control chart configuration for monitoring the mean of a quality characteristic. Figure 12 also shows certain limits (labelled LCL and UCL) called control limits. These are fixed using probability laws, and are different from the specification limits, which represent the extreme possible values of a quality characteristic for conformance of the individual unit of the product. Control limits are not intended for checking the quality of each unit produced, but serve as a basis for judging the significance of quality variations from time to time, sample to sample, or lot to lot. If all the special causes are eliminated, then practically all the plotted points will lie within the control limits.

(Figure: a sample control chart plotting a quality measure (roughly 73.990 to 74.015) against group number 1 to 25, with the LCL and UCL marked as horizontal lines.)

Figure 12: A Typical Control Chart

6.1 Rational Subgrouping

In the technique of control charting, past data are used for judging whether the future production will be in control or not. To accomplish this, past data are accumulated. By a subgroup of observations, we mean one of a series of observations obtained by subdividing a larger group of observations. By a rational subgroup, we mean classifying the observed values into subgroups such


that within a rational subgroup variations are expected to be due to common causes, while variations between subgroups may be attributable to special cause(s). In other words, the subgroups should be such that special causes show up in differences between the subgroups, rather than differentiating the member units of a given subgroup. That is, we would like the member units of a subgroup to be as homogeneous as possible. For example, assume that a machine has four heads operated by the same worker and using the same raw material. We would like to accumulate data pertaining to the same machine head to form a subgroup, rather than pooling the data from all four machine heads to form the subgroup. Any head-to-head difference will then be reflected in the differences between subgroups. The task of subgrouping requires some technical knowledge of the production process, particularly the production conditions and how units were inspected and tested. Subgrouping on the basis of time is the most useful approach followed in practice. This is because process stability tends to be lost over time due to the use of several batches of raw material(s), changing operatives, etc. Other factors for subgrouping are batch of raw material, machine, operator, etc. Careful rational subgrouping is extremely important to ensure the effectiveness of a control chart and easy identification of assignable causes. The success of the control chart technique largely relies on proper subgrouping.

For a constant cause system, the variation within a subgroup is the same as the variation across subgroups. Therefore, if the assumption of a constant cause system is correct, it should be possible to predict the behaviour of statistics such as sample averages, ranges and standard deviations across subgroups, based on the homogeneous variation observed within subgroups. Data from a constant cause system of variation will display only unexplainable variation, both within and across rational subgroups. The range of variation due to constant causes will be within predictable statistical limits. Non-random patterns of variation appearing across the rational subgroups can be treated as a signal of special causes.

For some processes, uncontrollable factors such as the seasonal nature of the input quality of materials may be involved. Such structural variation will also be treated as common rather than special. Proper subgrouping is expected to take care of such issues. For example, a monitoring procedure for road accidents must consider the structural variation due to Fridays and weekends. The monitoring procedure should be based on two distinct common cause systems, namely (i) Monday-Thursday and (ii) Friday-Sunday. If the whole week is treated as a subgroup, the common cause variability may be incorrectly estimated.

6.2 Shewhart Control Charts

Let M be some statistic computed using rational subgroups. We will also call M a control statistic. Let the mean of M be µM and the standard deviation be σM. Then the central line (CL), the upper control limit (UCL) and the lower control limit (LCL) are fixed at


UCL = µM + kσM

CL = µM

LCL = µM − kσM

where k is the distance of the control limits from the central line, expressed in standard deviation units. This configuration, known as the Shewhart control chart, is shown in Figure 13. The estimation of µM and σM, and fixing a value for k, are statistical problems.
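As a minimal sketch (the values of µM and σM are assumed purely for illustration):

```python
def shewhart_limits(mu_m, sigma_m, k=3):
    """Central line and k-sigma control limits for a control statistic M."""
    return mu_m - k * sigma_m, mu_m, mu_m + k * sigma_m

# Illustration with assumed mu_M = 40.0 and sigma_M = 0.02
lcl, cl, ucl = shewhart_limits(40.0, 0.02)
print(lcl, cl, ucl)  # with the default k = 3 these are the action limits
```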

(Figure: a chart of the statistic M against subgroup number, with the central line (CL) at µM, the upper control limit (UCL) at µM + kσM and the lower control limit (LCL) at µM − kσM.)

Figure 13: Shewhart Control Chart

Shewhart control charts can be classified as being of variables or attribute type, depending on the type of quality measure(s) adopted (see Table 2).

Shewhart recommended a value of 3 for the control limit constant k. Hence the control limits are known as 3-sigma limits. If the computed value of the quality measure for a given subgroup breaches the 3-sigma limits, action will be initiated to look for assignable causes. Hence we will also call the 3-sigma limits action limits. There are several reasons for using 3-sigma limits. The important ones are:

1. Special cause investigation is expensive for most engineering processes, and also time consuming. Hence it is important to keep the frequency of false alarms low. That is, we want to avoid tampering with the process, looking for special causes when there are none. In order to keep the rate of false


Table 2: Control Chart Types

Control statistic M        Name of chart      Type
Average X̄                  X̄-chart            Variables
Standard deviation S       S-chart            Variables
Range R                    R-chart            Variables
Proportion defective p     p-chart            Attribute
Number of defectives np    np-chart           Attribute
Number of defects c        count or c-chart   Attribute
Defects per unit u         u-chart            Attribute

alarms low, we need to tolerate a wider variation in M. If M has a normal distribution, the probability of M breaching the 3-sigma control limits is 0.0027, which is very small. Even if the original characteristic of interest X does not follow a normal distribution, the statistic under consideration M, such as the mean, will approximately follow a normal distribution provided the sample or subgroup sizes are large. In fact, versions of the central limit theorem require only that the averaged observations be independent, not necessarily identically distributed.

2. Even if the assumption of a normal distribution is not always easy to justify for the individual characteristic values, 3-sigma control charts for individual values can be used with caution. In the worst case (without the normal or unimodal assumption), three sigma limits are guaranteed to cover only about 89% of the distribution (due to Chebyshev's inequality; we will not worry about the theory here). If the process characteristic follows a unimodal probability distribution, then the three sigma limits will cover at least about 95% of the distribution, resulting in at most about a 5% false alarm rate (due to a theoretical result known as the Gauss inequality).

3. No process characteristic will follow a theoretical distribution exactly. Hence it is difficult to calculate exact false alarm rates. With 3-sigma limits, it is economical to investigate the process for special causes, as compared with other limits. The 3-sigma limits have been found to work well for many different types of processes.
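The false alarm rates quoted above are easy to reproduce. The sketch below computes the normal-theory rate for 3-sigma limits, together with the distribution-free Chebyshev bound and the unimodal (Gauss-type) bound:

```python
import math

def normal_two_sided_tail(k):
    """P(|Z| > k) for a standard normal Z, computed via the error function."""
    return 1 - math.erf(k / math.sqrt(2))

# False alarm rate with 3-sigma limits under normality: about 0.0027
p_normal = normal_two_sided_tail(3)

# Worst-case proportions outside k = 3 sigma limits without normality:
p_chebyshev = 1 / 3**2        # any distribution: at most 1/k^2 (about 11%)
p_gauss = 4 / (9 * 3**2)      # unimodal distributions: at most 4/(9 k^2) (about 5%)

print(round(p_normal, 4), round(p_chebyshev, 3), round(p_gauss, 3))
```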

In order to compute the control limits, the unknown parameters µM and σM of the process are estimated using extensive past (historical or retrospective) data. This is known as Phase I for setting up the control charts. Phase I data analysis is intended to obtain "clean" data to represent the process having only common causes. The EDA tools studied in earlier chapters and the simple QC tools explained in this chapter will prove invaluable for Phase I analysis.

Based on the Phase I investigation, parameters such as the mean, standard deviation, etc. of a typical process in control will be hypothesised. Control charts based on the hypothesised values will be called standards given charts. These


charts will be implemented in Phase II for monitoring the current production. That is, subgroups will be drawn one by one and plotted on the standards given control chart. An example is given in the next section.

The basic signal rule, or test for the presence of special causes, is that either the upper or the lower control limit is breached by a plotted point. This rule can be supplemented by extra tests based on a series of points for declaring the presence of special causes. The commonly adopted rules (known as the Western Electric supplementary run rules) are illustrated in Figure 14. Note that applying all these rules together will hugely increase the false alarm rate, and only a selected combination of them should be employed.

7 Variables Control Charts

Variables control charts are used when the quality characteristic is measurable on a continuous scale. For example, the dimension of a piston ring is measurable as a continuous variable. In a typical process, there may be several hundred variables, and only key performance or use characteristics are considered for control charting using variables charts. The control statistic M for variables charts is usually the mean of the quality characteristic. That is, the intention of control charting is to monitor the process level. For example, the true mean dimension of the piston ring may change either upward or downward during production. Hence the subgroup means are used to monitor the process level, and the resulting chart is known as the X̄ (Xbar) chart. This chart will be accompanied by either the range (R) chart or the standard deviation (S) chart, which will monitor the increase in (within subgroup) variability over time.

7.1 Estimation of Common Cause Sigma

To construct both charts, it is necessary to estimate the mean (µ) and standard deviation (σ) of the quality characteristic during a state of control (i.e., when the process is subject to only common causes) using historical data. The approach followed in SPC for estimating µ and σ will be explained using the following example.

Cotton yarn is produced by a spinning machine (called a ring spinning frame). This machine draws slivers of cotton (which are produced by several preparatory machines) and spins them into yarn of continuous lengths. A typical spinning mill will have a number of spinning frames, each machine having a number of spindles, with the spinning operation done by each of the spindles. Let us consider the problem of controlling the process of spinning yarn (using the quality characteristic YARNCOUNT of the yarn produced) for a given spinning frame. The term YARNCOUNT is the number of 840 yard lengths in one pound of yarn. Obviously, the higher the YARNCOUNT, the thinner the yarn. A unit for inspection purposes is fixed at a 120 yard length of yarn (called a lea). Lea


(Figure: eight panels, each showing a control chart with the band between the control limits divided into zones C (within 1σ of the central line), B (between 1σ and 2σ) and A (between 2σ and 3σ), illustrating the following tests.)

Test 1: One point beyond Zone A (the usual signal with action limits)
Test 2: Nine points in a row in Zone C or beyond, on one side of the central line
Test 3: Six points in a row steadily increasing or decreasing
Test 4: Fourteen points in a row alternating up and down
Test 5: Two out of three points in a row in Zone A or beyond
Test 6: Four out of five points in a row in Zone B or beyond
Test 7: Fifteen points in a row in Zone C (above and below the central line)
Test 8: Eight points in a row on both sides of the central line, with none in Zone C

Figure 14: Supplementary Run Rules


testing and measuring instruments are available which will quickly determine the quality characteristics YARNCOUNT, strength, number of thick and thin places, etc.

Let the nominal or target YARNCOUNT be 40, i.e. a pound of yarn will give 40 × 840 = 33600 yards of length. Assume that the lower and upper specification limits for YARNCOUNT are respectively 39.8 and 40.2.

The mill was sampling five leas from randomly selected spindles from the spinning frame for testing during a production shift. On some days/shifts samples were not taken; multiple samples were taken on a few shifts. Table 3 provides the historical data collected by the mill. This table indicates certain important process conditions that were noted during sampling. The mill found that the input for the spinning machine, namely the yarn slivers produced in the preparatory process, was not uniform during certain periods. Such cases, indicated as 'input sliver problem', occurred intermittently. It took some time to locate the sources of trouble in the preparatory stages and correct this problem. Samples numbered 17 and 34 are associated with clear (engineering) evidence that they represent unusual production conditions or measurement problems. These samples must be dropped. The same is the case with the subgroups associated with the 'input sliver problem', and hence the samples numbered 4, 14 and 21 are also dropped. All cases where there is no strong technical evidence for lack of control, such as sample 25 (casual operator employed), will be included in the analysis using control charts.

It is also very likely that certain unusual production conditions or special causes existed during the period of data collection which are not evident in the ordinary course of production operations. A trial control chart for our Phase I analysis will be used to detect the presence of special causes, so that further technical investigation can be initiated to locate and eliminate the sources of trouble.

How YARNCOUNT varies during a production shift is important for rational subgrouping and the effectiveness of the control charts. More studies must be done by collecting data in the same shift at different time intervals, to understand how other process variables, such as interference due to doffing, operator breaks, maintenance schedules, etc., affect YARNCOUNT. These may suggest more frequent sampling during a given production shift. One of the common ways of subgrouping is to use a (small) block of time and allow a somewhat longer time between subgroups. Frequent sampling with small subgroups is more useful than infrequent sampling with large subgroups.

In order to estimate the true mean (µ) or standard deviation (σ) of the quality characteristic YARNCOUNT, the historical data will NOT be pooled. The retrospective data may contain periods dominated by one or more special causes, and a pooled estimate of the standard deviation can be used only when the process is known to be in control, i.e. dominated only by common causes. The Shewhart control charts allow only common cause variation within a subgroup, and any extra variation between subgroups is inadmissible and will be attributed to the presence of special causes. For any given subgroup i, the usual standard deviation Si (n−1 in the divisor) or the range Ri will be used to estimate


Table 3: Retrospective Data for Trial Control Chart

Sample  Date   Shift  Obs 1  Obs 2  Obs 3  Obs 4  Obs 5  Remarks
 1      28-09    1    40.0   39.9   40.0   40.1   40.0
 2      28-09    2    40.0   40.0   40.1   40.0   40.1
 3      28-09    3    40.0   40.0   40.0   40.0   39.9
 4      29-09    1    41.0   40.9   41.0   41.0   41.0   input sliver problem
 5      29-09    2    40.0   40.0   40.1   40.0   40.0
 6      29-09    3    40.0   40.0   40.0   39.9   39.9
 7      30-09    2    40.0   40.1   40.0   40.0   40.0
 8      30-09    3    40.0   40.0   39.9   40.0   40.1
 9      04-10    1    40.1   40.0   39.9   40.0   40.0
10      04-10    2    40.0   40.0   40.1   40.0   40.0
11      04-10    3    40.0   40.0   40.0   40.1   40.0
12      05-10    1    40.0   40.0   40.0   40.0   40.0
13      05-10    2    40.0   40.0   40.0   39.9   40.0
14      05-10    3    40.5   41.0   41.0   40.8   41.0   input sliver problem
15      06-10    2    40.0   39.9   40.0   40.0   40.0
16      06-10    3    40.1   40.0   40.0   40.0   40.0
17      07-10    1    40.1   NA     NA     NA     NA     faulty motor
18      07-10    2    40.1   40.0   40.1   39.9   39.9
19      07-10    3    40.0   39.9   40.0   40.1   40.0
20      08-10    1    40.1   40.0   40.0   40.0   40.1
21      08-10    2    39.0   38.1   39.0   39.0   39.6   input sliver problem
22      08-10    3    40.0   40.0   40.0   40.1   40.0
23      11-10    1    39.9   40.0   39.9   40.0   40.0
24      11-10    1    40.0   40.0   40.0   40.1   40.1
25      11-10    2    40.2   40.0   40.1   40.0   40.0   casual operative
26      11-10    3    40.1   40.0   40.0   40.0   39.9
27      12-10    1    40.1   40.1   40.1   40.0   40.1
28      12-10    2    40.0   40.0   40.0   40.0   40.0
29      12-10    3    40.0   40.1   40.0   39.9   40.0
30      13-10    1    40.0   40.0   40.1   40.0   40.0
31      13-10    2    39.9   40.0   40.0   40.1   39.9
32      13-10    2    40.0   40.0   40.0   40.0   40.1
33      13-10    3    39.9   40.0   40.1   40.0   39.9
34      14-10    1    60.0   59.9   60.0   40.1   40.0   Yarncount mix up
35      14-10    2    40.0   40.1   40.1   40.0   40.0
36      14-10    3    40.1   40.1   40.0   39.9   40.0
37      15-10    1    40.1   40.0   39.9   40.0   40.0
38      15-10    2    39.9   40.0   40.0   40.1   40.1
39      15-10    3    40.1   40.0   40.1   40.0   40.0


the true process standard deviation σ. If there are m (say) such subgroups, the mean of the m subgroup standard deviations (Si values) or ranges (Ri values) will be used to estimate the process σ. Similarly, the mean of the m subgroup means (X̄i values) is used to estimate the true process mean µ. Consider Table 4, which gives the means, standard deviations and ranges for the YARNCOUNT data. Note that this table omits samples 4, 14, 17, 21 and 34, and relates to a total of 34 subgroups only.

Table 4: Subgroup Means, Ranges and Standard Deviations

Old sample no.  Subgroup i   X̄i      Ri    Si
1               1            40.00    0.2   0.0707
2               2            40.04    0.1   0.0548
3               3            39.98    0.1   0.0447
5               4            40.02    0.1   0.0447
6               5            39.96    0.1   0.0548
7               6            40.02    0.1   0.0447
8               7            40.00    0.2   0.0707
9               8            40.00    0.2   0.0707
10              9            40.02    0.1   0.0447
11              10           40.02    0.1   0.0447
12              11           40.00    0.0   0.0000
13              12           39.98    0.1   0.0447
15              13           39.98    0.1   0.0447
16              14           40.02    0.1   0.0447
18              15           40.00    0.2   0.1000
19              16           40.00    0.2   0.0707
20              17           40.04    0.1   0.0548
22              18           40.02    0.1   0.0447
23              19           39.96    0.1   0.0548
24              20           40.04    0.1   0.0548
25              21           40.06    0.2   0.0894
26              22           40.00    0.2   0.0707
27              23           40.08    0.1   0.0447
28              24           40.00    0.0   0.0000
29              25           40.00    0.2   0.0707
30              26           40.02    0.1   0.0447
31              27           39.98    0.2   0.0837
32              28           40.02    0.1   0.0447
33              29           39.98    0.2   0.0837
35              30           40.04    0.1   0.0548
36              31           40.02    0.2   0.0837
37              32           40.00    0.2   0.0707
38              33           40.02    0.2   0.0837
39              34           40.04    0.1   0.0548

The overall mean of the m = 34 subgroup means is the estimate of µ. Let us denote the grand or overall mean by \bar{\bar{X}} (X double bar). That is,

\hat{\mu} = \bar{\bar{X}} = \frac{1}{m}\sum_{i=1}^{m}\bar{X}_i = (40.00 + 40.04 + \cdots + 40.04)/34 = 40.01

where X̄i is the ith subgroup mean. The process standard deviation σ is estimated as

\hat{\sigma} = \bar{S}/c_4

where \bar{S} is the average of the subgroup standard deviations, viz.

\bar{S} = \frac{1}{m}\sum_{i=1}^{m} S_i,

Si being the standard deviation of the ith subgroup, and c4 is a constant that makes \hat{\sigma} an unbiased estimator of σ. That is, c4 = E(S)/σ; hence c4 is known as the unbiasing constant. It is purely a function of the subgroup size n, and values are given in Table 5. For the YARNCOUNT data,

\bar{S} = (0.0707 + 0.0548 + \cdots + 0.0548)/34 = 0.05703,

giving \hat{\sigma} = 0.05703/0.94 = 0.0607.

Table 5: Unbiasing constants for Ranges and Standard Deviations

n     c4       d2
2     0.7979   1.128
3     0.8862   1.693
4     0.9213   2.059
5     0.9400   2.326
6     0.9515   2.534
10    0.9727   3.078
15    0.9823   3.472
20    0.9869   3.735
25    0.9896   3.931

It is also possible to estimate the process sigma using ranges. The estimator is

\hat{\sigma} = \bar{R}/d_2

where \bar{R} is the mean of the subgroup ranges, given by

\bar{R} = \frac{1}{m}\sum_{i=1}^{m} R_i,

and d2 is the unbiasing constant for the range, i.e. d2 = E(R)/σ. Values of d2 for selected subgroup sizes are given in Table 5. For the YARNCOUNT data, we find

\bar{R} = (0.2 + 0.1 + \cdots + 0.1)/34 = 0.1324,

yielding \hat{\sigma} = \bar{R}/d_2 = 0.1324/2.326 = 0.0569. In general (i.e. for subgroup size n > 2), the range estimate of σ is less efficient than the estimate based on the standard deviation.
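As a quick check on the arithmetic, the two estimates of σ can be reproduced in a few lines. The sketch below (Python, not part of the original notes) uses the reported YARNCOUNT summary values and the n = 5 unbiasing constants from Table 5.

```python
# Estimating the process sigma two ways from the m = 34 subgroup statistics.
# c4 and d2 are the unbiasing constants for subgroups of size n = 5.
c4, d2 = 0.9400, 2.326

s_bar = 0.05703   # mean of the 34 subgroup standard deviations
r_bar = 0.1324    # mean of the 34 subgroup ranges

sigma_from_s = s_bar / c4   # SD-based estimate
sigma_from_r = r_bar / d2   # range-based estimate

print(round(sigma_from_s, 4))  # 0.0607
print(round(sigma_from_r, 4))  # 0.0569
```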

7.2 Xbar chart

After estimating the true process level by \bar{\bar{X}} and the common cause sigma by \hat{\sigma}, the control limits of the X̄-chart are established using these estimated values of µ and σ. Since V(\bar{X}) = \sigma^2/n, the 3-sigma control limits for the X̄-chart are obtained as

\mu \pm \frac{3\sigma}{\sqrt{n}}.

Hence the control limits for the X̄-chart based on the standard deviation estimate of σ are:

LCL = \bar{\bar{X}} - \frac{3(\bar{S}/c_4)}{\sqrt{n}}, \qquad UCL = \bar{\bar{X}} + \frac{3(\bar{S}/c_4)}{\sqrt{n}}.

For the YARNCOUNT data, it is easy to see that

LCL = 40.01 - 3(0.0607/\sqrt{5}) = 39.929,
UCL = 40.01 + 3(0.0607/\sqrt{5}) = 40.091.

Similarly, the control limits for the X̄-chart based on the range estimate of σ are:

LCL = \bar{\bar{X}} - \frac{3(\bar{R}/d_2)}{\sqrt{n}}, \qquad UCL = \bar{\bar{X}} + \frac{3(\bar{R}/d_2)}{\sqrt{n}}.

The X̄-chart control limits for the YARNCOUNT data are then:

LCL = 40.01 - 3(0.0569/\sqrt{5}) = 39.934,
UCL = 40.01 + 3(0.0569/\sqrt{5}) = 40.086.

It is easy to compute the control limits using the formulae appearing in Tables 6 to 8. The tables also give certain constants (A, A2, A3, B3, B4, B5, B6, D1, D2, D3, D4), called control limit factors, for computing control limits. For example, Table 6 gives the control limits (based on the \bar{R} estimate of σ) as \bar{\bar{X}} \pm A_2\bar{R}, with control limit factor A2 = 0.577 (which is equal to 3/(d_2\sqrt{n})). The control limit factors are useful for manual computation of control limits.

Table 6: Formulae and constants for X control chart

Subgroup size n    A        A2       A3
2                  2.121    1.880    2.659
3                  1.732    1.023    1.954
4                  1.500    0.729    1.628
5                  1.342    0.577    1.427
6                  1.225    0.483    1.287
10                 0.949    0.308    0.975
15                 0.775    0.223    0.789
20                 0.671    0.180    0.680
25                 0.600    0.153    0.606

Control limits
For Analysing Past Production for Control (Standards Unknown):
  central line = \bar{\bar{X}};  control limits = \bar{\bar{X}} \pm A_2\bar{R} or \bar{\bar{X}} \pm A_3\bar{S}
For Controlling Quality during Production (Standards Known):
  central line = \bar{X}′;  control limits = \bar{X}′ \pm A\sigma′ or \bar{X}′ \pm A_2\bar{R}′

The control limits are displayed on a time sequence plot or run chart for the mean: the subgroup means X̄i are plotted against the subgroup number i, with reference lines at the control limits. The overall mean \bar{\bar{X}} is also placed on the chart to produce the central line. Figure 15 is the resulting X̄-chart for the YARNCOUNT data based on the S estimate of σ.

[Figure 15: X̄ Chart. Subgroup means plotted against subgroup number (1 to 34), with the central line and the LCL and UCL marked.]

None of the plotted points (subgroup means X̄i) cross the control limits, and hence we conclude that the mean YARNCOUNT is in control. In other words, the chart indicates that there is no significant shift in the mean YARNCOUNT level over the past production periods.

To answer the question of whether the variability within the subgroups is stable, either an R-chart (range chart) or an S-chart (standard deviation chart) is used. These charts evaluate the variability of the process in terms of the subgroup ranges or standard deviations. One of these two charts accompanies the X̄-chart for monitoring the variation within the subgroups over time. Table 7 and Table 8 give the relevant formulae and control limit factors for the S and R charts respectively.

7.3 S chart

The control limits are given by LCL = B_3\bar{S} and UCL = B_4\bar{S}, with the central line at \bar{S}. For the given subgroup size of 5, Table 7 gives the control limit factors B3 = 0 and B4 = 2.089. For the YARNCOUNT data, \bar{S} = 0.05703 and hence

LCL = 0(0.05703) = 0 and UCL = 2.089(0.05703) = 0.119.
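Equivalently, in code (a sketch; B3 and B4 are read from Table 7 for n = 5):

```python
# S-chart limits from the control limit factors for subgroup size 5.
B3, B4 = 0.0, 2.089
s_bar = 0.05703   # mean subgroup standard deviation

lcl = B3 * s_bar
ucl = B4 * s_bar
print(round(lcl, 3), round(ucl, 3))  # 0.0 0.119
```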

These control limits are then placed on a run chart of Si values with a referencecentral line for S = 0.05703. Figure 16 is the S chart for YARNCOUNT data:


Table 7: Formulae and Constants for S-chart

Subgroup size n    B3       B4       B5       B6       c4
2                  0        3.267    0        2.606    0.7979
3                  0        2.568    0        2.276    0.8862
4                  0        2.266    0        2.088    0.9213
5                  0        2.089    0        1.964    0.9400
6                  0.030    1.970    0.029    1.874    0.9515
10                 0.284    1.716    0.276    1.669    0.9727
15                 0.428    1.572    0.421    1.544    0.9823
20                 0.510    1.490    0.504    1.470    0.9869
25                 0.565    1.435    0.559    1.420    0.9896

Control limits
For Analysing Past Production for Control (Standards Unknown):
  central line = \bar{S};  control limits = B_3\bar{S} and B_4\bar{S}
For Controlling Quality during Production (Standards Known):
  central line = c_4\sigma′;  control limits = B_5\sigma′ and B_6\sigma′

[Figure 16: S Chart. Subgroup standard deviations (YARNCOUNT SD) plotted against subgroup number, with the LCL and UCL marked.]


Table 8: Formulae and constants for R-chart

Subgroup size n    D1       D2       D3       D4       d2
2                  0        3.686    0        3.267    1.128
3                  0        4.358    0        2.575    1.693
4                  0        4.698    0        2.282    2.059
5                  0        4.918    0        2.115    2.326
6                  0        5.078    0        2.004    2.534
10                 0.687    5.549    0.223    1.777    3.078
15                 1.203    5.741    0.347    1.653    3.472
20                 1.549    5.921    0.415    1.585    3.735
25                 1.806    6.056    0.459    1.541    3.931

Control limits
For Analysing Past Production for Control (Standards Unknown):
  central line = \bar{R};  control limits = D_3\bar{R} and D_4\bar{R}
For Controlling Quality during Production (Standards Known):
  central line = d_2\sigma′ (= \bar{R}′);  control limits = D_1\sigma′ and D_2\sigma′, or D_3\bar{R}′ and D_4\bar{R}′


None of the plotted points breach the UCL and hence we will conclude that thevariability within the process is in control.

Now consider the computation of control limits for the R-chart.

7.4 R chart

The control limits are given by LCL = D_3\bar{R} and UCL = D_4\bar{R}, with the central line at \bar{R}. For the given subgroup size of 5, Table 8 gives the control limit factors D3 = 0 and D4 = 2.115. For the YARNCOUNT data, \bar{R} = 0.1324 and hence

LCL = 0(0.1324) = 0 and UCL = 2.115(0.1324) = 0.280.

These control limits are then placed on a run chart of the Ri values with a reference central line at \bar{R} = 0.1324. Figure 17 is the R-chart for the YARNCOUNT data:

[Figure 17: R Chart. Subgroup ranges (YARNCOUNT range) plotted against subgroup number, with the LCL and UCL marked.]

Again the R chart suggests that the variability within the process is undercontrol.

7.5 Revision of Control Charts

When a signal of lack of control (which could possibly be a false alarm) is obtained from a control chart, one usually looks for the presence of special causes.


In the event of finding and eliminating a special cause, it is necessary to revise the control limits, deleting the subgroup(s) which signalled the presence of the special cause. It may also be that the identified special cause affected the process in certain subgroups adjacent to the one that signalled its presence; any such subgroup which was influenced by the special cause must also be dropped. In other words, we identify a set of subgroups that represents a process subject only to chance or common causes. A point breaching the X̄-chart limits need not necessarily be a breaching point on the R- or S-chart (and vice versa). Such a point need not be dropped from the associated R- or S-chart, nor does it call for a revision of that chart. For a normally distributed quality characteristic X, the sample mean X̄ and the sample variance S² are independently distributed, and hence such an action is justified.

For the YARNCOUNT data, all the points lie within the control limits. Hence the standard values of the mean and standard deviation for Phase II charting are set as:

Standard value for the mean: \bar{X}′ = 40.01
Standard value for the standard deviation: \sigma′ = \bar{S}/c_4 = 0.0607.

Using these fixed standard values, X̄ and R charts or X̄ and S charts are drawn using the Standards Known control limit constants. These charts are for future use, i.e. for real time or Phase II control. That is, the production process is sampled for quality monitoring, with subgroups formed one by one; as soon as a subgroup is formed, the corresponding point is plotted on the standards-known control charts.

If any later subgroup is found to be associated with a special cause, that subgroup must be disregarded when revising the standards, which is usually done once 50 or 100 subgroups have been taken.

7.6 Control chart for Individual Values

For some processes, such as chemical processes, it is not possible to define a subgroup in terms of a discrete number of n items on which measurements can be made. For example, a bulk (composite) sample of milk powder taken at a given time will yield only one measured value of a quality characteristic such as percentage milk fat. So for processes where "one at a time" data are collected, the control chart must be based on individual values. This chart is known as the I-chart, and is essentially an X̄ chart with subgroup size 1.

For I-charts, the process standard deviation can be estimated in two ways. For Phase I analysis, m past individual measurements (X1, X2, ..., Xm) are used to obtain the estimate \hat{\sigma} = S/c_4. The control limits are then set at

\bar{X} \pm 3\frac{S}{c_4}

where \bar{X} = \frac{1}{m}\sum_{i=1}^{m} X_i.

The alternative method of estimating σ is to use the (average) moving range,which is given by


\overline{MR} = \frac{1}{m-1}\sum_{i=2}^{m} MR_i = \frac{1}{m-1}\sum_{i=2}^{m} |X_i - X_{i-1}|.

That is, σ is estimated as

\hat{\sigma} = \frac{\overline{MR}}{d_2}

where d2 is the unbiasing constant corresponding to sample size 2 (since each moving range is based on two successive observations). The control limits are then set at

\bar{X} \pm 3\frac{\overline{MR}}{d_2}.
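The moving-range calculation is straightforward to carry out; a sketch (Python; the viscosity readings here are hypothetical, not the data of Figure 18):

```python
# I-chart limits via the average moving range; d2 = 1.128 because each
# moving range is an artificial "sample" of size 2 (successive pairs).
xs = [33.8, 34.0, 33.5, 33.9, 34.2, 33.7, 33.6, 34.1]  # hypothetical data

mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]  # the m - 1 moving ranges
mr_bar = sum(mrs) / len(mrs)                   # average moving range
x_bar = sum(xs) / len(xs)                      # central line

d2 = 1.128
sigma_hat = mr_bar / d2
lcl = x_bar - 3 * sigma_hat
ucl = x_bar + 3 * sigma_hat
```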

Figure 18 shows a typical I-Chart for monitoring viscosity of a chemical.

[Figure 18: I-Chart for Viscosity. Individual viscosity values plotted in time order, with the LCL and UCL marked.]

While the S method of estimation is preferred to the moving range method on statistical grounds, it poses a difficulty in Phase II monitoring: we cannot compute a standard deviation from a single datum. Hence moving ranges are plotted to obtain a moving range chart. The formulae and control limit constants given in Table 8 apply equally to moving range charts.

41

7.7 Time-weighted Charts

Shewhart control charts are useful for quickly detecting sudden big shifts in a production process, but they are not sensitive to small shifts in the process level. Supplementary run rules improve the sensitivity of Shewhart control charts for detecting small process changes. More advanced control charting procedures, which give time varying weights to the observations, are more powerful still for detecting small changes in the process level. The following control charts are suitable when higher sensitivity is desired:

1. Moving Average (MA) charts: These charts are based on the control statistic

M_t = \frac{\bar{X}_{t-w+1} + \bar{X}_{t-w+2} + \cdots + \bar{X}_{t-1} + \bar{X}_t}{w}

where w is the length or span of the moving average, M_t is the moving average of span w (> 0) at time t, and \bar{X}_t is the current subgroup average.

2. Exponentially Weighted Moving Average (EWMA) Charts: The exponentially weighted moving average (EWMA) is defined as

Zt = λXt + (1− λ) Zt−1

where λ (0 < λ ≤ 1) is a constant. The charting procedure will be basedon the EWMA.

3. CUSUM charts: The cumulative sum (CUSUM) charts accumulate the deviations from the target level, after allowing for some slack due to common causes. If only common causes are present, the deviations from the target will be both positive and negative, and hence the CUSUM will not grow large. If the CUSUM does grow large, it is an indication that the process level has shifted due to special causes. A decision limit, analogous to a control limit, is therefore set for the CUSUM.
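The EWMA recursion above is simple to implement; a sketch (Python, with a hypothetical starting value z0 set to the process target):

```python
# EWMA: Z_t = lam * X_t + (1 - lam) * Z_{t-1}, started at a target z0.
def ewma(xs, lam=0.2, z0=0.0):
    zs = []
    z = z0
    for x in xs:
        z = lam * x + (1 - lam) * z
        zs.append(z)
    return zs

# With lam = 0.5 and target 40.0, three subgroup means give:
print(ewma([40.0, 40.0, 41.0], lam=0.5, z0=40.0))  # [40.0, 40.0, 40.5]
```

Small λ gives long memory (good for detecting small sustained shifts); λ = 1 reduces the EWMA to the ordinary Shewhart statistic.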

8 Attribute Control Charts

By the attribute method we mean the measurement of quality by noting the presence or absence of some characteristic or property in each of the units, and counting how many units do not possess the quality characteristic or property, or simply counting how many times predefined events of nonconformance occurred. The advantage of the attribute method is that a single chart can be set up for several characteristics, whereas a variables chart must be set up for each characteristic, with an accompanying chart for controlling variability.


8.1 p-Chart for Fraction Nonconforming

This chart is also known as the proportion chart; if percentages are used instead of proportions, the p-chart becomes a percent chart. The p-chart configuration is intended to evaluate the process in terms of the proportion or fraction of nonconforming units. A unit may be classified as nonconforming based on predefined classification events such as the breach of a specification, a go/no-go gauge, judgement, etc., or in the presence of a nonconformity, defect, blemish, or the presence or absence of some characteristic. The classification may also be based on several characteristics.

Let p stand for the true fraction nonconforming of the process and \hat{p} be the sample fraction nonconforming, computed as the ratio of the number of nonconforming units d to the sample size n; that is, \hat{p} = d/n. Commonly d follows a binomial distribution with parameters n and p, i.e.

P(d) = \binom{n}{d} p^{d}(1-p)^{n-d}, \quad d = 0, 1, 2, \ldots, n.

The mean and variance of \hat{p} are p and p(1-p)/n respectively. If the true value of p is known, the control limits become

p \pm 3\sqrt{\frac{p(1-p)}{n}}

with the central line at p. Here p could be a standard value p′.

Suppose that the true fraction nonconforming is unknown. As usual, it is assumed that the total number of units tested from the process is subdivided into m rational subgroups consisting of n1, n2, ..., nm units respectively, and a value of the proportion defective is computed for each subgroup. For convenience, assume that the subgroup sizes are all equal to n. If di is the number of defectives found in the ith subgroup, then the estimate of p from that subgroup is \hat{p}_i = d_i/n. The average of the \hat{p}_i values is

\bar{p} = \frac{\sum_{i=1}^{m} d_i}{mn}.

The control limits are set at

\bar{p} \pm 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n}}.

For example, consider the data given in Table 9 on the number of defectives in 50 subgroups of 100 resistors each, drawn from a process. The value of \bar{p} is 0.01. The control limits are found as

0.01 \pm 3\sqrt{\frac{0.01(1-0.01)}{100}}

or 0 to 0.03985. If the computed value of the LCL is negative, it is set at zero; this means that no 'control' is exercised to detect quality improvement. Figure 19 provides the p-chart for these data. Table 9 also gives the \hat{p}_i values needed for plotting the p-chart.
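The limits can be checked as follows (an illustrative Python sketch):

```python
from math import sqrt

# p-chart limits for the resistor data: p_bar = 0.01, n = 100.
p_bar, n = 0.01, 100

half_width = 3 * sqrt(p_bar * (1 - p_bar) / n)
lcl = max(0.0, p_bar - half_width)   # a negative LCL is set to zero
ucl = p_bar + half_width
print(round(lcl, 5), round(ucl, 5))  # 0.0 0.03985
```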

Table 9: Nonconforming resistors in various subgroups

i    di   p̂i      i    di   p̂i

1 0 0.00 26 0 0.00

2 0 0.00 27 0 0.00

3 2 0.02 28 1 0.01

4 0 0.00 29 0 0.00

5 1 0.01 30 0 0.00

6 0 0.00 31 1 0.01

7 2 0.02 32 3 0.03

8 2 0.02 33 0 0.00

9 1 0.01 34 1 0.01

10 1 0.01 35 2 0.02

11 0 0.00 36 2 0.02

12 2 0.02 37 0 0.00

13 1 0.01 38 2 0.02

14 1 0.01 39 2 0.02

15 1 0.01 40 1 0.01

16 0 0.00 41 1 0.01

17 0 0.00 42 1 0.01

18 0 0.00 43 3 0.03

19 2 0.02 44 2 0.02

20 3 0.03 45 1 0.01

21 1 0.01 46 1 0.01

22 2 0.02 47 0 0.00

23 0 0.00 48 0 0.00

24 1 0.01 49 0 0.00

25 1 0.01 50 2 0.02

If the subgroup sizes are unequal, then p is estimated as

\bar{p} = \frac{\sum d_i}{\sum n_i}

and the (varying) control limits are given by

\bar{p} \pm 3\sqrt{\frac{\bar{p}(1-\bar{p})}{n_i}}.

Alternatively, an 'average' sample size \bar{n} = \frac{1}{m}\sum n_i can be used.


[Figure 19: p-Chart for Resistor Defectives. Subgroup proportions defective plotted against subgroup number, with the LCL and UCL marked.]

8.1.1 Choice of Subgroup Size

Sometimes it may be desirable to have a lower control limit greater than zero, in order to look for samples that contain no defectives or to detect quality improvement. The LCL is positive only when p - 3\sqrt{p(1-p)/n} > 0, i.e. when n > 9(1-p)/p, so if p is small the subgroup size must be very large. For example, for p = 0.01 the subgroup size must exceed 891 for the LCL to be greater than zero. Such large subgroup sizes are not practical, and hence supplementary run tests based on several subgroups are employed to detect quality improvement.
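The quoted threshold follows from solving the LCL inequality for n; a sketch:

```python
# Smallest subgroup size giving a positive LCL on a p-chart:
# p - 3*sqrt(p*(1 - p)/n) > 0  is equivalent to  n > 9*(1 - p)/p.
def n_threshold(p):
    return 9 * (1 - p) / p

print(round(n_threshold(0.01)))  # 891 (n must exceed this value)
```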

8.2 np-chart

The np-chart is essentially a p-chart, the only difference being that the observed number of defectives is plotted directly instead of the observed proportion defective. If p is the proportion defective, then d, the number of defectives in a subgroup of size n, follows a binomial distribution with expected value np and standard deviation \sqrt{np(1-p)}. Here p could be a standard value for Phase II control charting. When no standards are available, one uses the estimate \bar{p} and draws the control limits at

n\bar{p} \pm 3\sqrt{n\bar{p}(1-\bar{p})},

with the central line at n\bar{p}. The OC function of the np-chart is similar to that of the p-chart; on the X-axis one plots d values instead of p values.


8.3 c-chart for Counts

By the term area of opportunity we mean a unit or a portion of material, process, product or service in which one or more predefined events can occur. The term is synonymous with the term unit, and is usually preferred where there is no natural unit, e.g. a continuous length of cloth.

By defect, we mean the departure of a characteristic from its prescribed specification level that renders the product or service unfit to meet normal usage requirements. By nonconformity, we mean a product which may not meet the specification requirements but may still meet the usage requirements. For example, dirt in a block of cheese is a defect, but underweight is just a nonconformity.

By c, or count, we mean the total number of predefined events occurring in a given area of opportunity (sample). The c-chart or count chart is a configuration designed to evaluate the process in terms of such a count of events (e.g. the count of defects or nonconformities occurring in a sample). Note that while counting the events, no classification of units as conforming or nonconforming occurs; if units are so classified, the relevant chart is the p-chart.

We assume that the number of nonconformities d follows a Poisson distribution whose mean and variance are both equal to the parameter c. That is, d follows

p(d) = \frac{e^{-c}c^{d}}{d!}, \quad d = 0, 1, 2, \ldots \ (c > 0).

Since the mean and the variance of d are both equal to c, the 3-sigma control limits for the count d are given by

c \pm 3\sqrt{c},

the central line being c. If the LCL is less than zero, it is set at zero. Here c could be a standard value; in its absence, c is estimated by the average number of nonconformities per sample, say \bar{c}, and the control limits are set at

\bar{c} \pm 3\sqrt{\bar{c}}.

Consider Table 10, showing the number of nonconformities observed in 20 subgroups of five cellular phones each. The value of \bar{c} is 84/20 = 4.2. The control limits are then found as

4.2 \pm 3\sqrt{4.2}

or 0 to 10.4, with the central line at 4.2. The total number of defects found in each subgroup is then plotted on the c-chart shown as Figure 20. When a c-chart signals a special cause, further analysis using a cause and effect diagram may be required.
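A quick check of the c-chart limits (an illustrative Python sketch):

```python
from math import sqrt

# c-chart limits for the cellphone data: 84 nonconformities in 20 subgroups.
c_bar = 84 / 20   # 4.2

lcl = max(0.0, c_bar - 3 * sqrt(c_bar))   # a negative LCL is set to zero
ucl = c_bar + 3 * sqrt(c_bar)
print(round(lcl, 2), round(ucl, 2))  # 0.0 10.35
```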


Table 10: Number of Nonconformities in various subgroups

subgroup d subgroup d subgroup d subgroup d subgroup d

1 3 5 0 9 1 13 1 17 1

1 0 5 1 9 1 13 2 17 1

1 1 5 2 9 1 13 1 17 2

1 1 5 2 9 2 13 2 17 0

1 0 5 1 9 2 13 1 17 2

2 1 6 0 10 0 14 0 18 2

2 0 6 0 10 1 14 0 18 0

2 0 6 2 10 0 14 2 18 1

2 1 6 1 10 1 14 1 18 2

2 0 6 2 10 0 14 0 18 2

3 0 7 0 11 3 15 1 19 0

3 2 7 0 11 0 15 2 19 0

3 0 7 0 11 2 15 1 19 2

3 1 7 0 11 0 15 1 19 0

3 2 7 0 11 1 15 1 19 4

4 2 8 1 12 1 16 0 20 1

4 0 8 0 12 0 16 1 20 0

4 0 8 1 12 0 16 0 20 0

4 0 8 0 12 0 16 0 20 1

4 2 8 0 12 0 16 1 20 0

[Figure 20: c-Chart for Cellphone Defects. Number of defects per subgroup plotted against subgroup number, with the LCL and UCL marked.]


8.4 u-chart

The u-chart, or count per unit chart, is a configuration to evaluate the process in terms of the average number of predefined events per unit area of opportunity. The u-chart is convenient for a product composed of units whose inspection covers more than one characteristic, such as dimensions checked by gauges, other physical characteristics noted by tests, and visual defects observed by eye. Under these conditions, independent defects may occur in one unit of product, and a preferred quality measure is to count all defects observed and divide by the number of units inspected, giving a value for defects per unit (rather than a value for the fraction defective). Here only independent defects are to be counted. The u-chart is particularly useful for products such as textiles, wire and sheet materials, which are continuous and extensive: the opportunity for defects or nonconformities is large even though the chance of a defect at any one particular spot is small.

The total number of units tested is subdivided into m rational subgroups of size n each. Here n can be fractional. For each subgroup, a value of u, the defects per unit, is computed. The average number of defects per unit is found as

\bar{u} = \frac{\text{total number of defects in all samples}}{\text{total number of units in all samples}}.

Assuming that the number of defects follows the Poisson distribution, the control limits of the u-chart are given by

\bar{u} \pm 3\sqrt{\frac{\bar{u}}{n}}.

For unequal subgroups, \bar{u} is found as

\bar{u} = \frac{\sum n_i u_i}{\sum n_i}

where ni is the ith subgroup size and ui is the number of defects per unit in the ith subgroup. Here n1, n2, ... need not be whole numbers; e.g. the length of cloth inspected may be 2.4 m. The control limits are set at

\bar{u} \pm 3\sqrt{\frac{\bar{u}}{n_i}}.

The u-chart for the cellular phone data is given as Figure 21.

9 Acceptance Sampling

Acceptance sampling is the methodology by which decisions to accept or not accept a lot (or, more usually, a series of lots) are based on the results of the inspection of samples. Acceptance sampling is preferred when:


[Figure 21: u-Chart for Cellphone Defects. Defects per cellphone plotted against subgroup number, with the LCL and UCL marked.]

• Testing is destructive.

• Cost and time for 100% inspection are high.

• Less handling of the product is necessary, e.g. when handling can cause degradation of the product.

• There are limitations of work force.

• Serious product liability risks exist.

• At the pre-shipment and receiving inspection stages.

The disadvantage of acceptance sampling is the risk of accepting bad lots and rejecting good lots. Acceptance sampling applied to the final product simply accepts or rejects lots, and hence does not provide any direct form of quality improvement. Prof. Dodge, the originator of acceptance sampling, therefore stressed that one cannot inspect quality into a product.

An acceptance sampling plan is a specific plan that clearly states the rules for sampling and the associated criteria for acceptance or otherwise. Acceptance sampling plans can be applied not only to end items but also to the inspection of (i) components, (ii) raw materials, (iii) operations, (iv) materials in process, (v) supplies in storage, (vi) maintenance operations, (vii) data or records and (viii) administrative procedures. Acceptance sampling is also commonly employed for safety related inspection by governmental departments, particularly when goods are imported.


9.1 Single Sampling Attributes Plan (n, Ac)

The operating procedure of the single sampling attributes plan is as follows:

• From a lot of size N, draw a random sample of size n and observe the number of nonconforming units (nonconformities) d.

• If d is less than or equal to the acceptance number Ac (the maximum allowable number of nonconforming units or nonconformities), accept the lot. If d > Ac, do not accept the lot.

The symbol Re (= Ac + 1) is used to denote the rejection number. Non-acceptance does not always imply rejection of the batch: actions such as salvaging, scrapping, screening or rectifying inspection may follow instead of total rejection of the batch.

The acceptance quality limit (AQL, also called the Acceptable Quality Level in older literature) is the maximum percentage or proportion of nonconforming units (or nonconformities) in a lot that can be considered satisfactory for the purpose of acceptance sampling. When a consumer designates some specific value of AQL, the supplier or producer is notified that the consumer's acceptance sampling plan will accept most of the produced lots submitted by the supplier, provided the process average of these lots is not greater than the designated value of AQL. It must be understood that the specification of an AQL is only for sampling purposes, and is not a licence to knowingly allow nonconforming items.

Suppose that an error-free 100% inspection is done to observe the true fraction nonconforming p of each lot. Then all lots with p ≤ AQL will be accepted and all lots with p > AQL will not be accepted. This ideal situation is shown graphically in Figure 22 for an AQL of 0.1%.

Due to sampling, one faces the risk of not accepting lots of AQL quality as well as the risk of accepting lots of poorer than AQL quality. One is therefore interested in knowing how an acceptance sampling plan will accept or not accept lots over various lot qualities. A curve showing the probability of acceptance over various lot or process qualities is called the operating characteristic (OC) curve, discussed in the next section.

9.2 Operating Characteristic (OC) Curve

The OC curve reveals the performance of a sampling inspection plan in discrim-inating good and bad lots. There are two types of OC curves:

Type A: (For isolated or unique lots) This is a curve showing the probabilityof accepting a lot as a function of the lot quality.

Type B: (For a continuous stream of lots) This is a curve showing the probability of accepting a lot as a function of the process average. That is, the Type B OC curve gives the proportion of lots accepted as a function of the true process fraction nonconforming p.

[Figure 22: Ideal OC Curve. The probability of acceptance is 1 for fraction nonconforming p ≤ AQL (here 0.001) and drops to 0 beyond it.]

A typical OC curve is shown in Figure 23.

9.2.1 OC Function of a Single Sampling Plan

The OC function of the single sampling attributes plan, giving the probability of acceptance for a given lot or process quality p, is

P_a = P_a(p) = \Pr(d \le Ac \mid n, Ac, p).

For Type A situations, the hypergeometric distribution is exact for the case of nonconforming units. Hence

P_a(p) = \Pr(d \le Ac \mid N, n, Ac, p) = \sum_{d=0}^{Ac} \frac{\binom{D}{d}\binom{N-D}{n-d}}{\binom{N}{n}}

where N is the lot size and D is the number of defectives in the lot, so that the lot fraction nonconforming is p = D/N.


[Figure 23: A typical OC Curve. Probability of acceptance plotted against fraction nonconforming p.]

For Type B situations, the binomial model is exact for the case of fraction nonconforming units, and the OC function is given by

P_a(p) = \Pr(d \le Ac \mid n, Ac, p) = \sum_{d=0}^{Ac} \binom{n}{d} p^{d}(1-p)^{n-d}.

This OC function applies to a continuous stream of lots, and can also be used as an approximation to the Type A situation when N is large compared to n (n/N < 0.10) and p is small.

For the case of nonconformities per unit, the Poisson model is exact for both Type A and Type B situations. The OC function in this case is

P_a(p) = \Pr(d \le Ac \mid n, Ac, p) = \sum_{d=0}^{Ac} \frac{e^{-np}(np)^{d}}{d!}.

The Poisson OC function is also used as an approximation to the binomial when n is large and p is small, such that np < 5. In general, if one approximates the hypergeometric OC function by a binomial or Poisson OC function, the probability of acceptance will be underestimated at good quality levels and overestimated at poor quality levels. The same is true when the binomial OC function is approximated by the Poisson OC function. This is shown graphically in Figure 24, assuming N = 200, n = 20 and Ac = 1.
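The three OC functions can also be compared numerically. The sketch below (Python, illustrative only) evaluates them for the plan of Figure 24 (N = 200, n = 20, Ac = 1) at p = 0.05, i.e. D = 10 defectives in the lot:

```python
from math import comb, exp, factorial

def pa_hyper(N, n, Ac, D):
    """Type A OC value: hypergeometric probability P(d <= Ac)."""
    return sum(comb(D, d) * comb(N - D, n - d)
               for d in range(Ac + 1)) / comb(N, n)

def pa_binom(n, Ac, p):
    """Type B OC value: binomial probability P(d <= Ac)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(Ac + 1))

def pa_poisson(n, Ac, p):
    """Poisson approximation to the OC value."""
    return sum(exp(-n * p) * (n * p)**d / factorial(d)
               for d in range(Ac + 1))

N, n, Ac, D = 200, 20, 1, 10
p = D / N  # 0.05
print(round(pa_hyper(N, n, Ac, D), 3),
      round(pa_binom(n, Ac, p), 3),
      round(pa_poisson(n, Ac, p), 3))
```

(`math.comb` requires Python 3.8 or later.) Evaluating the functions over a grid of p values reproduces the curves of Figure 24.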

[Figure 24: Comparison of OC Curves. Poisson, binomial and hypergeometric OC curves for N = 200, n = 20, Ac = 1.]

9.2.2 Effect of n and Ac on the OC curve

It can be observed that when n is held constant and Ac is increased, P_a(p) increases; when Ac is held constant and n is increased, P_a(p) decreases (see Figures 25 and 26). These two properties are useful for designing a sampling plan with the desired discrimination.

Suppose that a company manufacturing cheese continuously supplies its production to two supermarkets. Supermarket A requires cheese in lots of 5000 units and supermarket B prefers lots of size 10000. Assume that the producer uses two sampling plans with sample size equal to the square root of the lot size, and that for both plans the acceptance number is fixed at one. The company's true fraction of nonconforming cheeses produced is 1%, which is considered the acceptable quality level by the consuming supermarkets. The effectiveness of the


[Figure 25: Effect of Acceptance Number on OC Curve. OC curves for the plans n = 50, Ac = 1 and n = 50, Ac = 2.]

[Figure 26: Effect of Sample Size on OC Curve. OC curves for the plans n = 50, Ac = 1 and n = 100, Ac = 1.]


sampling plans (plan for supermarket A: n = 71, Ac = 1; plan for supermarket B: n = 100, Ac = 1) is revealed by the respective OC curves shown in Figure 27.

[Figure 27: Comparison of OC Curves for a Given AQL. OC curves for the plans n = 71, Ac = 1 and n = 100, Ac = 1, with the AQL (1%) marked.]

The proportions of lots of AQL quality accepted by the two plans are:

• Pa(AQL) = 84% for n = 71, Ac = 1 plan

• Pa(AQL) = 74% for n = 100, Ac = 1 plan.
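These percentages follow from the binomial OC function; a quick check (illustrative Python):

```python
from math import comb

def pa(n, ac, p):
    """Type B OC function: P(d <= ac) with d ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(ac + 1))

aql = 0.01
print(round(pa(71, 1, aql), 2))   # 0.84, plan for Supermarket A
print(round(pa(100, 1, aql), 2))  # 0.74, plan for Supermarket B
```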

It is also evident that the plan used for Supermarket B is tighter than the plan used for Supermarket A. For a fixed acceptance number, an increase in sample size means a tightening of inspection. It is always desired that the probability of acceptance at the AQL be high, such as 95%. Neither plan has a high Pa at the AQL. The plan n = 71, Ac = 1 is preferable to the plan n = 100, Ac = 1 since Pa(AQL) = 84% is closer to 95%. The manufacturer is regularly supplying cheese to both supermarkets. Under the Type B situation of a series of lots being submitted, the lots are themselves viewed as random samples from the process producing cheese. One therefore need not sample in relation to the lot size. If it is desired to encourage large lot sizes, then the acceptance number should be accordingly adjusted so that the Pa at the AQL is higher for large lot sizes.

Arguments in favour of the n = 100, Ac = 1 plan can also be given. The consuming supermarkets must be protected against bad quality lots. For example, lots having 5% nonconforming cheeses may be required to be rejected with a large probability to protect the consumer interests. The plan n = 100, Ac = 1 is tighter and has a smaller probability of acceptance at the rejectable quality level, namely 5% nonconforming (see Figure 28).

Figure 28: Comparison of OC Curves for a Given LQL (n = 71, Ac = 1 versus n = 100, Ac = 1, with the LQL marked)

Thus it is seen that it is worthwhile to prescribe an additional index for consumer protection, since the AQL does not completely describe the protection to the consumer. Hence, for consumer protection against bad quality lots, the Limiting Quality Level (LQL), or simply the limiting quality (LQ), is defined as the percentage or proportion of nonconforming units in a lot for which the consumer wishes the probability of acceptance to be restricted to a (specified) low value.

The producer's risk (α) is the probability of not accepting a lot of AQL quality, and the consumer's risk (β) is the probability of accepting a lot of LQL quality. Generally the parameters of the (single) sampling plan must be determined for given quality levels, namely the AQL and LQL, and risks α and β. Figure 29 shows the quality indices AQL and LQL and the associated risks α (= 5%) and β (= 10%) respectively on a typical OC curve. In practice, the sample size and the acceptance number will be chosen for given producer's and consumer's risk points (i.e. AQL, LQL, α and β).
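One way to choose n and Ac for given risk points is a simple search over candidate plans. The sketch below is illustrative (binomial OC function assumed; design_plan and the n_max cap are illustrative names, not from the text):

```python
from math import comb

def pa(n, ac, p):
    """Type B OC function: P(d <= ac) with d ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(ac + 1))

def design_plan(aql, lql, alpha=0.05, beta=0.10, n_max=500):
    """Smallest-n single plan with Pa(aql) >= 1 - alpha and Pa(lql) <= beta."""
    for n in range(1, n_max + 1):
        for ac in range(n + 1):
            if pa(n, ac, aql) >= 1 - alpha:
                # smallest ac meeting the producer's risk; a larger ac would
                # only raise Pa(lql), so test this one and move to a larger n
                if pa(n, ac, lql) <= beta:
                    return n, ac
                break
    return None

n, ac = design_plan(0.01, 0.05)
print(n, ac)
```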

Figure 29: OC Curve Showing AQL, LQL, α and β (the curve passes through the producer's risk point (AQL, 0.95), with α = 0.05, and the consumer's risk point (LQL, 0.1), with β = 0.1)

9.2.3 Average Sample Number (ASN)

The average sample number (ASN) is defined as the average number of sample units per lot needed for deciding acceptance or non-acceptance. For a single sampling plan, one takes only a single sample of size n and hence the ASN is simply the sample size n. However, sampling inspection can be curtailed. By curtailed inspection, we mean the stopping of sampling inspection when a decision is certain. Inspection can be curtailed when the rejection number is reached, since rejection is then certain and no further inspection is necessary in reaching that decision. Such curtailment of inspection for rejecting a lot is known as semi-curtailed inspection. If inspection is curtailed once either acceptance or rejection is evident, then it is known as fully curtailed inspection. For example, consider the single sampling plan with n = 50 and Ac = 1. Let the sample be randomly drawn and testing of units take place unit by unit. One can curtail inspection, rejecting the lot, as early as the second unit if the first two units are nonconforming. Similarly, if all of the first 49 units are found to be conforming, then the lot can be accepted without testing the last unit.

Generally it is undesirable to curtail inspection in a single sampling plan. The whole sample is usually inspected in order to have an unbiased record of quality history.
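Fully curtailed inspection can be expressed as a short unit-by-unit procedure; a sketch (curtailed_decision is an assumed name; units are coded 1 for nonconforming, 0 for conforming):

```python
def curtailed_decision(units, ac):
    """Fully curtailed inspection of a drawn sample.

    units -- inspection results in draw order (1 = nonconforming)
    Returns (decision, number of units actually inspected)."""
    n, re, d = len(units), ac + 1, 0
    for i, u in enumerate(units, start=1):
        d += u
        if d >= re:              # rejection already certain
            return "reject", i
        if d + (n - i) <= ac:    # even if every remaining unit fails, d stays <= Ac
            return "accept", i

# n = 50, Ac = 1: two early nonconformers reject the lot at the second unit,
# and an all-conforming sample is accepted after the 49th unit
print(curtailed_decision([1, 1] + [0] * 48, 1))   # ('reject', 2)
print(curtailed_decision([0] * 50, 1))            # ('accept', 49)
```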

9.3 Double Sampling Plans

Single sampling plans are simple to use. Very often the producer is at a 'psychological' disadvantage if a single sampling plan is applied to the lots, since no second chance is given for the lots not accepted. In such situations, taking a second sample is preferable.

The operating procedure of the double sampling plan is given in the following steps:

1. First draw a random sample of size n1 and observe the number of nonconforming units (nonconformities) d1.

2. If d1 ≤ Ac1, the first stage acceptance number, accept the lot. If d1 ≥ Re1, the first stage rejection number, reject the lot. If Ac1 < d1 < Re1, go to Step 3.

3. Take a second random sample of size n2 and observe the number of nonconforming units (nonconformities) d2. Cumulate d1 and d2, and let D = d1 + d2. If D ≤ Ac2, the second stage acceptance number, accept the lot. If D ≥ Re2 (= Ac2 + 1), reject the lot.

The operating flow diagram for the double sampling plan is given as Figure 30.

Figure 30: Operation of Double Sampling Plan (flow diagram: draw n1 and observe d1; accept if d1 ≤ Ac1, reject if d1 ≥ Re1; if Ac1 < d1 < Re1, draw n2, observe d2 and let D = d1 + d2; accept if D ≤ Ac2, reject if D ≥ Re2)

The double sampling plan described above can be compactly represented as shown in Table 11:

Table 11: Double Sampling Plan

  Stage   Sample Size   Acceptance Number   Rejection Number
    1         n1             Ac1                 Re1
    2         n2             Ac2                 Re2

The double sampling plan has five parameters: n1, n2, Ac1, Re1 and Ac2 (= Re2 − 1). The major advantage of the double sampling plan is that it can be designed to have a smaller ASN when compared to an 'equivalent' single sampling plan having approximately the same OC curve. The double sampling plan is relatively harder to administer than the single sampling plan. If the parameters of the double sampling plan are not properly fixed, it may even be inefficient when compared to a single sampling plan.
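The OC function of a double sampling plan conditions on the first-sample outcome: the lot is accepted either outright on the first sample, or on the combined count after an undecided first sample. A sketch, assuming binomial (Type B) sampling and an illustrative plan (n1 = n2 = 50, Ac1 = 0, Re1 = 3, Ac2 = 3) that is not from the text:

```python
from math import comb

def pmf(n, d, p):
    return comb(n, d) * p**d * (1 - p)**(n - d)

def cdf(n, d, p):
    return sum(pmf(n, k, p) for k in range(d + 1))

def pa_double(n1, n2, ac1, re1, ac2, p):
    """Type B probability of acceptance for a double sampling plan."""
    prob = cdf(n1, ac1, p)               # accepted on the first sample
    for d1 in range(ac1 + 1, re1):       # undecided: second sample drawn
        prob += pmf(n1, d1, p) * cdf(n2, ac2 - d1, p)
    return prob

# illustrative plan: n1 = n2 = 50, Ac1 = 0, Re1 = 3, Ac2 = 3
for p in (0.01, 0.02, 0.05):
    print(p, pa_double(50, 50, 0, 3, 3, p))
```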

9.4 Multiple Sampling Plan

The multiple sampling plan is a natural extension of the double sampling plan. The number of stages in a multiple sampling plan is usually fixed at 7. A multiple sampling plan will require a smaller sample size than a double sampling plan but is more complex to implement. A multiple sampling plan having m stages can be compactly represented as in Table 12:

Table 12: Multiple Sampling Plan

  Stage   Sample Size   Acceptance Number   Rejection Number
    1         n1             Ac1                 Re1
    2         n2             Ac2                 Re2
    3         n3             Ac3                 Re3
    .         .              .                   .
    .         .              .                   .
    m         nm             Acm                 Rem (= Acm + 1)

Let Di = d1 + d2 + ... + di, i = 1, 2, ..., m. The lot is accepted if Di ≤ Aci and rejected if Di ≥ Rei, i = 1, 2, ..., m. Further sampling of ni+1 units is carried out if Aci < Di < Rei. It is also usual in multiple sampling not to allow acceptance in the initial stages whenever small sample sizes are employed. For administrative reasons, it is also usual to fix n1 = n2 = ... = nm and m = 7. The generalisation of the multiple sampling plan is known as the sequential sampling plan, where items are tested one by one. The sequential plan is harder to use in practice due to administrative difficulty and is preferable only when testing is costly or destructive.

9.5 International Standards for Sampling Inspection

If the quality history is good, as evidenced by past lot acceptances, it is desirable to reduce the amount of sampling. If the quality history is found to be bad (more lot rejections), then it is essential to tighten the inspection, either by increasing the sample size or by reducing the acceptance numbers. In such situations, the use of more than one sampling plan is desirable, with associated rules for using each of the plans. Most acceptance sampling Standards formulated by the International Organization for Standardization (ISO) incorporate such switching rules and provide comprehensive sampling schemes. A sampling scheme is a specific set of procedures which usually consists of two or three individual acceptance sampling plans in which lot sizes, sample sizes, and acceptance criteria, or the amount of 100% inspection, etc. will be related.

There are several ISO Standards for sampling inspection. These Standards also provide the OC and ASN properties of the tabulated sampling schemes. They also cover the variables method of sampling, where a measurement is made on a continuous scale on each unit of inspection, as against the attribute method. The following are popular sampling standards used in industry.

1. ISO 2859-0, Sampling procedures for inspection by attributes – Part 0: Introduction to the ISO 2859 attribute sampling system

2. ISO 2859-1, Sampling procedures for inspection by attributes – Part 1: Sampling schemes indexed by acceptance quality limit (AQL) for lot-by-lot inspection

3. ISO 2859-2, Sampling procedures for inspection by attributes – Part 2: Sampling plans indexed by limiting quality (LQ) for isolated lot inspection

4. ISO 2859-3, Sampling procedures for inspection by attributes – Part 3: Skip-lot sampling procedures

5. ISO 2859-4, Sampling procedures for inspection by attributes – Part 4: Procedures for assessment of declared quality levels

6. ISO 3951-1, Sampling procedures for inspection by variables – Part 1: Specification for single sampling plans indexed by acceptance quality limit (AQL) for lot-by-lot inspection for a single quality characteristic and a single AQL

9.6 Summary

Quality is a latent variable and need not always imply excellence. How far the needs of customers, and of society in general, are met in a cost-effective manner is the operational way to assess quality.

Statistical methods are important for understanding the quality of products (or services), its measurement and its improvement. Quality is inversely proportional to variability, which can be expressed only in statistical terms. So to improve quality, we must reduce the variability involved.

Statistical thinking is important in understanding a process and for isolating the key variables causing variation. Experimental designs play a key role in achieving the optimum settings for controllable variables in a production process. New nuisance variables and unusual special cause conditions may arise during production, affecting quality. Hence control charts are employed to ensure that a state of statistical control exists during production, in order to hold the gains.

Variables charts consider important quality characteristics measurable on a continuous scale. The X̄ chart is used to monitor the process mean or level. S or R charts accompany the X̄ chart to monitor increases or decreases in common cause variability. Attribute control charts such as the p chart are employed to monitor the level of nonconformance for several characteristics, quality attributes and other specification requirements. The sensitivity of 3-sigma limits is improved by supplementary run tests.

Acceptance sampling provides quality assurance. Sampling plans such as single sampling plans are employed not only in the disposition of the final product but also for procurement quality assurance.

Quality must be built into products and services because it provides a competitive edge and higher profitability. Statistics plays a useful role along with engineering, management, psychology and other disciplines in achieving quality.

9.7 References

ASQ Statistics Division. (2004). Glossary and Tables for Statistical Quality Control, ASQ Quality Press, Milwaukee, Wisconsin, USA.

Montgomery, D. C. (1996). Introduction to Statistical Quality Control, Third Edition, John Wiley & Sons, New York, NY.

Shewhart, W. A. (1931). Statistical Method from an Engineering Viewpoint, Journal of the American Statistical Association, 26, pp. 262-269.

Sohn, H. S. and Park, T. W. (2006). Process Optimization for the Improvement of Brake Noise: A Case Study, Quality Engineering, 18, pp. 131-143.

Exercises

11.1 How would you provide numerical measures of the quality of the service provided by the following?

a. Postal Mail

b. A university canteen

11.2 A city council used to distribute paper garbage bags annually to its rate-payers. It decided to replace the paper bags with plastic ones. The plastic bags are cheaper and thinner compared to the (heavier) paper bags. Experimental studies showed that both types of bags degrade at approximately the same time.

Samples of bags submitted by a few plastic bag manufacturers were inspected to short-list a supplier. An order was then placed with a supplier to manufacture the plastic bags in packets of size 52. The manufacturer supplied the plastic bag packets in large batches over time, which were then distributed to the rate-payers (without any batch-by-batch inspection).

A number of rate-payers complained about the quality of the plastic garbage bags supplied to them. The main complaints were (i) the plastic bags were not strong enough to hold the usual amount and type of waste and (ii) some packets contained fewer than 52 bags and hence were insufficient for a year. It was found that the use of excessive recycled plastic during some production periods caused the strength problems (i.e. splitting etc.). It was claimed that the under-count of bags was a matter of chance and not deliberate.

a. Describe the meaning of the term quality for the plastic bags. What difficulties are involved in comparing the plastic bags with the paper ones? Explain your answer, considering the definition of quality as 'the totality of features and characteristics of a product or service that bear on its ability to satisfy given needs'.

b. What quality measures can be used for the plastic bag quality?

c. Why is Taguchi's philosophy of 'deviation from the target is a loss to society' more appropriate in the context of garbage bag quality?

11.3 In finance, the efficiency of a stock market is assessed based on whether or not the daily returns are randomly distributed. Assume that the normal distribution models the return variation due to common causes. Use the NASDAQ daily index and show graphically how dominant the special and common causes are.

11.4 A company is offering financial incentives to sales personnel based on their share of weekly sales. Does this strategy recognise the existence of common and special causes of variation in sales? Why might this strategy affect staff morale? Discuss.

11.5 Apply the PDSA approach to a common activity such as "Keeping in Touch with Relatives and Friends". Write down all the steps involved and discuss any improvements made. What issues were faced, and which (if any) remain unresolved?

11.6 A textile mill collected data on the quality of yarn at different time points and observed the number of defects due to various causes. The data are shown in Table 13. Draw a Pareto Chart and offer your comments.

Table 13: Yarn Defect Data

  Subgroup   Leas Tested   Count Not Met   Low CSP   Thick Places   Thin Places   Others
      1          100             2             1          0              2           0
      2          100             3             1          1              1           1
      3          100             4             1          2              0           1
      4          100             3             3          4              2           4
      5          100             2             1          0              0           0
      6          100             1             1          0              0           2
      7          100             8             2          1              1           1
      8          100             5             1          1              0           0
      9          100             2             0          0              0           0
     10          100             1             0          0              1           1

11.7 Identify the type of graphical quality tool displayed in Figure 31 and state its uses:

Figure 31: A QC graph

    XYZ Company, PQRS.
    Sampler: __________________   Date: ________
    No. of Gears Inspected: _____   No. of Burrs: ________
    xxx
    xxxxx
    x

11.8 Table 14 gives the data on the number and causes of rejection of metal castings observed in a foundry.

a. Design a Check Sheet which would have enabled the collection of these data.

b. Prepare a Pareto Chart for the causes of poor metal castings and offer your recommendations.

Table 14: Castings defect data

  Day   No. of castings   Sand   Misrun   Shift   Drop   Core break   Broken   Others
   1          20            2       1       0       2        0           0        1
   2          20            3       1       1       1        1           0        0
   3          20            4       1       2       0        1           1        1
   4          20            3       3       4       2        4           0        0
   5          20            2       1       0       0        0           0        0
   6          20            1       1       0       0        2           0        1
   7          20            8       2       1       1        1           0        0
   8          20            5       1       1       0        0           0        0
   9          20            2       0       0       0        0           1        0
  10          20            1       0       0       1        1           1        2
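A Pareto analysis of Table 14 amounts to totalling each cause over the 10 days, ranking the totals, and accumulating their percentage share; a sketch (cause totals computed directly from the table):

```python
causes = ["Sand", "Misrun", "Shift", "Drop", "Core break", "Broken", "Others"]
# daily rejection counts from Table 14 (days 1-10, one row per day)
rows = [
    [2, 1, 0, 2, 0, 0, 1], [3, 1, 1, 1, 1, 0, 0], [4, 1, 2, 0, 1, 1, 1],
    [3, 3, 4, 2, 4, 0, 0], [2, 1, 0, 0, 0, 0, 0], [1, 1, 0, 0, 2, 0, 1],
    [8, 2, 1, 1, 1, 0, 0], [5, 1, 1, 0, 0, 0, 0], [2, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 1, 1, 1, 2],
]
totals = {c: sum(r[i] for r in rows) for i, c in enumerate(causes)}
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
grand = sum(totals.values())
cum = 0
for cause, count in ranked:
    cum += count
    print(f"{cause:10s} {count:3d}  {100 * cum / grand:5.1f}%")
```

Sand alone accounts for about 41% of all rejections, and the top four causes together account for about 80%, which is the usual "vital few" pattern a Pareto Chart displays.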

11.9 Table 15 gives the data (25 samples of size five each, taken at equal time intervals) on the inside diameter of piston rings for an automotive engine produced by a forging process (data from Montgomery, D. C., Introduction to Statistical Quality Control, John Wiley & Sons, Second Edition). This data set is also used in a subsequent exercise. Perform EDA of the retrospective data using the following tools, and discuss whether or not your EDA discovered anything alarming enough to call for an engineering investigation of special causes.

a. histogram

b. run chart

11.10 A large distributing company procures eggs and stores them in thermo-stabilized conditions. For the packaging process of stored eggs, the following quality characteristics were employed.

– WEIGHT (Specification: 65 ± 5 g)

– HAUGH units, an index for the interior quality of eggs (Specification: ≥ 81 units)

– APPEARANCE (visual test: Pass or Fail)

The retrospective data collected on the above variables are given in Table 16.

Table 15: Piston Ring Diameter Data

  Sample   Obs 1    Obs 2    Obs 3    Obs 4    Obs 5
     1     74.030   74.002   74.019   73.992   74.008
     2     73.995   73.992   74.001   74.011   74.004
     3     73.988   74.024   74.021   74.005   74.002
     4     74.002   73.996   73.993   74.015   74.009
     5     73.992   74.007   74.015   73.989   74.014
     6     74.009   73.994   73.997   73.985   73.993
     7     73.995   74.006   73.994   74.000   74.005
     8     73.985   74.003   73.993   74.015   73.988
     9     74.008   73.995   74.009   74.005   74.004
    10     73.998   74.000   73.990   74.007   73.995
    11     73.994   73.998   73.994   73.995   73.990
    12     74.004   74.000   74.007   74.000   73.996
    13     73.983   74.002   73.998   73.997   74.012
    14     74.006   73.967   73.994   74.000   73.984
    15     74.012   74.014   73.998   73.999   74.007
    16     74.000   73.984   74.005   73.998   73.996
    17     73.994   74.012   73.986   74.005   74.007
    18     74.006   74.010   74.018   74.003   74.000
    19     73.984   74.002   74.003   74.005   73.997
    20     74.000   74.010   74.013   74.020   74.003
    21     73.988   74.001   74.009   74.005   73.996
    22     74.004   73.999   73.990   74.006   74.009
    23     74.010   73.989   73.990   74.009   74.014
    24     74.015   74.008   73.993   74.000   74.010
    25     73.982   73.984   73.995   74.017   74.013

Table 16: Egg Quality Data

  Subgroup  Egg weight  Haugh Unit  Appearance    Subgroup  Egg weight  Haugh Unit  Appearance
     1        66.06       83.16       Pass            6        65.11       85.13       Pass
     1        65.92       82.94       Pass            6        63.94       85.68       Pass
     1        63.13       87.19       Pass            6        65.28       85.46       Pass
     1        64.75       87.39       Pass            6        65.07       85.16       Pass
     2        64.39       86.07       Pass            7        64.91       83.98       Pass
     2        64.91       87.20       Pass            7        65.74       83.73       Pass
     2        66.29       85.86       Pass            7        67.11       82.87       Pass
     2        65.25       84.59       Pass            7        64.40       83.83       Pass
     3        65.60       87.05       Pass            8        65.50       86.40       Pass
     3        63.50       88.03       Pass            8        65.61       84.63       Pass
     3        65.67       82.99       Pass            8        64.09       84.66       Pass
     3        66.58       82.93       Pass            8        63.96       83.56       Pass
     4        65.66       86.12       Pass            9        63.44       81.86       Pass
     4        64.41       87.47       Pass            9        64.45       85.17       Pass
     4        65.42       86.37       Pass            9        63.64       88.02       Pass
     4        64.20       86.87       Pass            9        63.30       85.31       Pass
     5        63.62       83.15       Fail           10        67.17       83.67       Pass
     5        65.62       84.94       Pass           10        64.89       83.02       Pass
     5        64.53       87.71       Pass           10        63.81       84.63       Fail
     5        64.99       84.51       Pass           10        65.30       88.00       Pass

a. Draw subgroup-wise boxplots and discuss the variability in the Egg weight and Haugh Unit measurements, considering the specifications.

b. Prepare a scatter plot of Egg weight vs. Haugh Unit and discuss whether these two quality characteristics can be controlled independently.

c. What proportion of eggs passed all three specifications?

11.11 If a process were known to be normally distributed with mean zero and standard deviation one, what values of Cp and Cpk would the process generate if the USL was 3, the LSL was -3 and the target was zero? Generate a column of 40 random observations from a normal distribution with zero mean and standard deviation one. Does the result agree with the theory? Explain why or why not.
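For a normal process, Cp = (USL - LSL)/(6*sigma) and Cpk = min(USL - mu, mu - LSL)/(3*sigma); a sketch of the theoretical values versus sample-based estimates (seed and sample size illustrative):

```python
import random
import statistics

def cp_cpk(mean, sigma, lsl, usl):
    """Process capability: Cp ignores centring, Cpk uses the nearer limit."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# theoretical values for a N(0, 1) process with LSL = -3, USL = 3
print(cp_cpk(0.0, 1.0, -3, 3))  # (1.0, 1.0)

# sample-based estimates from 40 simulated observations (seed illustrative)
random.seed(1)
x = [random.gauss(0, 1) for _ in range(40)]
print(cp_cpk(statistics.mean(x), statistics.stdev(x), -3, 3))
```

The sample-based estimates will generally differ from the theoretical (1.0, 1.0) because the mean and standard deviation are themselves estimated from only 40 observations.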

11.12 Obtain the relevant process capability measures for the Egg weight and Haugh Unit quality characteristics (Table 16 data).

11.13 An automobile manufacturer is interested in controlling the journal diameter of a rear wheel axle to dimensions 49.995 to 50.005 mm. The data given in Table 17 were collected from the automatic grinding machine used to manufacture the wheel axle. A machine adjustment was required following the 14th subgroup.

Table 17: Journal diameter data

  Subgroup   Time      Obs 1    Obs 2    Obs 3    Obs 4
      1      9AM      50.006   49.995   50.001   49.999
      2      9:30AM   50.007   49.999   50.000   50.000
      3      10AM     49.999   50.006   50.001   49.997
      4      10:30AM  49.995   50.000   49.994   49.998
      5      11AM     49.996   49.994   50.004   50.000
      6      11:30AM  49.996   49.999   49.999   50.002
      7      12Noon   50.003   50.002   49.999   50.004
      8      12:30PM  50.000   50.001   50.004   49.998
      9      1PM      50.003   49.999   49.996   49.995
     10      1:30PM   50.003   50.000   49.999   50.001
     11      2PM      50.000   49.999   50.002   50.004
     12      2:30PM   50.002   50.004   50.001   49.997
     13      3PM      49.997   49.997   49.999   49.999
     14      3:30PM   49.990   49.997   49.994   49.994
     15      4PM      50.001   49.995   49.995   49.995
     16      4:30PM   50.000   49.999   49.995   49.999
     17      5PM      49.998   50.003   49.999   49.995
     18      5:30PM   49.994   49.997   49.998   49.998

a. Obtain the subgroup means, ranges and standard deviations.

b. Obtain the estimate of the common cause sigma using the subgroup ranges and standard deviations (i.e., R̄/d2 and S̄/c4 respectively).

c. Obtain the X̄ chart control limits using the R̄/d2 estimate.

d. Obtain the X̄ chart control limits using the S̄/c4 estimate.

e. Obtain the R chart control limits.

f. Obtain the S chart control limits.

g. Draw the X̄ and R charts and interpret the signals, if any.

h. Draw the X̄ and S charts and interpret the signals, if any.

i. Discuss whether or not the machine adjustment made at about 3:30 PM was indeed in order.

j. Why can the above Phase I control charts not be used for future monitoring?
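Trial (Phase I) X-bar and R chart limits follow from the subgroup statistics and the tabled Shewhart constants for n = 4 (d2 = 2.059, A2 = 0.729, D3 = 0, D4 = 2.282); a sketch using only the first three subgroups of Table 17 for illustration (a real analysis would use all subgroups):

```python
import statistics

# tabled Shewhart constants for subgroup size n = 4
d2, A2, D3, D4 = 2.059, 0.729, 0.0, 2.282

def xbar_r_limits(subgroups):
    """Phase I trial limits for the X-bar and R charts (subgroups of size 4)."""
    xbars = [statistics.mean(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)
    return {
        "sigma_hat": rbar / d2,                                       # Rbar/d2
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),  # (LCL, CL, UCL)
        "r": (D3 * rbar, rbar, D4 * rbar),                            # (LCL, CL, UCL)
    }

# first three subgroups of Table 17, for illustration only
demo = [[50.006, 49.995, 50.001, 49.999],
        [50.007, 49.999, 50.000, 50.000],
        [49.999, 50.006, 50.001, 49.997]]
print(xbar_r_limits(demo))
```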

11.14 Consider the piston ring diameter data (Table 15). Treat the first 20 subgroups for the Phase I analysis and the last 5 subgroups for the Phase II analysis.

a. Obtain the subgroup means, ranges and standard deviations.

b. Obtain the estimate of the common cause sigma using the subgroup ranges and standard deviations (i.e., R̄/d2 and S̄/c4 respectively).

c. Obtain the X̄ chart control limits using the R̄/d2 estimate.

d. Obtain the X̄ chart control limits using the S̄/c4 estimate.

e. Obtain the R chart control limits.

f. Obtain the S chart control limits.

g. Draw the X̄ and R charts and interpret the signals, if any.

h. Draw the X̄ and S charts and interpret the signals, if any.

i. Plot the last 5 subgroup data on the X̄ and S charts derived from the Phase I analysis. Interpret the chart.

11.15 A manufacturer of electronic components checks the resistivity of 100 resistors drawn randomly from each production batch. Table 18 shows the number of faulty resistors discovered for 140 batches (read down the columns).

Table 18: Faulty Resistor Data

  2 5 2 1 5 4 3
  5 2 3 2 3 4 1
  4 3 2 2 1 2 2
  3 4 3 2 5 5 0
  1 4 6 8 2 2 3
  3 4 2 2 2 1 1
  2 8 0 3 4 5 4
  5 3 6 1 2 1 1
  8 2 3 4 5 4 3
  2 3 1 3 2 5 4
  2 8 2 2 4 3 6
  3 2 2 3 5 3 1
  3 2 4 1 1 5 3
  3 3 5 6 4 3 2
  3 5 0 4 3 4 5
  6 4 1 2 3 3 1
  2 4 2 3 3 4 2
  0 4 1 2 7 1 1
  4 2 4 6 4 3 3
  1 4 5 1 3 5 0

a. Plot a p-chart for these data.

b. Interpret the p-chart for the presence of any special causes.
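With a constant subgroup size, the p chart limits are pbar +/- 3*sqrt(pbar*(1 - pbar)/n), with the lower limit truncated at zero; a sketch using only the first column of Table 18 (the first 20 batches, n = 100 resistors each) for illustration — a full analysis would use all 140 counts:

```python
def p_chart_limits(counts, n):
    """3-sigma limits for a p chart with constant subgroup size n."""
    pbar = sum(counts) / (len(counts) * n)
    width = 3 * (pbar * (1 - pbar) / n) ** 0.5
    return max(0.0, pbar - width), pbar, pbar + width

# first 20 batches of Table 18 (n = 100 resistors per batch), for illustration
counts = [2, 5, 4, 3, 1, 3, 2, 5, 8, 2, 2, 3, 3, 3, 3, 6, 2, 0, 4, 1]
lcl, centre, ucl = p_chart_limits(counts, 100)
print(lcl, centre, ucl)
```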

11.16 The historical data collected by a foundry engineer on the number of defective castings in 80 castings, sampled randomly from each day's production over a period of 100 days, are given in Table 19 (read down the columns).

Table 19: Castings Defect data

  1 2 3 0 2
  1 0 0 2 2
  0 3 1 0 0
  2 3 1 4 1
  0 0 0 0 2
  2 1 2 2 0
  2 0 3 2 1
  3 1 1 1 1
  1 0 0 1 1
  1 2 0 0 0
  0 1 0 1 1
  2 1 1 1 1
  2 1 0 0 1
  0 0 1 3 0
  1 1 3 2 2
  0 1 1 1 2
  3 1 0 3 2
  0 4 2 1 0
  0 1 3 0 0
  0 1 1 2 2

a. Consider the data for the first 80 days and establish a suitable control procedure for the Phase I analysis.

b. Apply the standard to the last 20 subgroups and interpret the results.

11.17 Table 20 gives the nonconformities (d) observed in the daily inspection of a certain number of disk-drive assemblies (n). Does the process appear to be in control?

Table 20: Disk-drive Assembly data

  Day    n    d
   1    17   13
   2    19   25
   3    17    0
   4    16    7
   5    18   14
   6    19   18
   7    17   10
   8    19   21
   9    18   16
  10    16    3

11.18 Table 21 provides the data on the number of joints welded (n) and the number of nonconforming joints (d).

a. Set up an appropriate control chart procedure and discuss whether the welding process was in control.

b. Establish the appropriate control limits for future monitoring.

Table 21: Welding Quality data

  Subgroup    n     d
      1      165   11
      2       85    7
      3       65    5
      4      165    9
      5       85    5
      6      161    9
      7       85    5
      8       61    1
      9      103    2
     10      405   36
     11       29    2
     12       33    2
     13       60    2
     14      119    3
     15       61    1
     16       37    3
     17       65    1
     18       49    5
     19      103    3
     20      113    3
     21      107    3

11.19 Suppose that a company is applying a single sampling plan with sample size 160 and acceptance number 1 for lots of size 100,000.

a. Draw the OC curve of the plan.

b. Find the incoming or submitted quality that will be rejected 90% of the time.

c. If the AQL is fixed at 0.1% nonconforming, find the probability of acceptance at the AQL.

11.20 Obtain the OC function for the following sampling plan:

Plan: From a large lot, only two units are randomly drawn. If both are conforming, the lot is accepted. If both are nonconforming, the lot is rejected. If only one unit is conforming, then one more unit is taken from the remainder of the lot. If this unit is conforming, the lot is accepted; otherwise, the lot is rejected.

11.21 Let there be a single sampling plan with sample size n and acceptance number Ac. Why should we not fix the Acceptance Quality Limit (AQL) as AQL = Ac/n?

11.22 Compare the performance of the single sampling plans (n = 20, Ac = 0) and (n = 50, Ac = 1) using their OC curves. Which plan provides better discrimination between good and bad lots? Explain why.

11.23 A sweet corn processing factory is procuring cobs from farmers. The export specifications require a cob to be at least 18 cm long with no distinct off-coloured, crushed, dimpled or insect-damaged kernels. Consider each truck load of cobs delivered as a lot for inspection purposes. Assume that 30 randomly drawn cobs are inspected and no nonconformity is tolerated. Draw the OC curve of this plan. If the rejectable quality level is 1%, compute the consumer's risk.

11.24 Activities/Experiments/Demonstrations

Funnel Experiment: see
Boardman, T. J. and Boardman, E. C. (1990). Don't Touch That Funnel! Quality Progress, 23, pp. 65-69.
Arnold, K. J. (2001). The Deck of Cards, Quality Progress, 34, p. 112.

Red Bead Experiment: see
Turner, R. (1998). The Red Bead Experiment for Educators, Quality Progress, 31, pp. 69-74.

M&Ms Experiment: see
Ellis, D. R. (2004). The great M&Ms experiment, Quality Progress, 37, p. 104.

DOE activities: see
Box, G. E. P. (1992). Teaching engineers experimental design with a paper helicopter. Quality Engineering, 4, pp. 453-459.
Hunter, W. G. (1975). "101 Ways to Design an Experiment, or Some Ideas About Teaching Design of Experiments", The University of Wisconsin-Madison, Technical Report No. 413.
Vandenbrande, W. (2005). Design of Experiments for Dummies, Quality Progress, 38, pp. 59-65.
Wasiloff, E. and Hargitt, C. (1999). Using DOE to Determine AA Battery Life, Quality Progress, 32, pp. 67-71.
Sarin, S. (1997). Teaching Taguchi's Approach to Parameter Design, Quality Progress, 30, pp. 102-106.

Penny Demonstration: see
Schilling, Edward G. (1973). A Penny Demonstration To Show the Sense of Control Charts, ASQC 27th Annual Technical Conference, Cleveland, OH.

k1-k2 Card Game: see
Burke, R. J., Davis, R. D. and Kaminsky, F. C. (1993). The (k1, k2) Game, Quality Progress, 26, pp. 49-53.