


IEEE TRANSACTIONS ON RELIABILITY, VOL. R-26, NO. 1, APRIL 1977

Predicting In-Plant Test Failures

E.R. Carrubba, Member IEEE

Key Words-Prediction, Test failures, Multiple regression.

Reader Aids-
Purpose: Helpful hints
Special math needed for explanation: Regression theory
Special math needed for results: None
Results useful to: Reliability engineers

Summary & Conclusions-This paper presents an approach for prediction of in-plant test failures so that appropriate cost planning can be undertaken. The approach is based on the technique of multiple regression analysis. The primary independent variables include equipment complexity, reliability program emphasis, and test program severity. As a guideline in estimating the number of in-plant test failures, a Test Failure Estimator (TFE) graph is provided. The results of this analysis showed a high degree of correlation between the number of test failures experienced and the combined effect of equipment complexity and test program severity. Statistical tests confirmed that both the multiple correlation coefficient and the degree of explained variance are s-significant at a high probability level. Thus, this multiple regression analysis 1) verifies a s-significant correlation between the independent variables and in-plant test failures and 2) demonstrates the feasibility of using these variables to predict the number of in-plant test failures.

This area still needs more study. In the meantime, the TFE graph can be used as a guideline in estimating the number of in-plant test failures.

INTRODUCTION

Over the years, considerable effort has been expended in developing design reliability prediction techniques (e.g., active element group, part count, similar equipment, and detailed part-by-part stress) and establishing data sources for use in these predictions (e.g., TR-1100, RADC Notebook, FARADA Handbook, MIL-HDBK-217A, -217B). Much has already been done, and is being done, in these areas to improve the accuracy of design reliability predictions.

The problem of predicting the number of in-plant test failures (any anomalies requiring adjustment, repair, or replacement) is real and important too. In-plant test failures impact many organizations and cost money. Reliability investigates failures and pursues corrective action; manufacturing needs to rework and repair failed units; quality assurance and test organizations trouble-shoot, reinspect, and retest; program offices plan for these efforts in terms of resources and schedule. The costs for these activities are reflected in the contractor's proposal, organization project budgets, and reserve funds. Since any contractor needs to know what his total costs will be, he needs to include the cost of in-plant test failures.

The need for predicting in-plant test failures then is quite obvious. The prediction method is not so obvious. For a company which consistently makes and tests the same product, the problem is not difficult to solve; past experience can lead to fairly accurate estimates. However, the problem is rather complex for a company which designs, manufactures, and tests a wide variety of equipments to different types of requirements. One cannot simply point to past experience on one product and extrapolate to another. Furthermore, a design reliability prediction, in itself, is quite ineffective for estimating in-plant test failures.

A search of the literature does not yield a great deal on this problem; perhaps it is purposely avoided. And maybe rightly so. Hindrances are 1) the number of variables that one must evaluate, 2) the book-keeping (and associated cost) involved in collecting the appropriate data for use in the equation, and 3) subsequent refinement of such techniques. Thus, in-plant test failures continue to occur and associated costs continue to be incurred; they become part of the total cost of doing business. With cost constraints becoming tighter and tighter, we need to have a better perspective on the entire cost picture. In recognition of this, multiple regression analysis was explored as a means of predicting the number of failures experienced during the in-plant test of assemblies, subsystems, and systems.

DISCUSSION

A. Background

From past experience and/or intuition, there are at least three major factors that increase the number of in-plant test failures: 1) equipment complexity, 2) more severe test programs, and 3) less emphasis on the reliability program effort. However, in order to be usable, these qualitative viewpoints need to be made quantitative.

Equipment complexity can easily be translated into a numeric, e.g. through a parts count. In this analysis, each separately packaged part (e.g. resistor, integrated circuit, or hybrid device) is counted as a separate part.

Test program severity is more difficult to convert into a numeric. There are a variety of in-plant tests which can be conducted at the assembly, subsystem, and system levels, e.g., functional tests, evaluation tests, type approval tests, operational proof tests, qualification tests, demonstration tests, integration tests, and environmental tests. In order to arrive at a numeric for this factor, the type and extent of all tests are evaluated; then a weight (1 to 10) representing test program severity is subjectively assigned. In this process, 1 represents a benign test program and 10 represents an extremely severe test program. The shortcomings of this weighting scheme are mitigated by prior experience.
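The two numerics described above are simple to compute in practice. The sketch below is ours, not the paper's: the function names and the toy parts list are invented for illustration, and only the 1-to-10 severity convention comes from the text.

```python
# Hypothetical sketch of encoding two of the paper's factors as numerics.

def complexity(parts):
    """Equipment complexity: each separately packaged part counts as one."""
    return len(parts)

def severity_weight(w):
    """Test program severity: subjective weight, 1 (benign) to 10 (severe)."""
    if not 1 <= w <= 10:
        raise ValueError("severity weight must be between 1 and 10")
    return w

# A toy board with five separately packaged parts, tested under a
# moderately severe program.
parts = ["resistor", "resistor", "capacitor", "integrated_circuit", "hybrid"]
C = complexity(parts)   # -> 5
T = severity_weight(6)
print(C, T)
```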


Reliability program emphasis can also be subjectively translated into a workable numeric. One approach would be to determine the ratio of reliability program manhours (or dollars) to total program manhours (or dollars). Another approach would be to examine which MIL-STD-785 tasks are required and to what depth they are implemented (e.g. FMEA down to piece-part level). Then, using a weighting scheme such as that suggested for test program severity, a subjective numeric is assigned.

B. Approach

The statistical analysis tool of correlation and regression analysis was selected for analyzing the impact of these factors, both individually and collectively, on in-plant test failures. Data (see Table 1) were obtained on a sample of in-house projects. The data for the independent variables (equipment complexity, test program severity, and reliability program emphasis) were derived in the manner described above; the data for equipment test failures were accumulated through the failure reporting system established for each project.

TABLE 1
PROJECT DATA MATRIX

Project  Equipment   Test Program  Reliability  Observed Equipment
         Complexity  Severity      Program      Test Failures*
                                   Emphasis
B        14 325      2             2            6.3
C        5 138       8             6            15.5
D        20 000      6             4            58.0
E        10 950      6             8            23.5
F        50          10            8            0.4
G        40          8             6            0.1
I        250         9             7            1.3
J        1 548       8             7            1.9
K        50 000      9             9            200
L        924         10            10           3.2

*Mean number of in-plant failures experienced per system

For the analysis, the Service Bureau Corporation's (SBC) time-share services, CALL/370, were used. Specifically, the following set of statistical analysis programs out of SBC's ***STATPACK was employed.

1. Scatter Diagram-prints the values for two selected variables.
2. Correlation-computes the linear-correlation coefficient, a measure of the association between two variables.
3. Step-wise Regression-selects the s-independent variables in the order in which they account for the variance of a dependent variable, and computes pertinent multiple regression measures.
4. Multiple Regression-computes pertinent multiple regression measures for the relationship between a dependent variable and a set of s-independent variables.

C. Results

1. Scatter Diagram: Only equipment complexity seems to affect the number of in-plant test failures-as equipment complexity increases, so does the number of failures.
2. Correlation: The linear-correlation coefficient for equipment complexity vs. number of failures is 0.948. The correlation coefficients for the other variables were very low.
3. Step-wise Correlation: In order of decreasing importance, the s-independent variables were equipment complexity, test program severity, and reliability program emphasis. Reliability program emphasis contributed virtually nothing to the regression equation.
4. Multiple Regression: Reliability program emphasis was dropped as an independent variable, and a multiple regression analysis was performed. This analysis resulted in a multiple correlation coefficient of 0.970; the standard error of estimate was 1.8. The associated regression equation is

    log Nf = -3.34 + 0.988 log C + 0.098 T    (1)

where,

Nf = estimated number of in-plant test failures
C = equipment complexity (parts count)
T = test program severity (weight based on value judgement, ranging from 1 to 10)

Tests of s-significance were performed to check the r-reliability of the statistical measures. The coefficient of multiple correlation is s-significant at the 0.001 level. The explained variance is s-significantly greater than the unexplained variance at the 0.001 level.

D. Application

The regression equation (1) can be used to develop a Test Failure Estimator graph (Figure 1). This graph shows the estimated number of in-plant test failures as a function of equipment complexity and test program severity. As an example, for an equipment complexity of 5000 parts and a test program severity factor of 7, the estimated number of failures is 10.

ACKNOWLEDGMENT

The author is indebted to R.D. Gordon and to the Editor and referees for their helpful suggestions in preparing this paper.
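The multiple-regression step described in sections B and C can be approximated with modern tools. The sketch below is ours, not the paper's STATPACK run: it refits the model form of equation (1) by least squares to the Table 1 rows recoverable in this copy, so the resulting coefficients need not match equation (1) exactly.

```python
import numpy as np

# Table 1 rows as (equipment complexity C, test severity T, observed failures Nf).
data = [
    (14325, 2, 6.3), (5138, 8, 15.5), (20000, 6, 58.0), (10950, 6, 23.5),
    (50, 10, 0.4), (40, 8, 0.1), (250, 9, 1.3), (1548, 8, 1.9),
    (50000, 9, 200.0), (924, 10, 3.2),
]
C, T, Nf = (np.array(col, dtype=float) for col in zip(*data))

# Model form of equation (1): log10(Nf) = b0 + b1*log10(C) + b2*T
X = np.column_stack([np.ones_like(C), np.log10(C), T])
y = np.log10(Nf)
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Multiple correlation between fitted and observed log-failures.
R = np.corrcoef(X @ b, y)[0, 1]
print("coefficients:", b)
print("multiple R: %.3f" % R)
```

Consistent with the paper, the fitted coefficients on log C and T come out positive and the multiple correlation is high.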


[Figure 1 shows the TFE curves: estimated number of in-plant test failures (vertical axis, logarithmic) vs. equipment complexity C from 200 to 10^5 parts (horizontal axis, logarithmic), with one curve for each test program severity factor T from 1 (benign/low) through 10 (severe/high).]

Figure 1. Test Failure Estimator (TFE) Curves
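Equation (1), which underlies these curves, is straightforward to evaluate directly; the sketch below (function name ours) reproduces the worked example from section D.

```python
import math

# Equation (1): log10(Nf) = -3.34 + 0.988*log10(C) + 0.098*T
def estimated_failures(C, T):
    """Estimated in-plant test failures for complexity C (parts count)
    and test program severity T (weight from 1 to 10)."""
    return 10 ** (-3.34 + 0.988 * math.log10(C) + 0.098 * T)

# Section D example: C = 5000 parts, T = 7 -> about 10 failures.
print(round(estimated_failures(5000, 7)))
```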

Eugene R. Carrubba; Electronic Systems Group; Eastern Division; 66 "B" Street; Needham, Mass. 02194

E.R. Carrubba (M'60) has 20 years experience in the product assurance field, encompassing both management and technical activities. He is currently Supervisor of the Reliability/Maintainability/Safety Engineering unit at GTE Sylvania ESG-ED, Needham, Mass. Mr. Carrubba holds degrees in Statistics from Boston University and Electronic Engineering from Northeastern University. He is a registered professional engineer in California. Mr. Carrubba has authored and presented a number of papers on reliability and systems effectiveness, and is co-author of the book, Assuring Product Integrity. He is a member of the IEEE, SOLE, and the EIA G41 Reliability Committee.

Manuscript received 1975 November 10; revised 1976 September 16, 1976 December 13.

Manuscripts Received

For information, write to the author at the address listed; do NOT write to the Editor.

"Impact of hardware and software design errors on systems availability", C. Landrault; Laboratoire d'Automatique et d'Analyse des Systemes du C.N.R.S.; 7, Avenue du Colonel Roche; 31400 Toulouse FRANCE.

"Optimal placement of spare modules in a cascaded chain", Behrooz Parhami; Arya-Mehr University of Technology; Dept. of Mathematics & Computer Science; PO Box 3406; Tehran IRAN.

"Time dependent availability analysis of nuclear safety systems", William E. Vesely, Jr; MS A1-3002; Office of Nuclear Regulatory Research; Nuclear Regulatory Commission; Washington, DC 20555 USA.

"Estimating field failure rates with a returns-for-repair model: an application in the electronics industry", Robert C. Carlson; Dept. of Industrial Engineering; Stanford University; Stanford, California 94305 USA.