
8/17/2019 · Bayesian Statistics Applied to Reliability Analysis · 1/44 · 9/23/12

Bayesian Statistics Applied to Reliability Analysis and Prediction

By Allan T. Mense, Ph.D., PE, CRE, Principal Engineering Fellow, Raytheon Missile Systems, Tucson, AZ

1. Introductory Remarks.
Statistics has always been a subject that has baffled many people, both technical and non-technical. Its basis goes back to the mid-18th century and the analysis of games of chance. Statistics is the application of probability, and probability theory can be traced to the ancient Greeks, but it was most notably developed in the mid-17th century by the French mathematicians Fermat, Pascal, and others, and later extended by Laplace. Rev. Thomas Bayes (born London 1701, died 1761; see drawing below) had his works, which include the theorem named after him, read into the British Royal Society proceedings (posthumously) by a colleague, Richard Price, in 1763. I have actually seen the original publication!

For years, and even in the present day, the statistics community seems to have a schism between the so-called "objectivists" or "frequentists," with their so-called "classical" interpretation of probability, and the Bayesians, who have a broader interpretation of probability. From a reliability point of view, classical calculations can be thought of as a subset of Bayesian calculations. You do not have to give up the classical answers, but you will have to give up the classical interpretation of the results!

Discussion of the logical consistencies and inconsistencies of the two statistical points of view would lead us too far afield [5]. However, my personal observations indicate that in the battle over which techniques to apply to problems, the Bayesians have won the war, but classical techniques are still widely used, easy to implement, and very useful. We will use both, but the purpose of this note is to explain Bayesian techniques applied to reliability.

References.
The two major texts in this area are "Bayesian Reliability Analysis," by Martz & Waller [2], which is out of print, and, more recently, "Bayesian Reliability," by Hamada, Wilson, Reese and Martz [3]. It is worth noting that much of this early work at Los Alamos was done on weapon system reliability, and all the above authors work or have worked at the Los Alamos National Lab [1]. Alyson Wilson headed the Bayesian reliability group at LANL, and Christine Anderson-Cook now heads that group. They are arguably among the leading authorities on Bayesian reliability in the world. I will borrow freely from both these texts and from notes I have from their lectures. There are also chapters covering Bayesian methods in traditional reliability texts, e.g. "Statistical Methods for Reliability Data," Chapter 14, by Meeker and Escobar [4]. This point paper covers Bayesian reliability theory and Markov Chain Monte Carlo (MCMC) solution methods. The NIST web site also covers Bayesian reliability; specifically, section 8.2.5 covers "What models and assumptions are typically made when Bayesian methods are used for reliability evaluation?"

Philosophy.
The first and foremost point to recognize is that reliability has uncertainty and therefore should not be thought of as a single fixed number whose unknown value we are trying to estimate. Reliability having this uncertainty requires us to treat reliability as a random variable and therefore to discuss it using probability distributions, f(R), and the language of statistics, i.e. how likely is it that the reliability of a system or component will have some value greater than some given number (typically a reliability specification). We will see that specifying some desired reliability value is not sufficient; we must also specify some level of confidence that reliability is greater than (or less than) the value desired. This will become clear when reliability distributions are defined and calculated.

For those needing a refresher on statistics I recommend "Introduction to Engineering Statistics," by Doug Montgomery et al.; any edition of this text will do just fine. The fundamental concepts you will need are 1) the probability density function, pdf, written as f(x|a,b,c), where the letters a,b,c refer to pieces of information, called parameters, that are presumed known, knowable, or estimable; 2) the cumulative distribution function (CDF), written F(x|a,b,c), which is the accumulated probability of the random variable X from the minimum allowable value of X (typically zero or -∞) up to X=x; and 3) the concept of a likelihood function, which in practice is the product of pdf's and CDF's evaluated in terms of all the available data (see Appendix G). The likelihood function, while familiar to every statistician, is not used by everyday reliability engineers! The concept was proposed by Sir Ronald Fisher back in the early 1900's and is very useful.

    For those not familiar with the traditional “frequentist” method for establishing a reliability estimateand its confidence interval, Appendix C has been provided.

    Why Bayesian

• Bayesian methods make use of well-known, statistically accurate, and logically sensible techniques to combine different types of data, test modes, and flight phases.

• Bayesian results include all possible usable information based on data and expert opinion.

• Results apply to any missile selected and not just for "average sets of missiles."

• Bayesian methods are widely accepted and have a long track record:

  FAA/USAF in estimating probability of success of launch vehicles

  Delphi Automotive for new fuel injection systems

  Science-based Stockpile Stewardship program at LANL for nuclear warheads

  Army for estimating reliability of new anti-aircraft systems

  FDA for approval of new medical devices and pharmaceuticals


It is also worthy of note that Bayesian reliability has been actively pursued for at least 30 to 40 years, and the Los Alamos National Lab (LANL) developed the techniques to predict the reliability of missiles, as well as the nation's nuclear stockpile, back in the 1980's and before. The reasons for using Bayesian methods can be summarized and are outlined in the chart shown on the previous page.

Before proceeding in detail, there is a simple picture to keep in mind that explains how Bayesian reliability analysis works. One starts by (1) producing the "blue" curve (prior distribution) based upon previous information deemed useful for predicting the probability of successfully operating units prior to taking data on the system or component of interest, then (2) folding in recent test data, as represented by the "red" distribution (likelihood function), and finally (3) producing, using some rather sophisticated math in the general case, a curve such as shown in "green" in Figure 7. The green curve represents the posterior distribution of a unit's reliability given the most recent data. From this green curve we can, in principle, calculate everything that is needed.

Figure 7. The Prior, Likelihood and Posterior, superimposed. One postulates from previous information a prior distribution, shown in blue. The tests are performed and put into a likelihood function, shown graphically in red. The result (in green) is the answer: it is the posterior reliability distribution.

Note that the posterior distribution (green) is more peaked and narrower than the (red) likelihood curve, which is indicative of having prior information on the reliability. The likelihood function (red) by itself would be the distribution found from a classical or frequentist analysis, i.e. from a prior that is uniform.


Posterior Distribution Combining Prior and Data

• Blue curve is our prior distribution
• Red curve is the distribution assuming the only information is 4 successes in 5 tests
• Green curve is the Bayesian estimate adding the 4 out of 5 to the prior

  – Estimate is between the evidence (data) and the prior
  – Distribution is tighter than prior or data alone, i.e. narrower confidence bounds

[Figure: probability density functions over system reliability, 0 to 1. Shown are the initial (prior) distribution of system reliability, assuming the most probable value is 0.90; the likelihood of system reliability using 4 successes in 5 tests only; and the posterior distribution of system reliability after 4 successes in 5 tests.]
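The prior/likelihood/posterior picture above can be reproduced numerically. The sketch below assumes a hypothetical Beta(10, 2) prior, one of many beta shapes whose mode is 0.90; the specific prior parameters are illustrative, not the ones used to draw the figure.

```python
import math

def beta_mode(a, b):
    # Mode of a Beta(a, b) distribution (valid for a > 1, b > 1).
    return (a - 1) / (a + b - 2)

# Hypothetical prior: Beta(10, 2), chosen so the most probable
# reliability (the mode) is 0.90, as in the blue curve.
a0, b0 = 10.0, 2.0

# Test data from the slide: s = 4 successes in n = 5 tests.
s, n = 4, 5

# Conjugate update: Beta prior x binomial likelihood -> Beta posterior.
a1, b1 = a0 + s, b0 + (n - s)

print(f"prior mode      = {beta_mode(a0, b0):.3f}")  # 0.900
print(f"data-only (s/n) = {s / n:.3f}")              # 0.800
print(f"posterior mode  = {beta_mode(a1, b1):.3f}")  # 0.867, between the two
```

The posterior estimate falls between the prior and the data-only answer, exactly the behavior described in the bullets above.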


Finding the (green) posterior distribution for real situations is mathematically complex, but the essence of what is done is as simple as the graphical display shown above. For those caring to delve further into the details, the following sections have been provided.

    Table of Contents.

Introductory Remarks
References
Philosophy
Overview
Basic Principles
Bayes' Theorem: Prior, Likelihood, Posterior
Bayes' Theorem Applied to Pass/Fail Reliability
General Bayesian Approach to Reliability
General Procedure for Bayesian Analysis and Updating
Selecting a Prior
Likelihood Function
Generating a System Level Reliability Estimate
Summary
Time Dependent Reliability Calculations Using a Weibull Distribution
Poisson Counting
Appendices

2. Overview
It makes a great deal of practical sense to use all the information available, old and/or new, objective or subjective, when making decisions under uncertainty, which is exactly the situation one has with many systems in the field. This is especially true when the consequences of the decisions can have a significant impact, financial or otherwise. Most of us make everyday personal decisions this way, using an intuitive process based on our experience and subjective judgments. [6]

Using language from the NIST web site, we note that so-called classical or frequentist statistical analysis seeks objectivity by generally restricting the information used in an analysis to that obtained from a current set of clearly relevant data. Prior knowledge is not used except to suggest the choice of a particular population model to "fit" to the data, and this choice is later checked against the data for reasonableness. What is wrong with this approach, after all, since it has been used successfully for many years? The answer lies in the desire to take into account previous information, particularly if we have some information from flight tests and some from ground tests and we want to somehow combine this information to predict future probabilities of success in operational scenarios. For example, why throw away knowledge gained in lab tests or ground tests even though the testing environments do not duplicate the flight operational environment? Bayesian statistics is the only known (to this author) way to incorporate this knowledge quantitatively into reliability calculations. The use of Bayesian statistics makes use of this prior information and should lead to savings of time and money while providing "useable" information to the product engineer.


A comparison of the classical and Bayesian treatments of pass/fail reliability:

  Classical: R is fixed; the estimate is R̂ = s/n, where s = # successes and n = # tests. Usage: the probability of k successes in m future tests is

    P(k | m, R̂) = C(m,k) R̂^k (1 - R̂)^(m-k)

  Bayesian: R is random, described by the distribution f(R), with Pr{R > r} = 1 - F(r). Usage: the probability of k successes in m future tests averages over R:

    P(k | m) = ∫ C(m,k) R^k (1 - R)^(m-k) f(R) dR,

  with the integral taken over all R from 0 to 1. Here C(m,k) = m!/(k!(m-k)!) is the binomial coefficient.
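The Bayesian "usage" integral has a closed form when f(R) is a beta density: P(k | m) = C(m,k) B(a+k, b+m-k)/B(a,b), the beta-binomial distribution. A minimal sketch, assuming a hypothetical Beta(14, 3) posterior for R:

```python
import math

def log_beta(a, b):
    # log B(a, b), computed via log-gamma to avoid overflow.
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def pred_prob(k, m, a, b):
    # P(k successes in m future tests) when R ~ Beta(a, b):
    # the integral of C(m,k) R^k (1-R)^(m-k) f(R) dR reduces to
    # C(m,k) * B(a+k, b+m-k) / B(a, b)  (the beta-binomial pmf).
    log_c = math.lgamma(m + 1) - math.lgamma(k + 1) - math.lgamma(m - k + 1)
    return math.exp(log_c + log_beta(a + k, b + m - k) - log_beta(a, b))

# Hypothetical posterior Beta(14, 3) for R; m = 5 future tests.
probs = [pred_prob(k, 5, 14.0, 3.0) for k in range(6)]
print([round(p, 4) for p in probs])  # the six probabilities sum to 1
```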

Lifetime or repair models using frequentist methods have one or more unknown parameters. The frequentist approach considers these parameters as fixed but unknown constants to be estimated using sample data taken randomly from the population of interest. A confidence interval for an unknown parameter is really a frequency statement about the likelihood that numbers calculated from a sample capture the true parameter, e.g. MTBF. Strictly speaking, one cannot make probability statements about the true parameter since it is fixed, not random. It is the interval that is random, and once you take data and calculate a confidence interval, either the reliability is in the calculated interval or it is not.

The Bayesian approach treats these population model parameters as random, not fixed, quantities. Before looking at the current data, use is made of old information, or even subjective judgments, to construct a prior distribution model for these parameters. This model expresses the starting assessment about how likely various values of the unknown parameters are. Then use is made of the current data (via Bayes' formula) to revise this starting assessment, deriving what is called the posterior probability distribution model for the population model parameters. Parameter estimates, along with confidence intervals (known as credibility intervals in Bayesian vernacular), are calculated directly from the posterior distribution. Credibility intervals are legitimate probability statements about the unknown parameters, since these parameters now are considered random, not fixed.

In the past, parametric Bayesian models were chosen because of their flexibility and mathematical convenience [1,2]. The Bayesian approach is performed in the Raytheon codes (RBRT1 and RBRT2). The Los Alamos National Laboratory (LANL) has also developed and is marketing its Bayesian code called SRFYDO.

A comparison between the classical (frequentist) approach and the Bayesian approach was shown above. The key to the Bayesian approach is the process of determining f(R), the posterior distribution (probability density function).


[Figure: "Bayesian Reliability" example showing prior, likelihood, and posterior curves, f(R) (pdf) versus R (reliability) on 0 to 1.]

A typical (posterior) distribution for f(R) is shown in the graph in green. How one arrives at this distribution is discussed in great detail later. The green curve represents the f(R) we are seeking. Again, I note the distribution is not an end in itself. One uses f(R) to 1) predict the credibility interval for reliability, and 2) when multiplied by the binomial distribution (for example) and integrated over all possible values of R, give the answer to the question of what is the probability of any given number of successes in some future set of tests. See Bayesian Calculator Rev5.xls. The rest of this white paper concentrates on calculational procedures and some insightful examples of Bayesian reliability analysis.

    3. Basic Principles

Bayes' analysis begins by assigning an initial distribution of possible unit reliabilities, f_prior(R), on the basis of whatever evidence is currently available. The initial predictions may be based 1) solely on engineering judgment, 2) on MIL-HDBK-217, 3) on data from other techniques such as simulation, or 4) combinations of all of the above. Initial reliabilities are known as prior reliabilities because they are determined BEFORE we run our experiments on the system of interest. When necessary, these priors can be determined for each and every component within the overall system, or, more appropriately, we may decide to specify a prior only at the system or subsystem level. If one assumes the reliability of each component is equally likely to have a value between R=0 and R=1 (not a smart assumption in most cases), then one is de facto performing the equivalent of the classical or frequentist analysis, but the interpretation of the results is entirely different. Thus classical results are reproducible from Bayesian analysis. The converse is not true.
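The claim that a uniform prior reproduces the classical answer can be checked directly: the uniform distribution on [0, 1] is Beta(1, 1), and updating it with s successes in n tests gives Beta(1+s, 1+n-s), whose mode is exactly the classical point estimate s/n. A small sketch (the data values are illustrative):

```python
s, n = 4, 5  # illustrative test data

# Uniform prior on [0, 1] is Beta(1, 1); the conjugate update with
# s successes in n trials gives Beta(1 + s, 1 + n - s).
a, b = 1 + s, 1 + (n - s)

mode = (a - 1) / (a + b - 2)  # posterior mode
mean = a / (a + b)            # posterior mean
print(mode, mean)  # 0.8 (the classical s/n) and 5/7, about 0.714
```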

All reliability analyses must obey Bayes' theorem. By explicitly stating the assumptions for prior distributions, one can convincingly show how reliability distributions (probabilities) evolve as more information becomes available. Examples will be shown later in this note.

Digression: In addition, there is a subtlety in the interpretation of Bayesian analyses that gives it a more useful interpretation than the classical or frequentist analyses. I know this sounds vague, but it needs to be discussed later, when one is more familiar with the Bayes' formalism.

The prior reliabilities are assigned in the form of a distribution and determined before the acquisition of the additional data. In general, a prior distribution must be determined with some reasonable care, as it will play a major role in determining the reliability distribution of the system until there is sufficient data to outweigh the prior distribution.

Think of the prior as a statement of the Null Hypothesis in classical hypothesis testing: it is the status quo, or what we already know. In statistical terminology we are testing (comparing, in a sense) the result, called the posterior distribution, against our Null Hypothesis, called the prior distribution, which is based upon knowledge previously acquired about the product whose reliability we are questioning.

For example, if we know the reliability of a unit is certainly > 50% and probably less than, say, 99%, then we might use as a prior a uniform distribution between 0.5 and 0.99, with the probability being zero elsewhere. This would be one of many acceptable but not very useful priors.

The question is which of this infinite set of choices for a prior should be used in the analysis process. Some of this answer comes from the literature, in which many analyses have been performed with various priors. Some of the answer comes from noting that expert opinion is valid, and the prior is a quantitative way to capture this opinion or experience. For reliability work we choose priors that give the data "room to move" yet emphasize the regions of R values that characterize our history, based on prior knowledge. This is always a qualitative trade-off, and different SMEs will have different opinions. The practicing Bayesian reliability engineer must assess these expert opinions and perform some tradeoffs. In my experience, reliability analysis Bayesian-style is NOT a "cookbook" approach and requires some consensus building.

As will be seen shortly, the exact form of the prior 1) matters little once a reasonable amount of data is collected from the system of interest, but 2) does carry some importance when there is only a small amount of test data. The prior (being non-uniform in most Bayesian approaches) keeps the reliability prediction based on operational data from being either too optimistic or too pessimistic. Consider the following: if you run one test and it is a success, would you predict, based on one data value, that the entire population of units has a reliability of 1? Similarly, if the single unit test failed, would you tell the customer that the predicted reliability of all the units built is 0? I doubt that either conclusion would be acceptable, but how do you know, and, more importantly, what can you say about it? Well, we have expert knowledge from having built systems of similar complexity in the past. When these previous units went into full production, the fraction of successful tests of the units in the field was almost always > 80%, and many times much higher. So as experts we want to take advantage of that knowledge and experience, and the method of doing this in a quantitative manner is by applying Bayesian statistics.

Given the priors, tests are performed on many systems. The data from these tests is entered into what is called a likelihood function, e.g.

    L(R | s, n) = R^s (1 - R)^(n-s) / B(s+1, n-s+1),

where n = # tests and s = # successes. Note: The likelihood function treats R as a random variable and the test data (n, s) as known, as opposed to a binomial distribution, which treats R as known and predicts s. Likelihood functions are discussed in Appendix F.
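A normalized likelihood of this form is itself a Beta(s+1, n-s+1) density in R. As a quick sketch (with illustrative data s = 4, n = 5), it peaks at the classical estimate s/n:

```python
import math

s, n = 4, 5  # illustrative data: 4 successes in 5 tests

# Normalizing constant B(s+1, n-s+1), via log-gamma.
log_norm = math.lgamma(s + 1) + math.lgamma(n - s + 1) - math.lgamma(n + 2)

def lik(r):
    # L(R | s, n) = R^s (1-R)^(n-s) / B(s+1, n-s+1)
    return math.exp(s * math.log(r) + (n - s) * math.log(1 - r) - log_norm)

# The likelihood is maximized at the classical estimate R = s/n.
grid = [i / 1000 for i in range(1, 1000)]
r_max = max(grid, key=lik)
print(r_max)  # 0.8
```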

There is a likelihood function for every component in the unit's system, every subsystem, and the unit as a whole. If Bayesian methods are not used, then this likelihood function is the starting point for classical reliability analysis. This is equivalent, from a mathematical viewpoint, to having used a uniform prior that ranges from 0 to 1, i.e. we are ignorant as to the range of reliabilities the unit might have. When you think about it for a moment, it seems clear that assuming we know nothing about the reliability of the products we build is cause for not selecting us to build any units! A uniform prior in a Bayesian reliability context is analogous to saying you are lacking in experience.

The product of the prior reliability distribution times the likelihood function results in what is known as a joint reliability distribution. From this joint distribution one can form a posterior reliability distribution, f_posterior(R). It is this posterior distribution that we seek to find. It gives the analyst a better picture of the system reliability and its variability.

In short, the advantage in using Bayesian statistics is that it allows prior information (e.g., predictions, test results, engineering judgment) to be combined with more recent information, such as test or field data, in order to arrive at a prediction/assessment of reliability based upon a combination of all available information, provided in the form of a probability distribution that can be used for further important assessments.

As will be seen, Bayesian statistics is particularly useful for assessing the reliability of systems where only limited field data exists. It can also handle correlated failure modes, where, for example, partial failures or degradation of some components can affect one another and cause subsystem and system failures when component failures are not recorded.

In summary, it is important to realize that early test results do not tell the whole story. A reliability assessment comes not only from testing the product itself but is affected by information which is available prior to the start of the test, from component and subassembly tests, previous tests on the product, and even intuition based upon experience; e.g., comprehensive reliability tests may have been performed on selected subsystems or components which yield very accurate prior information on the reliability of that subsystem or component. Why should this prior information not be used to supplement the formal system test result? One needs a logical and mathematically sound method for combining this information. Besides, one can save as much as 30% to 50% in testing for the same level of confidence, and that means savings in time, resources and money.

    One of the basic differences between Bayesian and non-Bayesian (frequentist) approaches is that theformer uses both current data and prior information, whereas the latter uses current data only.

To see how all this discussion is turned into quantitative and useful information, one begins with a basic understanding of Bayes' Theorem.

4. Bayes' Theorem
One of the axioms of probability theory states that the probability that two events, A and B, both occur is equal to the probability that one occurs, labeled P(A), multiplied by the conditional probability that the other occurs, given that the first occurred, labeled P(B|A).

    Written as a formula:


P(A and B) = P(A)P(B|A) = P(A ∩ B) (1)

The symbol ∩ is called an "intersection." One notes that it makes no difference if one interchanges A with B, since the calculation is the probability they both occur. There need not be a time ordering of the events. Thus, the following expression must also be true.

P(B and A) = P(B)P(A|B) = P(B ∩ A) (2)

    Setting equations (1) and (2) equal to one another, it follows directly that

    P(B)P(A|B) = P(A)P(B|A).

Solving for P(A|B), one arrives at Bayes' Theorem.

    P(A|B) = [P(B|A) / P(B) ] P(A) (Bayes’ Theorem) (3)

Posterior = [Relative Likelihood] × Prior

The term in the square brackets is called the relative likelihood. The probability P(A) is called the prior probability, and P(A|B) is called the posterior probability; it is the conditional probability of event A occurring given information about event B. The addition of information, e.g. event B occurring, affects the probability of A occurring. The information changes the probability from P(A) to P(A|B). If information did not change the probability of detection, we would be out of the sensor business. The areas of recognition, detecting signals in the presence of noise, and implementing Kalman filters all require Bayesian thinking and a Bayesian calculational framework.

This extremely simple result holds a great deal of powerful logic. As a mathematical theorem it is always true, given that the assumptions leading to the theorem are true. The requirement is simply that one is dealing with random processes that have measurable probabilities. This is so very broad that one is hard pressed to find situations in which the theorem does not apply.
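A small numeric sketch of equation (3), with hypothetical numbers (a 10% prior defect rate and an imperfect screening test; none of these figures come from the text):

```python
# Hypothetical events: A = "unit is defective", B = "screening test
# flags the unit". All of these numbers are illustrative.
p_a = 0.10              # prior P(A)
p_b_given_a = 0.90      # P(B|A): test flags a defective unit
p_b_given_not_a = 0.20  # P(B|not A): false-alarm rate

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' Theorem, equation (3): posterior = [P(B|A)/P(B)] * P(A).
p_a_given_b = (p_b_given_a / p_b) * p_a
print(round(p_b, 2), round(p_a_given_b, 3))  # 0.27 0.333
```

Seeing the flag raises the probability of a defect from 10% to about 33%: the information B changes P(A) to P(A|B), just as described above.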

In the following material you will be led through Bayesian reliability analysis when there is only pass/fail or binary data. This will be followed by some discussion about how to solve Bayesian reliability problems when one has causal variables that affect the reliability, e.g. age of the unit, exposure to high humidity, etc. This is the situation for which the RMS tool, RBRT2, was developed. This material will then be followed by a time-dependent reliability problem. Typically this time-dependent analysis involves exponential, Weibull, lognormal or other distributions, and even includes Poisson distributions when the interest is in counting the number of failures over a given span of time.

5. Bayes' Theorem Applied to Reliability as Measured by Pass/Fail Tests.
The idea that reliability is some single number is not a good way to think about reliability, because reliability has randomness, i.e. there is a lot of variability or uncertainty involved in determining reliability, particularly for a complex system, and even more variability with a collection of such systems. To account for this uncertainty one must assign probabilities for a range of possible reliabilities. This is most conveniently done using what is called a probability density function (pdf), or sometimes simply the reliability distribution, f(R). Figure 1 below is an example of such a distribution (shown in blue).

Figure 1. Probability Density Function. The blue curve on the graph shows the relative probabilities of occurrence of a range of possible reliabilities. The mean reliability is E[R] = 0.892, and the most probable reliability (peak of the curve; the mode) occurs at R = 0.9.

To use this chart quantitatively, one must be able to find the area under the curve. For example, the area under the curve to the right of 0.9 indicates the probability (confidence) of having a reliability greater than 0.9. This may sound abstract, but it is the manner in which one must address reliability to be correct.

Say that this particular pdf represents the reliability of units in the lot [1]. The peak of this graph gives the most probable reliability (called the mode). If a unit is randomly picked from the lot, the reliability of the unit is likely to be close to this most probable value. But there are many units that may be more reliable and many that may be less reliable. The probability that a randomly picked unit has reliability greater than the most probable value is easily found using the cumulative distribution function (CDF), shown below in Figure 2.

[Figure: "Reliability Distributions": f(R) (pdf) and Pr{reliability > R} = 1 - F(R) plotted over R from 0.5 to 1. The probability that reliability > 0.9 is given by the shaded region and read from the cumulative curve to be ~0.42.]


Figure 2. The Cumulative Distribution Function, F(R), for the reliability. Each value F(R) represents the area under the f(R) curve (Figure 1) from zero up to the reliability value R. The probability that a unit has a reliability below any given value of R can be read directly from the vertical scale; F(R) = P(reliability < R); e.g., P(reliability < 0.9) = 58%. Obviously, the probability that the reliability is greater than R equals 1 - F(R).

Noting from Figure 1 that the most probable value of R is 0.9, the corresponding value F(R=0.9) = 0.58 is the probability that reliability is less than 0.9. So the probability of choosing a unit with a reliability greater than 0.9 is simply 1 - 0.58 = 0.42, signifying that there is a 42% probability of randomly choosing a unit from the lot with a reliability greater than 0.9. Restated, we would say that we are 42% confident that the next unit chosen (at random) from the lot of many units in the field has a reliability of 90% or greater.
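The 0.58/0.42 split can be reproduced approximately. The sketch below assumes a hypothetical Beta(89.2, 10.8) density, chosen only because its mean (0.892) and mode (0.90) match Figure 1; the text does not state which distribution was actually plotted.

```python
import math

def beta_pdf(r, a, b):
    # Beta(a, b) density, computed in log space for numerical safety.
    log_b = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(r) + (b - 1) * math.log(1 - r) - log_b)

def beta_cdf(x, a, b, n=20000):
    # F(x): simple trapezoidal integration of the pdf from 0 to x.
    h = x / n
    total = 0.5 * (beta_pdf(1e-12, a, b) + beta_pdf(x, a, b))
    for i in range(1, n):
        total += beta_pdf(i * h, a, b)
    return total * h

# Hypothetical Beta(89.2, 10.8): mean 0.892, mode 0.90 (as in Figure 1).
a, b = 89.2, 10.8
p_below = beta_cdf(0.9, a, b)
print(f"P(R < 0.9) = {p_below:.2f}, P(R > 0.9) = {1 - p_below:.2f}")
```

The result lands close to the F(0.9) = 0.58 read off Figure 2, so P(R > 0.9) comes out near the quoted 0.42.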

    Understanding these graphs is essential to understanding reliability. Once these curves are produced,their interpretation is relatively straightforward.

6. General Bayesian Approach to Reliability
All of Bayesian reliability is explained by the following simple formula.

f_posterior(R | data) ∝ L(data | R) f_prior(R) (4)

Equation (4) is Bayes' Theorem for reliability distributions. In words, the posterior distribution for reliability given the evidence (new data) equals the product of the prior information (distribution) and the new data, formulated as a likelihood function, with this product then normalized so that its integral over all reliability values equals 1. The complications occur when we have many components and many possible factors that may influence the components' operation. We will attempt to add in the complications a small amount at a time.

Given the formula above, the actual mechanics of applying Bayesian probability to reliability distributions are very straightforward.

1. One constructs, from previous experience and expert knowledge, a probability function called a prior distribution to represent the relative probabilities predicted prior to testing the units, i.e. one constructs f_prior(R). This is similar to knowing P(A) in equation (3), but now a distribution of possible reliability values is provided, instead of a single value.

Figure 3. A prior distribution example using a beta distribution.

2. Tests are performed, and that test data is put into a likelihood function, L(data|R), similar to the [P(B|A) / P(B)] term in equation (3). The likelihood function, L(data|R), will be discussed below, as it carries all recent test information.

Shown below is the likelihood function for the case where there was 1 test (n=1) and it resulted in a success (s=1). By the way, this is the only function you would see in a classical reliability analysis. Most reliability engineers are not used to seeing the binomial reliability graphed in this form, because the binomial probability treats the number of successes (s) as the random variable and assumes R is known (or well estimated). In the likelihood function we treat s and n as known and ask: what is the probability (likelihood) of achieving these experimental results for different values of R?

    Figure 4. The likelihood function for the case of one test that results in a success.
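This likelihood curve is easy to generate yourself. Below is a minimal sketch of the binomial likelihood viewed as a function of R for the n = s = 1 case of the figure; the grid resolution is an arbitrary choice:

```python
from math import comb

def binomial_likelihood(R, n, s):
    """Probability of s successes in n trials, read as a function of R."""
    return comb(n, s) * R**s * (1 - R)**(n - s)

# Figure 4 case: one test, one success, so L(n=1, s=1 | R) = R, a straight line.
grid = [i / 100 for i in range(101)]
L = [binomial_likelihood(R, n=1, s=1) for R in grid]
# The likelihood is largest at R = s/n = 1 for this data set.
```

For any (n, s) the curve peaks at the classical point estimate R = s/n; the Bayesian machinery simply reweights this curve by the prior.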




3. Finally, Bayes' theorem is used to find the posterior reliability distribution, f_posterior(R|data), which is conditioned on (depends upon) the actual data and weighted in some sense by the prior predictions assumed before the testing. This requires taking the product of the prior distribution and the likelihood function for all subsystems in the unit. The result is an equation of the form shown previously as equation (4) and illustrated below. The formulas that follow are simple examples.

    Figure 5. The posterior distribution.

Many times it is convenient to show both the posterior pdf, f(R), and the so-called survival function S(R) = 1 - F(R), which gives Pr{reliability > R}. This is shown below.

Figure 6. Combined look at both f(R) and S(R) = 1 - F(R).
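The prior-times-likelihood-then-normalize mechanics can be carried out numerically on a grid of R values; the prior shape and test data below are illustrative only, not the values behind the figures:

```python
# Numerical Bayes update on a grid of R values: posterior = prior x likelihood,
# normalized so the integral over (0, 1) equals 1, as in equation (4).
N = 1000
grid = [(i + 0.5) / N for i in range(N)]   # midpoints in (0, 1)
dR = 1.0 / N

prior = [R**3 * (1 - R) for R in grid]     # illustrative beta-shaped prior
like = [R**9 * (1 - R) for R in grid]      # illustrative data: 9 successes in 10 tests
unnorm = [p * l for p, l in zip(prior, like)]
Z = sum(unnorm) * dR                       # normalizing integral
post = [u / Z for u in unnorm]

def S(r):
    """Survival function S(r) = Pr{reliability > r} from the posterior pdf."""
    return sum(p * dR for R, p in zip(grid, post) if R > r)

post_mean = sum(R * p * dR for R, p in zip(grid, post))
```

Nothing here relies on conjugacy; the same grid recipe works for a prior specified purely as numerical values.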

Note: The prior and posterior distributions in equation (4) do not have to be of the same functional form (i.e., conjugates).


f_posterior(R | n, s, Nm, m) = R^(Nm·m + s) (1 - R)^(Nm(1-m) + n - s) / B(Nm·m + s + 1, Nm(1-m) + n - s + 1),  Mode = (Nm·m + s)/(Nm + n) → s/n for s, n large

f_prior(R | Nm, m) = R^(Nm·m) (1 - R)^(Nm(1-m)) / B(Nm·m + 1, Nm(1-m) + 1)

f(R | n, s) = R^s (1 - R)^(n - s) / B(s + 1, n - s + 1),  Mode = s/n
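Note that the posterior exponents in the first formula are just the prior exponents plus the data exponents, and the mode formula follows from the mode of R^a (1 - R)^b being a/(a + b). A quick check with invented numbers:

```python
# Conjugate beta-binomial bookkeeping: posterior exponents are the prior
# exponents plus the data exponents (values here are illustrative only).
Nm, m = 20, 0.9     # prior "pseudo-test" count and prior mode
n, s = 10, 9        # new data: 9 successes in 10 tests

a = Nm * m + s            # exponent of R in the posterior
b = Nm * (1 - m) + n - s  # exponent of (1 - R) in the posterior

# The mode of R**a * (1 - R)**b is a / (a + b), which here reduces to
# (Nm*m + s) / (Nm + n), matching the formula above.
mode = a / (a + b)
```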


7. General Procedure for Bayesian Analysis and Updating
The Raytheon Bayesian reliability tool RBRT-2 performs Bayesian reliability analyses on complex systems whose functional components are assumed to all be in series from a reliability standpoint, i.e. if any

single component fails, the system fails. It also works only with pass/fail, or what we call binary, data.

A demonstration of how this tool works will be performed at the end of this white paper for a simple case of three components in series.

    The initial prior assessment is represented by a probability distribution with certain parameters. Theprior distribution is updated using the evidence, resulting in a posterior distribution with its ownparameters. Statistical inferences can then be made from information conveyed by the posteriordistribution.

1. Select an appropriate prior probability distribution.
2. Obtain new evidence (data).
3. Choose a likelihood function, based on the data type.
4. Update the prior distribution with the new evidence to generate a posterior probability distribution.
5. Use the most recent posterior distribution as the new prior.
6. Re-iterate steps 2 through 5.
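For pass/fail data with a beta prior, the loop above reduces to adding successes and failures to the beta parameters, since each posterior is again a beta distribution and serves directly as the next prior. A sketch with invented test batches:

```python
# Sequential Bayesian updating with a conjugate beta prior (steps 1-6).
# The test batches are invented for illustration.
a, b = 1.0, 1.0                           # step 1: weak uniform prior Beta(1, 1)
batches = [(5, 5), (10, 9), (20, 19)]     # (n tests, s successes) per new batch

for n, s in batches:                      # steps 2-3: new evidence and likelihood
    a += s                                # step 4: posterior Beta(a + s, b + n - s)
    b += n - s                            # step 5: posterior becomes the new prior

posterior_mean = a / (a + b)              # after re-iterating over all batches
```

Because the update is associative, processing the batches one at a time or all at once yields the same posterior.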

8. Selecting the Prior
Since prior knowledge exists for the reliability of each subsystem, we can use a prior distribution that takes our existing knowledge into account. As an example, if we were to select a beta distribution as a prior for each subsystem, we would have a lot of flexibility to match previous data and account for expert analyses. Such a distribution was shown in Figure 3 as the blue curve. The names are not important; the shape of the distribution is important, and that is why one uses functional forms for the prior that can represent many possible shapes. One can specify the prior distribution as a series of numerical values if necessary to more accurately reflect previous knowledge, or one can perform Gaussian kernel smoothing on discrete distribution values taken from data. There is no requirement to use some type of "named" probability distribution.

The initial prior distribution represents the user's prior beliefs and confidence about the reliability or unreliability of the items. Prior distributions range from "weak" to "strong". Weak distributions are wide and relatively flat, and have less influence on the analysis (the uniform distribution, with f(R) = 1 for 0 < R < 1, is the weakest case), while strong priors are narrow and peaked and pull the posterior toward them. As test evidence accumulates, the posterior is driven more and more by the


data. In this way, using even a weak prior will result in the same posterior conclusion. Analysts are cautioned on the use of an incorrect strong prior, as more data will be required to overcome (and correct) its strong (possibly faulty) influence. An objective prior derived from existing test data or from systems of a similar type would certainly be better than a subjective prior based on non-expert opinion.

9. Likelihood Function
Having selected a prior distribution, the likelihood function must be evaluated using the test data that is available. Remember, if previous flight test data is used to determine the prior, then the same data cannot be used in the likelihood function. Double counting is not allowed. If, on the other hand, some other method was used to estimate the prior distributions, the flight test data could be used in the likelihood, and new tests could be added to that likelihood data to update the posterior distribution. In the RBRT tool, new binary (pass/fail) ground test data for each subsystem is used to update the prior distribution and generate a posterior distribution for each subsystem. Whenever new data (test evidence, one or any number of test points) is available, the likelihood function is updated and a new posterior distribution is generated. This is called Bayesian updating. In this manner, new reliability distributions will be generated with each new test result -- each time getting closer and closer to the true reliability, with more confidence.

10. Generating a System-Level Reliability Estimate
Subsystem reliability distributions can be found for very complex systems. A complex system is modeled by multiplying the posterior distributions for the components within each subsystem for a series system. This process could go down to the part level or circuit-board level, but it is generally not practical to do so. In fact, the reliability of complex boards/components is seldom dictated by intrinsic part failures but more often is degraded by bad assembly practices. So ideally, we would like to generate a system-level reliability model (distribution). This is done by multiplying all the subsystem distributions together and then using some sophisticated sampling techniques, e.g. 1) Gibbs sampling or, 2) for more complex situations, Markov Chain Monte Carlo (MCMC) with Metropolis-Hastings algorithms, to obtain a system (entire unit) posterior reliability distribution. This will also be demonstrated later in this paper.

It is important to reiterate that no single reliability exists. One can speak about average reliability, but one does not know the spread of possible reliabilities around that mean value. One can speak of the mean and the standard deviation of the reliability, but one does not know the shape of the distribution of reliabilities about the mean. Having the complete distribution, as shown above in Figure 3, and the cumulative probabilities, as shown in Figure 4, is the key to modern reliability theory and its application to warfighter needs.
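For a series system whose subsystem posteriors are independent beta distributions, the "multiply the sampled reliabilities" step can be sketched directly; the (alpha, beta) pairs below are invented for illustration, whereas in practice they come from each subsystem's own Bayesian update:

```python
import random

random.seed(1)

# Illustrative beta posterior parameters (alpha, beta) for three series subsystems.
subsystems = [(45.0, 2.0), (30.0, 1.5), (60.0, 3.0)]

draws = []
for _ in range(20000):
    r_sys = 1.0
    for a, b in subsystems:
        r_sys *= random.betavariate(a, b)  # sample each subsystem's reliability
    draws.append(r_sys)                    # product = one system-reliability sample

draws.sort()
mean_R = sum(draws) / len(draws)
lcb_90 = draws[int(0.10 * len(draws))]     # 90% one-sided lower bound on R_sys
```

The sorted draws give not just a point estimate but the entire system-reliability distribution, which is exactly the point being made above.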



    Figure 8. Inverted Cumulative Reliability Function, 1-F(R).

The values on the vertical axis indicate the probability that the system reliability is greater than the reliability value shown on the horizontal axis; e.g., the reliability value that the system exceeds with probability 90% is seen to be approximately 0.86.

    11. Application of Bayesian Reliability Model

Raytheon Missile Systems has developed tools to handle both flight tests and ground tests and to incorporate the previous information from the many past flight tests.

The Bayesian approach can integrate ground test data with flight test data to infer lot reliability, and Bayesian methods are the only way to combine this information in a logically and mathematically correct manner.

The fundamental concept in Bayesian reliability is that reliability should be discussed in the context of its being a random variable described by some form of probability distribution function, f(R). This probability density function is called the posterior distribution, and is constructed from prior information, such as results from the flight (FDE) tests, and from additional (likelihood) data that would come from ground and captive carry tests (FGT (PCCC/LTC)).

Bayesian analysis results in a set of distribution functions, called posterior distributions, that represent the predicted ranges of possible reliabilities and the associated probabilities of the components, subsystems, and the full system attaining those reliabilities. A model for this Bayesian analysis process applied to pass/fail or binary experiments is shown below in an abbreviated form for a single component.

Nm and m (the mode) are parameters chosen by the subject matter

f_prior(R) ∝ R^(Nm·m) (1 - R)^(Nm(1-m)),  prior distribution

L(n, s | R) ∝ R^s (1 - R)^(n - s),  likelihood function

f_posterior(R) ∝ R^(Nm·m + s) (1 - R)^(Nm(1-m) + n - s),  posterior distribution


experts/test engineers based on prior analysis, e.g. MIL-HDBK-217, simulation results, or experience from similar systems. The variables s and n are, respectively, the number of successes and the number of tests performed on the component (or system) of interest.

When there is wide variation in expert opinions about Nm, called the accuracy or importance, we can use a distribution for Nm to reflect this uncertainty (Ref: Johnson, Graves, Hamada, and Reese, LANL report LA-UR-01-6915 (2002), pg. 4, formula (2)). An example of such a distribution is given by the following

gamma distribution:

G(Nm | κm, θm) = θm^κm Nm^(κm - 1) e^(-θm Nm) / Γ(κm),  κm ≥ 1, θm > 0.

How to pick κm and θm is then the subject of interest. It turns out that the final results for f_posterior(R) are not particularly sensitive to the values chosen for these two "hyperparameters." The value of m can also be determined by a distribution. It is standard procedure to represent a parameter of interest whose value is not well known by a distribution function. Since the likelihood will "pick out" the values that best agree with the data, keeping priors reasonably broad is popular in situations where a great deal of data is potentially forthcoming.

The formulation of the joint distribution f_posterior(R, Nm | m, n, s, κm, θm) is shown below:

f_posterior(R, Nm | m, n, s, κm, θm) ∝ R^(Nm·m + s) (1 - R)^(Nm(1-m) + n - s) Nm^(κm - 1) e^(-θm Nm)

Now I want to obtain f_posterior(R | m, n, s, κm, θm), so I need to sample from this joint distribution and then integrate out Nm. This may not be possible either analytically or numerically for some large, complex joint distributions, for say 40 components. This is difficult to do, so a method was developed to effectively find the posterior distribution for R: Markov Chain Monte Carlo (MCMC) numerical sampling with a selection rule called Metropolis-Hastings. This set of techniques is explained later in this paper. The need for a numerical Monte Carlo technique is the second most important reason why the reliability community has resisted the use of Bayesian analysis. The most important reason is the process of picking a prior distribution in a manner that seems somewhat arbitrary to reliability engineers who are used to following a tightly controlled prescription, like using Relex and the NPRD and EPRD.
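A minimal random-walk Metropolis-Hastings sampler for a joint posterior of this form can be sketched in a few lines; the hyperparameter values and test data below are purely illustrative:

```python
import math
import random

random.seed(2)

m, n, s = 0.9, 10, 9        # prior mode and illustrative pass/fail data
kappa, theta = 2.0, 0.1     # illustrative gamma hyperparameters for Nm

def log_joint(R, Nm):
    """Unnormalized log of the joint posterior f(R, Nm | m, n, s, kappa, theta)."""
    if not 0.0 < R < 1.0 or Nm <= 0.0:
        return -math.inf
    return ((Nm * m + s) * math.log(R)
            + (Nm * (1 - m) + n - s) * math.log(1 - R)
            + (kappa - 1) * math.log(Nm) - theta * Nm)

R, Nm = 0.5, 10.0
R_samples = []
for _ in range(20000):
    # Symmetric random-walk proposal, so the Metropolis-Hastings acceptance
    # ratio reduces to the ratio of joint densities.
    Rp, Nmp = R + random.gauss(0.0, 0.05), Nm + random.gauss(0.0, 1.0)
    if math.log(random.random()) < log_joint(Rp, Nmp) - log_joint(R, Nm):
        R, Nm = Rp, Nmp
    R_samples.append(R)   # keeping only R marginalizes Nm out automatically

post_mean_R = sum(R_samples[5000:]) / len(R_samples[5000:])
```

Discarding the Nm coordinate of each retained sample performs the "integrate out Nm" step numerically, which is exactly why MCMC is attractive here.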

    12. Time Dependent Reliability with Exponential time-to-first failure.

We have seen above that using pass/fail or binomial statistics, the population parameter of interest is the reliability itself, R, resulting in f_posterior(R). For time-dependent reliability where a component is assumed to have an exponential time-to-failure distribution given by f(t) = λe^(-λt) (R(t) = e^(-λt)), the parameter λ, the mean failure rate, will be assumed to be random. Therefore one needs to start with an assumed prior distribution for λ. One possibility for a prior for λ is a gamma distribution given by

g_prior(λ) = β^α λ^(α-1) e^(-βλ) / Γ(α),  α, β, λ > 0.

Note that g_prior(λ) = Gamma(α, β). Various gamma



[Figure: Gamma distribution g_prior(λ; α, β) for (α, β) = (5, 2), (5, 3), (2, 3), (2, 8), and (20, 1).]

After the experimental results (i.e., n failure times t_i, i = 1, 2, …, n) are available, one inserts the failure times into the likelihood function,

L(data | λ) = Π_{i=1..n} λ e^(-λ t_i) = λ^n e^(-λT),  where T = Σ_{i=1..n} t_i,

and multiplies L by the prior to obtain a posterior distribution for λ, called g_posterior(λ). For this case the posterior distribution for λ is found to be

g_posterior(λ) = Gamma(α + n, β + T).

(See the appendix for the derivation.) We then use g_posterior(λ) when evaluating the time-to-failure distribution by integrating the exponential distribution times the posterior distribution over all values of λ from 0 to infinity:

f(t | data) = ∫_0^∞ λ e^(-λt) g_posterior(λ) dλ ∝ ∫_0^∞ λ^(α+n) e^(-λ(β+T+t)) dλ.

This is a gamma integral and evaluates in closed form to f(t | data) = (α + n)(β + T)^(α+n) / (β + T + t)^(α+n+1). The reliability as a function of time is then

R(t) = ∫_t^∞ f(x | data) dx = ((β + T)/(β + T + t))^(α+n).

The integrals for more elaborate models must be done numerically, and for some simple integrands any old method would work. However, one can bypass the integration altogether by using MCMC techniques. Again I would note that due to this numerical complexity it has been difficult to get the reliability community on board. Little or no reliability software has been designed to easily solve these Bayesian problems. (An exception is Prediction Technologies, Inc. in Hyattsville, MD; Frank Groen, Ph.D., President.)
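The closed-form result can be cross-checked, or the integration bypassed entirely, by Monte Carlo sampling of λ from its gamma posterior; the prior parameters and failure times in this sketch are made up for illustration:

```python
import math
import random

random.seed(3)

alpha, beta = 2.0, 100.0                 # illustrative gamma prior for lambda
times = [120.0, 85.0, 200.0, 150.0]      # illustrative observed failure times

# Conjugate update: lambda | data ~ Gamma(alpha + n, beta + T), T = sum of times.
a_post = alpha + len(times)
b_post = beta + sum(times)

def reliability(t, n_draws=50000):
    """R(t) = E[exp(-lambda t)] over the posterior, computed by Monte Carlo."""
    total = 0.0
    for _ in range(n_draws):
        lam = random.gammavariate(a_post, 1.0 / b_post)  # stdlib scale = 1/rate
        total += math.exp(-lam * t)
    return total / n_draws

# Closed-form cross-check: E[exp(-lambda t)] = (b_post / (b_post + t)) ** a_post.
```

The same sampling pattern carries over unchanged to models, such as the Weibull case below, where no closed form exists.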

Statisticians solve these problems using the "R" programming language or the code "WinBUGS" and create scripts to run the MCMC procedure; however, these open-architecture programs require IT approval before use in classified projects.

This sounds rather complicated but is quite simple and straightforward, as will be shown later in this paper. I will work a problem using time-dependent data later.

13. Time Dependent Reliability with Weibull time-to-first failure.
(See Hamada, et al., Chapter 4, section 4)

Once we have dealt with the exponential distribution, the next logical step is to look at the Weibull distribution, which has two parameters (shape and scale) instead of the single parameter (λ) of the exponential distribution. Now let's address a counting problem, which is very typical of logistics analysis. With two parameters we will need a two-variable prior distribution f_prior(shape, scale), which in some cases can be modeled


[Figure: Pr{m customers | <λ>} versus m, the number of customers in the bank; the 95% confidence interval runs from m = 35 to m = 62.]

by the product of the two single-parameter priors, f_prior(shape)·f_prior(scale), if the parameters can be shown to be independent. Even if independence cannot be proven, one uses the product for mathematical tractability.

    14. Poisson Counting

Let us assume a discrete set of count data {k_i, i = 1, 2, …, n} that we believe comes from a population that has the same characteristic distribution that leads to the count. Normally these are observational studies, such as how many people enter a bank in a given time. We would perform this observation over identical time periods on, say, n consecutive weekdays. I may do this to see if I should stagger the lunch hours of my tellers. Measuring the customer count from 11am to 1pm for each of 10 weekdays produces the following data table.

56 41 57 46 52 42 44 45 58 40

for an average of 48.1 customers. We would like to characterize the probability of the number of customers on any given day (between 11am and 1pm). Our first thought is to use a Poisson distribution, whose single parameter, labeled λ, represents the "average" number of customers in that time slot on any given day. The probability distribution is given by

P(k_i | λ) = λ^(k_i) e^(-λ) / k_i!,  i = 1, 2, …, n,

where k_i is the customer count on the ith day and is considered to be the random variable of interest when λ is known. Classical analysis computes the average of all the k_i counts and uses that average as an estimate for λ, call it <λ>. One then uses <λ> to compute FUTURE probabilities of customer numbers in the bank:

P(k | <λ>) = <λ>^k e^(-<λ>) / k!

Such a probability graph is shown in the figure above, where <λ> = 48.1 is calculated from the above data. The peak probability is around 6% for a customer number of 48 (the closest value to the mean). So what is the problem? Well, λ is seldom known and the confidence interval is very wide. Previous data was used only to provide a point estimate for λ. There is uncertainty in λ, and if we can use this previous information to reduce the uncertainty then let's do it.

When applying Bayesian techniques we wish to account for the variability in the parameter λ, as opposed to calculating some "fixed" estimate. To do this we need to find a distribution for λ (a prior distribution); this prior may be based on many factors, not the least of which is expert opinion from subject matter experts (SMEs). For this demonstration let me assume the prior distribution for λ is given by a gamma distribution, Gamma(a, b): f_prior(λ) = b(bλ)^(a-1) exp(-bλ) / Γ(a).


[Figure: Gamma distribution f(λ; a, b) for several (a, b) parameter pairs.]

The reason for this choice is that a gamma distribution can take on many shapes, from very flat (a = b = 10^-3), which conveys very little prior information, to fairly peaked (a = 8, b = 20). This kind of flexibility allows the reliability engineer some "wiggle room" to represent the most reasonable prior. For the above data one possible set of values is a = 1.0139 and b = 47.442. Given the gamma prior, one has the likelihood function, which represents the data from counting the number of customers that came into the bank over the n recorded days; the likelihood function is simply the product of the Poisson probability mass functions for the n days of data, i.e.

L(k | λ) = Π_{i=1..n} λ^(k_i) e^(-λ) / k_i! = λ^(Σ k_i) e^(-nλ) / Π_{i=1..n} k_i!

Multiplying the prior times the likelihood gives the posterior distribution f_post(λ | k).

Clearly we have another gamma distribution for the posterior distribution of λ:

λ | k ~ Gamma(n·k̄ + a, n + b), i.e.

f_post(λ) = (n + b)((n + b)λ)^(n·k̄ + a - 1) exp(-(n + b)λ) / Γ(n·k̄ + a),

where k̄ = (1/n) Σ_{i=1..n} k_i.

From the properties of the gamma distribution we know the mean and variance are given by

E[λ | k] = (n·k̄ + a)/(n + b),  Var[λ | k] = (n·k̄ + a)/(n + b)².

These expressions can be rewritten in an informative way, as shown below, where the weighting factor is w = n/(n + b). The first expression shows the posterior mean is a weighted sum of the prior mean, E[λ], and the likelihood (sample data) mean, k̄. If the prior is "low weighted," say a = b = 10^-3, then E[λ | k] ≈ the mean of the count data, as it should be.
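The conjugate update and the weighted-mean identity can be checked directly with the bank data; the a = 5, b = 0.1 hyperparameters are the example values used later in this section:

```python
# Conjugate Poisson-gamma update for the bank-customer example.
counts = [56, 41, 57, 46, 52, 42, 44, 45, 58, 40]
n = len(counts)
kbar = sum(counts) / n                    # sample mean, 48.1

a, b = 5.0, 0.1
a_post = n * kbar + a                     # lambda | k ~ Gamma(n*kbar + a, n + b)
b_post = n + b
post_mean = a_post / b_post

# The same mean, written as a weighted sum of sample mean and prior mean.
w = n / (n + b)
weighted = w * kbar + (1 - w) * (a / b)
```

With b = 0.1, the weight w = 10/10.1 puts nearly all of the posterior mean on the data, as the discussion above leads one to expect.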

So how do you use this information? Well, from a reliability perspective we use the Poisson equation to give us some idea of how many spares are needed over some fixed period of time, assuming λ = (fixed failure rate) × (time span of interest). In this example we are interested in the probabilities of having m customers in the bank during the 11am-to-1pm time slot.

Explicitly,

f_post(λ | k) ∝ L(k | λ) f_prior(λ) ∝ [λ^(Σ k_i) e^(-nλ)] × [b^a λ^(a-1) e^(-bλ) / Γ(a)] ∝ λ^(n·k̄ + a - 1) e^(-(n + b)λ).

E[λ | k] = w·k̄ + (1 - w)·E[λ] = [n/(n + b)]·k̄ + [b/(n + b)]·(a/b)

Var[λ | k] = w²·(k̄/n) + (1 - w)²·Var[λ] = [n/(n + b)]²·(k̄/n) + [b/(n + b)]²·(a/b²)


P{m customers in bank | n, k̄, a, b} = [Γ(n·k̄ + a + m) / (Γ(n·k̄ + a) m!)] × ((n + b)/(n + b + 1))^(n·k̄ + a) × (1/(n + b + 1))^m

[Figure: gamma prior f_prior(λ), Poisson likelihood, and gamma posterior f_post(λ) versus λ, the average number of customers.]

From this formula we can compute what the bank really wants to know, namely the probability of having m customers in the bank during this time period. It is at this point that Bayesians get in trouble, because evaluating the above expression requires MatLab or some other software that can handle large numbers. For example, with a = 5, b = 0.1 and, from the data, n = 10, k̄ = 48.1, one must use asymptotic expansions for the gamma functions,

ln(Γ(z)) = (z - 1/2) ln z - z + (1/2) ln(2π) + 1/(12z) - 1/(360z³) + 1/(1260z⁵) - …,

or B(x, y) ~ Γ(y) / x^y for x large, y fixed.

The expression for ln(Γ(z)) is good to 8 decimal places for z > 5.
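In practice the large gamma-function arguments are handled by working with log-gamma directly, which most languages provide; this sketch evaluates the exact gamma-Poisson posterior predictive (a negative binomial) for the bank example rather than the asymptotic form:

```python
import math

def log_pred(m, alpha, beta):
    """Log of the gamma-Poisson posterior predictive (a negative binomial):
    P(m) = Gamma(alpha + m) / (Gamma(alpha) m!) * (beta/(beta+1))**alpha
           * (1/(beta+1))**m, where lambda ~ Gamma(alpha, rate beta)."""
    return (math.lgamma(alpha + m) - math.lgamma(alpha) - math.lgamma(m + 1)
            + alpha * math.log(beta / (beta + 1.0))
            - m * math.log(beta + 1.0))

# Bank example: alpha = n*kbar + a = 486, beta = n + b = 10.1.
alpha, beta = 486.0, 10.1
probs = {m: math.exp(log_pred(m, alpha, beta)) for m in range(20, 81)}
peak = max(probs, key=probs.get)   # most probable customer count
```

Because everything is summed on the log scale, no intermediate quantity overflows even though Γ(486 + m) itself is astronomically large.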

Using the latter expression and some algebraic manipulation one finds

P{m customers in bank | n, k̄, a, b} ≈ [(n·k̄ + a)^m / m!] × ((n + b)/(n + b + 1))^(n·k̄ + a) × (1/(n + b + 1))^m.

A graph of this function for various m values gives the probability graph shown earlier in this section.


    Markov Chain Monte Carlo.

Gibbs sampling for multiple component systems.
I want to discuss the concept of a Gibbs sampler [1,3]. To do this I am going to use a Bayesian reliability modeling tool provided to us by Mr. David Lindblad of Northrop-Grumman Company. The tool they used is a very nice and easy-to-use example of what is called Gibbs sampling and is described in detail below. Using Gibbs sampling with a known beta prior but for a multi-component system is also a fairly easy problem to solve. After this example I will come back to the above problem and introduce Metropolis-Hastings sampling.

To begin to understand this problem, let's return to the beta-binomial problem. Suppose we have, for each component in a series system, the following posterior distribution (i.e. likelihood × prior):

f_post(R_i | N_i, F_i, α₀, β₀) ∝ R_i^(α₀ + N_i - F_i) (1 - R_i)^(β₀ + F_i),

where α₀ and β₀ represent the parameters of the prior distribution (a beta distribution on (0,1)) and are presumed known, N_i = # tests of the ith component, and F_i is the number of failures of the ith component that occurred in the N_i tests.

In the tool shown below the prior for all the components has been chosen (by the customer) to be the same. This is often used as a standard assumption when sufficient information is not available to treat each prior separately. Since the above posterior distribution can be easily normalized (i.e. the integral of the above distribution performed analytically) and the function can be found in Excel (or MatLab), one can multiply a series (of say 10) of these posterior distributions together to find the posterior for the series system.

Sampling from each posterior gives some reliability value (yes?), and taking the product of the sampled reliabilities of the components will produce a sample value of the system reliability for a series RBD (yes?).

Consider the posterior distribution for 10 components, where each posterior is of course a prior times a likelihood for that component:

f_post(R_sys | all N_i, all F_i, α₀, β₀) ∝ Π_{i=1..10} R_i^(α₀ + N_i - F_i) (1 - R_i)^(β₀ + F_i),

where R_sys = Π_{i=1..10} R_i.

The practical question is how to take the information provided by this product of distributions and produce a distribution for R_sys. The technique is called Gibbs sampling, and the Wikipedia reference is shown for easy lookup (http://en.wikipedia.org/wiki/Gibbs_sampling). The discussions in Hamada [1] and Hoff [3] may also be useful. I will describe this technique first by using the example below and then will follow that example with a more theoretical discussion of why it works.



Shown below is a picture of the tool, which draws random samples of reliability values, R_i, for each component (i = 1, 2, …, n) and then uses those samples to compute a system reliability, since we know the distributions (beta distributions).

Digression on random sampling from a distribution: If I wish to randomly sample from a distribution, one technique is to find the inverse function R = F^(-1)(probability, parameters). See Averill Law's book [4], which is used in at least two courses here at RMS. Example using f(t) = λ exp(-λt): I want to randomly find times sampled from this distribution, so I do the following. Using the CDF F(t) = 1 - exp(-λt), I manipulate the equation into the form ln(1 - F) = -λt, which leads to an expression for t given by t = -ln(1 - F)/λ = F^(-1). Since F is uniformly distributed between 0 and 1 (it is, after all, a probability of T < t), we can generate a random number (probability), say U, to represent F (or 1 - F) and find the resulting value for t using t = -ln(U)/λ. So for every random number drawn I get a different value for t. When you draw values of t in this manner, you are randomly sampling from the exponential distribution by using a random number generator that produces numbers for 1 - F between (0,1).
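The recipe in this digression can be written out directly; the rate below is an illustrative value:

```python
import math
import random

random.seed(4)

lam = 0.01   # illustrative failure rate

def sample_exponential(lam):
    """Inverse-CDF sampling: solve U = F(t) = 1 - exp(-lam t) for t."""
    U = random.random()              # uniform on [0, 1), standing in for F
    return -math.log(1.0 - U) / lam

times = [sample_exponential(lam) for _ in range(50000)]
mean_t = sum(times) / len(times)     # should approach the true mean 1/lam = 100
```

The same idea drives the beta sampling below; the only difference is that the beta inverse CDF has no elementary closed form and must be evaluated numerically.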

In the case of the beta distribution the sampling is not as easy as for the exponential distribution discussed above, but suffice it to say that it can be done; in Excel the command is BETAINV(RAND(), A, B, 0, 1), where instead of A & B you actually have cell references that hold the alpha & beta parameter values. I will show this later.

Note that I have put into the first position of the BETAINV command the expression for generating a random number between 0 and 1 [RAND()]. This command will generate a new random number whenever there is a change on the Excel spreadsheet, so you may want to change the calculation option from automatic to manual, so that pressing F9 is the manner in which calculations are updated. Just a hint.

    Let me dissect this chart piece by piece.

In the above chart there are 10 components, and each component has a distribution given by the previously shown posterior distribution f_post(R_i | N_i, F_i, α₀, β₀) ∝ R_i^(α₀ + N_i - F_i) (1 - R_i)^(β₀ + F_i). Let me describe the terms in the column for component 1. This shows that component 1 had N = 22 tests performed on it

[Spreadsheet screenshot: 10 components with rows for N(i) = # tests, F(i) = # failed, Alpha(i), Beta(i), the point estimate [N(i)-F(i)]/N(i), and the Monte Carlo mean Mean(MCi), followed by 50,000 iterations of sampled reliabilities and their product; Alpha(0) = 1.361457, Beta(0) = 0.097716, demonstrated mean 0.792718, product of MC means 0.777148.]

f_post(R_i | N_i, F_i, α₀, β₀) = R_i^(α₀ + N_i - F_i) (1 - R_i)^(β₀ + F_i) / B(α₀ + N_i - F_i + 1, β₀ + F_i + 1)


[Figure: probability density function for component 1; histogram of sampled reliabilities with fitted beta overlay, R from 0.7 to 1.]

and of these 22 tests there was 1 failure. The row labeled Alpha(i), having the value 22.3615, is calculated as Alpha(0) + (N(1) - F(1)) = 1.361457 + (22 - 1) = 22.3615, rounded to 4 decimal places.

The values Alpha(0) and Beta(0) are shown in the column on the far left and represent the exponents α₀ and β₀ in the prior distribution f_prior(R_i) ∝ R_i^(α₀) (1 - R_i)^(β₀), where in this formulation all components have the exact same prior. I will discuss this choice later, along with how the above values for α₀ and β₀ were calculated, as this is not obvious.

Similarly, the row labeled Beta(i) that has the value 1.0977 is found by calculating Beta(0) + F(1) = 0.097716 + 1 = 1.0977, rounded to 4 decimal places.

The next row, labeled [N(i)-F(i)]/N(i) = number of successes / number of tests, gives for the first component (22 - 1)/22 = 0.9545, and this is the "point estimate" that is traditionally taken as the reliability of that component. This will be compared to the mean of the Bayesian distribution, as will be explained below.

    The figure below shows the formulas in each cell. We are concentrating on Column E.


Given the values for the first 8 rows of column E (component 1), we can move on to the actual sampling taking place in rows 9 through 50009.

The first iteration for component 1 is given by the following Excel statement: IFERROR(BETAINV(RAND(),F$4,F$5,0,1),1). This statement performs a random sample of the beta distribution whose parameters are ALPHA(i) and BETA(i)

[Spreadsheet screenshot, formula view of columns A-F: Alpha(0) = (((2/3)^(1/$B$2))-1)/(1-((4/3)^(1/$B$2))), Beta(0) = B4*((1-(1/2)^(1/$B$2))/(1/2)^(1/$B$2)), Alpha(i) = E2-E3+$B$4, Beta(i) = $B$5+E3, Mean(MCi) = AVERAGE of the sampled column, Lower Bound = PERCENTILE of the Product column at 1 minus the confidence level.]


    Comp # 1
    N(i) = # tests: 22    F(i) = # failed: 1
    Alpha(i) = 22.3615    Beta(i) = 1.0977
    [N(i)-F(i)]/N(i) = 0.9545    Mean(MCi) = 0.9533

    Iter #   Product   R1
    1        0.9104    0.9717
    2        0.7775    0.9740
    3        0.7849    0.9804
    4        0.8210    0.9706
    5        0.7579    0.9253
    6        0.7057    0.9786
    7        0.7613    0.9586

    from rows 4 and 5 respectively. In this particular case the reliability value produced by the random sample = 0.9717. The second sample for component 1 is 0.9740. Now we do the same sampling process again and again for 50,000 iterations. The values of reliability, randomly selected from this beta distribution,

    are then displayed in a histogram and subsequently fit to a beta distribution, which I did only to prove that the random sample produced the distribution it was supposed to emulate. The pdf is shown in the graph above. The estimated parameters from the data give (Alpha(1) = 21.435, Beta(1) = 1.0878) compared to the known distribution values of (22.3615, 1.0977). This is considered to be a good fit. Note that one can find some information of interest, such as Pr{R>0.863} = 0.95. Now if we only had one component the problem would be easy, but let's look at all 10 components in series.

    The product of the randomly sampled reliabilities for any given iteration is given in the column named “Product”; the calculation in this column is the product of all 10 reliabilities, each sampled from its own posterior distribution. The resultant histogram of 50,000 values is shown below. The best fit for this product of reliabilities is also a beta distribution, with shape parameters ALPHA = 28.323 and BETA = 8.1041, and the graphic is shown below.
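The same product-sampling loop can be sketched in Python. Only the first two (tests, failures) pairs below come from the note's spreadsheet; the remaining eight are made-up stand-ins, so the resulting numbers are illustrative only:

```python
import random

random.seed(2)

# System reliability by Monte Carlo: per iteration, sample each component's
# posterior Beta(Alpha(0) + successes, Beta(0) + failures) independently and
# multiply.  Only the first two (n, f) pairs are from the note's spreadsheet;
# the rest are hypothetical stand-ins.
alpha0, beta0 = 1.361457, 0.097716
data = [(22, 1), (25, 1), (30, 2), (18, 0), (40, 3),
        (22, 1), (35, 2), (28, 1), (20, 1), (33, 2)]

def draw_system_reliability():
    r_sys = 1.0
    for n, f in data:
        r_sys *= random.betavariate(alpha0 + (n - f), beta0 + f)
    return r_sys

products = sorted(draw_system_reliability() for _ in range(50_000))

mean_r = sum(products) / len(products)
lcb_95 = products[int(0.05 * len(products))]  # one-sided 95% lower confidence bound
print(round(mean_r, 3), round(lcb_95, 3))
```

Sorting once lets any one-sided lower bound (80%, 90%, 95%) be read off by indexing, which is exactly what the spreadsheet's PERCENTILE call does.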

    One can use these parameters and the resulting distribution to answer key questions about the system reliability, e.g., what are the mean, median, and mode of the reliability? Answer: (0.778, 0.783, 0.794). What are the 80%, 90%, and 95% one-sided lower confidence bounds for the system reliability? Answer: (0.72, 0.69, 0.66) respectively. Worded correctly, we are 95% confident the system reliability is greater than 0.66. Many times we will not obtain a good fit to a posterior distribution, in which case one has to live with

    [Figure: Probability Density Function, 10 components in series. Histogram of R(sys) with fitted Beta overlay; x-axis from 0.44 to 0.96.]


    having some numerical process to extract data of interest. The use of “Gaussian kernels” in analyzing and plotting data is one such method and is available in MATLAB.

    This tool samples independently for each component from what we call the marginal distribution of each reliability posterior distribution, and by marginal we mean we sample R(i) independently of all R(j), j≠i. In the sense of Gibbs sampling we have integrated out all the R(j) variables except for R(i), from which we then sample.

    Digression on Marginal Distributions: If $f_{XY}(x,y)$ is the joint distribution for X and Y, then integrating over all allowable y values gives the marginal distribution for x,

    $f_X(x) = \int_{y_{lower}}^{y_{upper}} f_{XY}(x,y)\,dy$.

    Similarly, the marginal distribution of y is given by

    $f_Y(y) = \int_{x_{lower}}^{x_{upper}} f_{XY}(x,y)\,dx$.
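This definition is easy to check numerically on a toy joint density. The joint f(x,y) = x + y on the unit square is my own example, not from the note; its marginal in x is x + 1/2 by direct integration:

```python
# Numerical check of the marginal-distribution definition for a toy joint
# density f(x, y) = x + y on the unit square (a made-up example).
# Integrating out y gives the marginal f_X(x) = x + 1/2.
def f_joint(x, y):
    return x + y

def marginal_x(x, steps=100_000):
    # midpoint-rule integration of f_joint over y in [0, 1]
    dy = 1.0 / steps
    return sum(f_joint(x, (k + 0.5) * dy) for k in range(steps)) * dy

m = marginal_x(0.3)
print(round(m, 4))   # analytic answer: 0.3 + 0.5 = 0.8
```

For the reliability problem the same integral usually has no closed form, which is exactly why the note turns to sampling methods next.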

    Since one cannot always perform this integral, there are other ways to sample the joint distribution to effectively find what we need. For the analysis shown here the customer decided that having the same prior distribution for all the components was what they wanted, i.e., ALPHA(0) and BETA(0) were the same for all 10 components. How were the values of $\alpha_0$ and $\beta_0$ determined? Good question; the derivation is in Appendix B, but the simple answer is that the customer wanted the prior for the system of all 10 components to be uniform (0,1). This is not the same as making each individual prior uniform (0,1). Interestingly, if you sampled from the product of all 10 priors, multiplied the values together, did this many, many times, and then looked at the resulting prior distribution, you would obtain a uniform (0,1) prior. Seeing the proof is important (Appendix B).
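The spreadsheet's formulas for Alpha(0) and Beta(0) can be checked directly. A sketch, assuming (my reading, not stated explicitly in the note) that those formulas match the first two moments of the product of k component priors to those of a Uniform(0,1) system prior (1/2 and 1/3):

```python
# Check of the spreadsheet's Alpha(0)/Beta(0) formulas for k components,
# each with prior Beta(alpha0, beta0).  Under the moment-matching reading,
# the product R1*...*Rk should have E = 1/2 and E[X^2] = 1/3, the moments
# of a Uniform(0,1) system prior.
k = 10
alpha0 = ((2 / 3) ** (1 / k) - 1) / (1 - (4 / 3) ** (1 / k))
beta0 = alpha0 * (1 - (1 / 2) ** (1 / k)) / (1 / 2) ** (1 / k)

# first and second moments of a single Beta(alpha0, beta0) variable
m1 = alpha0 / (alpha0 + beta0)
m2 = alpha0 * (alpha0 + 1) / ((alpha0 + beta0) * (alpha0 + beta0 + 1))

print(round(alpha0, 6), round(beta0, 6))     # ~1.361457, ~0.097716 (spreadsheet values)
print(round(m1 ** k, 6), round(m2 ** k, 6))  # product moments: 0.5 and ~1/3
```

By independence the moments of the product are the k-th powers of the single-component moments, which is what makes this check a one-liner.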

    Using this tool can be very helpful in predicting reliabilities for systems made up of components for which there are independent (individual component) test results.

    Return to the beta-binomial one-component example. Let us recall that if we assume R and N_m are independent random variables, we obtain a joint probability distribution given by the formula below. Note that the joint distribution CANNOT be separated into two independent distributions, e.g., f(R)f(N_m), so our ability to integrate out N_m is problematic at best. In fact Maple 13, an algebraic program, shows no closed form solution.

    I want to produce a posterior distribution of R by somehow sampling the above joint distribution in a manner that will account for the variability of N_m but not have N_m in the final answer. The solution is Markov Chain Monte Carlo (MCMC), see http://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo. See also the 45-minute video found at http://videolectures.net/mlss09uk_murray_mcmc/, which is well worth viewing.

    Scanning the web provides some additional insight into MCMC and Metropolis – Hastings (MH)sampling. The following comes from Barry Walsh’s 2004 lecture notes at MIT.

    $f_{joint}(R, N_m \mid n, s, \alpha, \beta, a_m, b_m) \propto R^{\,N_m s/n + \alpha - 1}\,(1-R)^{\,N_m(1-s/n) + \beta - 1}\,N_m^{\,a_m - 1}\,e^{-b_m N_m}$



    “A major limitation towards more widespread implementation of Bayesian approaches is that obtaining the posterior distribution often requires the integration of high-dimensional functions. This can be computationally very difficult, but several approaches short of direct integration have been proposed (reviewed by Smith 1991[7], Evans and Swartz 1995[8], Tanner 1996[9]). We focus here on Markov Chain Monte Carlo (MCMC) methods, which attempt to simulate direct draws from some complex distribution of interest. MCMC approaches are so-named because one uses the previous sample values to randomly generate the next sample value, generating a Markov chain (as the transition probabilities between samples are only functions of the most recent sample value). The realization in the early 1990s (Gelfand and Smith 1990[10]) that one particular MCMC method, the Gibbs sampler, is very widely applicable to a broad class of Bayesian problems has sparked a major increase in the application of Bayesian analysis, and this interest is likely to continue expanding for some time to come.”

    “MCMC methods have their roots in the Metropolis algorithm (Metropolis and Ulam 1949[11], and Metropolis et al. 1953[11]), an attempt by physicists to compute complex integrals by expressing them as expectations for some distribution and then estimate this expectation by drawing samples from that distribution. The Gibbs sampler (Geman and Geman 1984[12]) has its origins in image processing. It is thus somewhat ironic that the powerful machinery of MCMC methods had essentially no impact on the field of statistics until rather recently. Excellent (and detailed) treatments of MCMC methods are found in Tanner (1996) and Chapter two of Draper (2000)[13].”

    When Monte Carlo calculations are performed, one draws randomly from some distribution to determine or evaluate a property of interest. The words “Markov chain” refer to the fact that we are only concerned with the most recent value sampled, the one just before the sample we are about to take; we are not interested in the history of samples prior to the most recent. How this is implemented and why it works will be shown by direct example in the paragraphs that follow.

    Let me suggest a method for sampling from the joint distribution shown earlier in this note. Let me first sample a value of N_m from the Gamma distribution for N_m, i.e.,

    $G(N_m \mid a_m, b_m) = \dfrac{b_m^{\,a_m}}{\Gamma(a_m)}\,N_m^{\,a_m - 1}\,e^{-b_m N_m}, \quad a_m \ge 1,\; b_m > 0$
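Before walking through the spreadsheet, here is a Python equivalent of drawing N_m from this Gamma prior. Note that random.gammavariate takes a shape and a scale, so with b_m as a rate parameter the scale argument is 1/b_m (that rate reading is my assumption, consistent with the e^(-b_m N_m) factor above):

```python
import random

random.seed(3)

# Draws of N_m from the Gamma prior G(N_m | a_m, b_m) with shape a_m and
# rate b_m.  Spreadsheet values: am = 2.0, bm = 1.414.
# random.gammavariate(shape, scale) wants the scale, i.e. 1/b_m.
a_m, b_m = 2.0, 1.414

draws = [random.gammavariate(a_m, 1.0 / b_m) for _ in range(100_000)]
mean_nm = sum(draws) / len(draws)
print(round(mean_nm, 2))   # should sit near a_m / b_m = 1.414
```

The sample mean near a_m/b_m is a quick sanity check that the shape/rate convention was applied the right way around.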

    This can be done with Excel worksheets as follows. Begin with the column labeled N_m(i), which I start out with some initial number (2.0 in the example below). In the column f(Nm), calculate the entire prior joint distribution.

    Step 1. Select an initial value for N_m (say 2.0 for this example).

    Step 2. Using the value for N_m, calculate the joint distribution.

    Step 3. Then calculate in some manner a possible new estimate for N_m, labeled Cond Nm in the spreadsheet. The rule I use for simplicity is that the new (conditional) value for N_m must be chosen

    $f_{joint}(R, N_m \mid n, s, \alpha, \beta, a_m, b_m) \propto R^{\,N_m s/n + \alpha - 1}\,(1-R)^{\,N_m(1-s/n) + \beta - 1}\,N_m^{\,a_m - 1}\,e^{-b_m N_m}$


    randomly from a distribution that is symmetric about the “old” value of N_m (i.e., 2). I have used NORMINV(RAND(),Nm(i),sigma) to find Nm(cond). (See spreadsheet.)

    Step 4. Using this N_m(cond), you again evaluate the joint distribution, but now with the new conditional value of N_m.

    This was the Markov portion of the problem, since we only used the immediate past iteration value of N_m to help find a new value N_m(cond).

    Step 5. Now comes the Metropolis-Hastings part. I evaluate the ratio r = f_joint(N_m(cond))/f_joint(N_m(i)).

    Step 6. I generate a random number between (0,1), call it RN, and if r > RN then I set N_m(i+1) = N_m(cond); otherwise I set N_m(i+1) = N_m(i).

    Step 7. Now, using this NEW value of N_m(i+1), I proceed to calculate the values of R(i) shown in the next set of columns in the spreadsheet.

    Step 7a. I also set the N_m(i) value in the next row down in the spreadsheet equal to N_m(i+1), getting ready for another iteration.

    Step 8. Starting with some initial value of R(i) (in this example 0.950), I take this value and evaluate the joint distribution f(R(i), N_m(i+1)).

    Step 9. I find some new possible value R(cond), much as I had done for N_m(i); in this case I use R(cond) = NORMINV(RAND(),R(i),sigmaR), and I must of course be careful not to use any R(cond) value that is > 1 or < 0.

    Step 10. Using this R(cond), I evaluate the joint distribution f(R(cond), N_m(i+1)).

    Step 11. I evaluate the ratio r = f_joint(R(cond))/f_joint(R(i)).

    Step 12. I generate a random number RN1 between (0,1), and if r > RN1 I set R(i+1) = R(cond); otherwise I set R(i+1) = R(i).

    Step 13. Set the next row's R(i) value equal to R(i+1) in anticipation of the next iteration.

    $f_{joint}(R, N_m(cond) \mid n, s, \alpha, \beta, a_m, b_m) \propto R^{\,N_m(cond)\,s/n + \alpha - 1}\,(1-R)^{\,N_m(cond)(1-s/n) + \beta - 1}\,\big(N_m(cond)\big)^{\,a_m - 1}\,e^{-b_m N_m(cond)}$
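The thirteen steps condense into a short sampler. This is a sketch, not the author's spreadsheet: it assumes the joint density has the form reconstructed in this section (the exponents involving N_m s/n are my reading of the garbled original), works with log densities for numerical safety, and takes its parameter values from the spreadsheet header (n = 22, s = 21, a_m = 2, b_m = 1.414, sigma = 0.1, sigmaR = 0.005):

```python
import math
import random

random.seed(4)

# Metropolis-Hastings sampler for the joint f(R, Nm), assuming
#   f(R, Nm) ~ R^(Nm*s/n + a - 1) (1-R)^(Nm*(1-s/n) + b - 1) Nm^(am-1) e^(-bm*Nm)
n, s = 22, 21
alpha, beta = 1.361457, 0.097716
a_m, b_m = 2.0, 1.414
sigma_nm, sigma_r = 0.1, 0.005        # proposal widths from the spreadsheet

def log_joint(r, nm):
    if not (0.0 < r < 1.0) or nm <= 0.0:
        return -math.inf              # steps 6/12 will then always reject
    return ((nm * s / n + alpha - 1) * math.log(r)
            + (nm * (1 - s / n) + beta - 1) * math.log(1 - r)
            + (a_m - 1) * math.log(nm) - b_m * nm)

nm, r = 2.0, 0.95                     # steps 1 and 8: initial values
chain_r = []
for _ in range(20_000):
    # steps 3-6: symmetric normal proposal for Nm, then accept/reject
    nm_cond = random.gauss(nm, sigma_nm)
    if math.log(random.random()) < log_joint(r, nm_cond) - log_joint(r, nm):
        nm = nm_cond
    # steps 9-12: same move for R, using the updated Nm
    r_cond = random.gauss(r, sigma_r)
    if math.log(random.random()) < log_joint(r_cond, nm) - log_joint(r, nm):
        r = r_cond
    chain_r.append(r)

post = chain_r[10_000:]               # discard burn-in
print(round(sum(post) / len(post), 3))  # spreadsheet reports Mean R = 0.921
```

Comparing log densities rather than the raw ratio r avoids under/overflow, but it is the same accept/reject rule as steps 5-6 and 11-12.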


    [Figure: Probability Density Function f(R) using MH & MCMC. Histogram of R with fitted Beta overlay; x-axis from 0.72 to 1.]

    Go back to step 1, but now one row down (the next iteration), and use N_m(i) = N_m(i+1) and R(i) = R(i+1). If you do this enough times (some discussion is needed here on exactly how many iterations are enough) and take, say, the last 1000 iterations, you will find a distribution for R that is stationary and converges to the posterior distribution of R given the information, including the effects of N_m. By the way, you can also plot the posterior distribution of N_m given the information and influence of R. As I show you the real spreadsheet, much of this will become clearer.

    A fit to a known probability distribution is shown at left (Alpha = 25.64, Beta = 2.53). Normally one would use some type of smoothing kernel to plot the data and use what the iteration process has given.

    What is happening here? What does the Metropolis-Hastings rule do in helping us select which values of R (and N_m) to keep and which ones to ignore? Consider the following graph of a typical distribution function (Dr. David King put this together for our AF customer).

    [Spreadsheet: MCMC iteration table. Parameters: am = 2.000, bm = 1.414, sigma = 0.1, sigmaR = 0.005, n = 22, s = 21; MH test Mean Nm = 2.102, Mean R = 0.921; fraction of proposals accepted = 54% for both chains. Columns: Iteration, Nm(i), f(Nm), Cond Nm, f(condNm), ratio f(cond)/f(old), RN, Nm(i+1), change, R(i), f(R(i),Nm(i+1)), R(cond), f(R(cond),Nm(i+1)), f(cond)/f(old), RN, R(i+1), change; the first 12 iterations are shown.]


    In the above figure one notionally sees that the MCMC process is very straightforward: you evaluate the joint distribution at the old value and at the proposed new value, and you ask whether you should keep the proposed new value or stick with the old one. So you move up or down the above curve depending on the ratio of the distributions under the new vs. old values and a random number. We clearly will keep the new value if the ratio of the distributions is > 1. Note: If we did only this, then we would never fill in the distribution, since you would not sample any values “downhill” from the place you tested. This is shown in a simple example on the next page.
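The point about downhill moves can be demonstrated on a toy target of my choosing: run the same random-walk chain on a standard normal density twice, once accepting only uphill proposals and once with the full Metropolis rule, and compare the spread of the resulting samples:

```python
import math
import random

random.seed(5)

# Greedy (uphill-only) vs. Metropolis acceptance on a standard normal target.
# The greedy chain climbs to the mode and gets stuck; the Metropolis chain
# also takes downhill moves with probability equal to the density ratio,
# which is what fills in the whole distribution.
def density(x):
    return math.exp(-x * x / 2)

def run_chain(metropolis, steps=50_000):
    x, out = 3.0, []
    for _ in range(steps):
        x_new = random.gauss(x, 0.5)
        ratio = density(x_new) / density(x)
        if ratio > 1 or (metropolis and random.random() < ratio):
            x = x_new
        out.append(x)
    return out

def spread(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((v - m) ** 2 for v in xs) / len(xs))

greedy, mh = run_chain(False), run_chain(True)
print(round(spread(greedy[10_000:]), 2), round(spread(mh[10_000:]), 2))
```

The Metropolis chain recovers a standard deviation near 1, the true value for this target, while the uphill-only chain collapses onto the mode.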


    Bayesian Reliability Codes: RBRT2

    There are two codes available for RMS use: RBRT2, which was developed for the AF on a contract, and SRFYDO, which was produced at Los Alamos (LANL). RBRT2 was developed to handle 41 different components. Information on prior distributions for all 41 components is required. In addition, since RBRT2 was designed to evaluate different testing modes (flight vs. ground), data was provided on the stress of each test environment compared to the flight test environment. In principle this information could be provided in many ways, but the most convenient way was in terms of a stress factor that was part of an Excel spreadsheet of input values. The use of RBRT2 is a full day's course in itself, and I only want to show some of its results for the test cases we ran. Here are the equations used for RBRT2. Since we only had pass/fail data and we needed to find some way to account for different test environments, we chose to model this using Binary Logistic Regression (BLR).
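The BLR idea can be sketched generically: the probability of failing a test is a logistic function of the environment's stress factor, so ground and flight tests share one set of coefficients. The coefficients and stress factors below are invented for illustration and are not RBRT2's actual values:

```python
import math

# Binary-logistic-regression sketch: Pr{fail} = logistic(b0 + b1 * stress).
# The coefficients b0, b1 and the stress factors are hypothetical; in RBRT2
# the stress factors came from an input spreadsheet.
def p_fail(stress, b0=-4.0, b1=1.5):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * stress)))

ground, flight = 1.0, 2.0            # hypothetical stress factors
r_ground = 1 - p_fail(ground)        # reliability demonstrated in ground tests
r_flight = 1 - p_fail(flight)        # implied reliability in the harsher flight environment
print(round(r_ground, 3), round(r_flight, 3))
```

With a shared coefficient pair, pass/fail data taken in one environment constrains the failure probability predicted for the other, which is the whole point of folding the stress factor into the likelihood.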


    Appendix A. Assumptions Used in Bayesian Reliability.

    It is important to understand some basic concepts and assumptions.

    Reliability is defined by MIL-STD-721C to be "the probability that an item can perform its intended function for a specified interval under stated conditions." If the stated condition is a mission, then reliability refers to "the ability of an item to perform its required functions for the duration of a specified mission profile." The mission profile is defined to be "a time-phased description of the events and environments an item experiences from initiation to completion of a specified mission, to include the criteria of mission success or critical failures." If the item is redundant, portions may fail during the mission, but the function must continue without mission interruption. If the item is not redundant, the definition implies failure-free performance under the stated conditions for the specified interval.

    For the practical application of these definitions, several points are worth noting.


    1. Reliability is an expression of our confidence in the ability of the system to carry out a mission, and is ultimately expressed as a probability, a number that is always between 0 and 1. There is NOT a single number (often called a “point estimate”) that represents the reliability of a lot of units in the field, or of any single unit for that matter. To correctly represent reliability we must talk about a distribution of possible reliabilities, f(R).

    That is, one can find the probability that a unit picked randomly from a lot will have a probability of success (i.e., reliability) greater than some desired value R, e.g., 1-F(R), or within some range of values. For example, the probability that a unit will have a reliability greater than 0.90 might be 80%. This is a probability (80%) of a probability (0.90 or 90%), and though it sounds abstract, and may be confusing, this is the correct way to think about and ask questions about reliability. When translated into actual experience, it is a statement that we expect the system to succeed with some probability (0.90) with some confidence level (80%).

    2. Reliability distributions depend on a specified system and a specified mission. When either the system or the mission changes, so does the reliability distribution.

    3. Remember, reliability is NOT a single fixed value that characterizes a population; it has a distribution of many possible values, f(R).

    4. All possible values of reliability for a system/subsystem/component are < 1. A perfectly reliable system is impossible, but the distribution of possible reliabilities can be made very high, usually at great expense if done post-design. One can (hopefully) design system configurations and use sound engineering practices, reliable components, and de-rating techniques to achieve “very high probabilities of having high reliability” for a given mission.

    5. One measures system failures by observing very large numbers of actual mission attempts and calculating the fraction of those that succeed. This information goes into a likelihood function. Rarely are large numbers of tests feasible, and these tests alone do not give a full answer, even if testing large numbers is possible. Alternative strategies involving smaller sample sizes, simulated or partial missions, engineering analysis, or sometimes simply engineering judgment must be employed, and these can be folded into the reliability results when using Bayesian probability techniques.

    6. A credible estimate of the reliability distribution, even if difficult to make, is essential at each stage of system development. Experience has long shown that if reliability distributions are not carefully estimated and if timely actions are not taken, the reliability of the completed system will seldom be good enough, and the choice will be between a useless product and one that must be modified at a much higher price than the cost of taking appropriate action during development.

    7. Note: In practice one uses fault tree analyses and simulations to try to remove from the product design all possible failure modes that have significant probabilities of occurrence. The product manufacturing and fabrication process usually introduces new failure modes into complex systems (roughly 80% of all failures in complex systems are due to design errors and/or human errors and poor manufacturing practices). These failure modes are next to impossible to model, and many times failures during the testing phase spotlight these modes. Once found, these modes 1) can be

  • 8/17/2019 Bayesian Statistics Applied to Reliability Analysis

    35/44

    9/23/12

    35

    designed out of the product or, 2) the assembly process can be changed to remove those modes. Seldom are all the failure modes found, and even some of those found are too expensive to fix. Thus one lives with the finite probability that certain failures will occur in some fraction of the systems used. To believe all systems can be made perfectly reliable flies in the face of both history and mathematics. This is why one should always consider some range of possible reliabilities for any complex system when trying to quantify mission success. See reliability growth (AMSAA-Crow methods, http://www.amsaa.army.mil/ReliabilityTechnology/Projection.html, http://www.weibull.com/RelGrowthWeb/Crow-AMSAA_(N.H.P.P.).htm, http://www.barringer1.com/nov02prb.htm) for instructions in using the analysis method.
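Point 1's "probability of a probability" is easy to compute once f(R) is in hand. As an example, using the component-1 posterior Beta(22.3615, 1.0977) from earlier in the note, for which the note gives Pr{R > 0.863} ≈ 0.95:

```python
import random

random.seed(6)

# Estimate Pr{R > r0} for a reliability distribution f(R) by Monte Carlo.
# Here f(R) is the component-1 posterior Beta(22.3615, 1.0977) from the
# earlier worked example; the note quotes Pr{R > 0.863} as about 0.95.
alpha_i, beta_i = 22.3615, 1.0977
r0 = 0.863

samples = [random.betavariate(alpha_i, beta_i) for _ in range(200_000)]
p = sum(1 for r in samples if r > r0) / len(samples)
print(round(p, 2))
```

Read aloud, the result is the statement from point 1: we are roughly 95% confident that a randomly drawn unit has reliability greater than 0.863.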



    Appendix B. Examples of Prior, Likelihood and Posterior Distributions for Pass/Fail Data

    The above graphs illustrate the case where the tes