Copyright © 2006 by K.S. Trivedi 1
Probability and Statistics with Reliability, Queuing and Computer Science Applications: Second edition
by K.S. Trivedi
Publisher: John Wiley & Sons
Chapter 3: Continuous Random Variables
Dept. of Electrical & Computer Engineering, Duke University
Email: [email protected]
URL: www.ee.duke.edu/~kst
Definitions
Distribution function F_X(x):
If F_X(x) is a continuous function of x, then X is a continuous random variable.
If F_X(x) grows only by jumps, X is a discrete rv; if F_X(x) exhibits both jumps and continuous growth, X is a mixed rv.
Note
We will also allow defective distributions (also known as improper distributions). They will be covered later and are very useful in computer science applications. These distributions satisfy (F1), (F2) and a modified version of (F3): lim_{x→∞} F(x) < 1.
Unless otherwise specified, we will assume all distributions to be non-defective.
Definitions (Contd.)
Equivalence of names for F_X(x) (also written F_X(t) or F(t)):
CDF (Cumulative Distribution Function)
Probability Distribution Function (PDF), but avoid this name as it can be confused with the pdf (probability density function)
Distribution function
probability density function (pdf)
X : continuous rv; the CDF and pdf can then be derived from each other.
pdf properties:
(f1) f(x) ≥ 0 for all x
(f2) ∫_{−∞}^{∞} f(x) dx = 1
Definitions (Continued)
Equivalence of names for the pdf:
probability density function
density function
density

f(t) = dF(t)/dt

F(t) = ∫_{−∞}^{t} f(x) dx = ∫_{0}^{t} f(x) dx  (for a non-negative random variable)
Example 3.1
Random variable X : time (in years) to complete a project.
f_X clearly satisfies property (f1). To be a pdf, it must also satisfy (f2).
Probability of completing the project in less than 4 months: P(X < 1/3) = F_X(1/3).
Exponential Distribution
Arises commonly in reliability & queuing theory.
A non-negative continuous random variable.
It exhibits the memoryless property.
Related to the (discrete) Poisson distribution.
Often used to model:
Interarrival times between two IP packets (or voice calls)
Service time distribution
Time to failure, time to repair, etc.
Exponential Distribution
The use of the exponential distribution is an assumption that needs to be validated against experimental data; if the data do not support the assumption, other distributions may be used. For instance, the Weibull distribution is often used to model time to failure, and a Markov modulated Poisson process is used to model the arrival of IP packets.
Exponential Distribution
Mathematically, the CDF is given as:
F(t) = 1 − e^{−λt}, t ≥ 0
Also, the pdf is:
f(t) = λ e^{−λt}, t ≥ 0
CDF of exponentially distributed random variable with λ = 0.0001 (plotted for t from 0 to 50,000)
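As a quick numerical check (a small sketch; the rate λ = 0.0001 matches the plot above), the CDF can be evaluated at the plotted time points:

```python
import math

lam = 0.0001  # rate parameter used in the plot above

def exp_cdf(t, lam):
    # F(t) = 1 - exp(-lam * t) for t >= 0
    return 1 - math.exp(-lam * t)

for t in [12500, 25000, 37500, 50000]:
    print(t, round(exp_cdf(t, lam), 4))
```

The values climb toward 1 as t grows, tracing the curve in the figure.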
Exponential Density Function (pdf)
Memoryless property
Assume X > t, i.e., we have observed that the component has not failed until time t.
Let Y = X − t, the remaining (residual) lifetime. Then:

G_Y(y|t) = P(Y ≤ y | X > t)
         = P(X ≤ y + t | X > t)
         = P(t < X ≤ y + t) / P(X > t)
         = [F(t + y) − F(t)] / [1 − F(t)]
         = 1 − e^{−λy}
Memoryless property
Thus GY(y|t) is independent of t and is identical to the original exponential distribution of X.
The distribution of the remaining life does not depend on how long the component has been operating.
Its eventual breakdown is the result of some suddenly appearing failure, not of gradual deterioration.
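The memoryless property can be checked numerically (a small sketch; λ, t and y are arbitrary illustrative values):

```python
import math

lam = 0.5            # arbitrary illustrative rate

def surv(x):         # P(X > x) for X ~ EXP(lam)
    return math.exp(-lam * x)

t, y = 2.0, 1.5
# conditional residual-life CDF: G_Y(y|t) = 1 - P(X > t + y)/P(X > t)
cond_cdf = 1 - surv(t + y) / surv(t)
uncond_cdf = 1 - surv(y)   # original exponential CDF at y
print(abs(cond_cdf - uncond_cdf))  # ~0: residual life is again EXP(lam)
```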
Memoryless property
Only Continuous Distribution with Memoryless property
X is a nonnegative rv with the memoryless property:
P(X ≤ y + t | X > t) = F(y), i.e., F(t + y) − F(y) = F(t)[1 − F(y)].
Dividing by t and taking the limit as t approaches zero,
F′(y) = f(0)[1 − F(y)].
Only Continuous Distribution with Memoryless property
The solution to this differential equation is given by
1 − F(y) = K e^{−f(0) y},
where K is a constant. Since F(0) = 0, we have K = 1, and denoting f(0) by the constant λ,
F(y) = 1 − e^{−λy}.
Therefore X must have the exponential distribution.
Example 3.2
A discrete rv N_t : the number of jobs arriving at a file server in the interval (0, t]. Let N_t be Poisson distributed (parameter λt). X : time to the next arrival.
Therefore,
P(X > t) = P(N_t = 0) = e^{−λt}, so F_X(t) = 1 − e^{−λt}.
X is exponentially distributed with parameter λ.
Example 3.3
Web server: the time to the next request is random. Average rate of requests: λ = 0.1 requests/sec. The number of request arrivals per second is Poisson distributed, or equivalently the inter-arrival times are EXP(λ). Therefore,
F_X(t) = 1 − e^{−0.1 t}.
Reliability as a Function of Time
Reliability R(t): the probability that no failure occurs during the interval (0, t]. Let X be the lifetime of a component subject to failures.
Let N_0 = total no. of components (fixed); N_s(t) = no. surviving at time t; N_f(t) = no. failed by time t. Then
R(t) ≈ N_s(t)/N_0 = 1 − N_f(t)/N_0.
Definitions (Contd.)
Equivalence of names for R(t) = 1 − F(t):
Reliability
Complementary distribution function
Survivor function
Failure Rate or Hazard Rate
Instantaneous failure rate h(t) (no. of failures per time unit):
h(t) = lim_{Δt→0} P(t < X ≤ t + Δt | X > t) / Δt = f(t) / [1 − F(t)]
As a special case, let the rv X be EXP(λ). Then the failure rate is time (or age) independent: h(t) = λe^{−λt}/e^{−λt} = λ.
This is the only continuous distribution with a constant failure rate (CFR).
Hazard Rate and the pdf
h(t) Δt = conditional probability of the system failing in (t, t + Δt], given that it has survived until time t.
f(t) Δt = unconditional probability of the system failing in (t, t + Δt].
Analogous to the difference between:
the probability that someone will die between 90 and 91, given that he lives to 90
the probability that someone will die between 90 and 91

h(t) = f(t) / R(t) = f(t) / [1 − F(t)]
Reliability from Failure Rate
In the general case, reliability R(t) can be related to the hazard rate in the following way. Using simple calculus, the following applies to any rv:
R(t) = exp(−∫_0^t h(u) du)
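This relation can be checked numerically (a sketch; the Weibull hazard and the parameter values below are illustrative assumptions): integrating a hazard rate recovers the closed-form reliability.

```python
import math

# illustrative Weibull hazard: h(t) = lam * alpha * t**(alpha - 1)
lam, alpha = 0.5, 2.0

def h(t):
    return lam * alpha * t ** (alpha - 1)

def R_numeric(t, n=20000):
    # R(t) = exp(-integral_0^t h(u) du), via a left Riemann sum
    dt = t / n
    H = sum(h(i * dt) * dt for i in range(n))
    return math.exp(-H)

t = 1.5
R_exact = math.exp(-lam * t ** alpha)   # closed-form Weibull reliability
print(round(R_numeric(t), 4), round(R_exact, 4))
```

The two values agree to within the discretization error of the Riemann sum.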
Failure-Time Distributions: Relationships

f(t) = F′(t) = −R′(t) = h(t) exp(−∫_0^t h(u) du)
F(t) = ∫_0^t f(u) du = 1 − R(t) = 1 − exp(−∫_0^t h(u) du)
R(t) = ∫_t^∞ f(u) du = 1 − F(t) = exp(−∫_0^t h(u) du)
h(t) = f(t) / ∫_t^∞ f(u) du = F′(t) / [1 − F(t)] = −(d/dt) log R(t)
Bathtub curve: h(t) vs. t has three phases:
DFR phase (decreasing failure rate; burn-in period): initial design, constant bug fixes
CFR phase (constant failure rate; useful life): normal operational phase
IFR phase (increasing failure rate; wear-out phase): aging behavior
Weibull Distribution
Frequently used to model fatigue failure, ball bearing failure, etc. (very long tails).
F(t) = 1 − e^{−λt^α}, t ≥ 0
Reliability: R(t) = e^{−λt^α}, t ≥ 0
The Weibull distribution is capable of modeling DFR (α < 1), CFR (α = 1) and IFR (α > 1) behavior.
α is called the shape parameter and λ is the scale parameter.
Weibull Distribution (alternate form)
Some texts use a slightly different form for the Weibull:
F(t) = 1 − e^{−(λt)^α}, t ≥ 0
Reliability: R(t) = e^{−(λt)^α}, t ≥ 0
In this text we will use the definition on the previous slide.
Example 3.4
Lifetime X : Weibull distributed with a given shape parameter α.
Observation: 15% of components last 90 hrs but fail before 100 hrs, i.e., P(90 < X ≤ 100) = 0.15.
Find the scale parameter λ for this Weibull distribution.
Failure rate of the Weibull distribution with various values of α and λ = 1
Three parameter Weibull Distribution
Sometimes a more complex version of the Weibull is used, so that the image of the random variable is the interval (θ, ∞):
F(t) = 1 − e^{−λ(t−θ)^α}, t ≥ θ  (with location parameter θ)
Infant Mortality Effects in System Modeling
Bathtub curves
Early-life period
Steady-state period
Wear out period
Failure rate models
Early-life Period
Also called the infant mortality phase, reliability growth phase, or decreasing failure rate (DFR) phase.
Caused by undetected hardware/software defects that are being fixed, resulting in reliability growth.
Can cause significant prediction errors if steady-state failure rates are used.
Availability models can be constructed and solved to include this effect.
A DFR Weibull model can be used.
Steady-state Period
Failure rate much lower than in the early-life period.
Either constant (CFR, age independent) or a slowly varying failure rate.
Failures caused by environmental shocks.
The arrival process of environmental shocks can be assumed to be a Poisson process.
Hence the time between two shocks has the exponential distribution.
Wear out Period
Failure rate increases rapidly with age (IFR phase).
Properly qualified electronic hardware does not exhibit wearout failures during its intended service life (as per Motorola).
Applicable to mechanical and other systems.
Again, an (IFR) Weibull failure model can be used to capture such behavior.
Failure Rate Models
We use a truncated Weibull model: the infant mortality phase is modeled by a DFR Weibull and the steady-state phase by the exponential.
(Figure: failure-rate multiplier vs. operating time, 0 to 17,520 hrs)
Failure Rate Models (cont.)
This model has the form:
h_W(t) = C h_ss t^(α−1),  1 ≤ t ≤ 8,760
h_W(t) = h_ss,            t > 8,760
where:
h_ss is the steady-state failure rate,
α is the Weibull shape parameter,
C is a constant, so the failure-rate multiplier is h_W(t)/h_ss = C t^(α−1) (with h_W(1) = C h_ss).
Failure Rate Models (cont.)
There are several ways to incorporate time-dependent failure rates in availability models. The easiest way is to approximate the continuous function by a decreasing step function.
(Figure: failure-rate multiplier vs. operating time, with step levels h1, h2 and h_ss)
Failure Rate Models (contd.)
Here the discrete failure-rate model is defined by:
h_W(t) = h_1,   0 ≤ t < 4,380
h_W(t) = h_2,   4,380 ≤ t < 8,760
h_W(t) = h_ss,  t ≥ 8,760
The approximation can be improved by taking smaller time steps.
HypoExponential (HYPO)
HypoExp: multiple Exp stages in series.
A 2-stage HypoExp is denoted HYPO(λ1, λ2). Its density, distribution and hazard rate functions are (λ1 ≠ λ2):
f(t) = (λ1 λ2 / (λ2 − λ1)) (e^{−λ1 t} − e^{−λ2 t}), t ≥ 0
F(t) = 1 − (λ2 e^{−λ1 t} − λ1 e^{−λ2 t}) / (λ2 − λ1)
h(t) = f(t) / (1 − F(t))
HypoExp is IFR: its h(t) increases from 0 to min{λ1, λ2}.
Disk service time may be modeled as a 3-stage hypoexponential, as the overall time is the sum of the seek, latency and transfer times.
HypoExponential pdf and CDF
(Figure: pdf and CDF of HYPO(1, 5))
HypoExponential used in software rejuvenation models
• Preventive maintenance is useful only if the failure rate is increasing.
• A simple and useful model of increasing failure rate:
(Figure: Robust state → Failure-probable state → Failed state; the time to failure is hypoexponentially distributed, capturing increasing failure rate due to aging)
Erlang Distribution
Special case of HYPO: all stages have the same rate λ.
[X > t] = [N_t < r] (N_t : no. of stresses applied in (0, t]; N_t is Poisson with parameter λt). This interpretation gives
R(t) = P(X > t) = Σ_{k=0}^{r−1} e^{−λt} (λt)^k / k!,  so  F(t) = 1 − Σ_{k=0}^{r−1} e^{−λt} (λt)^k / k!.
Erlang Distribution
If we set the parameter r = 1, we get the exponential distribution.
The Erlang distribution can be used to approximate a deterministic variable: if the mean is kept the same while the number of stages is increased, the pdf approaches the delta (impulse) function in the limit.
Erlang density function
If we vary r keeping r/λ constant, the pdf of the r-stage Erlang approaches an impulse function at r/λ.
Erlang Cumulative Distribution Function
And the CDF approaches a step function at r/λ. In other words, the r-stage Erlang can approximate a deterministic variable.
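This limiting behavior can be sketched numerically (the stage counts and the fixed mean of 1 below are illustrative choices):

```python
import math

def erlang_cdf(t, r, lam):
    # F(t) = P(N_t >= r) = 1 - sum_{k=0}^{r-1} e^{-lam*t} (lam*t)^k / k!
    return 1 - sum(math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
                   for k in range(r))

# keep the mean r/lam fixed at 1 while increasing r:
# the CDF steepens toward a step at t = 1
for r in [1, 5, 50]:
    lam = r
    print(r, round(erlang_cdf(0.5, r, lam), 3), round(erlang_cdf(1.5, r, lam), 3))
```

With r = 50 the CDF is nearly 0 at t = 0.5 and nearly 1 at t = 1.5, approximating a deterministic variable equal to 1.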
Gamma Random Variable
A basic distribution of statistics for non-negative variables (see Section 3.9 and Chapter 10).
Gives the distribution of the time required for exactly r independent events to occur, assuming events take place at a constant rate (p. 131 of the text). Used frequently in queuing theory and reliability theory.
Example: distribution of the time between re-calibrations of an instrument that needs re-calibration after r uses; time between inventory restockings; time to failure for a system with cold standby redundancy (Ex. 3.25).
The Erlang, exponential, and chi-square distributions are special cases.
Gamma Random Variable
The gamma density function is
f(t) = λ^α t^{α−1} e^{−λt} / Γ(α),  t > 0; α, λ > 0,
where Γ(α) = (α−1) Γ(α−1) and Γ(1/2) = √π.
Because Γ(1) = 1, it follows that Γ(r) = (r−1) Γ(r−1) = ... = (r−1)!, so a gamma with an integer-valued shape parameter is the Erlang distribution.
A gamma with shape parameter α = n/2 and rate parameter λ = 1/2 is known as the chi-square random variable with n degrees of freedom.
Gamma distribution: failure rate
The gamma distribution can capture all three types of failure rate behavior, viz. DFR, CFR or IFR, depending on the value of the shape parameter α:
α = 1 : CFR
α < 1 : DFR
α > 1 : IFR
Gamma density function
HyperExponential Distribution (HyperExp)
Hypo and Erlang have sequential Exp( ) stages. When there are alternate Exp( ) stages, the distribution becomes hyperexponential.
CPU service time may be modeled by a HyperExp.
In a workload-based software rejuvenation model, we found that the sojourn times in many workload states have this kind of distribution.
Hyper Exponential Vs. Exponential CDF
The distribution of measured CPU service time may be better described by a HyperExp( ) than by an EXP( ) distribution.
Log-logistic Distribution
The log-logistic can model more complex failure rate behavior than the simple CFR, IFR and DFR cases.
For κ > 1, the failure rate first increases with t; after momentarily leveling off, it decreases with time. This is known as the inverse bathtub-shaped curve.
Useful in modeling software reliability growth.
Log-logistic failure rate
Gaussian (Normal) Random Variable
A basic distribution of statistics. Many applications arise from the central limit theorem (the average of n observations approaches a normal distribution, irrespective of the form of the original distribution, under quite general conditions).
Consequently, an appropriate model for many, but not all, physical phenomena.
Example: distribution of physical measurements on living organisms, intelligence test scores, product dimensions, average temperatures, and so on.
Many methods of statistical analysis presume a normal distribution.
In a normal distribution, about 68% of the values are within one standard deviation of the mean and about 95% are within two standard deviations of the mean.
Gaussian (Normal) Random Variable
Bell-shaped and symmetrical pdf – intuitively pleasing!
Central Limit Theorem: the sum of a large number n of mutually independent rv's (having arbitrary distributions) starts following the normal distribution as n → ∞.
μ: mean, σ: standard deviation, σ2: variance (N(μ, σ2)).
μ and σ completely describe the rv. This is significant in statistical estimation, signal processing, communication theory, etc.
Mean, median and mode are all equal; infinite range.
Normal density with parameters μ = 2 and σ = 1
Normal Distribution (contd.)
The failure rate h(t) exhibits IFR behavior.
Hence, the normal distribution is suitable for modeling long-term wear or aging-related failure phenomena.
See pages 138-140 for examples.
Failure rate for Normal distribution
h(t) for normal distribution is IFR
Infinitely many normal pdfs
By changing the two parameters, we can get infinitely many normal densities
Normal Distribution (contd.)
There is no closed form for the CDF; how do we determine P(a < X < b)?
Answer: use tables, after a transformation to the standard normal.
N(0,1) is called the standard normal distribution.
If X ~ N(μ, σ2), then Z = (X − μ)/σ is N(0,1).
N(0,1) is symmetric, i.e., f(x) = f(−x) and F(−z) = 1 − F(z).
Example 3.5
X: amplitude of an analog signal at a detector; X has a normal distribution N(200, 256).
Find P(X > 240).
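The computation can be sketched with the standard-normal transformation, using the error function to evaluate Φ:

```python
import math

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, var = 200, 256          # X ~ N(200, 256)
sigma = math.sqrt(var)      # std dev = 16
z = (240 - mu) / sigma      # standardize: z = 2.5
p = 1 - Phi(z)              # P(X > 240) = 1 - Phi(2.5)
print(round(p, 5))
```

The same value would be obtained from a standard normal table at z = 2.5.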
Example 3.5 (contd.)
Find P(X > 240 | X > 210).
Example 3.6
X: the wearout-phase lifetime of a subsystem is normal, N(10^5, 10^6) (in hour units).
Find R_9,000(500) and R_11,000(500).
Example 3.6 (contd.)
Similarly,
Uniform Random Variable
Unif(a,b): the pdf is constant over the interval (a, b) and the CDF is a ramp function.
All (pseudo-)random number generators generate random deviates of the Unif(0,1) distribution; that is, if a large number of random variables are generated and their empirical distribution function is plotted, it will approach this distribution in the limit.
Uniform density function
Uniform pdf – Unif(a, b)
Uniform distribution
The distribution function is given by:
F(x) = 0,               x < a
F(x) = (x − a)/(b − a), a ≤ x < b
F(x) = 1,               b ≤ x
Uniform CDF
Erlang to approximate Uniform
The uniform random variable is sometimes approximated by an Erlang random variable; we will see an example of this in Chapter 8.
In the next two slides, the pdfs and CDFs of Unif(0,1) and a 3-stage Erlang with parameter λ = 6 are compared.
Comparison of probability density functions (pdf): 3-stage Erlang pdf vs. U(0,1) pdf.
Comparison of cumulative distribution functions (CDF): 3-stage Erlang cdf vs. U(0,1) cdf.
Pareto Distribution
Also known as the power-law, long-tailed, or heavy-tailed distribution.
Found to be useful in modeling:
CPU time consumed by a request
Web file sizes on Internet servers
Number of data bytes in FTP bursts
Thinking time of a Web browser
Pareto Distribution (Contd.)
The density is given by
f(x) = α k^α x^{−(α+1)},  x ≥ k; α, k > 0.
The distribution function is given by
F(x) = 1 − (k/x)^α,  x ≥ k,
and the failure rate is given by
h(x) = α/x,  x ≥ k.
The CDF of Pareto Distribution
The pdf of Pareto Distribution
Log-normal
Permits representation of a random variable whose logarithm follows the normal distribution. A model for a process arising from many small multiplicative errors. Appropriate when the value of an observed variable is a random proportion of the previously observed value.
When the data are log-normally distributed, the geometric mean acts as a better data descriptor than the mean; the more closely the data follow a log-normal distribution, the closer the geometric mean is to the median.
Example: repair time distribution; life distribution of some transistor types. The pdf is given by:
f(x) = (1/(xσ√(2π))) exp(−(ln x − μ)² / (2σ²)),  x > 0
Defective Distribution
Recall that a distribution may be defective. Example:
If p_c < 1, then F(x) = p_c (1 − e^{−λx}) is a defective exponential distribution.
This defect (also known as the mass at infinity) could represent the probability that the program will not terminate (1 − p_c). The continuous part can model the completion time of the program; we will see many examples in later chapters. There can also be a mass at the origin.
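A quick numerical illustration (p_c and λ are assumed example values):

```python
import math

p_c, lam = 0.95, 0.2   # assumed example values

def F(x):
    # defective exponential: the total probability mass is p_c < 1
    return p_c * (1 - math.exp(-lam * x))

# F(x) tends to p_c, not 1: the mass 1 - p_c sits "at infinity"
print(F(1.0), F(100.0))
```

Here 1 − p_c = 0.05 would be the probability that the program never terminates.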
The CDF of a Defective random variable
Functions of Random Variables
Often, rv's need to be transformed/operated upon.
Y = Φ(X) : what is the distribution or the density of Y?
Example: Y = X²
Example: Y = −λ^{−1} ln(1 − X)
Example 3.8
Distribution for Y = Φ(X) = X²:
F_Y(y) = P(X² ≤ y) = P(−√y ≤ X ≤ √y) = F_X(√y) − F_X(−√y),  y ≥ 0
Example 3.9
In Example 3.8, assume X to be N(0,1). Using the result from Example 3.8:
f_Y(y) = (1/√(2πy)) e^{−y/2},  y > 0,
which is also known as the chi-square distribution with 1 degree of freedom.
Example 3.10
Let X be uniformly distributed, Unif(0,1). Then Y = −λ^{−1} ln(1 − X) is EXP(λ).
This transformation is used to generate a random variate (or deviate) of the EXP(λ) distribution.
Theorem: pdf for a transformed RV
Random Variate Generation
A random variate is a typical value sampled from a given distribution. If we take a large number (ideally infinitely many) of them and plot a histogram, it will approach the original pmf or pdf.
Goal: study methods of generating random deviates of a given distribution, assuming a routine to generate uniformly distributed random numbers is available.
Note that the distribution of interest can be discrete or continuous.
Some Generation Methods
Some popular methods of generating random variates are:
Inverse Transform Method
Convolution Method
Direct Transformation (for the Normal Distribution)
Acceptance-Rejection Method
Inverse Transform
Based on the following idea: if F(x) is a strictly monotonic distribution function and U is uniformly distributed over the interval (0, 1), then the new random variable X = F^{−1}(U) has CDF F.
Method: a random number u from a uniform distribution over (0, 1) is generated, and then F is inverted to produce a random deviate x of X; F^{−1}(u) gives the required value of x.
Example 3.11
Generate a random variate x with distribution F = F_X. Let Y = F(X); then F_Y(u) = P(F(X) ≤ u) = F(F^{−1}(u)) = u, so Y = F(X) is Unif(0,1).
Hence, to generate a random variate (deviate) with distribution F:
1. Generate a random number u.
2. Find x = F^{−1}(u); x will be a random deviate with distribution F.
3. For example, if x = −λ^{−1} ln(1 − u), then x will be a random deviate of the EXP(λ) distribution.
The inversion can be done in closed form, graphically, or using a table.
Inverse Transform
Variates of the exponential, uniform, Weibull, Pareto, Rayleigh, triangular, log-logistic and many other distributions can be generated by this method. Variates of empirical and discrete distributions like the Bernoulli and geometric can also be generated using this idea.
It is most useful when the inverse of the CDF F(·) can be easily computed in closed form, although a numerical or tabular method can also be used.
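The method can be sketched for the exponential case (the rate and sample size below are illustrative; the sample mean should approach 1/λ):

```python
import math
import random

random.seed(42)          # fixed seed for reproducibility
lam = 2.0                # illustrative rate
# inverse transform: x = -ln(1 - u)/lam is an EXP(lam) deviate
samples = [-math.log(1 - random.random()) / lam for _ in range(200000)]
mean = sum(samples) / len(samples)
print(round(mean, 3))    # should be close to 1/lam = 0.5
```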
Some Examples
Exponential Distribution
CDF: F(x) = 1 − e^{−λx}, x ≥ 0, where λ is the failure rate (1/λ is the mean).
Random variate: x = −ln(1 − u)/λ, where u is drawn from the uniform distribution Unif(0,1).
Since (1 − u) is also a uniform random number, use the simpler formula x = −ln(u)/λ.
Exponential variate
Next we show that if enough variates are sampled, the sample of generated numbers suffices to describe the pdf of the distribution: as the number of observations increases, the histogram becomes closer and closer to the theoretical pdf.
The pdf of the exponential distribution with mean 1, f(t) = e^{−t}, t ≥ 0, is plotted. Three cases are shown, with 10, 100 and 1000 observations.
(Figures: histograms of generated exponential deviates for 10, 100 and 1000 observations, each compared with the theoretical EXP pdf)
Some Examples: Example 3.12
Weibull Distribution
CDF: F_X(x) = 1 − e^{−λx^α}, where λ is the scale parameter and α is the shape parameter.
Random variate: x = (−ln(1 − u)/λ)^{1/α}, where u is drawn from the uniform distribution Unif(0,1).
Simplified formula: x = (−ln(u)/λ)^{1/α}.
Extensions to Example 3.12
Write down the random deviate formula for the alternate form of the Weibull distribution, with CDF F_X(x) = 1 − e^{−(λx)^α}, and for the three-parameter Weibull distribution.
Some Examples: Pareto
Pareto Distribution
CDF: F_X(x) = 1 − (k/x)^α for x ≥ k, and F_X(x) = 0 for x < k, where k > 0 is the location parameter and α is the shape parameter.
Random variate: x = k / (1 − u)^{1/α}, where u is drawn from the uniform distribution Unif(0,1).
Simplified formula: x = k / u^{1/α}.
Some Examples: Rayleigh
Rayleigh Distribution
CDF: F_X(x) = 1 − e^{−x²/(2σ²)} for x ≥ 0, and 0 otherwise, where σ² is the variance parameter.
Random variate: x = sqrt(−2σ² ln(1 − u)), where u is drawn from the uniform distribution Unif(0,1).
Simplified formula: x = sqrt(−2σ² ln(u)).
Some Examples: Log-logistic
Log-Logistic Distribution
CDF: F_X(x) = 1 − 1/(1 + (λx)^κ), where λ > 0 is the scale parameter and κ > 0 is the shape parameter.
Random variate: x = (1/λ) (u/(1 − u))^{1/κ}, where u is drawn from the uniform distribution Unif(0,1).
Random variate table

Name      Density f(x)                         F(x)                   x = F^{−1}(u)               Simplified form
Expo      λe^{−λx}, x > 0                      1 − e^{−λx}, x > 0     x = −ln(1 − u)/λ            x = −ln(u)/λ
Weibull   λα x^{α−1} e^{−λx^α}, x > 0          1 − e^{−λx^α}          x = (−ln(1 − u)/λ)^{1/α}    x = (−ln(u)/λ)^{1/α}
Pareto    α k^α x^{−α−1}, x ≥ k; α, k > 0      1 − (k/x)^α            x = k/(1 − u)^{1/α}         x = k/u^{1/α}
Rayleigh  (x/σ²) e^{−x²/(2σ²)}, x ≥ 0          1 − e^{−x²/(2σ²)}      x = sqrt(−2σ² ln(1 − u))    x = sqrt(−2σ² ln(u))
Random variate table (continued)

Name          Density f(t)                                F(t)                       x = F^{−1}(u)                Simplified form
Log-logistic  κλ(λt)^{κ−1} / [1 + (λt)^κ]², t ≥ 0         1 − 1/(1 + (λt)^κ)         x = (1/λ)(u/(1 − u))^{1/κ}   x = (1/λ)(u/(1 − u))^{1/κ}
Cauchy        σ / [π(x² + σ²)]                            1/2 + (1/π) arctan(x/σ)    x = σ tan(π(u − 1/2))        x = σ tan(πu)
Triangular    (2/a)(1 − x/a), 0 ≤ x ≤ a                   2x/a − x²/a²               x = a(1 − sqrt(1 − u))       x = a(1 − sqrt(u))
For Discrete Distributions
We want to generate X having pmf
p_j = P(X = x_j), j = 0, 1, ...,
and distribution
F(x_j) = Σ_{i=0}^{j} p_i.
If u is the uniform random number, then set x = x_j if F(x_{j−1}) < u ≤ F(x_j).
This works because
P(X = x_j) = P(F(x_{j−1}) < U ≤ F(x_j)) = F(x_j) − F(x_{j−1}) = p_j.
For Discrete Distributions: Bernoulli
Bernoulli distribution with parameter (1 − q). The CDF is:
F_X(x) = 0, x < 0;  F_X(x) = q, 0 ≤ x < 1;  F_X(x) = 1, x ≥ 1.
The inverse function is given by
F^{−1}(u) = 0 for 0 < u ≤ q;  F^{−1}(u) = 1 for q < u ≤ 1.
By generating u from Unif(0,1), we can get a Bernoulli random variate from the above.
For the Geometric Distribution
The geometric pmf is given by p(1 − p)^{j−1}, j ≥ 1.
The distribution is given by F(j) = 1 − (1 − p)^j.
So the geometric random variate j satisfies
j = min{i : 1 − (1 − p)^i ≥ u} = min{i : i log(1 − p) ≤ log(1 − u)} = ⌈log(1 − u) / log(1 − p)⌉.
• Since (1 − u) is also a uniform random number, the geometric random variate becomes
x = ⌈log(u) / log(1 − p)⌉.
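A sketch of this generator (p and the sample size are illustrative; the sample mean should approach 1/p):

```python
import math
import random

random.seed(7)
p = 0.3  # illustrative success probability

def geom_variate():
    u = 1.0 - random.random()   # in (0, 1]
    # ceil(log(u)/log(1-p)), guarded so the deviate is at least 1
    return max(1, math.ceil(math.log(u) / math.log(1 - p)))

n = 200000
mean = sum(geom_variate() for _ in range(n)) / n
print(round(mean, 2))  # geometric mean is 1/p, about 3.33
```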
Example 3.13
Scaling a random variable X: let f_X(x) = λe^{−λx} and Y = rX. Then
F_Y(y) = P(rX ≤ y) = F_X(y/r) = 1 − e^{−(λ/r)y}.
Hence Y is also exponential, with parameter λ/r. Thus the exponential distribution is closed under multiplication by a scalar.
Example 3.14
Distribution for Y = Φ(X) = e^X, given X is N(μ, σ²):
The random variable Y is said to have a log-normal distribution.
Repair times are often found to follow this distribution.
Jointly Distributed RVs
Joint distribution function: F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y).
Independent rv's: iff the following holds: F_{X,Y}(x, y) = F_X(x) F_Y(y) for all x, y.
Joint Distribution Properties
Joint Distribution Properties (contd.)
Order statistics, ‘k of n’, TMR
Order Statistics: ‘k of n’
X1, X2, ..., Xn : iid (independent and identically distributed) random variables with a common distribution function F and common density f.
Let Y1, Y2, ..., Yn be the random variables obtained by permuting the set X1, X2, ..., Xn so as to be in increasing order.
To be specific:
Y1 = min{X1, X2, ..., Xn} and
Yn = max{X1, X2, ..., Xn}
Order Statistics: k of n (Continued)
The random variable Yk is called the kth ORDER STATISTIC.
If Xi is the lifetime of the ith component in a system of n components, then:
Y1 will be the overall series system lifetime.
Yn will denote the lifetime of a parallel system.
Y_{n−k+1} will be the lifetime of a k-of-n system.
Order Statistics: k of n (Continued)
To derive the distribution function of Yk, we note that the probability that exactly j of the Xi's lie in (−∞, y] and (n − j) lie in (y, ∞) is (n Bernoulli trials with p = F(y)):
C(n, j) [F(y)]^j [1 − F(y)]^{n−j},  −∞ < y < ∞,
hence
F_{Yk}(y) = Σ_{j=k}^{n} C(n, j) [F(y)]^j [1 − F(y)]^{n−j},  −∞ < y < ∞.
Overview: General iid Random Variables
Let Y1, ..., Yn denote the order statistics of the random variables X1, ..., Xn, which are iid with common distribution function F.

Y1 (series):
F_{Y1}(y) = 1 − [1 − F(y)]^n
R_series(t) = 1 − F_{Y1}(t) = [1 − F(t)]^n = [R(t)]^n

Yn (parallel):
F_{Yn}(y) = [F(y)]^n
R_parallel(t) = 1 − F_{Yn}(t) = 1 − [F(t)]^n = 1 − [1 − R(t)]^n

Y_{n−k+1} (k of n):
F_{Y_{n−k+1}}(y) = Σ_{j=n−k+1}^{n} C(n, j) [F(y)]^j [1 − F(y)]^{n−j}
R_{k|n}(t) = 1 − F_{Y_{n−k+1}}(t) = Σ_{j=k}^{n} C(n, j) [R(t)]^j [1 − R(t)]^{n−j}
Reliability of a k-out-of-n System
R_kofn(t) = Σ_{j=k}^{n} C(n, j) [R(t)]^j [1 − R(t)]^{n−j}
Series system: R_series(t) = Π_{i=1}^{n} R_i(t), or [R(t)]^n for identical components.
Parallel system: R_parallel(t) = 1 − Π_{i=1}^{n} (1 − R_i(t)), or 1 − [1 − R(t)]^n.
The minimum of n EXP random variables is the special case Y1 = min{X1, ..., Xn} with Xi ~ EXP(λi), so Y1 ~ EXP(Σ λi).
Thus the exponential is closed under series composition but not under parallel composition.
Applications of order statistics follow.
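The k-of-n formula can be evaluated directly (a sketch; the single-component reliability value is illustrative), and it reduces to the series and parallel cases:

```python
import math

def R_k_of_n(k, n, R):
    # at least k of n iid components (each with reliability R) must work
    return sum(math.comb(n, j) * R ** j * (1 - R) ** (n - j)
               for j in range(k, n + 1))

R = 0.9  # illustrative single-component reliability
# sanity checks against the series and parallel special cases
assert abs(R_k_of_n(3, 3, R) - R ** 3) < 1e-12            # series (k = n)
assert abs(R_k_of_n(1, 3, R) - (1 - (1 - R) ** 3)) < 1e-12  # parallel (k = 1)
print(round(R_k_of_n(2, 3, R), 3))  # 2-of-3 (TMR-style)
```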
Example 3.16
Series system lifetime distribution: the ith component's lifetime distribution is EXP(λi).
The lifetime distribution of a series system of such components is also exponential, EXP(λs), with λs = Σ_{i=1}^{n} λi.
Example 3.17 (Parts count method)
Assuming that the times to failure of all chip types are exponentially distributed, with the following failure rates:
Example 3.18
Hence the series system of Example 3.17 has a constant failure rate. What about a parallel system of n such components?
Example 3.20
Arrivals from n sources: source si generates Ni(t) tasks in time t, with Ni(t) Poisson with parameter λit. Xi : the time between two successive arrivals from si, which has the EXP(λi) distribution. The total number of jobs is also Poisson, with rate parameter Σ_i λi, and the jobs arrive in the pooled stream with interarrival times distributed EXP(Σ_i λi).
Triple Modular Redundancy (TMR)
An interesting case of order statistics occurs when we consider the Triple Modular Redundant (TMR) system (n = 3 and k = 2). Y2 then denotes the time until the second component fails. We get:
R_TMR(t) = 3R²(t) − 2R³(t)
(Figure: three components, each with reliability R(t), feeding a voter)
TMR (Continued)
Assuming that the reliability of a single component is given by,
R(t) = e^(−λt), we get:
R_TMR(t) = 3e^(−2λt) − 2e^(−3λt)
Copyright © 2006 by K.S. Trivedi 122
TMR (Continued)
In the following figure, we have plotted R_TMR(t) vs. t as well as R(t) vs. t. Graphs have also been plotted to compare TMR and TMR/Simplex using the SHARPE GUI; a step-by-step procedure is shown. We see that TMR improves reliability over the simplex system for short mission times (t < ln 2/λ); for longer mission times, TMR has lower reliability than simplex.
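The crossover at t = ln 2/λ can be verified directly: at that time R(t) = 1/2 and R_TMR equals R. A small sketch (the failure rate value is an arbitrary illustration):

```python
import math

lam = 1.0e-3  # illustrative failure rate (per hour)

def r_simplex(t):
    """Reliability of a single exponential component."""
    return math.exp(-lam * t)

def r_tmr(t):
    """TMR reliability: 3R^2 - 2R^3."""
    r = r_simplex(t)
    return 3 * r**2 - 2 * r**3

t_cross = math.log(2) / lam  # crossover mission time ln 2 / lambda
# Below the crossover TMR wins; above it, the simplex wins.
assert r_tmr(0.5 * t_cross) > r_simplex(0.5 * t_cross)
assert r_tmr(2.0 * t_cross) < r_simplex(2.0 * t_cross)
assert abs(r_tmr(t_cross) - r_simplex(t_cross)) < 1e-12
```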
Copyright © 2006 by K.S. Trivedi 123
TMR (Continued)
[Figures: plots of R_TMR(t) and R(t) vs. t, and SHARPE GUI step-by-step screenshots (slides 124–135)]
Workstations & File server (WFS) Example – RBD Approach
Copyright © 2006 by K.S. Trivedi 136
The WFS Example
A computing system consisting of: a file server, two workstations, and a network connecting them.
Copyright © 2006 by K.S. Trivedi 137
[Figure: RBD — Workstation 1 and Workstation 2 in parallel, in series with the File Server]
RBD for the WFS Example
Copyright © 2006 by K.S. Trivedi 138
Rw(t): workstation reliability
Rf (t): file-server reliability
System reliability R(t) is given by:
R(t) = [1 − (1 − Rw(t))²] Rf(t)
Note: this applies to any time-to-failure distribution.
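The RBD formula is easy to evaluate numerically. A minimal sketch, using the exponential rates lambdaW = 0.0001 and lambdaF = 0.0003 from the SHARPE input file shown nearby:

```python
import math

lam_w, lam_f = 1e-4, 3e-4  # rates from the SHARPE input file

def r_wfs(t):
    """Reliability of the WFS system: two workstations in
    parallel, in series with the file server."""
    rw = math.exp(-lam_w * t)
    rf = math.exp(-lam_f * t)
    return (1 - (1 - rw)**2) * rf

assert r_wfs(0.0) == 1.0
print(r_wfs(10000))  # reliability at 10000 hours
```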
RBD for the WFS Example
Copyright © 2006 by K.S. Trivedi 139
Snapshot of the GUI
Copyright © 2006 by K.S. Trivedi 140
bind
lambdaW 0.0001
lambdaF 0.0003
end
block wfs1
* each component is non-restorable and has exp time to fail dist
comp Workstation exp(lambdaW)
comp FileServer exp(lambdaF)
parallel work Workstation Workstation
series sys work FileServer
end
* define function R(t) for reliability at time t
func R(t) 1-tvalue(t;wfs1)
* vary the time t from t=0 to 10000 in steps of 1000 hours and print R(t)
loop t,0,10000,1000
expr R(t)
end
end
Copyright © 2006 by K.S. Trivedi 141
[Figure: plot of reliability R(t) vs. time, t = 0 to 10000 hours]
R(t) vs. time
Copyright © 2006 by K.S. Trivedi 142
Example 3.21: A system with n workstations and m file servers.
The system is operational if at least k workstations and at least j file servers are operational. WS reliability is Rw(t) and FS reliability is Rf(t). Assuming all devices fail independently,
Copyright © 2006 by K.S. Trivedi 143
2 Control and 3 Voice Channels Example
[Figure: RBD — two control channels in parallel, in series with three voice channels in parallel]
Copyright © 2006 by K.S. Trivedi 144
Each control channel has a reliability Rc(t)
Each voice channel has a reliability Rv(t)
The system is up if at least one control channel and at least one voice channel are up.
Reliability:
Description
R(t) = [1 − (1 − Rc(t))²][1 − (1 − Rv(t))³]
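The 2-control/3-voice reliability expression can be sketched directly; boundary cases give a quick sanity check (the reliability values used are illustrative):

```python
def r_2c3v(rc, rv):
    """System up if at least one of 2 control channels
    and at least one of 3 voice channels are up."""
    return (1 - (1 - rc)**2) * (1 - (1 - rv)**3)

assert r_2c3v(1.0, 1.0) == 1.0    # perfect channels: system up
assert r_2c3v(0.0, 0.9) == 0.0    # no control channel: system down
assert r_2c3v(0.9, 0.0) == 0.0    # no voice channel: system down
print(r_2c3v(0.9, 0.9))
```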
Copyright © 2006 by K.S. Trivedi 145
Reliability block diagram model in SHARPE GUI
Copyright © 2006 by K.S. Trivedi 146
Define the components in the block diagram model
Copyright © 2006 by K.S. Trivedi 147
Output selection
Copyright © 2006 by K.S. Trivedi 148
Results from SHARPE
Copyright © 2006 by K.S. Trivedi 149
Plot definition
Copyright © 2006 by K.S. Trivedi 150
Reliability vs. time
Copyright © 2006 by K.S. Trivedi 151
Definition of another plot
Copyright © 2006 by K.S. Trivedi 152
Reliability vs. lambda
Copyright © 2006 by K.S. Trivedi 153
2c3v as a Fault Tree
[Figure: fault tree — top OR gate over AND(c1, c2) and AND(v1, v2, v3)]
•Structure Function:
φ = c1·c2 + v1·v2·v3
2 Control and 3 Voice Channels Example
Copyright © 2006 by K.S. Trivedi 154
Fault Tree Example (contd.)
Reliability of the system:
Assume Rc(t) = e^(−λc t) and Rv(t) = e^(−λv t). Then
R(t) = [1 − (1 − e^(−λc t))²][1 − (1 − e^(−λv t))³]
     = (2e^(−λc t) − e^(−2λc t))(3e^(−λv t) − 3e^(−2λv t) + e^(−3λv t))
Copyright © 2006 by K.S. Trivedi 155
Fault tree input in SHARPE GUI
Copyright © 2006 by K.S. Trivedi 156
Example 3.22: Shuffle-exchange network (SEN) with N = 2^n inputs.
There are N/2 switching elements per stage and log2 N such stages.
Single switch element
8 x 8 SEN
Copyright © 2006 by K.S. Trivedi 157
Example 3.22 (contd.): We are interested in finding the reliability R_SEN(t) of this SEN, given the individual switch reliability r_SE(t). Since all switching elements must function, R_SEN(t) = [r_SE(t)]^((N/2) log2 N).
Copyright © 2006 by K.S. Trivedi 158
Example 3.23: SEN+ has an extra stage of N/2 switching elements to increase reliability; for N = 4, input 00 to output 01 then has two paths.
r_SE(t): time-dependent reliability of an SE. The N = 4 SEN has (N/2) log2 N = 4 elements.
Copyright © 2006 by K.S. Trivedi 159
Example 3.23 (contd.): SEN+ reliability in this case is worse than SEN.
For the 8×8 case, SEN+ reliability works out to be better than SEN.
Copyright © 2006 by K.S. Trivedi 160
Overview: Exponential iid Random Variables
Let Y1, ..., Yn denote the order statistics of the random variables X1, ..., Xn, which are iid with common distribution function F(x) = 1 − e^(−λx).

F_Y1(y) = 1 − [e^(−λy)]^n = 1 − e^(−nλy), so Y1 ~ EXP(nλ)
F_Yn(y) = [1 − e^(−λy)]^n
F_{Y_{n−k+1}}(y) = Σ_{j=n−k+1}^{n} C(n,j) [1 − e^(−λy)]^j e^(−(n−j)λy)

R_series(t) = 1 − F_Y1(t) = e^(−nλt)
R_parallel(t) = 1 − F_Yn(t) = 1 − [1 − e^(−λt)]^n
R_{k|n}(t) = 1 − F_{Y_{n−k+1}}(t)
Copyright © 2006 by K.S. Trivedi 161
Overview: Exponential iid Random Variables (contd.)

Let Y1, ..., Yn denote the order statistics of X1, ..., Xn, iid with common distribution function F(x) = 1 − e^(−λx). The order statistics themselves are distributed:

Y1 ~ EXP(nλ)
Yn ~ HYPO(nλ, (n−1)λ, ..., λ)
Y_{n−k+1} ~ HYPO(nλ, (n−1)λ, ..., kλ)

with R_series(t) = e^(−nλt), R_parallel(t) = 1 − [1 − e^(−λt)]^n, and R_{k|n}(t) = 1 − F_{Y_{n−k+1}}(t).
Copyright © 2006 by K.S. Trivedi 162
Overview: Exponential Independent Random Variables

Let Y1, ..., Yn denote the order statistics of the independent random variables X1, ..., Xn, where Xi has distribution function F(xi) = 1 − e^(−λi x).

F_Y1(y) = 1 − Π_{i=1}^{n} e^(−λi y) = 1 − e^(−(Σ λi) y), so Y1 ~ EXP(Σ_{i=1}^{n} λi)
F_Yn(y) = Π_{i=1}^{n} F_i(y) = Π_{i=1}^{n} [1 − e^(−λi y)]

R_series(t) = 1 − F_Y1(t) = e^(−(Σ λi) t)
R_parallel(t) = 1 − F_Yn(t) = 1 − Π_{i=1}^{n} [1 − e^(−λi t)]

Y_{n−k+1}, and hence R_{k|n}(t): complicated — no simple closed form.
Copyright © 2006 by K.S. Trivedi 163
Sum of Random Variables
Z = Φ(X, Y)
For the special case Z = X + Y, the resulting pdf is (assuming independence)
f_Z(t) = ∫_{−∞}^{+∞} f_X(x) f_Y(t − x) dx,
the convolution integral (modified for the non-negative case on the next slide).
Copyright © 2006 by K.S. Trivedi 164
Convolution (non-negative case)
Z = X + Y, X & Y are independent random variables (in this case, non-negative)
The above integral is often called the convolution of fX and fY. Thus the density of the sum of two non-negative independent, continuous random variables is the convolution of the individual densities.
f_Z(t) = ∫_0^t f_X(x) f_Y(t − x) dx
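As a numerical sanity check (a sketch; the rate and time values are arbitrary), the convolution of two EXP(λ) densities should reproduce the 2-stage Erlang density λ²t·e^(−λt):

```python
import math

def conv_density(f_x, f_y, t, n=10000):
    """Numerically evaluate the convolution integral
    f_Z(t) = integral_0^t f_X(x) f_Y(t - x) dx (trapezoidal rule)."""
    if t <= 0:
        return 0.0
    h = t / n
    total = 0.5 * (f_x(0.0) * f_y(t) + f_x(t) * f_y(0.0))
    for i in range(1, n):
        x = i * h
        total += f_x(x) * f_y(t - x)
    return total * h

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)   # EXP(lam) density
t = 1.5
expected = lam**2 * t * math.exp(-lam * t)  # 2-stage Erlang density
assert abs(conv_density(f, f, t) - expected) < 1e-6
```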
Copyright © 2006 by K.S. Trivedi 165
Example 3.24: Multithreaded program performance
Three independent computer tasks τ1, τ2 and τ3
Precedence relationship: τ3 has to wait for both τ1 and τ2 to complete. T1, T2, and T3 are the respective random execution times. Total execution time T = max{T1, T2} + T3 = M + T3.
T1 and T2 ~ Unif(t1 − t0, t1 + t0); T3 ~ Unif(t3 − t0, t3 + t0). Find the probability that T > t1 + t3.
Copyright © 2006 by K.S. Trivedi 166
Example 3.24 (contd.): The pdfs are:
Copyright © 2006 by K.S. Trivedi 167
Example 3.24 (contd)
Copyright © 2006 by K.S. Trivedi 168
Example 3.24 (contd)
We need to find P(T > t1+ t3); ‘A’ : shaded area
Copyright © 2006 by K.S. Trivedi 169
Reliability Modeling Examples
Sums of exponential random variables appear naturally in reliability modeling
Cold-standby redundancy
Warm-standby redundancy
Hot-standby redundancy
Triple Modular Redundancy (TMR)
TMR/Simplex
k-out-of-n Redundancy
Copyright © 2006 by K.S. Trivedi 170
Two component system
With respective lifetime random variables X and Y, assumed independent:
Series system: Z = min{X, Y}
Parallel system: Z = max{X, Y}
Cold standby: Z = X + Y
Copyright © 2006 by K.S. Trivedi 171
Cold standby (standby redundancy)
Lifetime of Active: EXP(λ)
Lifetime of Spare: EXP(λ)
Assumptions (to be relaxed later):• Detection & Switching perfect; • Spare does not fail.
EXP(λ) → EXP(λ)   (stages X and Y)
Total lifetime is 2-stage Erlang:
R(t) = (1 + λt) e^(−λt)
Copyright © 2006 by K.S. Trivedi 172
Cold standby derivation
X and Y are both EXP(λ) and independent. Then
f_Z(t) = ∫_0^t λe^(−λx) λe^(−λ(t−x)) dx
       = λ² e^(−λt) ∫_0^t dx
       = λ² t e^(−λt),  t > 0
Copyright © 2006 by K.S. Trivedi 173
Cold standby derivation (Continued)
Z is two-stage Erlang Distributed
F_Z(t) = ∫_0^t f_Z(z) dz = 1 − (1 + λt) e^(−λt)
R(t) = 1 − F_Z(t) = (1 + λt) e^(−λt),  t ≥ 0
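The cold-standby result R(t) = (1 + λt)e^(−λt) can be spot-checked by Monte Carlo: the system lifetime is the sum of two independent EXP(λ) draws. A sketch (λ, t, and the trial count are arbitrary choices):

```python
import math
import random

random.seed(42)
lam = 1.0
t = 1.5
n_trials = 200_000
# System survives past t if the sum of the two lifetimes exceeds t
survived = sum(random.expovariate(lam) + random.expovariate(lam) > t
               for _ in range(n_trials))
estimate = survived / n_trials
exact = (1 + lam * t) * math.exp(-lam * t)  # 2-stage Erlang survival
assert abs(estimate - exact) < 0.01
```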
Copyright © 2006 by K.S. Trivedi 174
Convolution: r-stage Erlang
The general case of r-stage Erlang Distribution
When r sequential phases have independent
identical exponential distributions, then the resulting
random variable is known as r-stage (or r-phase)
Erlang and is given by:
Copyright © 2006 by K.S. Trivedi 175
Convolution: Erlang (Continued)
EXP(λ) → EXP(λ) → ... → EXP(λ)   (r stages)

f(t) = λ^r t^(r−1) e^(−λt) / (r−1)!

F(t) = 1 − Σ_{k=0}^{r−1} [(λt)^k / k!] e^(−λt)
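The closed-form Erlang CDF can be cross-checked against numerical integration of the pdf; a sketch (r, λ, and t are arbitrary illustrative values):

```python
import math

def erlang_pdf(t, r, lam):
    """r-stage Erlang density: lam^r t^(r-1) e^(-lam t) / (r-1)!"""
    return lam**r * t**(r - 1) * math.exp(-lam * t) / math.factorial(r - 1)

def erlang_cdf(t, r, lam):
    """Closed-form CDF: 1 - sum_{k=0}^{r-1} (lam t)^k / k! * e^(-lam t)"""
    return 1 - sum((lam * t)**k / math.factorial(k)
                   for k in range(r)) * math.exp(-lam * t)

# Midpoint-rule integration of the pdf should match the CDF
r, lam, t = 3, 2.0, 1.0
n = 20000
h = t / n
integral = sum(erlang_pdf((i + 0.5) * h, r, lam) for i in range(n)) * h
assert abs(integral - erlang_cdf(t, r, lam)) < 1e-6
```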
Copyright © 2006 by K.S. Trivedi 176
Standby Sparing, Example 3.26: A system with n processors whose lifetimes are iid, following an exponential distribution with failure rate λ. Two modes of operation:
Only 1 of the n is active; the others are cold standbys.
All n are active, working in parallel.
We can see that the standby arrangement has higher reliability, since
Σ_{k=0}^{n−1} [(λt)^k / k!] e^(−λt) ≥ 1 − (1 − e^(−λt))^n.
However, the parallel arrangement delivers more capacity.
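The inequality (n-stage Erlang survival ≥ parallel survival) can be spot-checked numerically; a small sketch (n, λ, and the time grid are arbitrary choices):

```python
import math

def r_standby(n, lam, t):
    """n-stage Erlang survival: cold-standby system of n units."""
    return sum((lam * t)**k / math.factorial(k)
               for k in range(n)) * math.exp(-lam * t)

def r_parallel(n, lam, t):
    """Parallel system of n iid EXP(lam) units."""
    return 1 - (1 - math.exp(-lam * t))**n

# X1 + ... + Xn >= max{X1, ..., Xn}, so standby dominates parallel
for t in [0.1, 0.5, 1.0, 2.0, 5.0]:
    assert r_standby(4, 1.0, t) >= r_parallel(4, 1.0, t) - 1e-12
```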
Copyright © 2006 by K.S. Trivedi 177
Derivation of the result: Let X1, X2, ..., Xn be the time-to-failure random variables of the n processors. At time Y1 = min{X1, X2, ..., Xn}, one processor has failed and the remaining (n−1) are working; the computing capacity drops to (n−1) processors.
Copyright © 2006 by K.S. Trivedi 178
Derivation of the result (contd)
From the diagram, Cn is the area under the curve, and we wish to find the distribution of Cn.
First find the distribution of Y_{j+1} − Yj. Assuming all processor lifetimes are EXP(λ), we assert that (Y_{j+1} − Yj) ~ EXP((n−j)λ). With Y0 = 0,
(Y1 − Y0) = Y1 = min{X1, X2, ..., Xn} ~ EXP(nλ),
so the assertion is true for j = 0. After j processors have failed, the residual lifetimes are W1, W2, ..., W_{n−j}, each of which is EXP(λ) due to the memoryless property of the exponential distribution.
Copyright © 2006 by K.S. Trivedi 179
Derivation of the result (contd2)
(Y_{j+1} − Yj) is then given by (Y_{j+1} − Yj) = min{W1, W2, ..., W_{n−j}},
so (Y_{j+1} − Yj) ~ EXP((n−j)λ), using the result of Example 3.16. Using the result of Example 3.13,
(n−j)(Y_{j+1} − Yj) ~ EXP(λ).
Therefore Cn is the sum of n independent, identically distributed exponential random variables; that is, Cn is n-stage Erlang. Thus the total computing capacity delivered before failure has the same distribution in both modes of operation.
Copyright © 2006 by K.S. Trivedi 180
Warm standby
With a warm spare, we have:
• Active unit time-to-failure: EXP(λ)
• Spare unit time-to-failure: EXP(γ)
2-stage hypoexponential distribution
EXP(λ+ γ) EXP(λ)
Copyright © 2006 by K.S. Trivedi 181
Warm standby derivation: The first event to occur is that either the active unit or the spare fails. The time to this event is min{EXP(λ), EXP(γ)}, which is EXP(λ + γ). Then, due to the memoryless property of the exponential, the remaining lifetime is still EXP(λ). Hence the system lifetime has a two-stage hypoexponential distribution with parameters λ1 = λ + γ and λ2 = λ.
Copyright © 2006 by K.S. Trivedi 182
Warm standby derivation (Continued)
X is EXP(λ1) and Y is EXP(λ2) with λ1 ≠ λ2; X and Y are independent. Then
f_Z(t) = ∫_0^t λ1 e^(−λ1 x) λ2 e^(−λ2 (t−x)) dx
       = [λ1 λ2 / (λ2 − λ1)] e^(−λ1 t) + [λ1 λ2 / (λ1 − λ2)] e^(−λ2 t).
This is the density of the 2-stage hypoexponential distribution with parameters λ1 and λ2.
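Two quick numerical checks on the 2-stage hypoexponential density: it should integrate to 1, and its mean should be 1/λ1 + 1/λ2 (the sum of the stage means). A sketch with arbitrary illustrative rates:

```python
import math

def hypo2_pdf(t, l1, l2):
    """Two-stage hypoexponential density, l1 != l2."""
    return (l1 * l2 / (l2 - l1)) * (math.exp(-l1 * t) - math.exp(-l2 * t))

l1, l2 = 3.0, 1.0
n, T = 200_000, 40.0   # midpoint rule over [0, T]; tail beyond T negligible
h = T / n
total = sum(hypo2_pdf((i + 0.5) * h, l1, l2) for i in range(n)) * h
assert abs(total - 1.0) < 1e-3
mean = sum((i + 0.5) * h * hypo2_pdf((i + 0.5) * h, l1, l2)
           for i in range(n)) * h
assert abs(mean - (1 / l1 + 1 / l2)) < 1e-3
```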
Copyright © 2006 by K.S. Trivedi 183
Hot standby (Active/Active)
With a hot spare, we have:
• Active unit time-to-failure: EXP(λ)
• Spare unit time-to-failure: EXP(λ)
2-stage hypoexponential
EXP(2λ) EXP(λ)
Copyright © 2006 by K.S. Trivedi 184
TMR and TMR/Simplex as hypoexponentials
TMR/Simplex: EXP(3λ) → EXP(λ)
TMR: EXP(3λ) → EXP(2λ)
Copyright © 2006 by K.S. Trivedi 185
Hypoexponential: general case
Z = Σ_{i=1}^{r} Xi, where X1, X2, ..., Xr are mutually independent
and Xi is exponentially distributed with parameter λi,
where λi ≠ λj for i ≠ j.
Then Z is an r-stage hypoexponentially distributed random variable.
EXP(λ1) → EXP(λ2) → ... → EXP(λr)
Copyright © 2006 by K.S. Trivedi 186
Hypoexponential: general case
See Page 174, Theorem 3.4 of the Text.
Density function:
f(t) = Σ_{i=1}^{r} ai λi e^(−λi t), where ai = Π_{j≠i} λj / (λj − λi).
Copyright © 2006 by K.S. Trivedi 187
‘k of n’ system lifetime, as a hypoexponential
EXP(nλ) → EXP((n−1)λ) → ... → EXP(kλ) → EXP((k−1)λ) → ... → EXP(λ)
(stage boundaries Y1, Y2, ..., Y_{n−k+1}, Y_{n−k+2}, ..., Yn; the k-of-n system lifetime is Y_{n−k+1})
At least k out of n units should be operational
for the system to be up. Here failure rate of
each unit is λ.
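The stage decomposition can be checked by simulation: the mean of the (n−k+1)-th order statistic should equal the sum of the stage means, Σ_{j=k}^{n} 1/(jλ). A sketch with illustrative values of n, k, and λ:

```python
import random

random.seed(1)
n, k, lam = 5, 3, 1.0
trials = 100_000
# System lifetime = time of the (n-k+1)-th failure = Y_{n-k+1}
sim_mean = 0.0
for _ in range(trials):
    times = sorted(random.expovariate(lam) for _ in range(n))
    sim_mean += times[n - k]        # Y_{n-k+1}, 0-indexed
sim_mean /= trials
# Hypoexponential stages EXP(n*lam), EXP((n-1)*lam), ..., EXP(k*lam)
exact_mean = sum(1 / (j * lam) for j in range(k, n + 1))
assert abs(sim_mean - exact_mean) < 0.02
```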
Copyright © 2006 by K.S. Trivedi 188
‘k of n’ with warm spares
EXP(nλ+sμ) → EXP(nλ+(s−1)μ) → ... → EXP(nλ+μ) → EXP(nλ) → ... → EXP(kλ)
At least k out of n + s units should be operational for the
system to be up. Initially n units are active and s units are
warm spares. The failure rate of a unit when active is λ
and the failure rate of a unit when spare is μ.
Copyright © 2006 by K.S. Trivedi 189
Sums of Normal Random Variables
X1, X2, ..., Xn are mutually independent normal rv's; then the rv Z = X1 + X2 + ... + Xn is also normal, with mean Σ μi and variance Σ σi².
⇒ The sum of mutually independent normal random variables is also normal.
X1, ..., Xn are independent standard normal. Then Σ_{i=1}^{n} Xi² follows a gamma distribution, known as the χ² distribution with n degrees of freedom.
Copyright © 2006 by K.S. Trivedi 190
Example 3.34: A sequence of independent, identically distributed random variables X1, X2, ..., Xn is known as a random sample of size n. In many problems of statistical sampling theory, it is reasonable to assume that the underlying distribution is normal. Thus let each Xi be N(μ, σ²).
Then, from the last slide, we obtain that the sample mean X̄ = (1/n) Σ Xi is N(μ, σ²/n).
Copyright © 2006 by K.S. Trivedi 191
Example 3.34 (contd)
Copyright © 2006 by K.S. Trivedi 192
Example 3.35
X1 and X2 are independent standard normal random variables, and
Y = X1² + X2²
is exponentially distributed with parameter 0.5.
The proof uses a change of variables to polar coordinates.
Copyright © 2006 by K.S. Trivedi 193
Example 3.36: Assume that X1, X2, ..., Xn are mutually independent, identically distributed normal random variables with mean μ and variance σ².
Then (Xi − μ)/σ has the standard normal distribution. Therefore,
Σ_{i=1}^{n} [(Xi − μ)/σ]²
has the χ² distribution with n degrees of freedom. The random variable
(1/n) Σ_{i=1}^{n} (Xi − μ)²
may be used as an estimator of the variance σ².
Copyright © 2006 by K.S. Trivedi 194
Example 3.37: However, the mean of the population, μ, is often unknown. σ² can then be estimated from the sample variance
S² = [1/(n−1)] Σ_{i=1}^{n} (Xi − X̄)².
It can be shown that the random variable
(n−1)S²/σ²
has the χ² distribution with n−1 degrees of freedom.
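The unbiasedness of S² that underlies this construction can be spot-checked by simulation (μ, σ, n, and the trial count are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(7)
mu, sigma, n = 10.0, 2.0, 5
trials = 50_000
s2_mean = 0.0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    s2_mean += statistics.variance(sample)   # sample variance, divides by n-1
s2_mean /= trials
# S^2 is an unbiased estimator of sigma^2 = 4
assert abs(s2_mean - sigma**2) < 0.1
```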
Copyright © 2006 by K.S. Trivedi 195
Example 3.39: Assume that X1, X2, ..., Xn are mutually independent, identically distributed normal random variables with mean μ and variance σ².
Then (X̄ − μ)/(σ/√n) has the standard normal distribution.
Also, (n−1)S²/σ² has the χ²(n−1) distribution. Therefore,