University of Karlsruhe (TH)
Department of Mathematics
Institute of Stochastics

Script

Stochastic Methods in Industry




This script was written by Ana Aleksieva on behalf of the University of Karlsruhe, based on the lecture that Prof. P. R. Parthasarathy, Ph.D., of IIT Madras (India), gave in the winter semester 2007/08 as a visiting professor at the University of Karlsruhe.



Contents

1 Background
  1.1 Basic Elements of a Queueing Model
  1.2 Queues with Combined Arrivals and Departures
  1.3 Probability Theory
  1.4 Stochastic Processes
  1.5 Markov Chain
  1.6 Markov Process
  1.7 Poisson Process
  1.8 Birth and Death Processes

2 Birth and Death Queueing Models
  2.1 Model M/M/1
  2.2 Model M/M/1/N
  2.3 Model M/M/1 with Finite Source
  2.4 Model M/M/c
  2.5 Model M/M/c/N (Loss and Delay System)
  2.6 Model M/M/c with a Limited Source k (Repairman Problem)
  2.7 Model M/M/c/c (Loss System)
  2.8 Model M/M/∞ (Infinite Server Queue)
  2.9 Self Repair
  2.10 Impatient Customers (Limited Waiting Time)
  2.11 Density-Dependent Service

3 Time-Dependent Analysis
  3.1 Model M/M/1/1
    3.1.1 Method 1
    3.1.2 Method 2
  3.2 Model M/M/1/N
  3.3 State Dependent Finite Model
  3.4 Pure Birth Process
  3.5 Simple Death Process
  3.6 Simple Birth Process
  3.7 Model M/M/1 (Busy Period)
  3.8 Model M/M/∞ (Infinite Server Queue, Self-Service System)

4 Non-Birth-Death Queueing Models
  4.1 Model M^X/M/∞ (Bulk Arrival Queues)
  4.2 Model M^X/M/1 (Bulk Arrival Queues)
  4.3 Two-Station Series Model with Zero Queue Capacity (Sort of a Network Model)
  4.4 Model M/Hyperexponential/1/r
  4.5 Processor Model with Failures
  4.6 Bulk Service
    4.6.1 M/M(1, 2)/1
    4.6.2 M/M(1, k)/1

5 Cost Models
  5.1 Model 1: Optimal Service Rate µ
  5.2 Model 2: Optimal Number of Servers

6 Network Models: Communication Network, Manufacturing Network
  6.1 Two-Server Queue in Tandem
  6.2 Jackson Network
  6.3 Closed System

7 Non-Markovian Queues
  7.1 Model M/G/1
    7.1.1 Special Cases
  7.2 Model M/G/∞
  7.3 Priority Queue
  7.4 Model G/M/1
  7.5 Model G/G/1

8 Deterministic Inventory Models
  8.1 EOQ (Economic Order Quantity) Model
  8.2 EOQ Model with Finite Replenishment
  8.3 EOQ Model with Shortages
  8.4 EOQ Model with Shortages and Finite Replenishment
  8.5 Price Break
  8.6 Multi-item EOQ with Storage Limitation
  8.7 Dynamic EOQ Model with no Setup Cost
  8.8 Dynamic EOQ Model with Setup Costs

9 Probabilistic Inventory Models
  9.1 "Probabilitized" EOQ Model
  9.2 Probabilistic EOQ Model
  9.3 Single-Period without Setup Cost
  9.4 Single-Period with Setup Cost (s-S Policy)


Chapter 1

Background

1.1 Basic Elements of a Queueing Model

The principal actors in a queueing situation are the customer and the server. The interaction between the customer and the server relates to the period of time the customer needs to complete his service. Thus, from the standpoint of arrivals, we are interested in the time intervals that separate successive arrivals. In the case of service, it is the service time per customer that counts, so we analyse the arrival and service time distributions. These distributions may represent situations where customers arrive and are served individually (e.g., banks or supermarkets). In other situations, customers may arrive and/or be served in groups (e.g., restaurants). The latter case is normally referred to as bulk queues.

The manner of choosing customers from the waiting line to start service defines the service discipline. The most common, and apparently fairest, discipline is the FCFS rule (first come, first served); other disciplines are LCFS (last come, first served) and SIRO (service in random order). Customers arriving at a facility may also be put in priority queues, such that those with a higher priority receive preference to start service first.

The facility may include more than one server, allowing as many customers as there are servers to be served simultaneously. The facility may also comprise a number of stations in series through which the customer passes before service is completed (e.g., processing of a product on a sequence of machines). In this case, waiting lines may not be allowed between the stations. The resulting situations are normally known as queues in series or tandem queues. The most general design of a service facility includes both series and parallel processing stations. This is what we call network queues.

In certain situations, only a limited number of customers may be allowed in the system, possibly because of space limitations (e.g., cars allowed in a drive-in bank). Once the queue fills to capacity, newly arriving customers are denied service and may not join the queue. The calling source may be capable of generating a finite number of customers or (theoretically) infinitely many customers. In a machine shop with a total of M machines, the calling source before any machine breaks down consists of M potential customers. Once a machine is broken, it becomes a customer and is hence incapable of generating new calls until it is repaired.

A "human" customer may jockey from one waiting line to another hoping to reduce his or her waiting time. Some "human" customers may also balk at joining a waiting line altogether because they anticipate a long delay, or they may renege after being in the queue for a while because the wait has been too long.

The basic elements of a queueing model depend on the following factors:

1. Arrivals distribution (single or bulk arrivals).
2. Service-time distribution (single or bulk service).
3. Design of service facility (series, parallel or network stations).
4. Service discipline (FCFS, LCFS, SIRO) and service priority.
5. Queue size (finite or infinite).
6. Calling source (finite or infinite).
7. Human behaviour (jockeying, balking and reneging).

1.2 Queues with Combined Arrivals and Departures

A notation that is particularly suited for summarizing the main characteristics of parallel queues has been universally standardized in the following form:

(a/b/c) : (d/e/f)

where the symbols a, b, c, d, e and f stand for basic elements of the model as follows:

a ≡ arrivals distribution
b ≡ service time (or departures) distribution
c ≡ number of parallel servers (c = 1, 2, ..., ∞)
d ≡ service discipline
e ≡ maximum number of customers allowed in the system (in queue plus in service)
f ≡ size of the calling source

The standard notation replaces the symbols a and b for arrivals and departures by the following codes:

M ≡ Poisson (or Markovian) arrival or departure distribution (or, equivalently, exponential interarrival or service-time distribution)
D ≡ constant or deterministic interarrival or service time
E_k ≡ Erlangian or Gamma distribution of interarrival or service time with parameter k
GI ≡ general independent distribution of arrivals (or interarrival times)
G ≡ general distribution of departures (or service times)

Under steady-state conditions we shall be interested in determining the following basic measures of performance:

pn ≡ (steady-state) probability of n customers in the system
Ls ≡ expected number of customers in the system
Lq ≡ expected number of customers in the queue
Ws ≡ expected waiting time in the system (time in the queue + service time)
Wq ≡ expected waiting time in the queue


By definition, we have

Ls = ∑_{n=0}^∞ n pn,    Lq = ∑_{n=c}^∞ (n − c) pn

and

Ls = λ Ws,    Lq = λ Wq.

When customers arrive with rate λ but not all arrivals can join the system (this can happen, for example, when there is a limit on the maximum number of customers in the system), the equations must be modified by redefining λ to include only those customers that actually join the system. Thus, letting

λeff = effective arrival rate of those customers who join the system,

we have

Ls = λeff Ws,
Lq = λeff Wq,
Ws = Wq + 1/µ,
Ls = Lq + λeff/µ,
λeff = µ (Ls − Lq).

We can determine all the basic measures of performance in the following order:

pn → Ls = ∑_{n=0}^∞ n pn → Ws = Ls/λ → Wq = Ws − 1/µ → Lq = λ Wq.
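For the M/M/1 queue this chain of computations can be sketched in code. The closed form pn = (1 − ρ)ρ^n is anticipated here from Section 2.1, and the function name and parameter values below are illustrative choices, not part of the script:

```python
# Computes the M/M/1 steady-state measures in the order described above:
# p_n -> L_s -> W_s -> W_q -> L_q.  Uses p_n = (1 - rho) * rho**n, the
# M/M/1 result derived later; the infinite series is truncated at n_max.

def mm1_measures(lam, mu, n_max=10_000):
    rho = lam / mu                                # traffic intensity, must be < 1
    p = [(1 - rho) * rho ** n for n in range(n_max)]
    L_s = sum(n * pn for n, pn in enumerate(p))   # expected number in system
    W_s = L_s / lam                               # Little's formula
    W_q = W_s - 1 / mu                            # subtract the mean service time
    L_q = lam * W_q                               # Little's formula for the queue
    return L_s, W_s, W_q, L_q

# With lam = 2, mu = 3 (rho = 2/3) the closed forms give
# L_s = rho/(1 - rho) = 2 and L_q = rho**2/(1 - rho) = 4/3.
L_s, W_s, W_q, L_q = mm1_measures(2.0, 3.0)
```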

Resource Utilization and Traffic Intensity

Resource utilization represents the fraction of time a server is engaged in providing service, as defined below:

Utilization (ρ) = (time a server is occupied) / (time a server is available).

For example, consider a queueing system with m servers (m ≥ 1). Let λ denote the average arrival rate and µ the average service rate of each server. If we observe the system in an arbitrary interval (t, t + T), then on the average each server is occupied for

(average number of arrivals in (t, t + T)) / (mµ) = λT/(mµ)

time units. Therefore,

ρ = (λT/(mµ)) / T = λ/(mµ).

It is clear from the definition that ρ is dimensionless and should be less than unity in order for a server to cope with the service demand, or, in other words, for the system to be stable.

Flow Conservation Law

For a stable queueing system, the rate of customers entering the system should equal the rate of customers leaving the system if we observe it for a sufficiently long period of time; that is, λout = λin. This is because if λin > λout, then there will be a steady build-up of customers and the system will eventually become unstable. On the other hand, if λout > λin, then customers would have to be created within the system. This notion of flow conservation is useful when we wish to calculate throughput in queueing networks.

Little's Formula

Before we start examining the stochastic behaviour of a queueing system, let us first state a very simple and yet powerful result that governs its steady-state performance measures:

L = λW,

where L is the average number of customers in the queueing system, W is the average waiting time of a customer in the queueing system, and λ is the average arrival rate of customers to the queueing system. This result is true for any queueing system with the proper interpretation of L, λ and W.
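Little's formula can be checked by simulation. The sketch below simulates an M/M/1 queue under FCFS, using the recursion D_i = max(A_i, D_{i-1}) + S_i for departure times; the parameter values and sample size are illustrative assumptions, not from the script:

```python
import random

# Monte-Carlo check of Little's formula L = lambda * W for an M/M/1 queue
# with FCFS service.  For FCFS the departure times satisfy
# D_i = max(A_i, D_{i-1}) + S_i.
random.seed(42)
lam, mu, n_customers = 2.0, 3.0, 200_000

t, prev_dep = 0.0, 0.0
sojourns = []                                   # W_i = D_i - A_i per customer
for _ in range(n_customers):
    t += random.expovariate(lam)                # arrival time A_i
    dep = max(t, prev_dep) + random.expovariate(mu)
    sojourns.append(dep - t)
    prev_dep = dep

horizon = prev_dep
W = sum(sojourns) / n_customers                 # average time in system
# Integrating the number in system over [0, horizon] gives the sum of the
# sojourn times, so the time-average number in system is:
L = sum(sojourns) / horizon
# L should be close to lam * W (here W ~ 1 and L ~ 2 in steady state).
```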

1.3 Probability Theory

Definition 1.1. σ-field of subsets A of a set Ω
A non-empty collection of subsets A of a set Ω is called a σ-field of subsets of Ω if the following two properties hold:
(i) If Λ is in A, then Λ^c is also in A.
(ii) If Λn is in A, n = 1, 2, ..., then ∪_{n=1}^∞ Λn and ∩_{n=1}^∞ Λn are both in A.

Definition 1.2. Probability Space
A probability measure P on a σ-field of subsets A of a set Ω is a real-valued function with domain A satisfying the following properties:
(i) P(Ω) = 1.
(ii) P(Λ) ≥ 0 for all Λ in A.
(iii) If Λn, n = 1, 2, ..., are mutually disjoint sets in A, i.e., Λi ∩ Λj = ∅ for i ≠ j, then P(∪_{n=1}^∞ Λn) = ∑_{n=1}^∞ P(Λn).

A probability space, denoted by (Ω, A, P), consists of a set Ω, a σ-field of subsets A of Ω, and a probability measure P defined on A.


Definition 1.3. Discrete Random Variable
A discrete random variable X on a probability space (Ω, A, P) is a function X with domain Ω and range a finite or countably infinite subset {x1, x2, ...} of the real numbers R, such that {ω : X(ω) = xi} is an event for all i.

Definition 1.4. Probability Mass Function (pmf)
A real-valued function p defined by pi = P(X = i) is called the pmf of the discrete random variable X. It satisfies the following properties:
(i) pi ≥ 0 for all i ∈ R.
(ii) {i ∈ R : pi ≠ 0} is a finite or countably infinite subset of R; let S = {i1, i2, ...} denote this set. Then
(iii) ∑_{k∈S} pk = 1.

Example (Poisson random variable). A discrete random variable X is said to be a Poisson random variable with parameter λ > 0 if its pmf is given by

pi =
    λ^i e^{−λ} / i!    for i = 0, 1, 2, ...,
    0                  otherwise.                (1.1)

Clearly, pi ≥ 0 for all i and ∑_{i=0}^∞ pi = 1.

Definition 1.5. Distribution Function
A function F(t), −∞ < t < ∞, defined by F(t) = P(X ≤ t) = ∑_{i≤t} pi is called the distribution function of the random variable X.

Example. Let X be a discrete random variable with pmf pi = P(X = i) = 6/(π² i²), i = 1, 2, ... Then F(x) = (6/π²) ∑_{k=1}^∞ (1/k²) ε(x − k), where

ε(ξ) =
    1    for ξ ≥ 0,
    0    otherwise.

Definition 1.6. Continuous Random Variable
Let X be a random variable defined on (Ω, A, P) with distribution function F, that is, F(x) = P(X ≤ x). Then X is said to be a continuous random variable if F is absolutely continuous, i.e., if there exists a nonnegative function f(x) such that for every real number x we have F(x) = ∫_{−∞}^x f(t) dt.

The function f is called the probability density function (pdf) of the random variable X.

Properties of F and f:
(i) 0 ≤ F(x) ≤ 1 for all x;
(ii) F(x) is a non-decreasing function of x;
(iii) F(−∞) = 0 and F(∞) = 1;
(iv) f(x) ≥ 0 and satisfies

lim_{x→∞} F(x) = F(∞) = ∫_{−∞}^∞ f(t) dt = 1.

Let a, b ∈ R with a < b; then

P(a < X ≤ b) = F(b) − F(a) = ∫_a^b f(t) dt.

Example. Let X be a continuous random variable whose pdf is given by

f(x) =
    x        for 0 < x ≤ 1,
    2 − x    for 1 < x < 2,
    0        otherwise.

Clearly, f ≥ 0. Also, ∫_{−∞}^∞ f(x) dx = ∫_0^1 x dx + ∫_1^2 (2 − x) dx = 1. The distribution function F of X is given by

F(x) =
    0                                               for x ≤ 0,
    ∫_0^x u du = x²/2                               for 0 < x ≤ 1,
    ∫_0^1 u du + ∫_1^x (2 − u) du = 2x − x²/2 − 1   for 1 < x ≤ 2,
    1                                               for x > 2.
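As a quick numerical sanity check of this example, one can integrate f and compare with F. This is a sketch; the midpoint rule and the step count are arbitrary choices:

```python
# Numerical check of the pdf/CDF pair in the example above.

def f(x):
    if 0 < x <= 1:
        return x
    if 1 < x < 2:
        return 2 - x
    return 0.0

def F(x):
    if x <= 0:
        return 0.0
    if x <= 1:
        return x * x / 2
    if x <= 2:
        return 2 * x - x * x / 2 - 1
    return 1.0

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

total_mass = integrate(f, 0, 2)   # should be close to 1
```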

Definition 1.7. Mean or Expected Value of a Discrete Random Variable
Let X be a discrete random variable with pmf pi = P(X = xi), i = 1, 2, ... If ∑_{i=1}^∞ |xi| pi < ∞, we say that the expected value of X exists and write E[X] = ∑_{i=1}^∞ xi pi. If the sum is not finite, then we say that E[X] does not exist.

Example. Let X be a discrete random variable with pmf

pj = P(X = (−1)^{j+1} 3^j / j) = 2/3^j,    j = 1, 2, ...

Clearly, pj > 0 and ∑_{j=1}^∞ pj = 1, but ∑_{j=1}^∞ |xj| pj = 2 ∑_{j=1}^∞ 1/j = ∞. Therefore, E[X] does not exist.

Example. Let X be a Poisson random variable with parameter λ. Then its pmf is given by pj = e^{−λ} λ^j / j!, j = 0, 1, 2, ... Since X takes only non-negative integer values, ∑_j |xj| pj = ∑_j j pj = λ, so E[X] = λ.


Result. If X is a nonnegative integer-valued random variable, then

E[X] = ∑_{i=1}^∞ P(X ≥ i),

provided that the series on the right-hand side converges.
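This tail-sum formula can be checked numerically for the Poisson distribution of (1.1), for which E[X] = λ. The parameter value and the truncation point N below are arbitrary choices:

```python
import math

# Check of E[X] = sum_{i >= 1} P(X >= i) for a Poisson random variable,
# for which E[X] = lambda.  The series is truncated at N terms.
lam, N = 2.5, 100

# Build the Poisson pmf with the recurrence p_{k+1} = p_k * lam / (k + 1).
p, pmf = math.exp(-lam), []
for k in range(N):
    pmf.append(p)
    p *= lam / (k + 1)

mean_direct = sum(k * pk for k, pk in enumerate(pmf))        # sum of k * p_k
tail_sum = sum(1.0 - sum(pmf[:i]) for i in range(1, N))      # sum of P(X >= i)
```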

Definition 1.8. Probability Generating Function (pgf) or z-transform
Let X be a discrete random variable and let pk = P(X = k), k = 0, 1, 2, ... Then the function defined by

PX(z) = ∑_{k=0}^∞ pk z^k,    |z| ≤ 1,    i.e. PX(z) = E[z^X],

is called the pgf of X.

Example. Let X be a Poisson random variable with parameter λ. The pgf of X is given by

PX(z) = ∑_{k=0}^∞ (e^{−λ} λ^k / k!) z^k = e^{−λ(1−z)}.

Result. If X and Y are two independent discrete random variables, then

PX+Y(z) = PX(z) PY(z).

Result. The superposition of independent Poisson random variables is again a Poisson random variable.
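Both results can be checked together: by the product rule, the pgf of the sum of independent Poisson(λ1) and Poisson(λ2) variables is e^{−λ1(1−z)} e^{−λ2(1−z)} = e^{−(λ1+λ2)(1−z)}, the pgf of Poisson(λ1 + λ2). The sketch below convolves two truncated Poisson pmfs and compares with the Poisson(λ1 + λ2) pmf; the parameter values and truncation are arbitrary:

```python
import math

# Superposition of Poisson random variables, checked by direct convolution
# of the pmfs (equivalent to multiplying the pgfs).
l1, l2, N = 1.5, 2.0, 60

def poisson_pmf(lam, n):
    p, out = math.exp(-lam), []
    for k in range(n):
        out.append(p)
        p *= lam / (k + 1)
    return out

pX, pY = poisson_pmf(l1, N), poisson_pmf(l2, N)
# pmf of X + Y by convolution: P(X + Y = k) = sum_j P(X = j) P(Y = k - j)
conv = [sum(pX[j] * pY[k - j] for j in range(k + 1)) for k in range(N)]
pZ = poisson_pmf(l1 + l2, N)      # the claimed Poisson(l1 + l2) pmf
max_err = max(abs(a - b) for a, b in zip(conv, pZ))
```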

Definition 1.9. Mean or Expected Value of a Continuous Random Variable
Let X be a random variable of the continuous type with pdf f. If

(∗)    ∫_{−∞}^∞ |x| f(x) dx < ∞,

we say that the expected value of X exists and write E[X] = ∫_{−∞}^∞ x f(x) dx. If (∗) is not satisfied, then we say that E[X] does not exist.

Example. Let the pdf of X be given by f(x) = (1/π) · 1/(1 + x²) (Cauchy pdf). Then

∫_{−∞}^∞ |x| f(x) dx = (1/π) ∫_{−∞}^∞ |x|/(1 + x²) dx = ∞.

Therefore, E[X] does not exist.


Example. Let X be an exponentially distributed random variable with parameter µ > 0. Then its pdf is given by

f(x) = µ e^{−µx},    x > 0,

and

E[X] = ∫_0^∞ x µ e^{−µx} dx = 1/µ.

Properties of the Expectation. Let X and Y be any two random variables. Then
(i) E[aX + b] = aE[X] + b, where a, b are constants;
(ii) E[X + Y] = E[X] + E[Y].

In fact, if Xi, i = 1, 2, ..., n, are iid random variables, then E[∑_{i=1}^n Xi] = ∑_{i=1}^n E[Xi], provided E[Xi] exists for i = 1, 2, ..., n.

Moment. If E[X^n] exists for some positive integer n, we call E[X^n] the n-th moment of X about the origin, and it is given by

E[X^n] = ∫_{−∞}^∞ x^n f(x) dx          if X is continuous,
E[X^n] = ∑_i xi^n P(X = xi)            if X is discrete.

Example. Let X be a random variable with pdf

f(x) =
    2/x³    for x ≥ 1,
    0       for x < 1.

Then

E[X] = ∫_1^∞ (2/x²) dx = 2,

but E[X²] = ∫_1^∞ (2/x) dx does not exist.

Example. Let X have the uniform distribution on the first N natural numbers, that is, P(X = k) = 1/N, k = 1, 2, ..., N. Clearly, moments of all orders exist. In particular,

E[X] = ∑_{k=1}^N k/N = (N + 1)/2,

E[X²] = ∑_{k=1}^N k²/N = (N + 1)(2N + 1)/6.
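A quick check of the two moment formulas (N = 100 is an arbitrary choice):

```python
# Moments of the uniform distribution on {1, ..., N}:
# E[X] = (N + 1)/2 and E[X^2] = (N + 1)(2N + 1)/6.
N = 100
E1 = sum(range(1, N + 1)) / N                    # E[X]
E2 = sum(k * k for k in range(1, N + 1)) / N     # E[X^2]
```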

Variance. Let X be a discrete or continuous random variable. If E[X²] exists, we call E[(X − E[X])²] the variance of X, and we write σ² = Var[X] = E[(X − E[X])²]. The quantity σ is called the standard deviation of X. Another way of expressing the variance is Var[X] = E[X²] − (E[X])², which is obtained from the properties of the expectation.


Example. Let X be a Poisson random variable with parameter λ > 0. Then

E[X] = ∑_{k=0}^∞ k e^{−λ} λ^k / k! = λ

and

E[X²] = ∑_{k=0}^∞ k² e^{−λ} λ^k / k! = λ(λ + 1).

Therefore, Var[X] = E[X²] − (E[X])² = λ.

Property of the Variance. Let X and Y be any two independent random variables. Then Var[aX + bY] = a² Var[X] + b² Var[Y], where a, b are constants.

From now on we restrict our study to continuous random variables.

Joint Distribution. The joint distribution function F(x, y) of a two-dimensional random variable (X, Y) is defined as

F(x, y) = P(X ≤ x, Y ≤ y) = ∫_{−∞}^x ∫_{−∞}^y f(u, v) dv du,    (1.2)

(x, y) ∈ R², if f exists. The function f is called the joint pdf of (X, Y), satisfying

F(∞, ∞) = lim_{x→∞, y→∞} ∫_{−∞}^x ∫_{−∞}^y f(u, v) dv du = ∫_{−∞}^∞ ∫_{−∞}^∞ f(u, v) dv du = 1

and

f(x, y) = ∂²F(x, y)/∂x∂y.

Example. Let (X, Y) be a two-dimensional random variable whose pdf is given by

f(x, y) =
    e^{−(x+y)}    for 0 < x, y < ∞,
    0             otherwise.

Then

F(x, y) = ∫_0^x ∫_0^y f(u, v) dv du = (1 − e^{−x})(1 − e^{−y}).

Therefore

F(x, y) =
    (1 − e^{−x})(1 − e^{−y})    for 0 < x, y < ∞,
    0                           otherwise.

Definition 1.10. Marginal pdfs
If (X, Y) is a random variable with pdf f, then

fX(x) = ∫_{−∞}^∞ f(x, y) dy    and    fY(y) = ∫_{−∞}^∞ f(x, y) dx

are called the marginal pdfs of X and Y, respectively. Clearly, fX(x) ≥ 0, fY(y) ≥ 0, and ∫_{−∞}^∞ fX(x) dx = 1, ∫_{−∞}^∞ fY(y) dy = 1.


Example. Let (X, Y) be a two-dimensional random variable whose pdf is given by

f∗(x, y) =
    2    for 0 < x < y < 1,
    0    otherwise.

Then the marginal pdf fX(x) of X is given by

fX(x) = ∫_{y=x}^1 2 dy =
    2(1 − x)    for 0 < x < 1,
    0           otherwise,

and the marginal pdf fY(y) of Y is given by

fY(y) = ∫_{x=0}^y 2 dx =
    2y    for 0 < y < 1,
    0     otherwise.

Definition 1.11. Conditional pdfs
Let f be the pdf of (X, Y). Then at every point (x, y) at which f is continuous and fY(y) > 0 and is continuous, the conditional pdf of X given Y = y is defined by

fX|Y(x|y) = f(x, y)/fY(y).

Similarly, the conditional pdf of Y given X = x is defined by

fY|X(y|x) = f(x, y)/fX(x),

if fX(x) > 0.

Example. Let (X, Y) be a two-dimensional random variable whose pdf is the f∗ given above. Then the conditional pdfs are given by

fX|Y(x|y) = f(x, y)/fY(y) = 1/y,    0 < x < y,

fY|X(y|x) = f(x, y)/fX(x) = 1/(1 − x),    x < y < 1.

Conditional Expectations. The conditional expectation of X, given Y = y, is defined by

E[X|Y] = ∫_{−∞}^∞ x fX|Y(x|y) dx,

if fY(y) > 0. Similarly, the conditional expectation of Y, given X = x, is defined by

E[Y|X] = ∫_{−∞}^∞ y fY|X(y|x) dy,

if fX(x) > 0.


Example. Let (X, Y) be a two-dimensional random variable whose pdf is again f∗. Then the conditional expectations E[X|Y] and E[Y|X] are given by

E[X|Y] = ∫_0^y x fX|Y(x|y) dx = y/2,    0 < y < 1,

and

E[Y|X] = ∫_x^1 y fY|X(y|x) dy = (1 + x)/2,    0 < x < 1.

Result. E[X] = E[E[X|Y]], if E[X] exists.

Conditional Variance. The conditional variance of a random variable X given Y is defined as

Var[X|Y] = E[X²|Y] − (E[X|Y])².

For the density f∗ above,

E[X²|Y] = ∫_0^y x² fX|Y(x|y) dx = y²/3,    0 < y < 1,

E[Y²|X] = ∫_x^1 y² fY|X(y|x) dy = (1 + x + x²)/3,    0 < x < 1,

and therefore

Var[X|Y] = E[X²|Y] − (E[X|Y])² = y²/12,    0 < y < 1,

Var[Y|X] = E[Y²|X] − (E[Y|X])² = (1 − x)²/12,    0 < x < 1.

Theorem. Var[X] = E[Var[X|Y]] + Var[E[X|Y]].
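The theorem can be checked by Monte Carlo for the density f∗ of the preceding examples, using E[X|Y] = Y/2 and Var[X|Y] = Y²/12 computed above. The sample size and seed below are arbitrary choices:

```python
import random

# Monte-Carlo check of Var[X] = E[Var[X|Y]] + Var[E[X|Y]] for the density
# f*(x, y) = 2 on 0 < x < y < 1.  Both sides should be close to 1/18.
random.seed(1)
n = 500_000
ys = [random.random() ** 0.5 for _ in range(n)]    # Y has density 2y on (0, 1)
xs = [random.uniform(0.0, y) for y in ys]          # X | Y = y is uniform on (0, y)

mean_x = sum(xs) / n
var_x = sum((x - mean_x) ** 2 for x in xs) / n     # direct estimate of Var[X]

e_var = sum(y * y / 12 for y in ys) / n            # estimate of E[Var[X|Y]]
cond_means = [y / 2 for y in ys]                   # E[X|Y] on the sample
m = sum(cond_means) / n
var_e = sum((c - m) ** 2 for c in cond_means) / n  # estimate of Var[E[X|Y]]
```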

Independence. Two random variables X and Y are said to be independent if

f(x, y) = fX(x) fY(y),    (x, y) ∈ R².

Example. Let (X, Y) be a two-dimensional random variable whose pdf is given by

f(x, y) =
    e^{−(x+y)}    for 0 < x, y < ∞,
    0             otherwise.

Clearly, fX(x) = e^{−x}, fY(y) = e^{−y}, and hence f(x, y) = fX(x) fY(y). Thus, X and Y are independent.


Example. Let (X, Y) be a two-dimensional random variable whose pdf is f∗. Then

fX(x) =
    2(1 − x)    for 0 < x < 1,
    0           otherwise,

and

fY(y) =
    2y    for 0 < y < 1,
    0     otherwise,

and hence f(x, y) ≠ fX(x) fY(y). Thus, X and Y are not independent.

Result. If X and Y are independent, then fX|Y(x|y) = fX(x) and fY|X(y|x) = fY(y).

Definition 1.12. Independent and Identically Distributed (iid) Random Variables
A sequence Xn, n = 1, 2, ..., of random variables is said to be iid with the common probability law of X if {Xn} is an independent sequence of random variables and the distribution of Xn, n = 1, 2, ..., is the same as the distribution of X.

Let X and Y be independent random variables with pdfs fX(x) and fY(y) and distribution functions F and G, respectively. Then the pdf fZ(z) of the random variable Z = X + Y is

fZ(z) = ∫_{−∞}^∞ fX(z − ξ) fY(ξ) dξ = ∫_{−∞}^∞ fX(ξ) fY(z − ξ) dξ.

The distribution function H(z) of Z = X + Y is given by

H(z) = ∫_{−∞}^∞ F(z − ξ) fY(ξ) dξ = ∫_{−∞}^∞ G(z − ξ) fX(ξ) dξ.

Example. A random variable X is said to be uniformly distributed on an interval (a, b) if its pdf is given by

fX(x) =
    1/(b − a)    for a < x < b,
    0            otherwise.

Let X and Y be independent and uniformly distributed over (0, 1). Then fX(x) = 1 for 0 < x < 1 and fY(y) = 1 for 0 < y < 1. Let Z = X + Y. Then

fZ(z) = ∫_{−∞}^∞ fX(x) fY(z − x) dx.

The possible values of the integrand are 0 and 1. The integrand takes the value 1 if 0 ≤ x ≤ 1 and 0 ≤ z − x ≤ 1, i.e., z − 1 ≤ x ≤ z. If 0 ≤ z ≤ 1, the integrand has value 1 for 0 ≤ x ≤ z and 0 otherwise; therefore fZ(z) = ∫_0^z 1 dx = z, 0 ≤ z ≤ 1. If 1 < z ≤ 2, the integrand has value 1 for z − 1 ≤ x ≤ 1 and 0 otherwise; therefore fZ(z) = ∫_{z−1}^1 1 dx = 2 − z, 1 < z ≤ 2. Thus

fZ(z) =
    z        for 0 ≤ z ≤ 1,
    2 − z    for 1 < z ≤ 2,
    0        otherwise.
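The triangular law of Z = X + Y can be confirmed by Monte Carlo through its distribution function. This is a sketch; the sample size and seed are arbitrary:

```python
import random

# Monte-Carlo check that Z = X + Y, for independent uniform(0, 1) variables,
# follows the triangular distribution derived above, compared via the CDF.
random.seed(7)
n = 400_000
zs = [random.random() + random.random() for _ in range(n)]

def F_Z(z):
    """CDF obtained by integrating the triangular density f_Z."""
    if z <= 0:
        return 0.0
    if z <= 1:
        return z * z / 2
    if z <= 2:
        return 2 * z - z * z / 2 - 1
    return 1.0

def empirical_cdf(z):
    return sum(1 for v in zs if v <= z) / n
```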


Let {Xn} be a sequence of iid random variables with distribution function F. Then the distribution function of X1 + X2 + ... + Xm is F^{∗m}, i.e., P(X1 + X2 + ... + Xm ≤ x) = F^{∗m}(x), where F^{∗m} is the m-fold convolution of F with itself.

Erlang Random Variable. A random variable En is said to have the Erlang distribution with parameters n (a positive integer) and λ (> 0) if En is the sum of n exponentially distributed random variables with parameter λ. The pdf of En is

f(t) = λ^n t^{n−1} e^{−λt} / (n − 1)!,    t > 0,

and its mean is n/λ.

Hyper-exponential Random Variable. A random variable Hn is said to have a hyper-exponential distribution if Hn = X_{λi} with probability αi, where ∑_{i=1}^n αi = 1 and X_{λi} is an exponentially distributed random variable with parameter λi, for i = 1, 2, ..., n. The pdf f(t) of Hn is given by

f(t) = α1 (λ1 e^{−λ1 t}) + ... + αn (λn e^{−λn t}),    t > 0,

and its mean is α1/λ1 + ... + αn/λn.

Memoryless or Markov Property of the Exponential Distribution. Let X be an exponentially distributed random variable with parameter λ. Then

P(X > a + b | X > a) = P(X > a + b, X > a)/P(X > a) = P(X > a + b)/P(X > a) = e^{−λ(a+b)}/e^{−λa} = e^{−λb} = P(X > b).

If we think of X as the lifetime of some instrument, the above equation states that the probability that the instrument lives for at least a + b hours, given that it has survived a hours, is the same as the initial probability that it lives for at least b hours. In other words, if the instrument is alive at time a, then the distribution of its remaining life is the original lifetime distribution.
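The memoryless property can be illustrated by simulation; the values of λ, a, b and the sample size below are arbitrary choices:

```python
import random

# Monte-Carlo illustration of the memoryless property
# P(X > a + b | X > a) = P(X > b) for an exponential random variable.
random.seed(3)
lam, a, b, n = 1.5, 0.4, 0.7, 500_000
xs = [random.expovariate(lam) for _ in range(n)]

survivors = [x for x in xs if x > a]
cond = sum(1 for x in survivors if x > a + b) / len(survivors)  # P(X > a+b | X > a)
uncond = sum(1 for x in xs if x > b) / n                        # P(X > b)
# Both should be close to exp(-lam * b) = exp(-1.05), about 0.3499.
```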

Now we give some problems from probability theory.

1) Find sharp bounds for P(A ∩ B) and P(A ∪ B).

2) n numbers are chosen at random and multiplied. Find the probability that the last digit of the product is 1, 3, 7 or 9.

3) Let A, B, C be independent uniform (on (0, 1)) random variables. Find the probability that the polynomial Ax² + Bx + C = 0 has real roots.

4) A stick of length a is broken into 3 pieces. What is the probability that they form a triangle?

5) Let X1 and X2 be independent random variables with X1 ∈ Poi(λ1) and X2 ∈ Poi(λ2). Prove that the conditional distribution of X1, given X1 + X2 = n, is binomial.


6) X1, X2, ..., Xn are independent random variables and Xi has cdf Fi(x) for i = 1, 2, ..., n. Prove that Y = max(X1, ..., Xn) has cdf ∏_{i=1}^n Fi(x).

Now we are going to consider briefly 3 of the basic distributions which we will use in our course.

1) Uniform distribution

f(x) = 1/(b − a),    a < x < b,
F(x) = (x − a)/(b − a),    a < x < b.

If we take the unit interval (0, 1), then f(x) = 1 and F(x) = x, and in this way we obtain the standard uniform distribution U(0, 1).

2) Exponential distribution

f(x) = α e^{−αx},    x > 0,
F(x) = 1 − e^{−αx},    x > 0.

We can easily calculate E[X] = 1/α and Var[X] = 1/α². The most important property of the exponential distribution is the so-called lack of memory property (Markov property):

P(X > x + y | X > y) = P(X > x).

3) Erlang distribution (Gamma distribution)

f(x) = (α^n/(n − 1)!) e^{−αx} x^{n−1},    x > 0.

Here we have E[X] = n/α and Var[X] = n/α².

If X1, ..., Xn are iid exponential variables, then X1 + ... + Xn follows an Erlang distribution. If X1, ..., Xn are iid standard N(0, 1) variables, then X1² + ... + Xn² follows a chi-square distribution.
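The Erlang facts above can be checked by simulating sums of iid exponentials and comparing the sample mean and variance with n/α and n/α². The parameter values are illustrative:

```python
import random

# Monte-Carlo check that a sum of n iid exponential(alpha) variables has
# mean n/alpha and variance n/alpha^2, as stated for the Erlang distribution.
random.seed(11)
alpha, n_stages, trials = 2.0, 4, 200_000
sums = [sum(random.expovariate(alpha) for _ in range(n_stages))
        for _ in range(trials)]
mean_s = sum(sums) / trials
var_s = sum((s - mean_s) ** 2 for s in sums) / trials
# Expected: mean n/alpha = 4/2 = 2 and variance n/alpha^2 = 4/4 = 1.
```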

1.4 Stochastic Processes

Definition 1.13. A stochastic process {X(t), t ∈ T} is a collection of random variables defined on a common probability space. For each t ∈ T, X(t) is called the state of the stochastic process at time t.


Classification

It is typical to think of t as time, T as a set of points in time, and X(t) as the value or state of the stochastic process at time t. We classify stochastic processes according to time and say that they are discrete-time or continuous-time, depending on whether T is discrete (finite or countably infinite) or continuous.

Counting Process

Definition 1.14. A stochastic process (Xt)t≥0 is said to be a counting process if Xt represents the total number of events that have occurred up to time t. Hence, a counting process (Xt)t≥0 must satisfy

i) X(t) ≥ 0;

ii) X(t) is integer-valued;

iii) if s < t, then X(s) ≤ X(t) for real numbers s, t;

iv) for s < t, X(t) − X(s) equals the number of events that have occurred in the interval (s, t].

1.5 Markov Chain

Definition 1.15. A discrete-time stochastic process {Xn, n = 0, 1, 2, ...} is said to be a Markov chain if the conditional distribution of any future state Xn+1, given the past states X0, X1, X2, ..., Xn−1 and the present state Xn, is independent of the past states and depends only on the present state. That is,

P(Xn+1 = j | X0 = i0, X1 = i1, ..., Xn−1 = in−1, Xn = i) = P(Xn+1 = j | Xn = i)

for all states i0, i1, ..., in−1, i, j and all n ≥ 0.

Stationary or Homogeneous Transition Probability

Definition 1.16. If, in the above definition, the transition probability P(Xn+1 = j | Xn = i) is independent of n, then it is called a stationary or homogeneous transition probability, and we denote it by pij. The value pij is the probability that the process, being in state i, makes a transition into state j. Therefore, pij ≥ 0 for i, j ≥ 0, and ∑_{j=0}^∞ pij = 1. The transition matrix P is given by


P = ((pij)) =

| p00 p01 ... |
| p10 p11 ... |
| ... ... ... |

Example 1. Let X1, X2, ..., Xn, ... be independent identically distributed random variables and let Sn = X1 + X2 + ... + Xn. Since Sn+1 = Sn + Xn+1 depends only on Sn, the sequence {Sn} is a Markov chain.

Example 2. (Gambler's ruin) There are two players A and B, and P(A wins a game) = p. Together they have N Euro to play with. If A wins, he gets 1 Euro from B, and if A loses, he gives 1 Euro to B. We denote by

Xn = the amount which A has at the end of the n-th game.

If Xn = i with 0 < i < N, then Xn+1 = i + 1 with probability p and Xn+1 = i − 1 with probability q = 1 − p; the states 0 and N are absorbing. The transition probability matrix is

P = ((pij)) =

| 1 0 0 0 ... 0 |
| q 0 p 0 ... 0 |
| 0 q 0 p ... 0 |
| ... ... ... ... |
| 0 ... q 0 p |
| 0 ... 0 0 1 |

Example 3. (Branching processes) Consider a population evolving in generations. Denote

X_n = number of persons in the n-th generation,

ξ_i = number of offspring of the i-th person of that generation.

Then X_{n+1} = ξ_1 + ξ_2 + ... + ξ_{X_n} depends only on X_n, and therefore (X_n) is a Markov chain.

Example 4. (Discrete queue) In every unit of time one customer is served, if available, and a random number ξ of customers joins the system per unit of time. Let

P(ξ = i) = a_i, i = 0, 1, 2, ...

If X_n = number of waiting customers at the beginning of the n-th period, then X_{n+1} depends only on X_n:

X_{n+1} = X_n − 1 + ξ if X_n ≥ 1,  X_{n+1} = ξ if X_n = 0.

Then the transition probability matrix is


P = (p_ij) =
[ a_0  a_1  a_2  a_3  ... ]
[ a_0  a_1  a_2  a_3  ... ]
[ 0    a_0  a_1  a_2  ... ]
[ 0    0    a_0  a_1  ... ]
[ ...  ...  ...  ...  ... ]

Example 5. There are 6 balls, 3 red and 3 black, distributed equally in 2 urns. At every time instant one ball is taken from each urn and the two are interchanged. Let X_n denote the number of red balls in urn A at the end of the n-th draw. Obviously (X_n) is a Markov chain. We now determine the elements of the transition probability matrix p_ij = P(X_{n+1} = j | X_n = i); from state i, j takes the values i − 1, i, i + 1, where i = 0, 1, 2, 3. Easily we obtain the transition probability matrix

P = (p_ij) =
[ 0    1    0    0   ]
[ 1/9  4/9  4/9  0   ]
[ 0    4/9  4/9  1/9 ]
[ 0    0    1    0   ]

Definition 1.17. A transition probability matrix P = (p_ij) is said to be doubly stochastic if each column sum is also 1.

Definition 1.18. State j is said to be accessible from state i if p^(n)_ij > 0 for some n ≥ 0.

Definition 1.19. Two states i and j are said to communicate if i is accessible from j and j is accessible from i, i.e. there exist m, n ≥ 0 such that p^(m)_ij > 0 and p^(n)_ji > 0.

Definition 1.20. State i is said to have period d if p^(n)_ii = 0 whenever n is not divisible by d, and d is the greatest integer with this property. If p^(n)_ii = 0 for all n > 0, then the period of i is defined to be infinite.

Definition 1.21. A state with period 1 is said to be aperiodic.

Definition 1.22. A chain is aperiodic if all its states are aperiodic.

Definition 1.23. A Markov chain is said to be irreducible if all of its states communicate, that is, if for every pair of states i and j there exists an n such that p^(n)_ij > 0.

Definition 1.24. Define f^(n)_jj as the probability that a chain starting at state j returns for the first time to j in n transitions. Hence the probability that the chain ever returns to j is

f_jj = Σ_{n=1}^∞ f^(n)_jj.

Definition 1.25. If fjj = 1, then j is said to be a recurrent state.


Definition 1.26. If fjj < 1, j is said to be a transient state.

Definition 1.27. When f_jj = 1, m_jj = Σ_{n=1}^∞ n f^(n)_jj is the mean recurrence time.

If m_jj < ∞, then j is known as a positive recurrent state, while if m_jj = ∞, we say that j is a null recurrent state.

Definition 1.28. An aperiodic chain which is irreducible and positive recurrent is said to be ergodic.

Stationary probabilities

Definition 1.29. Let a Markov chain be irreducible, aperiodic and positive recurrent (ergodic). Then the limits

lim_{n→∞} p^(n)_ij = π_j

exist and satisfy π^T P = π^T, π_0 + π_1 + ... = 1. The numbers π_j, j = 0, 1, ..., are called stationary probabilities.
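As a numerical sketch (not part of the original script), the stationary distribution of the urn chain of Example 5 can be verified against the defining relation π^T P = π^T. The candidate vector used below is the hypergeometric distribution C(3,i)C(3,3−i)/C(6,3) = (1, 9, 9, 1)/20, which is an assumption checked by the assertions:

```python
from fractions import Fraction as F

# Transition matrix of the urn chain from Example 5 (states 0..3 = red balls in urn A).
P = [[F(0),    F(1),    F(0),    F(0)],
     [F(1, 9), F(4, 9), F(4, 9), F(0)],
     [F(0),    F(4, 9), F(4, 9), F(1, 9)],
     [F(0),    F(0),    F(1),    F(0)]]

def step(pi, P):
    """One application of pi^T P."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Candidate stationary vector (hypergeometric weights 1, 9, 9, 1 over 20).
pi = [F(1, 20), F(9, 20), F(9, 20), F(1, 20)]
assert step(pi, P) == pi   # pi^T P = pi^T holds exactly
assert sum(pi) == 1
```

Using exact `Fraction` arithmetic makes the check pi^T P = pi^T an equality rather than a floating-point approximation.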

1.6 Markov Process

Independent and Stationary Increments

Definition 1.30. We say that a stochastic process has independent increments if for all t_1 < t_2 < ... < t_n the increments X(t_2) − X(t_1), X(t_3) − X(t_2), ..., X(t_n) − X(t_{n−1}) are independent. If, further, X(t_i) − X(t_{i−1}) depends only on t_i − t_{i−1} and not on t_{i−1} or X(t_{i−1}), then the process has stationary independent increments.

Markov process

Definition 1.31. A process X(t) is said to be a Markov process if

P(a < X(t) ≤ b | X(t_n) = x_n, X(t_{n−1}) = x_{n−1}, ..., X(t_0) = x_0) = P(a < X(t) ≤ b | X(t_n) = x_n)

for all n, where t_0 < t_1 < ... < t_n < t.

Martingale

Definition 1.32. If E(|X(t_n)|) < ∞ and E(X(t_n) | X(t_{n−1}), ..., X(t_0)) = X(t_{n−1}), then X(t) is a martingale.


1.7 Poisson Process

Poisson process

Definition 1.33. A counting process X(t), t ≥ 0, is said to be a Poisson process with parameter λ (> 0) if

i) X(0) = 0;

ii) the process has stationary and independent increments;

iii) P(X(h) = 1) = λh + o(h) for a time interval of length h (then the probability that no event occurs is 1 − λh + o(h));

iv) P(X(h) ≥ 2) = o(h) (events occur one at a time).

The probability generating function of X(t) is

P(s, t) = E(s^{X(t)}) = P(X(t) = 0) + sP(X(t) = 1) + s²P(X(t) = 2) + ...

We also have

P(s, t+h) = E(s^{X(t+h)}) = E(s^{X(t)})(1 − λh) + sE(s^{X(t)})λh + o(h) = P(s, t)(1 − λh) + sP(s, t)λh + o(h).

Now we get

lim_{h→0} (P(s, t+h) − P(s, t))/h = −λ(1 − s)P(s, t),

i.e.

∂P(s, t)/∂t = −λ(1 − s)P(s, t).

We assumed that X(0) = 0, so P(s, 0) = 1. Hence the probability generating function is that of a Poisson distribution:

P(s, t) = e^{−λ(1−s)t}.

Then P_n(t) = P(X(t) = n) = e^{−λt}(λt)^n/n!, for n = 0, 1, 2, ...

For a Poisson process (X(t))_{t≥0} with parameter λ (> 0) we get:

1) E(X(t)) = λt and Var(X(t)) = λt;

2) Let T_i be the time of occurrence of the i-th event. For t > 0,

P_0(t) = P(X(t) = 0) = e^{−λt},

i.e. the probability that the first event occurs after time t is P_0(t) = P(T_1 > t), and then P(T_1 ≤ t) = 1 − e^{−λt}. Therefore T_1 follows an exponential distribution with mean 1/λ. If we shift the origin to T_i, then, as above, T_{i+1} − T_i follows an exponential distribution with the same mean 1/λ, and T_1, T_2 − T_1, T_3 − T_2, ... are independent identically distributed exponential variables with mean 1/λ;


3) T_i = (T_i − T_{i−1}) + (T_{i−1} − T_{i−2}) + ... + (T_2 − T_1) + T_1 follows an Erlang distribution with parameters (i, λ);

4) Let u < t and k ≤ n; for k = 0, 1, ..., n we have

P(X(u) = k | X(t) = n) = P(X(u) = k, X(t) = n) / P(X(t) = n)
= P(X(u) = k) P(X(t−u) = n−k) / P(X(t) = n)
= [e^{−λu}(λu)^k/k!] · [e^{−λ(t−u)}(λ(t−u))^{n−k}/(n−k)!] / [e^{−λt}(λt)^n/n!]
= C(n, k) (u/t)^k (1 − u/t)^{n−k}.

So X(u) | X(t) follows a binomial distribution B(X(t), u/t), with E(X(u) | X(t)) = X(t)·u/t and Var(X(u) | X(t)) = X(t)·(u/t)(1 − u/t).

5) Let X_1(t) and X_2(t) be independent Poisson processes with parameters λ_1 and λ_2 respectively. Knowing that E(s^{X_1(t)}) = e^{−λ_1(1−s)t} and E(s^{X_2(t)}) = e^{−λ_2(1−s)t}, we get

E(s^{X_1(t)+X_2(t)}) = e^{−λ_1(1−s)t} e^{−λ_2(1−s)t} = e^{−(λ_1+λ_2)(1−s)t}.

Therefore X_1(t) + X_2(t) is also a Poisson process, with parameter λ_1 + λ_2. Now let us calculate

P(X_1(t) = n_1 | X_1(t) + X_2(t) = n) = P(X_1(t) = n_1, X_1(t) + X_2(t) = n) / P(X_1(t) + X_2(t) = n)
= P(X_1(t) = n_1) P(X_2(t) = n − n_1) / P(X_1(t) + X_2(t) = n)
= [e^{−λ_1 t}(λ_1 t)^{n_1}/n_1!] · [e^{−λ_2 t}(λ_2 t)^{n−n_1}/(n−n_1)!] / [e^{−(λ_1+λ_2)t}((λ_1+λ_2)t)^n/n!]
= C(n, n_1) (λ_1/(λ_1+λ_2))^{n_1} (λ_2/(λ_1+λ_2))^{n−n_1}.

So X_1(t) | X_1(t) + X_2(t) follows a binomial distribution B(X_1(t)+X_2(t), λ_1/(λ_1+λ_2)), and E(X_1(t) | X_1(t)+X_2(t)) = (λ_1/(λ_1+λ_2))(X_1(t)+X_2(t)).

This generalises: X_1(t) | X_1(t)+X_2(t)+...+X_n(t) follows a binomial distribution, while (X_1(t), X_2(t), ..., X_n(t)) | X_1(t)+X_2(t)+...+X_n(t) follows a multinomial distribution.

6) Denote by N(t) the number of events up to time t; T_1 is again the time of occurrence of the first event. Then, for 0 < u < t,

P(u < T_1 < u+Δu | N(t) = 1) = P(u < T_1 < u+Δu, N(t) = 1) / P(N(t) = 1)
= (e^{−λu} λΔu · e^{−λ(t−u)}) / (e^{−λt} λt) = Δu/t.

So, given that N(t) = 1, T_1 follows a uniform distribution on the interval (0, t). This generalises: given that n events occur at times T_1 < T_2 < ... < T_n, the joint probability density function of T_1, T_2, ..., T_n is n!/t^n on (0, t).

Example 1. Customers arrive at a supermarket at the rate of 20/hr (20 customers per hour), i.e. λ = 1/3 per minute. Given that 100 customers have arrived before 10 a.m., find:

a) the probability that 2 customers will arrive in the next 5 min;

b) the probability that the next customer will arrive within 1 min;

c) the expected number of customers which will arrive in the next 2 hr.

Substituting in the formulas, we get:

a) P(X(5) = 2) = e^{−5/3}(5/3)²/2!;

b) P(T_1 ≤ 1) = 1 − e^{−(1/3)·1};

c) E(number of customers arriving in the next 2 hr) = (1/3)·120 = 40 customers.
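The three answers above are easy to evaluate numerically; a minimal sketch (the variable names `lam`, `p_a`, `p_b`, `e_c` are ours, not from the script):

```python
import math

lam = 20 / 60          # 20 customers/hour = 1/3 per minute

# a) exactly 2 arrivals in the next 5 minutes: Poisson pmf with mean lam*5
p_a = math.exp(-lam * 5) * (lam * 5) ** 2 / math.factorial(2)

# b) next interarrival time at most 1 minute: exponential cdf at t = 1
p_b = 1 - math.exp(-lam * 1)

# c) expected number of arrivals in the next 120 minutes
e_c = lam * 120
```

By stationary and independent increments, the 100 arrivals before 10 a.m. are irrelevant to all three answers.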

1.8 Birth and Death Processes

Here we can consider a birth as an arrival and a death as a departure.

Birth and Death Processes (BDP) and Kolmogorov Equations

Definition 1.34. Let X(t) be a Markov process with stationary transition probabilities, i.e. P(X(t) = n | X(u) = m) depends only on t − u (where m, n ≥ 0 and 0 < u < t). Let us denote

P_ij(h) = P(X(t+h) = j | X(t) = i)

and assume:

1) P_{i,i+1}(h) = λ_i h + o(h) (birth);

2) P_{i,i−1}(h) = µ_i h + o(h) (death);

3) P_{ii}(h) = 1 − (λ_i + µ_i)h + o(h) (no birth and no death occurs);

4) P(more than one birth and/or death occurs) = o(h);


5) λ_0 ≥ 0; λ_1, λ_2, ... > 0; µ_0 = 0; µ_1, µ_2, ... > 0.

Then X(t) is called a Birth and Death Process (BDP).

It is easy to obtain that

P(X(t+h) = n) = P(X(t+h) = n | X(t) = n+1)P(X(t) = n+1)
+ P(X(t+h) = n | X(t) = n−1)P(X(t) = n−1)
+ P(X(t+h) = n | X(t) = n)P(X(t) = n) + o(h),

where lim_{h→0} o(h)/h = 0. Writing P_n(t) = P(X(t) = n), we can express this as

P_n(t+h) = P_{n+1}(t)(µ_{n+1}h + o(h)) + P_{n−1}(t)(λ_{n−1}h + o(h)) + P_n(t)(1 − λ_n h − µ_n h + o(h)) + o(h).

From here we get

From here we get

Kolmogorov equations

Definition 1.35. The system of equations

P′_n(t) = µ_{n+1}P_{n+1}(t) − (λ_n + µ_n)P_n(t) + λ_{n−1}P_{n−1}(t), n = 1, 2, ...
P′_0(t) = µ_1P_1(t) − λ_0P_0(t)

is called the Kolmogorov equations.

Special cases which we are going to consider

1) Poisson process

2) Pure birth process

3) Pure death process

4) Pure (simple) birth and death process

5) M/M/1 queue

6) M/M/1 queue with a busy period

7) M/M/∞ queue


Solving the Kolmogorov equations, we obtain the time-dependent probabilities P_n(t). First we study the time-independent steady-state probabilities, setting P′_n(t) = 0.

Now from the Kolmogorov equations we have

µ_{n+1}P_{n+1} − λ_nP_n = µ_nP_n − λ_{n−1}P_{n−1},
µ_1P_1 − λ_0P_0 = 0.

Iterating here, we obtain:

µ_nP_n − λ_{n−1}P_{n−1} = µ_{n−1}P_{n−1} − λ_{n−2}P_{n−2} = ... = µ_1P_1 − λ_0P_0 = 0.

Therefore

µ_nP_n = λ_{n−1}P_{n−1},  P_n = (λ_{n−1}/µ_n)P_{n−1} = ... = (λ_0λ_1···λ_{n−1})/(µ_1µ_2···µ_n) P_0.

But we know that P_0 + P_1 + ... = 1, and then

P_0(1 + λ_0/µ_1 + λ_0λ_1/(µ_1µ_2) + ...) = 1,  P_0 = 1 / (1 + Σ_{n=1}^∞ λ_0λ_1···λ_{n−1}/(µ_1µ_2···µ_n)).

It is obvious that this converges when Σ_{n=1}^∞ λ_0λ_1···λ_{n−1}/(µ_1µ_2···µ_n) < ∞.


Chapter 2

Birth and Death Queueing Models

2.1 Model M/M/1

For this queueing model there is only one server, and the interarrival and service times are exponentially distributed with means 1/λ and 1/µ respectively. So we have

λ_n = λ, n = 0, 1, 2, ... and µ_n = µ, n = 1, 2, 3, ...

The steady-state probabilities P_0, P_1, ... are calculated as follows:

P_n = (λ_0λ_1···λ_{n−1})/(µ_1µ_2···µ_n) P_0 = (λ^n/µ^n)P_0 = ρ^nP_0,

where ρ = λ/µ is the traffic intensity, and P_0 we find this way:

P_0 = 1/(1 + Σ_{j=1}^∞ λ_0λ_1···λ_{j−1}/(µ_1µ_2···µ_j)) = 1/(1 + ρ + ρ² + ...) = 1 − ρ,

where ρ < 1, i.e. λ < µ. Therefore P_n = (1 − ρ)ρ^n, n = 0, 1, 2, ...

Let us denote

N_s = system size (the number of people in the whole system),

N_q = queue size (the number of people in the queue),

T_s = total time spent by a customer in the system,

T_q = waiting time spent by a customer in the queue.

We already know that P_n = P(N_s = n) = (1−ρ)ρ^n, n = 0, 1, 2, ..., and Σ_{n=0}^∞ P_n = 1. Then N_s follows a geometric distribution. The average number of customers in the system is given by:

E(N_s) = L_s = Σ_{n=1}^∞ nP_n = (1−ρ)ρ Σ_{n=1}^∞ nρ^{n−1} = (1−ρ)ρ/(1−ρ)² = ρ/(1−ρ).


We observe that

P(a person has to wait) = 1 − P_0 = 1 − (1 − ρ) = ρ.

Since E(N_s) = E(N_s | N_s > 0)P(N_s > 0) + E(N_s | N_s = 0)P(N_s = 0), we get

E(N_s | N_s > 0) = E(N_s)/P(N_s > 0) = (ρ/(1−ρ))/ρ = 1/(1−ρ),

P(N_s ≥ n) = (ρ^n + ρ^{n+1} + ...)(1−ρ) = (ρ^n/(1−ρ))(1−ρ) = ρ^n, n = 1, 2, ...

Let us calculate

E(N_s(N_s − 1)) = Σ_{n=2}^∞ n(n−1)P_n = (1−ρ)ρ² Σ_{n=2}^∞ n(n−1)ρ^{n−2} = (1−ρ)ρ² · 2/(1−ρ)³ = 2ρ²/(1−ρ)².

Now we can find the variance of the system size:

Var(N_s) = E(N_s(N_s − 1)) + E(N_s) − (E(N_s))² = 2ρ²/(1−ρ)² + ρ/(1−ρ) − ρ²/(1−ρ)² = ρ/(1−ρ)².

The probability that there is no one in the queue is given by

P(N_q = 0) = P(N_s = 0) + P(N_s = 1) = 1 − ρ + ρ(1 − ρ) = 1 − ρ².

The expected number of customers in the queue is

E(N_q) = L_q = Σ_{n=1}^∞ (n−1)P_n = Σ_{m=0}^∞ mP_{m+1} = Σ_{m=0}^∞ mρ^{m+1}(1−ρ) = ρ²/(1−ρ).

Hence, knowing L_s and L_q, we obtain the relation

L_s = L_q + ρ,

i.e. L_s = L_q + 1 − P_0.

For the pdf of T_s we get

(pdf of T_s)(x) = Σ_{n=0}^∞ (pdf of ξ_1 + ... + ξ_{n+1})(x) · P(N_s = n)
= Σ_{n=0}^∞ [µ^{n+1} e^{−µx} x^n / n!] ρ^n(1−ρ)
= e^{−µx}µ(1−ρ) Σ_{n=0}^∞ (µρx)^n/n! = µ(1−ρ)e^{−µ(1−ρ)x}, x > 0,

where ξ_1, ..., ξ_{n+1} are iid exponential variables with mean 1/µ. Thus the total time spent in the system by a customer follows an exponential distribution with mean 1/(µ(1−ρ)).

The expected waiting time in the system is

W_s = E(T_s) = 1/(µ(1−ρ)),

and then

L_s = λW_s (Little's formula).

For T_s = T_q + ξ, where ξ is the service time, taking expectations on both sides of this equation, we obtain

W_s = W_q + 1/µ,  W_q = 1/(µ(1−ρ)) − 1/µ = ρ/(µ(1−ρ)).
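The M/M/1 measures derived above fit into a few lines of code; a minimal sketch (function name and the rates λ = 3, µ = 4 are ours), with the internal consistency checks L_s = λW_s and L_s = L_q + ρ:

```python
def mm1_metrics(lam, mu):
    """Steady-state measures of an M/M/1 queue (requires rho = lam/mu < 1)."""
    rho = lam / mu
    assert rho < 1, "steady state requires lam < mu"
    Ls = rho / (1 - rho)              # mean number in system
    Lq = rho ** 2 / (1 - rho)         # mean number in queue
    Ws = 1 / (mu * (1 - rho))         # mean time in system (Little: Ls = lam*Ws)
    Wq = rho / (mu * (1 - rho))       # mean waiting time   (Ws = Wq + 1/mu)
    return Ls, Lq, Ws, Wq

Ls, Lq, Ws, Wq = mm1_metrics(lam=3, mu=4)   # rho = 0.75
```

With ρ = 0.75 this gives L_s = 3, L_q = 2.25, W_s = 1, W_q = 0.75, and both identities hold exactly.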

Example 1. For λ_0 = 1, λ_1 = 2, λ_2 = 5 and µ_1 = 3, µ_2 = 1, µ_3 = 2, we calculate

P_1 = (1/3)P_0,  P_2 = (1/3)·(2/1)P_0 = (2/3)P_0,  P_3 = (1/3)·(2/1)·(5/2)P_0 = (5/3)P_0,

P_0(1 + 1/3 + 2/3 + 5/3) = 1,  P_0 = 1/(11/3) = 3/11.

Therefore

P_1 = 1/11,  P_2 = 2/11,  P_3 = 5/11.

Here the mean is 1·(1/11) + 2·(2/11) + 3·(5/11) = 20/11, and the variance is

1²·(1/11) + 2²·(2/11) + 3²·(5/11) − (20/11)² = 194/121.

Example 2. Let us take λ_0 = λ_1 = ... = 1 and µ_n = 3n, i.e. µ_1 = 1·3, µ_2 = 2·3, ... Then

P_n = [(1/3)^n/n!] / [1 + Σ_{n=1}^∞ (1/3)^n/n!] = e^{−1/3}(1/3)^n/n!.


Example 3. Arrivals come at a telephone booth in a Poisson process with mean interarrival time 10 min. The conversation time is exponential with mean 3 min. Thus λ = 1/10, µ = 1/3 and ρ = 3/10 = 0.3. Now we find that:

a) P(an arrival has to wait) = 1 − (1 − ρ) = ρ = 0.3;

b) the average non-empty queue length = 1/(1−ρ) = 10/7;

c) if a customer has to wait for at least 3 min on average, another booth will be installed. Then

W_q = L_q/λ′ = ρ′²/((1−ρ′)λ′) = λ′/(µ(µ−λ′)) = 3, where ρ′ = λ′/µ,

which gives λ′ = 1/6 per minute = 10 per hour;

d) P(T_q > 10) = ∫_{10}^∞ λ(1−ρ)e^{−µ(1−ρ)x} dx ≈ 0.03;

e) P(T_s > 10) = ∫_{10}^∞ µ(1−ρ)e^{−µ(1−ρ)x} dx = e^{−µ(1−ρ)·10} ≈ 0.1;

f) the fraction of time the system is in use = 1 − P_0 = ρ = 0.3.

Example 4. There are 2 repairmen, a slow one and a fast one. The machines break down at the rate of λ = 3/hr. The non-productive time of one machine costs 50 Euro/hr. The service rates for the 2 repairmen are µ_s = 4/hr and µ_f = 6/hr respectively; their costs are c_s = 30 Euro/hr and c_f = 50 Euro/hr. Which repairman is preferred?

For the repairmen we obtain ρ_s = 3/4, ρ_f = 1/2. For the slow repairman we get L_{s,s} = ρ_s/(1−ρ_s) = 3; for the fast repairman, L_{s,f} = ρ_f/(1−ρ_f) = 1. Let us assume that the working day is 8 hr. Then the total cost for the slow repairman is 8·3·50 + 8·30 = 1440 Euro, and the total cost for the fast repairman is 8·1·50 + 8·50 = 800 Euro. Hence we prefer the fast repairman.

2.2 Model M/M/1/N

This queueing model has a capacity of N customers, including the one in the service facility. If an arriving customer finds the system full, he does not wait and leaves the system. The arrival and service rates for this queueing model are given by

λ_n = λ, n = 0, 1, ..., N−1;  µ_n = µ, n = 1, 2, ..., N.

Here again ρ = λ/µ, but ρ need not be less than 1.

P_n = ρ^nP_0, n = 1, 2, ..., N.

P_0 + P_1 + ... + P_N = 1,  P_0(1 + ρ + ρ² + ... + ρ^N) = 1,

P_0(1 − ρ^{N+1})/(1 − ρ) = 1,  P_0 = (1 − ρ)/(1 − ρ^{N+1}) (= 1/(N+1) when ρ = 1).

Remark. If ρ < 1 and N → ∞, then P_0 = 1 − ρ.

P_N = ρ^N(1 − ρ)/(1 − ρ^{N+1}) (= 1/(N+1) when ρ = 1) = P(the system is full),

1 − P_N = P(the system is not full).

Performance Measures

L_s = Σ_{n=1}^N nP_n = ((1−ρ)/(1−ρ^{N+1})) Σ_{n=1}^N nρ^n = ρ(1 − (N+1)ρ^N + Nρ^{N+1}) / ((1−ρ)(1−ρ^{N+1})).

The effective arrival rate λ′ is the mean rate of the customers actually entering the system (because the system is finite, the arriving customers who find the system full leave the system). This is given by

λ′ = Σ_{n=0}^{N−1} λ_nP_n = λ Σ_{n=0}^{N−1} P_n = λ(1 − P_N) = λ_eff.

The average waiting times in the system, W_s, and in the queue, W_q, can be calculated using Little's formula and λ′, i.e.

W_s = L_s/λ′,  W_q = L_q/λ′.

The blocking probability is the probability that customers are blocked and not acceptedby the queueing system because the system’s capacity is full. This situation occurs inqueueing systems that have a finite or no waiting queue. In our model it happens whenthere are N customers in the system and hence the blocking probability is PN .

Example. Cars arrive at a car-wash facility at a rate of 5 cars per hour. The washing time is exponential with mean 10 min. Then λ = 5/hr, µ = 6/hr, ρ = 5/6. We obtain

L_s = ρ/(1−ρ) = (5/6)/(1 − 5/6) = 5;  L_q = ρ²/(1−ρ) = 25/6 ≈ 4.17.
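The finite-capacity formulas of this section can also be evaluated programmatically; a minimal sketch (function name and the choice N = 4 with the car-wash rates are ours, not from the script):

```python
def mm1n(lam, mu, N):
    """Steady-state distribution of M/M/1/N (capacity N, rho != 1)."""
    rho = lam / mu
    P0 = (1 - rho) / (1 - rho ** (N + 1))
    P = [P0 * rho ** n for n in range(N + 1)]
    Ls = sum(n * p for n, p in enumerate(P))
    lam_eff = lam * (1 - P[N])    # arrivals blocked in state N never enter
    return P, Ls, lam_eff

# Car-wash rates with an assumed capacity of N = 4 cars
P, Ls, lam_eff = mm1n(lam=5, mu=6, N=4)
```

Here `P[N]` is the blocking probability, and W_s would follow from Little's formula as `Ls / lam_eff`.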

2.3 Model M/M/1 with Finite Source

One person is in charge of k machines. The operating time of a machine has rate λ and the repair time has rate µ. Let X denote the number of machines with the repairman. If X = n, then k − n machines are operating, so the arrival rate is λ(k − n). We have

µ_n = µ, n = 1, 2, ..., k (because we have only one person),
λ_n = λ(k − n), n = 0, 1, ..., k − 1.


P_n = (λ_0λ_1···λ_{n−1})/(µ_1µ_2···µ_n) P_0 = k(k−1)···(k−n+1)ρ^nP_0 = k^{(n)}ρ^nP_0,

P_0 = 1/(1 + kρ + k(k−1)ρ² + ... + k(k−1)···1·ρ^k),

L_s = Σ_{n=1}^k nP_n = Σ_{n=1}^k (k − (k−n))P_n = k(1−P_0) − Σ_{n=1}^k (k−n)k(k−1)···(k−n+1)ρ^nP_0
= k(1−P_0) − (1/ρ)(1 − P_0 − P_1) = k(1−P_0) − (1/ρ)(1 − P_0 − kρP_0) = k − (1−P_0)/ρ,

λ_eff = Σ_{n=0}^k λ_nP_n = λ Σ_{n=0}^k (k−n)P_n = λ(k − L_s).

Example. One person is in charge of 3 machines. It is given that λ = 2/day, µ = 4/day. Then we obtain:

ρ = 1/2,  P_1 = 3·(1/2)P_0 = (3/2)P_0,  P_2 = 3·2·(1/4)P_0 = (3/2)P_0,  P_3 = 3·2·1·(1/8)P_0 = (3/4)P_0,

P_0(1 + 3/2 + 3/2 + 3/4) = 1,

P_0 = 4/19,  P_1 = 6/19,  P_2 = 6/19,  P_3 = 3/19,

L_s = 27/19,  λ_eff = 60/19,  W_s = 9/20,

L_q = L_s − (1 − P_0) = 12/19,  W_q = 1/5.
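The example above can be reproduced exactly with rational arithmetic; a minimal sketch (function name is ours, not from the script):

```python
from fractions import Fraction as F

def finite_source(lam, mu, k):
    """M/M/1 with k machines: lambda_n = (k-n)*lam, mu_n = mu."""
    terms = [F(1)]
    for n in range(k):
        # multiply the previous term by lambda_n / mu = (k-n)*lam / mu
        terms.append(terms[-1] * F(k - n) * lam / mu)
    total = sum(terms)
    return [t / total for t in terms]

P = finite_source(lam=2, mu=4, k=3)          # the example above
Ls = sum(n * p for n, p in enumerate(P))     # mean number of broken machines
```

Exact fractions make the comparison with the hand-computed values 4/19, 6/19, 6/19, 3/19 and L_s = 27/19 unambiguous.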

2.4 Model M/M/c

In this model the arrivals form a Poisson process (the interarrival times are exponential) and the service time follows an exponential distribution. There are c servers. So we have:

λ_n = λ, n = 0, 1, 2, ...

µ_n = nµ if n = 1, ..., c−1;  cµ if n = c, c+1, ...

and it is easy to obtain

P_n = (ρ^n/n!)P_0, n = 0, 1, ..., c−1;  (ρ^n/(c! c^{n−c}))P_0, n = c, c+1, ...


P_0 = 1/(1 + ρ/1! + ρ²/2! + ... + ρ^{c−1}/(c−1)! + (ρ^c/c!)/(1 − ρ/c)),

P(a person has to wait) = P(number of customers ≥ c) = (ρ^c/c!)P_0/(1 − ρ/c),

L_q = Σ_{n=c}^∞ (n−c)P_n = Σ_{n=c}^∞ (n−c)(ρ^n/(c! c^{n−c}))P_0 = (ρ^c/c!)P_0 Σ_{m=0}^∞ m(ρ/c)^m = (ρ^c/c!)·(ρ/c)/(1 − ρ/c)² · P_0.

If we put c = 2, then P_1 = ρP_0, P_2 = (ρ²/2!)P_0 and

P(a person has to wait) = (ρ²/2!)P_0/(1 − ρ/2),  P_0 = (2 − ρ)/(2 + ρ),  L_q = ρ³/(4 − ρ²),  ρ/2 < 1.
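The waiting probability above (Erlang's C formula) can be evaluated directly from P_0; a minimal sketch (function name and the rates λ = 3, µ = 2, c = 2 are ours, chosen so the c = 2 closed form applies):

```python
import math

def erlang_c(lam, mu, c):
    """P(wait) in M/M/c: (rho^c / (c! (1 - rho/c))) * P0, with rho = lam/mu < c."""
    rho = lam / mu
    assert rho < c, "stability requires rho < c"
    tail = rho ** c / (math.factorial(c) * (1 - rho / c))
    P0 = 1 / (sum(rho ** n / math.factorial(n) for n in range(c)) + tail)
    return tail * P0

pw = erlang_c(lam=3, mu=2, c=2)   # rho = 1.5
```

For c = 2 and ρ = 1.5 the closed form gives P_0 = (2−ρ)/(2+ρ) = 1/7 and P(wait) = 9/14, which the function reproduces.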

2.5 Model M/M/c/N (Loss and Delay System)

This model is finite.

λ_n = λ, n = 0, 1, ..., N−1

µ_n = nµ if n = 1, ..., c−1;  cµ if n = c, c+1, ..., N.

We consider the cases:

P_0 = [Σ_{n=0}^{c−1} ρ^n/n! + (ρ^c/c!)(1 − (ρ/c)^{N−c+1})/(1 − ρ/c)]^{−1} if ρ/c ≠ 1;

P_0 = [Σ_{n=0}^{c−1} ρ^n/n! + (ρ^c/c!)(N−c+1)]^{−1} if ρ/c = 1.

λ_eff = λ(1 − P_N),  L_s = L_q + λ_eff/µ.

2.6 Model M/M/c with a Limited Source k (Repairman Problem)

Here

λ_n = (k − n)λ, n = 0, 1, ..., k−1

µ_n = nµ if n = 1, ..., c−1;  cµ if n = c, c+1, ...

P_n = C(k,n)ρ^nP_0 for n = 0, 1, ..., c−1;  C(k,n)ρ^n n!/(c! c^{n−c}) P_0 for n = c, c+1, ...


2.7 Model M/M/c/c (Loss System)

In comparison to Model 2.5, here N = c and there is no queue (L_q = 0). For this model

λ_n = λ, n = 0, 1, ..., c−1;  µ_n = nµ, n = 1, 2, ..., c,

P_n = (ρ^n/n!)P_0, n = 0, 1, ..., c (this is a finite model),

P_0 = 1/(1 + ρ/1! + ρ²/2! + ... + ρ^c/c!),

λ_eff = λ(1 − P_c);  L_q = 0, W_q = 0, W_s = 1/µ.

Actually, the expression for P_c obtained above is Erlang's loss formula.
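Erlang's loss formula P_c is usually evaluated with a numerically stable recursion rather than with factorials; this recursion is standard but not derived in the script, so the sketch below should be read as an equivalent reformulation (function name is ours):

```python
def erlang_b(rho, c):
    """Erlang loss probability P_c for M/M/c/c.

    Uses the stable recursion B(0) = 1,
    B(n) = rho*B(n-1) / (n + rho*B(n-1)), which avoids large factorials.
    """
    b = 1.0
    for n in range(1, c + 1):
        b = rho * b / (n + rho * b)
    return b
```

Small sanity checks against the direct formula: for ρ = 1, c = 1 the loss probability is ρ/(1+ρ) = 0.5, and for ρ = 1, c = 2 it is (ρ²/2)/(1+ρ+ρ²/2) = 0.2.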

2.8 Model M/M/∞ (Infinite Server Queue)

λ_n = λ, n = 0, 1, ...;  µ_n = nµ, n = 1, 2, ...

P_n = (ρ^n/n!)P_0, n = 0, 1, ...

P_0 = 1/(1 + ρ/1! + ρ²/2! + ...) = e^{−ρ}, and then P_n = e^{−ρ}ρ^n/n!,  L_s = ρ.

2.9 Self Repair

There are N nodes, each either in the 'on' or in the 'off' position. If a node is in the 'on' position, it receives messages; otherwise it sends messages. The receiving rate is λ and the sending rate is µ.

λ_n = (N − n)λ;  µ_n = nµ,

P_n = (λ_0λ_1···λ_{n−1})/(µ_1µ_2···µ_n) P_0 = C(N,n)ρ^nP_0.

Therefore

P_0 = 1/(1 + ρ)^N,  P_n = C(N,n)(ρ/(1+ρ))^n(1/(1+ρ))^{N−n},  L_s = Nρ/(1+ρ).


2.10 Impatient Customers (Limited Waiting Time)

A customer joins an M/M/c queue, but after a random time following an exponential distribution with parameter ν he loses patience and leaves the system without getting service, if his service has not yet started by then.

λ_n = λ;  µ_n = nµ if n = 1, ..., c;  cµ + (n − c)ν if n = c+1, ...

The probabilities P_n are to be found numerically.

2.11 Density-Dependent Service

λ_n = λ, n = 0, 1, ...;  µ_n = n^α µ, n = 1, 2, ...

Here α > 0 is a pressure coefficient.

Example. The mean time a doctor spends with a patient is 24 min if no other patient is waiting. This becomes 12 min if he has other patients. Then

µ_1 = 1/24,  µ_6 = 1/12 = 6^α µ_1,  6^α = 2,  α = ln 2/ln 6 ≈ 0.4,

P_n = (ρ^n/(n!)^α)P_0,  P_0 = 1/(1 + ρ/(1!)^α + ρ²/(2!)^α + ...).


Chapter 3

Time-Dependent Analysis

3.1 Model M/M/1/1

The Kolmogorov equations for this model are

P′_0(t) = µP_1 − λP_0
P′_1(t) = λP_0 − µP_1

with initial conditions

P_0(0) = a, P_1(0) = b, a + b = 1.

3.1.1 Method 1

Here we calculate

P′_0(t) = µ(1 − P_0) − λP_0,

P′_0(t) + (λ+µ)P_0 = µ,

(e^{(λ+µ)t}P_0)′ = µe^{(λ+µ)t},

e^{(λ+µ)t}P_0 = (µ/(λ+µ))(e^{(λ+µ)t} − 1) + c,

and putting t = 0 gives c = a. Hence

P_0(t) = µ/(λ+µ) + (a − µ/(λ+µ))e^{−(λ+µ)t},

P_1(t) = λ/(λ+µ) − (a − µ/(λ+µ))e^{−(λ+µ)t} = λ/(λ+µ) + (b − λ/(λ+µ))e^{−(λ+µ)t}.

If a = µ/(λ+µ), then P_0(t) = µ/(λ+µ), independent of t. Also P_0(t) → µ/(λ+µ) as t → ∞.


3.1.2 Method 2

For this method we take the Laplace transforms:

sP̂_0 − a = µP̂_1 − λP̂_0
sP̂_1 − b = λP̂_0 − µP̂_1

i.e.

(s+λ)P̂_0 − µP̂_1 = a
−λP̂_0 + (s+µ)P̂_1 = b

From this system of two equations, we obtain

[ s+λ  −µ  ] [P̂_0]   [a]
[ −λ   s+µ ] [P̂_1] = [b]

P̂_0 = | a  −µ ; b  s+µ | / | s+λ  −µ ; −λ  s+µ | = (as + µ(a+b))/(s(s+λ+µ)) = A/s + B/(s+λ+µ),

A = µ(a+b)/(λ+µ) = µ/(λ+µ),

B = (aλ − bµ)/(λ+µ) = a − µ/(λ+µ),

and inverting the transform recovers P_0(t) from Method 1.

From here we have two results:

1)
| s+λ_0   −λ_0                                  |
| −µ_1   s+λ_1+µ_1   −λ_1                       |
|         −µ_2   s+λ_2+µ_2   −λ_2               |
|                  ...                          |
|                       −µ_N    s+µ_N           |

= s ·

| s+λ_0+µ_1   −λ_1                              |
| −µ_1   s+λ_1+µ_2   −λ_2                       |
|         −µ_2   s+λ_2+µ_3   −λ_3               |
|                  ...                          |
|          −µ_{N−1}   s+λ_{N−1}+µ_N             |

(the factor s appears because at s = 0 the columns of the left-hand matrix sum to zero);

2)
| 2cos θ   1                         |
| 1   2cos θ   1                     |
| 0    1   2cos θ   1                |
|              ...                   |
|                  1   2cos θ        |   (N×N)

= sin((N+1)θ)/sin θ = Π_{k=1}^N (2cos θ − 2cos(kπ/(N+1))).

3.2 Model M/M/1/N

The Kolmogorov equations for this model are:

P′_0(t) = µP_1 − λP_0
P′_n(t) = µP_{n+1} − (λ+µ)P_n + λP_{n−1}, n = 1, ..., N−1
P′_N(t) = λP_{N−1} − µP_N

Assume that P_0(0) = a_0, P_1(0) = a_1, ..., P_N(0) = a_N, Σ_{k=0}^N a_k = 1. Here again we apply the Laplace transforms:

sP̂_0 − a_0 = µP̂_1 − λP̂_0
sP̂_n − a_n = µP̂_{n+1} − (λ+µ)P̂_n + λP̂_{n−1}
sP̂_N − a_N = λP̂_{N−1} − µP̂_N

and then the matrix equation is

[ s+λ   −µ                 ] [P̂_0]   [a_0]
[ −λ   s+λ+µ   −µ          ] [P̂_1] = [a_1]
[          ...             ] [ ... ]  [ ...]
[           −λ    s+µ      ] [P̂_N]   [a_N]

Using Cramer's rule, we obtain

P̂_0 = (determinant with the first column replaced by (a_0, a_1, ..., a_N)^T) / (s Π_{k=1}^N (s + λ + µ − 2√(λµ) cos(kπ/(N+1)))),

which, by splitting into partial fractions, gives

P_0(t) = Σ_{k=0}^N H_k e^{−s_k t},


where s_0 = 0 and s_k = λ + µ − 2√(λµ) cos(kπ/(N+1)), k = 1, ..., N. It can be proved by induction that

| λ_0+µ_1   −λ_0                          |
| −µ_1   λ_1+µ_1? ...                     |

more precisely, that the determinant with diagonal (λ_0+µ_1, λ_1+µ_2, ..., λ_{N−1}+µ_N) and off-diagonal entries −λ_i, −µ_i equals

λ_0λ_1···λ_{N−1} + λ_0···λ_{N−2}µ_N + ... + µ_1µ_2···µ_N,

and that the determinant with diagonal (λ_1+µ_1, λ_2+µ_2, ..., λ_N+µ_N) and the corresponding off-diagonal entries equals µ_1µ_2···µ_N. After that one can prove that

P_0 = lim_{s→0} sP̂_0 = 1/(1 + λ_0/µ_1 + λ_0λ_1/(µ_1µ_2) + ... + λ_0λ_1···λ_{N−1}/(µ_1µ_2···µ_N)).

3.3 State Dependent Finite Model

λ_n, n = 0, 1, ..., N−1;  µ_n, n = 1, 2, ..., N.

P′_0(t) = µ_1P_1 − λ_0P_0
P′_n(t) = µ_{n+1}P_{n+1} − (λ_n+µ_n)P_n + λ_{n−1}P_{n−1}, n = 1, ..., N−1
P′_N(t) = λ_{N−1}P_{N−1} − µ_NP_N

Assume that P(X(0) = n) = P_n(0) = a_n, Σ_{k=0}^N a_k = 1. The above system can be written in the form

P′(t) = AP(t), P(0) = (a_0, a_1, ..., a_N)^T,

and then P(t) = e^{At}P(0). Now we apply the Laplace transforms:

sP̂_0 − a_0 = µ_1P̂_1 − λ_0P̂_0
sP̂_n − a_n = µ_{n+1}P̂_{n+1} − (λ_n+µ_n)P̂_n + λ_{n−1}P̂_{n−1}
sP̂_N − a_N = λ_{N−1}P̂_{N−1} − µ_NP̂_N

From here we obtain

(s+λ_0)P̂_0 − µ_1P̂_1 = a_0
(s+λ_n+µ_n)P̂_n − λ_{n−1}P̂_{n−1} − µ_{n+1}P̂_{n+1} = a_n
(s+µ_N)P̂_N − λ_{N−1}P̂_{N−1} = a_N


Again using Cramer's rule, we can find P̂_n(s) for all n, replacing the n-th column in the determinant of the system by the column of free coefficients. Simplifying this expression, we get

P̂_n(s) = ... / (s(s − s_1)···(s − s_N))

and therefore

P_n(t) = Σ_{j=0}^N H_j e^{s_j t} (by splitting into partial fractions),

where

0 = s_0 > s_1 > ... > s_N,

so P_n(t) → H_0 as t → ∞, and this limit is the steady-state probability:

lim_{t→∞} P_n(t) = [λ_0λ_1···λ_{n−1}/(µ_1µ_2···µ_n)] / [1 + λ_0/µ_1 + λ_0λ_1/(µ_1µ_2) + ... + λ_0λ_1···λ_{N−1}/(µ_1µ_2···µ_N)].

3.4 Pure Birth Process

λ_n = nλ, n = 1, 2, ... (µ_n = 0),

P′_n(t) = −nλP_n(t) + (n−1)λP_{n−1}(t),  P_a(0) = 1 (at time 0 we have a persons),

(e^{nλt}P_n(t))′ = (n−1)λe^{λt}·e^{(n−1)λt}P_{n−1}(t), n = a, a+1, ...

Set q_n(t) = e^{nλt}P_n(t), with q_a(0) = 1; then

q′_n(t) = (n−1)λe^{λt}q_{n−1}(t), n = a+1, a+2, ...

Since no birth occurs while the population stays at a, P_a(t) = e^{−aλt}, so

q_a(t) = e^{aλt}P_a(t) = e^{aλt}e^{−aλt} = 1;

here e^{−aλt} = (e^{−λt})^a = (P(τ > t))^a is the survival function of the a initial individuals. Iterating,

q_n(t) = (n−1)∫_0^t q_{n−1}(u) de^{λu},

q_{a+1}(t) = a∫_0^t de^{λu} = a(e^{λt} − 1),

q_{a+2}(t) = a(a+1)∫_0^t (e^{λu} − 1) de^{λu} = a(a+1)(e^{λt} − 1)²/2 = C(a+1, 2)(e^{λt} − 1)²,

q_{a+3}(t) = C(a+2, 3)(e^{λt} − 1)³,

q_{a+n}(t) = C(a+n−1, n)(e^{λt} − 1)^n, n = 0, 1, ...

P_{a+n}(t) = e^{−(n+a)λt}C(a+n−1, n)(e^{λt} − 1)^n,

and then

P_{a+n}(t) = C(a+n−1, n)e^{−aλt}(1 − e^{−λt})^n, n = 0, 1, ...

This is a negative binomial distribution with p = e^{−λt} and q = 1 − e^{−λt}; the number of births has mean aq/p and variance aq/p². As a check,

Σ_{n=0}^∞ P_{a+n}(t) = e^{−aλt}(1 − (1 − e^{−λt}))^{−a} = 1.

Therefore Mean = a/e^{−λt} = ae^{λt} and Variance = a(1 − e^{−λt})/e^{−2λt} = ae^{λt}(e^{λt} − 1).

3.5 Simple Death Process

µ_n = nµ, n = 1, 2, ... (λ_n = 0),  P_a(0) = 1,

and analogously to the previous model we can obtain

P_n(t) = C(a, n)e^{−nµt}(1 − e^{−µt})^{a−n},

a binomial distribution with p = e^{−µt}, so

Mean = ae^{−µt},  Variance = ae^{−µt}(1 − e^{−µt}).

3.6 Simple Birth and Death Process

λ_n = nλ, µ_n = nµ, n = 0, 1, ...

We can show that

∂P/∂t = (λs − µ)(s − 1)∂P/∂s,

where P(s, t) = E(s^{X(t)}) is the probability generating function.

If we differentiate the above equality with respect to s and put s = 1, we get, for the mean m(t) = E(X(t)),

∂m(t)/∂t = (λ − µ)m(t),  m(t) = e^{(λ−µ)t},  X(0) = 1.


3.7 Model M/M/1 (Busy Period)

First we give the definition and some properties of the modified Bessel function, which we are going to use.

Modified Bessel function

Definition 3.1. This is the function I_n(t) = Σ_{r=0}^∞ (t/2)^{2r+n}/(r!(r+n)!).

1) I_{−n}(t) = I_n(t);

2) I_{n−1}(t) − I_{n+1}(t) = 2nI_n(t)/t;

3) Σ_{n=−∞}^∞ I_n(t)ζ^n = e^{(t/2)(ζ + 1/ζ)};

4) L(nI_n(αt)/t) = (α/(s + √(s² − α²)))^n.

Start with one customer at time t = 0. The duration of time the server is busy (i.e. until the queue becomes empty) is called the busy period. The Kolmogorov equations are

P′_0 = µP_1
P′_1 = µP_2 − (λ+µ)P_1
...
P′_n = µP_{n+1} − (λ+µ)P_n + λP_{n−1}

P_0(t) = P(the system is empty at time t) = P(τ ≤ t),

P′_0(t) = pdf of the busy period = µP_1(t).

Assume that P_1(0) = 1 (and P_n(0) = 0 for n ≠ 1). Taking the Laplace transform, we get

sP̂_1 − 1 = µP̂_2 − (λ+µ)P̂_1,

(s+λ+µ)P̂_1 = 1 + µP̂_2,

s+λ+µ = 1/P̂_1 + µP̂_2/P̂_1,

P̂_1 = 1/(s+λ+µ − µP̂_2/P̂_1),

sP̂_2 = µP̂_3 − (λ+µ)P̂_2 + λP̂_1,

(s+λ+µ)P̂_2 = µP̂_3 + λP̂_1,

s+λ+µ = µP̂_3/P̂_2 + λP̂_1/P̂_2,

P̂_2/P̂_1 = λ/(s+λ+µ − µP̂_3/P̂_2).

Let us denote the continued fraction

f(s) = 1/(s+λ+µ − λµ/(s+λ+µ − λµ/(s+λ+µ − ...))),

i.e.

f(s) = 1/(s+λ+µ − λµf(s)).

Then

λµ(f(s))² − (s+λ+µ)f(s) + 1 = 0,

f(s) = (s+λ+µ − √((s+λ+µ)² − 4λµ))/(2λµ),

P̂_1(s) = 2/(s+λ+µ + √((s+λ+µ)² − 4λµ)).

Using L(e^{−(λ+µ)t}g(t)) = ĝ(s+λ+µ) and property 4 with n = 1 and α = 2√(λµ),

P_1(t) = (1/(√(λµ) t)) I_1(2√(λµ) t) e^{−(λ+µ)t},

and therefore

P′_0(t) = √(µ/λ) (I_1(2√(λµ) t)/t) e^{−(λ+µ)t}, t > 0.

Taking the Laplace transform of P′_0(t) and letting s → 0, we obtain the probability that the busy period terminates:

lim_{s→0} sP̂_0 = µP̂_1(0) = 2µ/(λ+µ+|λ−µ|) = 2µ/(λ+µ+λ−µ) = 1/ρ if ρ ≥ 1;  2µ/(λ+µ−λ+µ) = 1 if ρ < 1.

3.8 Model M/M/∞ (Infinite Server Queue, Self-Service System)

For this model we have

λ_n = λ, n = 0, 1, ...;  µ_n = nµ, n = 1, 2, ...


X(t+h) = X(t) + 1 with probability λh + o(h);  X(t) with probability 1 − λh − µX(t)h + o(h);  X(t) − 1 with probability µX(t)h + o(h).

P(ζ, t) = E(ζ^{X(t)}) (pgf);

P(ζ, t+h) = E(ζ^{X(t)+1})(λh + o(h)) + E(ζ^{X(t)})(1 − λh − µX(t)h + o(h)) + E(ζ^{X(t)−1})(µX(t)h + o(h))
= λζP(ζ, t)h + P(ζ, t) − λP(ζ, t)h − µζ(∂P(ζ, t)/∂ζ)h + µ(∂P(ζ, t)/∂ζ)h + o(h),

using E(X(t)ζ^{X(t)}) = ζ∂P/∂ζ and E(X(t)ζ^{X(t)−1}) = ∂P/∂ζ. Therefore

∂P(ζ, t)/∂t = lim_{h→0} (P(ζ, t+h) − P(ζ, t))/h = λζP(ζ, t) − λP(ζ, t) − µζ∂P(ζ, t)/∂ζ + µ∂P(ζ, t)/∂ζ,

and then we have a linear partial differential equation

∂P(ζ, t)/∂t = λ(ζ − 1)P(ζ, t) + µ(1 − ζ)∂P(ζ, t)/∂ζ

with initial condition (which actually means that the system is initially empty)

P(ζ, 0) = 1,

since E(ζ^{X(0)}) = E(ζ⁰) = 1 (X(0) = 0). Equivalently,

∂P(ζ, t)/∂t + µ(ζ − 1)∂P(ζ, t)/∂ζ = λ(ζ − 1)P(ζ, t).

To solve this equation, we use the method of characteristics:

dt/1 = dζ/(µ(ζ − 1)) = dP/(λ(ζ − 1)P).

Now we have to find 2 independent solutions. The solution of the equation

µ dt = dζ/(ζ − 1)

is

(ζ − 1)e^{−µt} = C_1. (3.1)

The solution of

(λ/µ) dζ = dP/P

is

Pe^{−ρζ} = C_2, (3.2)

and C_2 = f(C_1):

Pe^{−ρζ} = f((ζ − 1)e^{−µt}), (3.3)

where f(·) is to be determined from the initial condition P(ζ, 0) = 1. Put t = 0 in (3.3). Then e^{−ρζ} = f(ζ − 1) and

f(ζ) = e^{−ρ(ζ+1)}. (3.4)

Now we substitute (3.4) in (3.3):

Pe^{−ρζ} = e^{−ρ(1 + (ζ−1)e^{−µt})},

P(ζ, t) = e^{−ρ(1−ζ)(1−e^{−µt})}.

Note that P(1, t) = 1. Then X(t) follows a Poisson distribution with Mean = ρ(1 − e^{−µt}) and Variance = ρ(1 − e^{−µt}). If we put ∂P/∂t = 0, then P(ζ) = e^{−ρ(1−ζ)}.


Chapter 4

Non-Birth-Death Queueing Models

4.1 Model M^X/M/∞ (Bulk Arrival Queues)

Customers arrive in a Poisson fashion, with k arrivals in (t, t+h) with probability λa_k h + o(h), where k = 1, 2, ... and Σ_{k=1}^∞ a_k = 1. The service times are exponentially distributed with mean 1/µ. As in the previous model, we obtain the partial differential equation

∂P/∂t = λ(A(ζ) − 1)P(ζ, t) + µ(1 − ζ)∂P/∂ζ,

where

A(ζ) = a_1ζ + a_2ζ² + ...

is the pgf of the batch size. Now assume that a steady state exists (∂P/∂t = 0) and that P(1) = 1. Then

ρ(1 − A(ζ))P = (1 − ζ)dP/dζ,  (1/P)dP/dζ = ρ(1 − A(ζ))/(1 − ζ),

ln P(ζ) = −ρ∫_ζ^1 (1 − A(u))/(1 − u) du + C,

and using P(1) = 1 to fix the constant,

P(ζ) = e^{−ρ∫_ζ^1 (1−A(u))/(1−u) du}.

Note that if A(ζ) = ζ (single arrivals), then P(ζ) = e^{−ρ(1−ζ)}.

4.2 Model MX/M/1 (Bulk Arrival Queues)

Let customers arrive in batches according to a Poisson process with parameter λ. Assume that the batch size X is a r.v. with P(X = k) = a_k, k = 1, 2, ..., i.e. the probability that a batch of k arrivals occurs in an infinitesimal time interval (t, t+h) is λa_k h + o(h).


Let A(ζ) = ∑_{k=1}^∞ a_k ζ^k be the pgf of the batch size X. Then

A(ζ) = A(1) + (ζ − 1)A′(1) + ((ζ − 1)²/2) A″(1) + ...

and A(1) = 1. The mean is A′(1) and the second factorial moment is A″(1). The variance is given by A″(1) + A′(1) − (A′(1))². Then

(1 − A(ζ))/(1 − ζ) = A′(1) − ((1 − ζ)/2!) A″(1) + ...

Assume that the steady-state solutions exist. Then we get

µP1 − λP0 = 0,

µP2 − (λ + µ)P1 + λa1P0 = 0,

µP3 − (λ + µ)P2 + λ(a1P1 + a2P0) = 0,

µP4 − (λ + µ)P3 + λ(a1P2 + a2P1 + a3P0) = 0,

...

Define P(ζ) = P0 + P1ζ + P2ζ² + ..., the pgf of the Pn. We multiply the equations by ζ^0, ζ^1, ... and add them to obtain

µ(P1 + P2ζ + ...) − λP0 − (λ + µ)(P1ζ + P2ζ² + ...) +

+ λ(a1P0ζ + (a1P1 + a2P0)ζ² + (a1P2 + a2P1 + a3P0)ζ³ + ...) = 0.

Now we operate on the last term:

λ(a1ζ(P0 + P1ζ + ...) + a2ζ²(P0 + P1ζ + ...) + a3ζ³(P0 + P1ζ + ...) + ...) = λA(ζ)P(ζ),

(µ/ζ)(P(ζ) − P0) − λP0 − (λ + µ)(P(ζ) − P0) + λA(ζ)P(ζ) = 0,

P(ζ)(µ/ζ − (λ + µ) + λA(ζ)) = (µ/ζ)P0 + λP0 − (λ + µ)P0,

P(ζ)(µ − (λ + µ)ζ + λζA(ζ)) = µ(1 − ζ)P0,

P(ζ) = µ(1 − ζ)P0 / (µ(1 − ζ) − λζ(1 − A(ζ))) = P0 / (1 − ρζ (1 − A(ζ))/(1 − ζ)),

P(ζ)(1 − ρζ(A′(1) + ((ζ − 1)/2) A″(1) + ...)) = P0,

and putting ζ = 1,

P0 = 1 − ρA′(1)


and a steady state exists if ρA′(1) < 1. Differentiating at ζ = 1,

P′(1)(1 − ρA′(1)) + P(1)(−ρ)(A′(1) + A″(1)/2) = 0,

P′(1) = ρ(A′(1) + A″(1)/2) / (1 − ρA′(1)).

For the mean system size we obtain

Ls = ρ(2A′(1) + A″(1)) / (2(1 − ρA′(1))).

Thus a steady state exists if ρA′(1) < 1. Let us consider the following special cases.

1) M/M/1 Queue: A(ζ) = ζ, A′(1) = 1, A″(1) = 0,

P(ζ) = (1 − ρ)/(1 − ρζ) → Pn = (1 − ρ)ρ^n, n = 0, 1, 2, ...,

Ls = 2ρ/(2(1 − ρ)) = ρ/(1 − ρ).

2) A(ζ) = qζ + pζ², p + q = 1:

a1 = q, a2 = p, A′(1) = q + 2p = 1 + p, A″(1) = 2p,

1 − A(ζ) = 1 − qζ − pζ² = (1 − ζ) + pζ(1 − ζ),

(1 − A(ζ))/(1 − ζ) = 1 + pζ,

P(ζ) = (1 − ρ(1 + p))/(1 − ρζ(1 + pζ)),

Ls = ρ(2(1 + p) + 2p)/(2(1 − ρ(1 + p))) = ρ(1 + 2p)/(1 − ρ(1 + p)).

If in this case q = 1 (p = 0), we get case 1).

3) M^r/M/1 Queue: For this model a_r = 1, where r is a fixed batch size.

A(ζ) = ζ^r, A′(1) = r, A″(1) = r(r − 1),

and therefore

P(ζ) = (1 − ρr)/(1 − ρζ(1 + ζ + ... + ζ^{r−1})), Ls = ρ(2r + r(r − 1))/(2(1 − ρr)) = ρr(r + 1)/(2(1 − ρr)).

If r = 1, we get case 1). This can be treated as an M/E_r/1 queue, where E_r = Erlang service = sum of r iid exponential r.v.s.


4) a_r = a(1 − a)^{r−1}, r = 1, 2, ... (geometric batch sizes; this has the lack-of-memory property).

A(ζ) = aζ ∑_{r=1}^∞ (1 − a)^{r−1} ζ^{r−1} = aζ/(1 − (1 − a)ζ),

A(ζ)(1 − (1 − a)ζ) = aζ,

A′(ζ)(1 − (1 − a)ζ) − A(ζ)(1 − a) = a.

Put ζ = 1. Then A′(1) = 1/a and, differentiating once more, A″(1) = 2(1 − a)/a², and

P(ζ) = (1 − ρ/a)/(1 − ρζ/(1 − (1 − a)ζ)) = (1 − ρ/a)(1 − (1 − a)ζ) ∑_{n=0}^∞ (1 − a + ρ)^n ζ^n.

Example. Consider a multistage machine line process that produces assemblies. The number of defectives per item is 1 or 2. The interarrival times are exponential with λ1 = 1/hr, λ2 = 2/hr and µ = 6/hr. So we have λ = λ1 + λ2 = 3/hr, a1 = 1/3, a2 = 2/3,

A(ζ) = (ζ + 2ζ²)/3, A′(1) = 5/3, A″(1) = 4/3, ρ = 1/2,

ρA′(1) = (1/2)(5/3) = 5/6 < 1; Ls = (1/2)(10/3 + 4/3)/(2(1 − 5/6)) = 7.
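The mean-system-size formula above is easy to check numerically. The following sketch (the function name `bulk_mm1_Ls` is ours, not from the script) reproduces Ls = 7 for the machine-line example.

```python
# Mean system size for the M^X/M/1 bulk-arrival queue,
# Ls = rho*(2*A'(1) + A''(1)) / (2*(1 - rho*A'(1))).
# The call below reproduces the machine-line example (a1 = 1/3, a2 = 2/3).

def bulk_mm1_Ls(rho, a):
    """a[k] = P(batch size = k+1); A'(1), A''(1) are factorial moments of the batch size."""
    A1 = sum((k + 1) * p for k, p in enumerate(a))        # A'(1)
    A2 = sum((k + 1) * k * p for k, p in enumerate(a))    # A''(1)
    assert rho * A1 < 1, "steady state requires rho*A'(1) < 1"
    return rho * (2 * A1 + A2) / (2 * (1 - rho * A1))

Ls = bulk_mm1_Ls(0.5, [1/3, 2/3])
print(Ls)  # ≈ 7.0
```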

4.3 Two-Station Series Model with Zero Queue Capacity (Sort of a Network Model)

A customer arriving for service must go through station 1 and station 2. An entering customer first goes to station 1; he then goes to station 2 if it is empty, or waits at station 1 until station 2 becomes empty. A customer enters the system only as long as station 1 is empty. There is no queue in front of station 1 or station 2. λ is the arrival rate, µ1 is the service rate at station 1, µ2 is the service rate at station 2.

Here we give all possible states and their interpretations:

(0, 0) − no customer in the system
(1, 0) − one customer in station 1 and no customer in station 2
(0, 1) − no customer in station 1 and one in station 2
(1, 1) − both stations serve a customer
(b, 1) − the customer in station 1 has finished service, but station 2 is busy (station 1 is blocked)


Therefore

λP00 = µ2P01

µ1P10 = λP00 + µ2P11

(µ2 + λ)P01 = µ1P10 + µ2Pb1

(µ1 + µ2)P11 = λP01

µ2Pb1 = µ1P11

From these equations we obtain

P01 = (λ/µ2) P00,

P11 = (λ/(µ1 + µ2)) P01 = (λ²/(µ2(µ1 + µ2))) P00,

Pb1 = (µ1/µ2) P11 = (µ1λ²/(µ2²(µ1 + µ2))) P00,

P10 = (λ/µ1 + λ²/(µ1(µ1 + µ2))) P00,

and then

P00 = 1/(1 + λ/µ2 + λ/µ1 + λ²/(µ1(µ1 + µ2)) + λ²/(µ2(µ1 + µ2)) + µ1λ²/(µ2²(µ1 + µ2))),

Ls = P01 + P10 + 2(P11 + Pb1).
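A minimal numerical sketch of this model (rates λ = 1, µ1 = 2, µ2 = 3 are illustrative test values, not from the script): build all probabilities from P00 via the derived ratios and verify that the five balance equations hold.

```python
# Steady-state check for the two-station zero-queue series model.
lam, mu1, mu2 = 1.0, 2.0, 3.0

c01 = lam / mu2
c11 = lam**2 / (mu2 * (mu1 + mu2))
cb1 = mu1 * lam**2 / (mu2**2 * (mu1 + mu2))
c10 = lam / mu1 + lam**2 / (mu1 * (mu1 + mu2))

P00 = 1.0 / (1 + c01 + c11 + cb1 + c10)
P01, P11, Pb1, P10 = (c * P00 for c in (c01, c11, cb1, c10))

# The five balance equations from the text
assert abs(lam*P00 - mu2*P01) < 1e-12
assert abs(mu1*P10 - (lam*P00 + mu2*P11)) < 1e-12
assert abs((mu2 + lam)*P01 - (mu1*P10 + mu2*Pb1)) < 1e-12
assert abs((mu1 + mu2)*P11 - lam*P01) < 1e-12
assert abs(mu2*Pb1 - mu1*P11) < 1e-12

Ls = P01 + P10 + 2*(P11 + Pb1)
print(round(Ls, 4))
```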

4.4 Model M/Hyperexponential/1/r

Note: If X1, X2, ..., Xn are independent exponentially distributed r.v.s. with parameters µi, i = 1, 2, ..., n, then X1 + X2 + ... + Xn follows a hypoexponential (generalized Erlang) distribution; a hyperexponential distribution is the corresponding probabilistic mixture of exponentials. For this model we have

λP0 = µrPr,

µ1P1 = λP0,

µ2P2 = µ1P1,

...

µrPr = µr−1Pr−1,

and then Pi = (λ/µi)P0, i = 1, ..., r, with

P0 = 1/(1 + λ/µ1 + λ/µ2 + ... + λ/µr).

4.5 Processor Model with Failures

Consider a single processor with an infinite waiting room. λ is the arrival rate and µ is the service rate. The processor fails at rate ν, and when it fails all the customers are lost.


The Kolmogorov equations for this system are

λP0 = µP1 + ν(P1 + P2 + ...) = µP1 + ν(1 − P0), i.e. (λ + ν)P0 = µP1 + ν,

(λ + ν + µ)P1 = µP2 + λP0,

or, with ρ = λ/µ and θ = ν/µ,

(ρ + θ)P0 = P1 + θ,

(ρ + θ + 1)Pn = Pn+1 + ρPn−1, n = 1, 2, ...

Assume that the solution has the form

Pn = β^n P0, n = 1, 2, ...

Therefore

β^{n+1} − (ρ + θ + 1)β^n + ρβ^{n−1} = 0,

β² − (ρ + θ + 1)β + ρ = 0,

β = (ρ + θ + 1 − √((ρ + θ + 1)² − 4ρ))/2,

P0 = 1 − β, Ls = β/(1 − β).
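A quick sketch of this solution (the values ρ = 0.7, θ = 0.1 are arbitrary test parameters): compute β from the quadratic and verify that Pn = β^n P0 satisfies the interior balance equation.

```python
import math

# Processor-with-failures model: smaller root of
# beta^2 - (rho+theta+1)*beta + rho = 0, then P0 and Ls.
rho, theta = 0.7, 0.1
b = rho + theta + 1
beta = (b - math.sqrt(b*b - 4*rho)) / 2
P0 = 1 - beta
Ls = beta / (1 - beta)

# Verify (rho+theta+1)*Pn = Pn+1 + rho*Pn-1 for several n
P = [P0 * beta**n for n in range(10)]
for n in range(1, 8):
    assert abs(b * P[n] - (P[n+1] + rho * P[n-1])) < 1e-12

print(round(beta, 4), round(Ls, 4))
```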

4.6 Bulk Service

4.6.1 M/M(1, 2)/1

Customers arrive at a single server queue in a Poisson stream with rate λ. The server serves 2 customers at a time if available, otherwise 1 at a time. Denote the states:

0′ = the system is empty
0 = the server is busy, but the queue is empty
n = there are n customers in the queue

The Kolmogorov equations for the system are

λP0′ = µP0,

(λ + µ)P0 = µP1 + µP2 + λP0′,

(λ + µ)Pn = µPn+2 + λPn−1, n = 1, 2, ...

Assume that Pn = β^n P0, n = 1, 2, .... Then we get

µβ^{n+2} − (λ + µ)β^n + λβ^{n−1} = 0,


β³ − (ρ + 1)β + ρ = 0, ρ = λ/µ,

β³ − β − ρ(β − 1) = 0,

(β − 1)(β² + β) − ρ(β − 1) = 0,

β² + β = ρ,

β = (√(1 + 4ρ) − 1)/2.

Note that β < 1 if and only if ρ < 2. Therefore

P0′ = P0/ρ, Pn = β^n P0, P0 = 1/(1/ρ + 1/(1 − β)), Lq = ρβ/((1 − β)(1 − β + ρ)).

4.6.2 M/M(1, k)/1

Rouché's Theorem: If f(ζ) and g(ζ) are analytic inside and on a simple closed curve c and |g(ζ)| < |f(ζ)| on c, then f(ζ) and f(ζ) + g(ζ) have the same number of roots inside c.

For this system the Kolmogorov equations are

λP0′ = µP0,

(λ + µ)P0 = λP0′ + µ(P1 + P2 + ... + Pk),

(λ + µ)Pn = λPn−1 + µPk+n, n = 1, 2, ...

From the last equation above, we obtain the characteristic equation

µζ^{k+1} − (λ + µ)ζ + λ = 0.

Here we put f(ζ) = −(λ + µ)ζ and g(ζ) = µζ^{k+1} + λ. f(ζ) and g(ζ) are analytic inside and on the unit circle c, and |g(ζ)| < µ + λ = |f(ζ)| on |ζ| = 1. f(ζ) has exactly one root inside c, so f(ζ) + g(ζ) also has exactly one root inside c. Let α be this root. Then

Pn = α^n P0, n = 1, 2, ..., P0′ = P0/ρ, P0 = 1/(1/ρ + 1/(1 − α)), Lq = ρα/((1 − α)(1 − α + ρ)).

Note that α^{k+1} − (ρ + 1)α + ρ = 0, i.e. α^{k+1} − α − ρ(α − 1) = 0, and then α^k + α^{k−1} + ... + α = ρ; α < 1 is real, and stability requires ρ < k.
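The root α can be found numerically from α^k + ... + α = ρ; the sketch below (function names are ours) does this by bisection and, for k = 2, cross-checks against the closed form β = (√(1+4ρ) − 1)/2 from the previous subsection.

```python
import math

def bulk_service_alpha(rho, k, tol=1e-12):
    """Root in (0,1) of alpha^k + ... + alpha = rho for the M/M(1,k)/1 queue."""
    assert rho < k, "stability requires rho < k"
    f = lambda a: sum(a**j for j in range(1, k + 1)) - rho   # increasing on (0,1)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

rho, k = 1.0, 2
alpha = bulk_service_alpha(rho, k)
beta = (math.sqrt(1 + 4*rho) - 1) / 2        # closed form for k = 2
assert abs(alpha - beta) < 1e-9

P0 = 1 / (1/rho + 1/(1 - alpha))
Lq = rho * alpha / ((1 - alpha) * (1 - alpha + rho))
print(round(alpha, 6), round(Lq, 6))
```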

Some interesting problems

1) Let f(t) be the pdf of a positive r.v. X. Its Laplace transform is

E(e^{−sX}) = ∫₀^∞ e^{−st} f(t) dt = ϕ(s),

and

−ϕ′(0) = ∫₀^∞ t f(t) dt = α₁, the mean.


Example.

f(t) = αe^{−αt}, t > 0, ϕ(s) = α/(α + s), ϕ′(0) = −α/α² = −1/α.

The mean is 1/α.

We know that the busy period has density P0′(t) = µP1(t). Using the Laplace transform we get

P̂1(s) = (s + λ + µ − √((s + λ + µ)² − 4λµ))/(2λµ).

Then the pdf of the busy period has Laplace transform

ϕ(s) = (1/(2λ))(s + λ + µ − √((s + λ + µ)² − 4λµ)),

ϕ′(s) = (1/(2λ))(1 − (s + λ + µ)/√((s + λ + µ)² − 4λµ)),

ϕ′(0) = (1/(2λ))(1 − (λ + µ)/(µ − λ)) = 1/(λ − µ).

Hence the mean busy period is −ϕ′(0) = 1/(µ − λ) = 1/(µ(1 − ρ)).

Consider an M/M/1 queue. Let E[N] be the average number of customers served during a busy period. Only one of these will see the system empty, so

P0 = 1/E[N], E[N] = 1/P0 = 1/(1 − ρ).

Suppose T is the duration of the busy period and µ is the service rate. Then

µE[T] = E[N], E[T] = 1/(µ(1 − ρ)).

2) Departure Process of an M/M/1 Queue (output process). Interdeparture time = service time (if there are customers waiting), or residual interarrival time plus service time (if the system is empty). The pdf of the interdeparture time is

ρ µe^{−µt} + (1 − ρ)(λe^{−λt} ∗ µe^{−µt}),

where by ∗ we denote convolution.

where by * we denote convolution. The Laplace transform of the interdeparture timeis

µ

s+ µρ+ λ

s+ λ

µ

s+ µ(1− ρ) = λ

s+ λ.

So the interdeparture time has the same Laplace transform as the interarrival time. This result also holds for M/M/c queues (Burke's theorem). It can be proved by time reversibility, i.e. using that X(t) and X(−t) are identically distributed in steady state.
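The transform identity above can be checked numerically at a few points (λ = 2, µ = 5 are arbitrary test rates):

```python
# Check: rho*mu/(s+mu) + (1-rho)*(lam/(s+lam))*(mu/(s+mu)) == lam/(s+lam)
lam, mu = 2.0, 5.0
rho = lam / mu
for s in (0.1, 0.5, 1.0, 3.0, 10.0):
    lhs = rho * mu/(s + mu) + (1 - rho) * (lam/(s + lam)) * (mu/(s + mu))
    rhs = lam / (s + lam)
    assert abs(lhs - rhs) < 1e-12
print("interdeparture LT equals interarrival LT")
```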


3) Residual lifetime. Denote by Tn the time of the n-th arrival in a Poisson process with parameter λ, and by N(t) the number of arrivals in (0, t]. The excess lifetime at time t is E(t) = T_{N(t)+1} − t. If E(t) > x, no events happen between t and t + x, so

P(E(t) > x) = e^{−λx}, P(E(t) ≤ x) = 1 − e^{−λx},

which means that E(t) also follows an exponential distribution.

4) Consider an M/M/1 queue in which, when n customers are in the system, an arriving customer joins the queue with probability (n + 1)/(n + 2). The service rate is µ again. Then

µPn+1 = λ ((n + 1)/(n + 2)) Pn,

Pn+1 = ρ ((n + 1)/(n + 2)) Pn = ρ² ((n + 1)/(n + 2))(n/(n + 1)) Pn−1,

Pn = (ρ^n/(n + 1)) P0.

Then

P0 = ρ/(−ln(1 − ρ)), ρ < 1,

Ls = ∑_{n=1}^∞ nPn = ∑_{n=1}^∞ n (ρ^n/(n + 1)) P0 = ∑_{n=1}^∞ (1 − 1/(n + 1)) ρ^n P0 = ρP0/(1 − ρ) − (1 − P0) = P0/(1 − ρ) − 1.
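The closed forms for P0 and Ls can be verified by direct summation (ρ = 0.6 is an arbitrary test value; 200 terms are plenty since ρ^n decays geometrically):

```python
import math

# Balking M/M/1 (join with probability (n+1)/(n+2)):
# check P0 = -rho/ln(1-rho) and Ls = P0/(1-rho) - 1 by direct summation.
rho = 0.6
P0 = -rho / math.log(1 - rho)
P = [P0 * rho**n / (n + 1) for n in range(200)]

assert abs(sum(P) - 1) < 1e-10                        # normalization
Ls_direct = sum(n * p for n, p in enumerate(P))
assert abs(Ls_direct - (P0 / (1 - rho) - 1)) < 1e-10
print(round(P0, 6), round(Ls_direct, 6))
```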

5) We show that Ws is smaller in an M/M/1 model with service rate 2µ than in an M/M/2 model with service rate µ.

We know that Ws = 1/(µ(1 − ρ)) in the M/M/1 model with service rate µ, and Ws = 4/(µ(4 − ρ²)) in the M/M/2 model with rate µ, where ρ = λ/µ. In the M/M/1 model with service rate 2µ,

Ws = 1/(2µ(1 − λ/(2µ))) = 1/(2µ − λ).

Now it is easy to observe that 1/(2µ − λ) < 4/(µ(4 − ρ²)), since this reduces to (2µ − λ)² > 0, which is valid.

6) Consider an M/M/∞ queue with servers numbered 1, 2, .... On arrival, a customer chooses the free server with the lowest number. We are going to find the fraction of time that server c is busy.

We know that for an M/M/c/c (loss) system

P(all servers are busy) = πc = (ρ^c/c!)/(1 + ρ + ρ²/2! + ... + ρ^c/c!).

Let

rc = arrival rate to servers c + 1, c + 2, ...,
λc = arrival rate to server c.

Then rc = λπc,


λc = rc−1 − rc = λ(πc−1 − πc),

fraction of time that server c is busy = λc/µ,

fraction of time that server c is free = 1 − λc/µ.

7a) Suppose two streams of customers arrive in a Poisson fashion with rates λ1 and λ2 respectively. Then

E[z^{N_i(t)}] = e^{−λ_i t(1−z)}, i = 1, 2.

N1 and N2 are independent, and then

E[z^{N1(t)+N2(t)}] = e^{−(λ1+λ2)t(1−z)},

which means that N1(t) + N2(t) ∼ Poi((λ1 + λ2)t).

So one can find P(N1(t) − N2(t) = ε) (ε an integer) using the Bessel function.

7b) A system consists of 2 subsystems. With probability p we route a customer to the first subsystem, and with probability 1 − p to the second one. Then

P(N1(t) = j) = ∑_{n=j}^∞ P(N1(t) = j, N(t) = n)

= ∑_{n=j}^∞ P(N1(t) = j | N(t) = n) e^{−λt}(λt)^n/n!

= ∑_{n=j}^∞ (n choose j) p^j (1 − p)^{n−j} e^{−λt}(λt)^n/n!

= e^{−λt} ∑_{n=j}^∞ ((λtp)^j/j!)((λt(1 − p))^{n−j}/(n − j)!)

= e^{−λtp} (λtp)^j/j!.

The last equality means that N1(t) is also a Poisson process, with rate λp.
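The thinning identity derived above is easy to confirm numerically (λ, t, p below are arbitrary test values; the inner sum is truncated where the terms are negligible):

```python
import math

# Splitting a Poisson stream: the binomial mixture must equal Poisson(lam*p*t).
lam, t, p = 3.0, 2.0, 0.4
for j in range(8):
    mix = sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
              * math.exp(-lam*t) * (lam*t)**n / math.factorial(n)
              for n in range(j, 80))
    poi = math.exp(-lam*t*p) * (lam*t*p)**j / math.factorial(j)
    assert abs(mix - poi) < 1e-12
print("thinned stream is Poisson with rate lambda*p")
```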


Chapter 5

Cost Models

The objective of a queueing cost model is to determine the level of service (either the service rate or the number of servers) that balances two conflicting costs: the cost of offering the service and the cost of delay in offering the service. The two types of costs are in conflict because an increase in one automatically causes a reduction in the other.

Let x = service level = service rate (µ) or number of servers (c). Then the cost model can be represented as

ETC(x) = EOC(x) + EWC(x),

where
ETC(x) = expected total cost per unit time,
EOC(x) = expected cost of operating per unit time,
EWC(x) = expected cost of waiting per unit time.

The simplest forms for EOC and EWC are the following linear equations:

EOC(x) = C1 x, EWC(x) = C2 Ls,

where
C1 = marginal cost per unit of x per unit time,
C2 = cost of waiting per unit time per (waiting) customer.

The following two examples illustrate the use of the cost model. The first example assumes x to equal the service rate µ, and the second assumes x to equal the number of parallel servers c.

5.1 Model 1: Optimal Service Rate µ

Consider an M/M/1 queue with arrival rate λ and service rate µ. Assume µ is controllable.

C1 = cost per unit increase in µ per unit time,
C2 = cost per waiting customer per unit time.

ETC(µ) = expected cost of waiting and service per unit time, given µ = C1µ + C2Ls = C1µ + C2λ/(µ − λ).


Now we have that

∂ETC(µ)/∂µ = 0 → C1 − C2λ/(µ − λ)² = 0 → µ = λ + √(C2λ/C1).

5.2 Model 2: Optimal Number of Servers

Consider an M/M/c model with

C1 = cost of an additional server per unit time,
Ls(c) = expected number of customers in the system given c servers,
C2 = cost per waiting customer per unit time.

ETC(c) = cC1 + C2Ls(c).

Here c is discrete. We have to find c such that

ETC(c − 1) ≥ ETC(c), ETC(c + 1) ≥ ETC(c),

i.e.

Ls(c) − Ls(c + 1) ≤ C1/C2 ≤ Ls(c − 1) − Ls(c).


Chapter 6

Network Models: Communication Network, Manufacturing Network

So far we dealt with a single isolated queueing system. It is a natural extension now to look at collections of interacting queueing systems — networks of queues — where the departures of some queues form the arrivals of others. The analysis of a queueing network is much more complicated and involved due to the interactions among the various queues, and we have to examine them as a whole. The state of one queue is generally dependent on the others because of feedback loops. From the network topology point of view, queueing networks can be categorized into two generic classes, namely open queueing networks and closed queueing networks. These queueing networks can be further classified into two major classes, namely networks with single-class customers, known as Jackson networks, and networks with multi-class customers, known as BCMP (Baskett Chandy Muntz Palacios) networks. Yet another classification of queueing networks with multi-class customers is called mixed networks, which are closed with respect to some customer classes and open with respect to others. In this chapter we will concentrate only on open queueing networks and closed queueing networks. In particular, we are interested in applying the network-Padé approximation technique to calculate the normalizing constants arising in closed queueing networks with single-class and multi-class customers.

Open Queueing Networks In an open queueing network, customers arrive from ex-ternal sources outside the domain of interest, go through several queues or even revisit aparticular queue more than once and finally leave the system. The total sum of arrivalrates is equal to the total departure rate under steady state conditions. These networksare good models for analyzing circuit-switching and packet-switching data networks.

6.1 Two-Server Queue in Tandem

Recall that in an M/M/1 queue the departure process is a Poisson process with the same rate λ as the arrival process. Using reversibility we can prove that the probability distribution of the number of departures in (0, t) in steady state is the


same as the probability distribution of the number of arrivals in (0, t), and these two are independent. This result is called Burke's theorem. Let us denote

n1 = number of customers with server 1,
n2 = number of customers with server 2.

Burke's theorem gives that the probability distributions of n1 and n2 are independent. The Kolmogorov equations for this model are:

λP00 = µ2P01

(λ + µ1)P10 = λP00 + µ2P11

(λ + µ2)P01 = µ1P10 + µ2P02

(λ + µ1)P_{n1,0} = µ2P_{n1,1} + λP_{n1−1,0}, n1 = 1, 2, ...
(λ + µ2)P_{0,n2} = µ1P_{1,n2−1} + µ2P_{0,n2+1}, n2 = 1, 2, ...
(λ + µ1 + µ2)P_{n1,n2} = µ1P_{n1+1,n2−1} + µ2P_{n1,n2+1} + λP_{n1−1,n2}, n1, n2 = 1, 2, ...

For P (n1, n2) = P1(n1)P2(n2), substituting in the above system and denoting ρ1 = λ/µ1and ρ2 = λ/µ2, we get

ρ2P1(0)P2(0) = P1(0)P2(1),

(1 + 1/ρ1)P1(n1)P2(0) = (1/ρ2)P1(n1)P2(1) + P1(n1 − 1)P2(0),

(1 + 1/ρ2)P1(0)P2(n2) = (1/ρ1)P1(1)P2(n2 − 1) + (1/ρ2)P1(0)P2(n2 + 1),

(1 + 1/ρ1 + 1/ρ2)P1(n1)P2(n2) = (1/ρ1)P1(n1 + 1)P2(n2 − 1) + (1/ρ2)P1(n1)P2(n2 + 1) + P1(n1 − 1)P2(n2),

and then

P1(n1) = ρ1P1(n1 − 1) = ρ1²P1(n1 − 2) = ... = ρ1^{n1}P1(0).

But we know that ∑_{n1=0}^∞ P1(n1) = 1, and then P1(0) = 1 − ρ1, which implies that

P1(n1) = (1 − ρ1)ρ1^{n1}, ρ1 < 1.

Analogously we obtain

P2(n2) = (1 − ρ2)ρ2^{n2}, ρ2 < 1,

and therefore

P(n1, n2) = (1 − ρ1)ρ1^{n1}(1 − ρ2)ρ2^{n2},

Ls = ∑_{n1}∑_{n2}(n1 + n2)P(n1, n2) = ρ1/(1 − ρ1) + ρ2/(1 − ρ2), Ws = Ls/λ.

This can be generalised to c servers in series:

P(n1, n2, ..., nc) = ∏_{i=1}^c ρi^{ni}(1 − ρi), ρi < 1.

And then we have

Burke's Theorem for c servers: The departure process of the M/M/c model is also Poisson, and its rate is the same as the arrival rate.

6.2 Jackson Network

There are k servers. Customers arrive from outside the system to server i, i = 1, 2, ..., k, in accordance with independent Poisson processes at rates ri. Once a customer is served by server i, he joins the queue in front of server j with probability pij. Hence ∑_j pij ≤ 1, and

1 − ∑_j pij = P(a customer departs the system after he/she is served by server i);
λj = total arrival rate to server j;
rj = outside arrival rate to server j.

Traffic equations:

λj = rj + ∑_{i=1}^k λi pij, j = 1, 2, ..., k.

Let ρj = λj/µj < 1. Then

P(nj customers with server j, j = 1, 2, ..., k) = ∏_{j=1}^k (λj/µj)^{nj}(1 − λj/µj),

Ls = ∑_{j=1}^k ρj/(1 − ρj), ρj < 1, Ws = Ls/∑_j rj.

We can write the system of traffic equations in matrix form as

Λ = R + ΛP,

where Λ = (λ1, λ2, ..., λk) and R = (r1, r2, ..., rk) are row vectors and P = ((pij)). The matrix I − P is nonsingular, and then

Λ(I − P) = R, Λ = R(I − P)^{−1}.

Remark The traffic equations have a unique solution.


Example. There are 2 servers. We know that

r1 = 4, r2 = 5, µ1 = 8, µ2 = 10,
p11 = 0, p12 = 1/2, p21 = 1/4, p22 = 0.

So, given this data, we obtain the system of traffic equations

λ1 = 4 + (1/4)λ2,
λ2 = 5 + (1/2)λ1,

which has solution λ1 = 6, λ2 = 8. Therefore Ls = 6/(8 − 6) + 8/(10 − 8) = 7 and Ws = 7/9.
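The traffic equations of this example can be solved by simple fixed-point iteration (which converges here since the routing matrix is substochastic); a minimal sketch:

```python
# Jackson network example: lambda_j = r_j + sum_i lambda_i * p_ij
r = [4.0, 5.0]
mu = [8.0, 10.0]
p = [[0.0, 0.5],
     [0.25, 0.0]]

lam = r[:]
for _ in range(200):                  # fixed-point iteration
    lam = [r[j] + sum(lam[i] * p[i][j] for i in range(2)) for j in range(2)]

assert abs(lam[0] - 6) < 1e-9 and abs(lam[1] - 8) < 1e-9
rho = [lam[j] / mu[j] for j in range(2)]
Ls = sum(x / (1 - x) for x in rho)
Ws = Ls / sum(r)
print(lam, round(Ls, 4), round(Ws, 4))   # Ls = 7, Ws = 7/9
```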

Closed Queueing Networks A closed queueing network is one in which customers neither arrive at nor depart from the system. The existing customers in the network simply circulate through the various queues and may revisit a particular queue more than once, as in the case of open queueing networks. These networks are good models for analyzing window-type network flow controls as well as CPU job scheduling problems. We analyze queueing networks in the following order:

- Single-class customers
  - Markovian queues in tandem (no feedback)
  - Open queueing networks
    - Open acyclic networks (no feedback)
    - General open cyclic networks (feedback)
  - Closed queueing networks (feedback)
- Multi-class customers
  - Open queueing networks (feedback)
  - Closed queueing networks (feedback)

What we mean by feedback is: customers who finish service at one node can join the same node, immediately or after visiting some other node, based on some probability law; in networks with no feedback, a customer can visit a node at most once.

6.3 Closed System

There are m customers moving among a system of k servers. The service rates are µi, i = 1, 2, ..., k, and

pij = P(a customer who is in node i will go to node j after completing his service).

∑_{j=1}^k pij = 1 and P = ((pij)) is the transition probability matrix. Assume that P is irreducible. Then the stationary probabilities π = (π1, ..., πk) exist and satisfy the equation

πP = π, i.e. πj = ∑_{i=1}^k πi pij,


and ∑_{j=1}^k πj = 1. If λm(j) = total arrival rate to node j and λm = ∑_j λm(j), we obtain

λm(j)/λm = ∑_i (λm(i)/λm) pij.

For this model rj = 0, because no customers come from outside. From the above equalities we get

πj = λm(j)/λm,

which implies that

λm(j) = λm πj.

There is an interesting result:

P(nj customers at server j, j = 1, 2, ..., k) = K ∏_{j=1}^k (λm(j)/µj)^{nj} = C ∏_{j=1}^k (πj/µj)^{nj}, ∑_j nj = m,

where

C = 1/∑_{n1+...+nk=m} ∏_j (πj/µj)^{nj}.


Chapter 7

Non-Markovian Queues

7.1 Model M/G/1

Customers arrive according to a Poisson process with parameter λ at a single server queue. Service times are iid variables with pdf ν(t), cdf B(t), mean 1/µ and variance σ². The process is non-Markovian, since we also need knowledge of the service time elapsed for the customer in service — the state at an arbitrary time depends on how long the unit in service has been served. To overcome that, we observe the process only just after service completion times. Denote:

Tn+ = departure time of the n-th customer,
Xn = X(Tn+) = system size (number of customers in the system) at time Tn+,
Y = the number of customers joining the queue during the service time of the customer currently in service,

qn = P(n customers arrive during one service time) = P(Y = n)

= ∫₀^∞ e^{−λt}((λt)^n/n!) ν(t) dt = ∫₀^∞ e^{−λt}((λt)^n/n!) dB(t)

= ∫₀^∞ P(n customers arrive | T = t) ν(t) dt,

where the service time T is a random variable with pdf ν(t).

Note that if T ≡ 1/µ is deterministic, qn = e^{−λ/µ}(λ/µ)^n/n!.

Xn+1 = Xn − 1 + Y if Xn > 0,
Xn+1 = Y if Xn = 0,

or in short, Xn+1 = [Xn − 1]⁺ + Y. If Xn = 1 or Xn = 0, then Xn+1 = Y.

limt→∞

P (X(t) = j) = limP (X(Tn) = j).


Transition Probabilities

pij = P(Xn+1 = j | Xn = i) = q_{j−i+1} if i > 0 (and j ≥ i − 1), and pij = qj if i = 0.

The transition probability matrix is

P = ((pij)) =
q0 q1 q2 ...
q0 q1 q2 ...
0  q0 q1 ...
0  0  q0 ...
... ... ... ...

Stationary probabilities exist if ρ = λ/µ < 1. Also πP = π:

(π0, π1, π2, ...) P = (π0, π1, π2, ...).

Writing out the components of πP = π gives the following system of equations:

q0π0 + q0π1 = π0

q1π0 + q1π1 + q0π2 = π1

q2π0 + q2π1 + q1π2 + q0π3 = π2

...

Now we multiply the first equation by ζ^0, the second by ζ^1, etc., and add them. Define

P(ζ) = π0 + π1ζ + π2ζ² + ...

and let us put

Q(ζ) = q0 + q1ζ + q2ζ² + ....

We are going to find a relation between these two functions:

π0Q(ζ) + π1Q(ζ) + π2ζQ(ζ) + π3ζ²Q(ζ) + ... = P(ζ),

Q(ζ)[π0 + π1 + π2ζ + π3ζ² + ...] = P(ζ),

Q(ζ)[π0 + (1/ζ)(π1ζ + π2ζ² + π3ζ³ + ...)] = P(ζ),

Q(ζ)[π0 + (1/ζ)(P(ζ) − π0)] = P(ζ),

Q(ζ)[π0(1 − 1/ζ) + P(ζ)/ζ] = P(ζ),

Q(ζ)[π0(ζ − 1) + P(ζ)] = ζP(ζ),

P(ζ)[Q(ζ) − ζ] = π0(1 − ζ)Q(ζ),

P(ζ) = π0(1 − ζ)Q(ζ)/(Q(ζ) − ζ). (7.1)

Q(ζ) = ∑_{k=0}^∞ qkζ^k = ∑_{k=0}^∞ (∫₀^∞ e^{−λt}((λt)^k/k!) ν(t)dt) ζ^k = ∫₀^∞ e^{−λt} ∑_{k=0}^∞ ((λtζ)^k/k!) ν(t)dt,

and then

Q(ζ) = ∫₀^∞ e^{−λt(1−ζ)} ν(t) dt.

Now we determine π0 in (7.1):

P(ζ)[Q(ζ) − ζ] = π0(1 − ζ)Q(ζ), P(1) = 1, Q(1) = 1,

P′(ζ)(Q(ζ) − ζ) + P(ζ)(Q′(ζ) − 1) = −π0Q(ζ) + π0(1 − ζ)Q′(ζ).

Here we put ζ = 1 and obtain

Q′(1) − 1 = −π0, π0 = 1 − Q′(1) = 1 − λ/µ = 1 − ρ,

and then

P(ζ) = (1 − ρ)(1 − ζ)Q(ζ)/(Q(ζ) − ζ).

Differentiating once more,

P″(ζ)(Q(ζ) − ζ) + 2P′(ζ)(Q′(ζ) − 1) + P(ζ)Q″(ζ) = −2π0Q′(ζ) + π0(1 − ζ)Q″(ζ).

Put ζ = 1:

2P′(1)(Q′(1) − 1) + Q″(1) = −2π0Q′(1),

P′(1) = (Q″(1) + 2(1 − ρ)Q′(1))/(2(1 − Q′(1))) = Ls = ρ + Q″(1)/(2(1 − ρ)).

We have already proved, that Ls = Lq + ρ. Then we obtain


Pollaczek-Khinchin Formula

Lq = Q″(1)/(2(1 − Q′(1))) = Q″(1)/(2(1 − ρ)),

and then

Wq = Lq/λ = Q″(1)/(2λ(1 − ρ)).

Also, let us note that Q″(1) − (Q′(1))² = λ²σ².

7.1.1 Special Cases

Here we consider some special cases of the service distribution.

1. M/D/1. Here the service time (1/µ) is deterministic.

Q(ζ) = ∫₀^∞ e^{−λt(1−ζ)} dB(t) = e^{−(λ/µ)(1−ζ)} = e^{−ρ(1−ζ)},

P(ζ) = (1 − ρ)(1 − ζ)/(1 − ζe^{ρ(1−ζ)}),

Lq = Q″(1)/(2(1 − ρ)) = ρ²/(2(1 − ρ)).

2. M/M/1.

Q(ζ) = ∫₀^∞ e^{−λt(1−ζ)} µe^{−µt} dt = µ/(µ + λ(1 − ζ)) = 1/(1 + ρ(1 − ζ)),

P(ζ) = (1 − ρ)(1 − ζ)/(1 − ζ(1 + ρ(1 − ζ))) = (1 − ρ)/(1 − ρζ),

Lq = ρ²/(1 − ρ).

Therefore Lq(M/D/1) = (1/2)Lq(M/M/1).


3. M/Ek/1. For the Erlang distribution with parameters k and µ, the mean is k/µ and the variance is k/µ². After replacing µ by kµ, the mean is 1/µ and the variance is 1/(kµ²).

Q(ζ) = 1/(1 + (ρ/k)(1 − ζ))^k,

Lq = ((1 + k)/(2k)) ρ²/(1 − ρ).

Another important result is

Q″(1) = λ²σ² + ρ²,

where σ is the standard deviation of the service time. Then the Pollaczek-Khinchin formula takes the form

Lq = (λ²σ² + ρ²)/(2(1 − ρ)).

Let us summarize the results:

Model M/D/1: σ = 0, Lq = ρ²/(2(1 − ρ)).
Model M/M/1: σ = 1/µ, Lq = ρ²/(1 − ρ).
Model M/Ek/1: σ² = 1/(kµ²), Lq = ((k + 1)/(2k)) ρ²/(1 − ρ).

Note that Lq(M/Ek/1) → Lq(M/D/1) as k → ∞.

Observe that Lq(M/D/1) < Lq(M/Ek/1) < Lq(M/M/1).
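The summary above can be reproduced directly from the Pollaczek-Khinchin formula (λ = 3, µ = 4 below are arbitrary test rates):

```python
# Pollaczek-Khinchin comparison: Lq = (lam^2*sigma^2 + rho^2)/(2*(1-rho))
lam, mu = 3.0, 4.0
rho = lam / mu

def Lq(sigma2):
    return (lam**2 * sigma2 + rho**2) / (2 * (1 - rho))

Lq_D = Lq(0.0)                 # M/D/1: sigma = 0
Lq_E4 = Lq(1 / (4 * mu**2))    # M/E4/1: sigma^2 = 1/(k*mu^2), k = 4
Lq_M = Lq(1 / mu**2)           # M/M/1: sigma = 1/mu

assert Lq_D < Lq_E4 < Lq_M
assert abs(Lq_M - rho**2 / (1 - rho)) < 1e-12
assert abs(Lq_D - 0.5 * Lq_M) < 1e-12
print(round(Lq_D, 4), round(Lq_E4, 4), round(Lq_M, 4))
```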

7.2 Model M/G/∞

λ = arrival rate of the customers,
B(t) = cdf of the service time.

There is no one waiting. Let us denote:

A(t) = number of customers arriving in (0, t),
N(t) = number of customers present at time t,

P(A(t) = k) = e^{−λt}(λt)^k/k!,

r(t) = P(a customer who arrives in (0, t) is still in the system at time t).

P(N(t) = n) = ∑_{k=0}^∞ P(N(t) = n | A(t) = k)P(A(t) = k)

= ∑_{k=n}^∞ (k choose n)(r(t))^n(1 − r(t))^{k−n} e^{−λt}(λt)^k/k!.


Consider a Poisson process X(t) and assume X(t) = 1. The pdf of an arrival in (u, u + Δu), given that X(t) = 1, is

λe^{−λu}e^{−λ(t−u)}/(λte^{−λt}) = 1/t,

where 0 < u < t: there is one arrival up to time u and no arrival in the interval between the times u and t. If s is the service time, then

1 − B(t − u) = P(s > t − u),

r(t) = (1/t) ∫₀^t (1 − B(t − u)) du,

P(N(t) = n) = e^{−λt} ∑_{k=n}^∞ ((λt r(t))^n/n!)((λt(1 − r(t)))^{k−n}/(k − n)!) = e^{−λt r(t)} (λt r(t))^n/n!.

Therefore the mean is:

λt r(t) = λ ∫₀^t (1 − B(t − u)) du = λ ∫₀^t (1 − B(u)) du → λ/µ, for t → ∞.

Example. If B(u) = 1 − e^{−µu}, then t r(t) = (1 − e^{−µt})/µ, r(t) = (1 − e^{−µt})/(µt), and the mean is λ(1 − e^{−µt})/µ. This is actually the M/M/∞ queue.

7.3 Priority Queue

NPRP = nonpreemptive priority, PRP = preemptive priority.

Under a nonpreemptive rule, a lower priority customer who is receiving service will complete his service even if a higher priority customer arrives during his service time. There are m priority classes: 1, 2, ..., m; 1 has the highest priority, and m has the lowest priority.

λi = arrival rate; 1/µi = mean service time. Assume general service times and one server. Let ρk = λk/µk and assume ρ1 + ρ2 + ... + ρm < 1.

Denote Sk = ρ1 + ρ2 + ... + ρk. Then Wq(k), the average waiting time in the queue of a k-th priority customer, is

Wq(k) = (∑_{i=1}^m λi(σi² + 1/µi²)/2) / ((1 − S_{k−1})(1 − S_k)), S0 = 0.

One server queue with two priority classes We consider now a one-server queue with two priority classes. Again we have Poisson arrivals and exponential service times.

λ1, λ2 = arrival rates,
µ1, µ2 = service rates.

Class 1 has the higher priority, and we assume nonpreemptive priority.


N1 = number of type 1 customers,
N2 = number of type 2 customers,
S0 = time required to finish the service of the item at hand,
S1 = total time required to complete the service of all type 1 customers present,
S2 = total time required to complete the service of all type 2 customers present.

Note that S0 = 0 if the system is empty.

Tq(1) = waiting time in queue of a type 1 customer,
Tq(2) = waiting time in queue of a type 2 customer,
Wq(1) = mean waiting time in queue of a type 1 customer,
Wq(2) = mean waiting time in queue of a type 2 customer.

The "combined service distribution" is

(λ1/(λ1 + λ2))(1 − e^{−µ1t}) + (λ2/(λ1 + λ2))(1 − e^{−µ2t}).

In this formula λi/(λ1 + λ2), i = 1, 2, is the probability that an arrival is a type i arrival. The mean is

(λ1/(λ1 + λ2))(1/µ1) + (λ2/(λ1 + λ2))(1/µ2).

The probability that the system is busy is

(λ1 + λ2)((λ1/(λ1 + λ2))(1/µ1) + (λ2/(λ1 + λ2))(1/µ2)) = ρ1 + ρ2,

where ρ1 = λ1/µ1, ρ2 = λ2/µ2.

E(S0) = E(S0 | the system is busy)P(the system is busy) + E(S0 | the system is idle)P(the system is idle).

The second term in this sum is 0, and

E(S0 | busy) = E(S0 | busy with type 1)P(busy with type 1 | busy) + E(S0 | busy with type 2)P(busy with type 2 | busy)

= (1/µ1)(ρ1/(ρ1 + ρ2)) + (1/µ2)(ρ2/(ρ1 + ρ2)),

E(S0) = ρ1/µ1 + ρ2/µ2.

Tq(1) = S0 + S1,

E(S1) = E(Y1 + ... + Y_{N1}) = (1/µ1)E(N1) = (1/µ1)Lq(1) = (λ1/µ1)Wq(1) = ρ1Wq(1),


Wq(1) = E(Tq(1)) = ρ1/µ1 + ρ2/µ2 + ρ1Wq(1),

Wq(1) = (ρ1/µ1 + ρ2/µ2)/(1 − ρ1).

Tq(2) = S′1 + S1 + S2 + S0.

Here S′1 is the total service time of type 1 customers who arrive while the type 2 customer waits (they overtake him), S1 and S2 are the total service times of the type 1 and type 2 customers already present, and S0 is the remaining service time of the customer in service.

As before,

E(S1) = ρ1Wq(1), E(S2) = ρ2Wq(2), E(S′1) = ρ1Wq(2),

Wq(2) = ρ1Wq(2) + ρ1Wq(1) + ρ2Wq(2) + E(S0),

Wq(2) = (ρ1Wq(1) + E(S0))/(1 − ρ1 − ρ2) = (ρ1E(S0)/(1 − ρ1) + E(S0))/(1 − ρ1 − ρ2) = E(S0)/((1 − ρ1)(1 − ρ1 − ρ2)).

Tq(3) = S ′1 + S ′2 + S1 + S2 + S3 + S0.
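The two-class results agree with the general m-class formula given earlier: for exponential service σi² = 1/µi², so its numerator W0 = ∑ λi(σi² + 1/µi²)/2 equals E(S0). A short check with illustrative rates:

```python
# Two-class nonpreemptive priority vs. the general m-class formula.
lams = [1.0, 2.0]
mus = [6.0, 8.0]
rho = [l / m for l, m in zip(lams, mus)]
assert sum(rho) < 1

ES0 = sum(r / m for r, m in zip(rho, mus))
Wq1 = ES0 / (1 - rho[0])
Wq2 = ES0 / ((1 - rho[0]) * (1 - rho[0] - rho[1]))

# General formula: W0 / ((1 - S_{k-1})(1 - S_k)), sigma_i^2 = 1/mu_i^2
W0 = sum(l * (1/m**2 + 1/m**2) / 2 for l, m in zip(lams, mus))
S = [0.0, rho[0], rho[0] + rho[1]]
assert abs(W0 - ES0) < 1e-12
assert abs(Wq1 - W0 / ((1 - S[0]) * (1 - S[1]))) < 1e-12
assert abs(Wq2 - W0 / ((1 - S[1]) * (1 - S[2]))) < 1e-12
print(round(Wq1, 5), round(Wq2, 5))
```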

7.4 Model G/M/1

We consider the process at arrival epochs. Service times are exponentially distributed and interarrival times are iid with pdf a(t) and cdf A(t). Consider 2 consecutive arrival epochs and let Yn, Yn+1 be the numbers of customers present just before them. Then

Yn+1 = Yn + 1 − Bn+1, Bn+1 ≤ Yn + 1,

where Bn+1 is the number of service completions during one interarrival time, and

E(ζ^{Bn+1}) = ∫₀^∞ e^{−µt(1−ζ)} a(t) dt.

(Yn) is a Markov chain.

Transition probabilities:

pij = P(Yn+1 = j | Yn = i) = P(Bn+1 = i + 1 − j) = g_{i+1−j}

= P(i + 1 − j services are completed during one interarrival time), i + 1 ≥ j ≥ 1.

This probability is independent of n. Put hi = 1 − g0 − g1 − ... − gi (the probability of emptying the system), and then the transition probability matrix takes the form

P = ((pij)) =
h0 g0 0  ... 
h1 g1 g0 ... 
h2 g2 g1 ... 
... ... ... ...

The row sums in this matrix are 1.


Let B denote the number of service completions during one interarrival time.

P(B = n) = ∫₀^∞ e^{−µt}((µt)^n/n!) dA(t), n = 0, 1, 2, ...,

gn = P(B = n),

G(ζ) = g0 + g1ζ + ... = ∫₀^∞ e^{−µt(1−ζ)} dA(t),

the pgf of the number of service completions during one interarrival time; G(1) = 1,

G′(1) = µ ∫₀^∞ t dA(t) = µ/λ = 1/ρ,

where 1/λ is the mean of A(t). G′(1) > 1 iff ρ < 1, and then π = (π0, π1, ...), the stationary distribution, exists: πn is the limiting probability that an arriving customer "sees" n customers in the system.

(π0, π1, ...) = (π0, π1, ...)
h0 g0 0  ... 
h1 g1 g0 ... 
... ... ... ...

We omit the first equation and obtain the system

π1 = g0π0 + g1π1 + g2π2 + ...

π2 = g0π1 + g1π2 + g2π3 + ...

...

πn = g0πn−1 + g1πn + g2πn+1 + ...

Assume that πn = Cζ0^n. Then

Cζ0^n = g0Cζ0^{n−1} + g1Cζ0^n + g2Cζ0^{n+1} + ...,

ζ0 = g0 + g1ζ0 + g2ζ0² + ...,

G(ζ0) = ζ0, a functional equation.

ζ0 < 1 iff G′(1) > 1 iff µ/λ > 1 iff ρ < 1.

πn = Cζ0^n, n = 0, 1, 2, ..., but ∑_{n=0}^∞ πn = 1, and then

πn = (1 − ζ0)ζ0^n.

We also have that

Ls = ζ0/(1 − ζ0).


Example 1: Model M/M/1

G(ζ) = ∫₀^∞ e^{−µt(1−ζ)} dA(t),

where A(t) = 1 − e^{−λt}. So

G(ζ) = ∫₀^∞ e^{−µt(1−ζ)} λe^{−λt} dt = λ/(λ + µ(1 − ζ)) = ρ/(ρ + 1 − ζ)

G(ζ) = ζ → ρ/(ρ + 1 − ζ) = ζ,

and the roots of this equation are ζ1,2 = 1, ρ. Hence

Ls = ρ/(1 − ρ).

Example 2: Model D/M/1. For this model the constant interarrival time is D = 1/λ.

G(ζ) = ∫₀^∞ e^{−µt(1−ζ)} dA(t) = e^{−(µ/λ)(1−ζ)} = e^{−(1−ζ)/ρ}.

Therefore

e^{−(1−ζ)/ρ} = ζ.

Example 3: Model Ek/M/1. The Laplace transform of Ek with parameters k and λ is (λ/(s + λ))^k; with parameters k and kλ (mean 1/λ) it is (kλ/(s + kλ))^k. Hence

G(ζ) = (kλ/(µ(1 − ζ) + kλ))^k = (kρ/(1 − ζ + kρ))^k = ζ,

and G(ζ) tends to e^{−(1−ζ)/ρ} as k → ∞ (the D/M/1 case).

For k = 2 we obtain

(2ρ/(1 − ζ + 2ρ))² = ζ.

From here we obtain the following cubic equation:

(ζ − 1)(ζ² − (1 + 4ρ)ζ + 4ρ²) = 0.

The relevant roots are ζ1 = 1 and ζ0 = (1 + 4ρ − √(1 + 8ρ))/2.

If ρ < 1, then ζ0 < 1. When ρ = 3/8, ζ0 = 1/4.
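The functional equation G(ζ0) = ζ0 can also be solved numerically by successive substitution ζ ← G(ζ), starting from ζ = 0; for ρ < 1 the iterates increase monotonically to the smallest root ζ0. A small sketch (not part of the lecture; the value ρ = 3/8 is chosen only to match the example above):

```python
# Sketch: solve G(zeta) = zeta by successive substitution.
import math

def fixed_point(G, zeta=0.0, tol=1e-12, max_iter=10_000):
    """Iterate zeta <- G(zeta); converges to the smallest root for rho < 1."""
    for _ in range(max_iter):
        new = G(zeta)
        if abs(new - zeta) < tol:
            return new
        zeta = new
    raise RuntimeError("no convergence")

rho = 3 / 8

# D/M/1: G(zeta) = exp(-(1 - zeta)/rho)
zeta_d = fixed_point(lambda z: math.exp(-(1 - z) / rho))

# E2/M/1: G(zeta) = (2*rho/(1 - zeta + 2*rho))**2
zeta_e2 = fixed_point(lambda z: (2 * rho / (1 - z + 2 * rho)) ** 2)

# Closed form from the cubic: zeta0 = (1 + 4*rho - sqrt(1 + 8*rho))/2
closed = (1 + 4 * rho - math.sqrt(1 + 8 * rho)) / 2
print(zeta_e2, closed)   # both ≈ 0.25 for rho = 3/8
```

The same iteration works for any interarrival distribution whose transform ∫₀^∞ e^{−µt(1−ζ)} dA(t) can be evaluated.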


Remark.

lim_{t→∞} P(X(t) = j) ≠ lim_{n→∞} P(Xn = j)

lim_{t→∞} P(X(t) = j) = 1 − ρ for j = 0, and = ρ(1 − ζ0)ζ0^(j−1) for j = 1, 2, ...

Hence, for the time-stationary distribution,

Ls = ρ/(1 − ζ0).

7.5 Model G/G/1

Wn = waiting time of the n-th customer
Sn = service time of the n-th customer
Xn+1 = interarrival time between the n-th and the (n + 1)-th customer

Then Lindley's equation holds:

Wn+1 = max(0, Wn + Sn − Xn+1).

Let us denote Fn(x) = P(Wn ≤ x). Then

Fn+1(x) = 0 if x < 0, and Fn+1(x) = ∫_{−∞}^x Fn(x − y) dG(y) if x ≥ 0, where

G(x) = P(Sn − Xn+1 ≤ x).

The limit F(x) = lim_{n→∞} Fn(x) satisfies

F(x) = ∫_{−∞}^x F(x − y) dG(y), x ≥ 0,

which is called the Wiener-Hopf equation. If ρ < 1, F is non-defective. If ρ > 1, F(x) ≡ 0. Assume W1 = 0.

W2 = max(0, W1 + U1) = max(0, U1), where Un = Sn − Xn+1;

W3 = max(0, W2 + U2) = max(0, U2, U1 + U2);

Wn+1 = max(0, Un, Un + Un−1, ..., Un + ... + U1),

which has the same distribution as max(0, U1, U1 + U2, ..., U1 + ... + Un).
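Lindley's recursion is easy to simulate directly. A minimal sketch (my addition, not from the lecture; M/M/1 input is used only because the exact mean wait ρ/(µ − λ) is known for comparison):

```python
# Sketch: simulate Lindley's equation W_{n+1} = max(0, W_n + S_n - X_{n+1}).
import random

def lindley_mean_wait(lam, mu, n, seed=1):
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        s = rng.expovariate(mu)      # service time S_n
        x = rng.expovariate(lam)     # interarrival time X_{n+1}
        w = max(0.0, w + s - x)      # Lindley's equation
        total += w
    return total / n

lam, mu = 0.5, 1.0
est = lindley_mean_wait(lam, mu, 200_000)
exact = (lam / mu) / (mu - lam)      # M/M/1: Wq = rho/(mu - lam) = 1.0 here
print(est, exact)
```

The simulated mean settles near the exact M/M/1 value; for a general G/G/1 queue the same loop works with any two sampling routines.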


Chapter 8

Deterministic Inventory Models

Introduction. Inventory is a list of goods and materials, or those goods and materials themselves, held available in stock by a business. Inventories are held in order to manage and hide from the customer the fact that the manufacture/supply delay is longer than the delivery delay, and also to ease the effect of imperfections in the manufacturing process that lower production efficiency when production capacity stands idle for lack of materials.

Costs Involved in Inventory Models. We want to minimize the total inventory cost:

Total inventory cost = Purchasing cost + Setup cost + Holding cost + Shortage cost.

Purchasing cost is the price per unit of an inventory item. At times the item is offered at a discount if the order size exceeds a certain amount, which is a factor in deciding how much to order.

Setup cost represents the fixed charge incurred when an order is placed, regardless of its size. Increasing the order quantity reduces the setup cost per unit time but increases the average inventory level and hence the cost of tied-up capital; reducing the order size increases the frequency of ordering and the associated setup cost. An inventory cost model balances the two costs.

Holding cost represents the cost of maintaining inventory in stock. It includes intereston capital and the cost of storage, maintenance and handling.

Shortage cost is the penalty incurred when we run out of stock. It includes the potential loss of income and the more subjective cost of losing the customer's goodwill.

Demand is deterministic!

y = order quantity (the number of units)
D = demand rate (units/unit time)
t = ordering cycle length (time units)
k = setup cost (independent of y)


h = holding cost (per unit per unit time)
p = penalty cost (per unit per unit time)
a = production rate

We want to determine y, which minimizes the total cost.

8.1 EOQ (Economic Order Quantity) Model

In this model we have:
- constant demand rate,
- instantaneous replenishment,
- no shortages.

We want to determine y, which minimizes the total cost. An order of size y is placed and received instantaneously whenever the inventory level reaches 0. Hence

t = y/D.

Total cost per unit time = TCU(y) = (k + h·(1/2)·y·t)/t = k/t + (h/2)y = kD/y + (h/2)y.

∂TCU(y)/∂y = −kD/y² + h/2 = 0

y* = √(2kD/h) (Wilson's formula)

∂²TCU(y)/∂y² = 2kD/y³ > 0

t* = y*/D = √(2k/(Dh)),

and then the minimal total cost is

TCU(y*) = √(2kDh).

As k increases, y* increases too. As h increases, y* decreases.
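The formulas translate directly into code. A small sketch (the numbers k = 100, D = 50, h = 4 are illustrative, not from the lecture):

```python
# Sketch: Wilson's formula for the basic EOQ model.
import math

def eoq(k, D, h):
    """Return optimal order quantity, cycle length and minimal cost rate."""
    y = math.sqrt(2 * k * D / h)     # y* = sqrt(2kD/h)
    t = y / D                        # t* = y*/D
    tcu = k * D / y + h * y / 2      # equals sqrt(2kDh) at y*
    return y, t, tcu

y, t, tcu = eoq(k=100, D=50, h=4)
print(y, t, tcu)   # 50.0 1.0 200.0
```

At the optimum the setup cost rate kD/y* and the holding cost rate hy*/2 are equal, which is visible in the printed total.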

8.2 EOQ Model with Finite Replenishment

Replenishment now occurs at a finite production rate a > D. This can be visualized with a triangle ABC, where E is a point of AB with AE = t1 and EC = t2, the slopes of AB and BC being a − D and D. With BE the maximal inventory level,

TCU(y) = (k + h·(1/2)·t·BE)/t.


BE = (a − D)t1 = Dt2

(a − D)t = (a − D)(t1 + t2) = Dt2 + (a − D)t2 = at2 → t2 = (1 − D/a)t → BE = D(1 − D/a)t

TCU(y) = kD/y + (a − D)yh/(2a) = kD/y + (h/2)(1 − D/a)y

∂TCU(y)/∂y = −kD/y² + (h/2)(1 − D/a) = 0

y* = √(2kD/(h(1 − D/a))).

Therefore

TCU(y*) = √(2kDh(1 − D/a)).

Remark. If a =∞, we get Model 1.

8.3 EOQ Model with Shortages

Shortages are backordered and filled as soon as the ordered inventory arrives.

y = Q1 + Q2, t = t1 + t2,

where Q1 = t1D, Q2 = t2D = (t − t1)D = y − Q1.

TCU(y, Q1) = (k + h·(1/2)·Q1t1 + p·(1/2)·Q2t2)/t = kD/y + (1/2)h·Q1²/y + (1/2)p·(y − Q1)²/y

We are going to determine y and Q1, so that TCU(y, Q1) is minimal.

∂TCU(y, Q1)/∂Q1 = hQ1/y − p(y − Q1)/y = 0

Q1 = py/(h + p), y − Q1 = hy/(h + p)

TCU(y, Q1) = kD/y + (1/2)h·p²y/(h + p)² + (1/2)p·h²y/(h + p)² =

= kD/y + (y/2)·hp/(h + p)

∂TCU(y, Q1)/∂y = −kD/y² + (1/2)·hp/(h + p) = −kD/y² + (1/2)/(1/h + 1/p) = 0


y* = √(2kD(1/h + 1/p)) > √(2kD/h).

If p = ∞ we get Model 1. Usually p ≫ h.

Q1* = (p/(h + p))·y*.
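A numeric sketch of this model (illustrative numbers, my addition); note that allowing shortages always enlarges the optimal order compared with Model 1:

```python
# Sketch: EOQ with planned shortages.
import math

def eoq_shortages(k, D, h, p):
    y = math.sqrt(2 * k * D * (1 / h + 1 / p))   # y*
    q1 = p / (h + p) * y                         # Q1*: maximal inventory
    tcu = k * D / y + 0.5 * h * p / (h + p) * y  # minimal cost rate
    return y, q1, tcu

y, q1, tcu = eoq_shortages(k=100, D=50, h=4, p=16)
basic = math.sqrt(2 * 100 * 50 / 4)              # Model 1 gives y* = 50
print(y, q1, basic)
```

As p grows the factor p/(h + p) approaches 1 and both y* and Q1* tend to the Model 1 values, in line with the remark above.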

8.4 EOQ Model with Shortages and Finite Replenishment

Prove that, with w the maximal shortage,

TCU(y, w) = kD/y + [h(y(1 − D/a) − w)² + pw²] / (2(1 − D/a)y),

y* = √(2kD(p + h)/((1 − D/a)ph)), w* = √(2kDh(1 − D/a)/(p(p + h))).
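One way to approach the exercise is to verify the formulas numerically: the claimed (y*, w*) should satisfy w* = h(1 − D/a)y*/(h + p), and small perturbations should not lower TCU. A sketch with illustrative data (my addition, not a proof):

```python
# Sketch: numerical check of the Section 8.4 formulas.
import math

k, D, h, p, a = 100.0, 50.0, 4.0, 16.0, 200.0
m = 1 - D / a                                    # the factor (1 - D/a)

def tcu(y, w):
    return k * D / y + (h * (y * m - w) ** 2 + p * w ** 2) / (2 * m * y)

y_star = math.sqrt(2 * k * D * (p + h) / (m * p * h))
w_star = math.sqrt(2 * k * D * h * m / (p * (p + h)))

best = tcu(y_star, w_star)
# Perturbing either variable must not lower the cost.
for d in (-1.0, 1.0):
    assert tcu(y_star + d, w_star) >= best
    assert tcu(y_star, w_star + d) >= best
print(y_star, w_star, best)
```

Setting a very large production rate a reproduces the Section 8.3 values, just as a = ∞ reproduced Model 1 earlier.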

8.5 Price Break

An inventory item may be purchased at a discount if the size of the order exceeds q. In the previous Model 4 we have not taken the purchase price into account; we had setup, holding and shortage costs.

cy = purchase cost
cy/t = cD = const = purchase cost per unit time (independent of y)

This term vanishes when we differentiate TCU(y) with respect to y. Let us denote:

c = cost per item
c1 = the cost of an item if you purchase ≤ q items
c2 = the cost of an item if you purchase > q items

Of course, c1 > c2, and q = price break. So c = c1 if y ≤ q and c = c2 if y > q, giving

TCU1(y) = Dc1 + kD/y + (h/2)y if y ≤ q,
TCU2(y) = Dc2 + kD/y + (h/2)y if y > q.

If we make a graphic (try it alone), then:
If q is in zone 1: ym is the order quantity.
If q is in zone 2: q is the order quantity (because q is the minimum there).
If q is in zone 3: ym is the order quantity.


Given this situation, find ym as before. If q ≤ ym, the order quantity is ym; otherwise determine Q such that TCU1(ym) = TCU2(Q), or simply find out whether TCU1(ym) > or < TCU2(q).
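A simplified sketch of the decision rule (my addition; it compares TCU2(q) with TCU1(ym) directly instead of solving TCU1(ym) = TCU2(Q), and all numbers are illustrative):

```python
# Sketch: order-quantity decision under a single price break at q.
import math

def tcu(y, c, D, k, h):
    return D * c + k * D / y + h * y / 2

def price_break_order(D, k, h, c1, c2, q):
    ym = math.sqrt(2 * k * D / h)        # EOQ, independent of the price
    if q <= ym:
        return ym                         # the discount applies at ym anyway
    # Otherwise: order q iff the discount outweighs the extra holding cost.
    if tcu(q, c2, D, k, h) < tcu(ym, c1, D, k, h):
        return q
    return ym

y = price_break_order(D=100, k=10, h=2, c1=5.0, c2=4.5, q=50)
print(y)   # 50: here the discount wins
```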

8.6 Multiitem EOQ with Storage Limitation

(This model is related to portfolio optimization in finance.)

There are n items competing for limited space. For i = 1, 2, ..., n, let:

yi = order quantity (the number of units)
Di = demand rate (units/unit time)
ki = setup cost (independent of yi)
hi = holding cost (per unit per unit time)
ai = storage area per unit
A = maximum available storage

Our task is to minimize the total cost

TCU(y1, ..., yn) = ∑_{i=1}^n (kiDi/yi + hiyi/2),

such that

a1y1 + ... + anyn ≤ A.

Compute yi* = √(2kiDi/hi) (the EOQ formula). If ∑ aiyi* ≤ A, these are the optimal solutions. Otherwise use Lagrange multipliers, i.e. compose the function

L(λ, y1, ..., yn) = TCU(y1, ..., yn) − λ(∑ aiyi − A)

and find its minimum. Determine y1, ..., yn, λ from

∂L/∂yi = 0, i = 1, 2, ..., n, and ∂L/∂λ = 0.

Solving this system, we obtain

yi* = √(2kiDi/(hi − 2λ*ai)).

This is a minimization problem, so λ < 0. Successively reduce λ till ∑ aiyi* ≈ A.
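Instead of reducing λ by hand, one can bisect on λ ≤ 0 until the constraint is met. A sketch with illustrative two-item data (my addition):

```python
# Sketch: multi-item EOQ under a storage constraint via bisection on lambda.
import math

def ys(lam, k, D, h, a):
    return [math.sqrt(2 * ki * Di / (hi - 2 * lam * ai))
            for ki, Di, hi, ai in zip(k, D, h, a)]

def used(lam, k, D, h, a):
    return sum(ai * yi for ai, yi in zip(a, ys(lam, k, D, h, a)))

def constrained_eoq(k, D, h, a, A, tol=1e-10):
    if used(0.0, k, D, h, a) <= A:
        return 0.0, ys(0.0, k, D, h, a)   # unconstrained EOQ is feasible
    lo, hi = -1.0, 0.0                    # bracket: used(lo) <= A <= used(hi)
    while used(lo, k, D, h, a) > A:
        lo *= 2                           # storage use shrinks as lam -> -inf
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if used(mid, k, D, h, a) > A:
            hi = mid
        else:
            lo = mid
    return lo, ys(lo, k, D, h, a)

lam, y = constrained_eoq(k=[10, 10], D=[100, 100], h=[2, 2], a=[1, 1], A=40)
print(lam, y)   # lambda ≈ -1.5, y ≈ [20, 20]
```

Bisection works here because ∑ aiyi(λ) decreases monotonically as λ becomes more negative.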


8.7 Dynamic EOQ Model with no Setup Cost

There is a planning horizon with n equal periods. Each period has a limited production capacity that can include several production levels (e.g. regular time, overtime). A current period may produce more than its immediate demand to satisfy demand for later periods, in which case a holding cost must be charged (a transportation problem).

8.8 Dynamic EOQ Model with Setup Costs

There are n periods. For i = 1, 2, ..., n:

zi = amount ordered in period i
Di = demand for period i
xi = inventory at the start of period i
ki = setup cost

Ci(zi) = 0 if zi = 0, and Ci(zi) = ki + ci(zi) if zi > 0 (the production cost),

where ci(zi) = marginal cost.

xi+1 = xi + zi − Di, 0 ≤ xi+1 ≤ Di+1 + Di+2 + ... + Dn

f1(x2) = min_{z1 = D1 + x2 − x1} [C1(z1) + h1(x2)]

fi(xi+1) = min_{0 ≤ zi ≤ Di + xi+1} [Ci(zi) + hi(xi+1) + fi−1(xi)] =

= min_{0 ≤ zi ≤ Di + xi+1} [Ci(zi) + hi(xi+1) + fi−1(xi+1 + Di − zi)]

fi(xi+1) = minimal inventory cost for periods 1, 2, ..., i, given the end-of-period inventory xi+1.

We actually break the problem into several smaller problems, find their optimal solutions and then the general solution.
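For integer demands the recursion can be implemented directly. A sketch (my addition) assuming a constant setup cost K, linear marginal cost ci(zi) = c·zi, holding cost hi(xi+1) = H·xi+1 and initial inventory x1 = 0; all numbers are illustrative:

```python
# Sketch: forward dynamic programming for the dynamic EOQ with setup costs.
def dynamic_eoq(demand, K, c, H):
    n = len(demand)
    tail = [sum(demand[i:]) for i in range(n)] + [0]
    f = {0: 0.0}                       # f[x] = min cost so far, end inventory x
    for i in range(n):
        g = {}
        for x_next in range(tail[i + 1] + 1):        # 0 <= x_{i+1} <= D_{i+1}+...+D_n
            best = float("inf")
            for x, cost in f.items():
                z = x_next + demand[i] - x           # amount ordered in period i
                if z < 0:
                    continue
                prod = 0.0 if z == 0 else K + c * z  # C_i(z_i)
                best = min(best, cost + prod + H * x_next)
            g[x_next] = best
        f = g
    return f[0]                        # end the horizon with zero inventory

total = dynamic_eoq([2, 3, 2], K=3, c=1, H=1)
print(total)   # 15.0: order 2 in period 1, then 5 in period 2
```

Each stage only needs the previous stage's table, which is exactly the decomposition into smaller problems described above.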


Chapter 9

Probabilistic Inventory Models

Demand is now random (which makes our life more difficult!) with a specified distribution.

9.1 "Probabilitized" EOQ Model

L = lead time = time between placing an order and receiving it
xL = random demand during lead time
µL = E(xL)
σL = SD(xL)
B = buffer stock size
α = maximum allowed probability of running out of stock

Determine the optimal buffer size from

P(xL > B + µL) ≤ α, i.e. 1 − F(B + µL) ≤ α.

If xL is N(DL, σL²) (so that µL = DL), this becomes

P((xL − µL)/σL > B/σL) ≤ α.

Determine B from normal tables.
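With Python's statistics module the normal-table lookup is one line. A minimal sketch (σL = 10 and α = 0.05 are illustrative):

```python
# Sketch: buffer size for normally distributed lead-time demand.
from statistics import NormalDist

def buffer_size(sigma_L, alpha):
    """Smallest B with P(Z > B/sigma_L) <= alpha for standard normal Z."""
    return sigma_L * NormalDist().inv_cdf(1 - alpha)

B = buffer_size(sigma_L=10.0, alpha=0.05)
print(B)   # ≈ 16.45, i.e. about 1.645 standard deviations
```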

9.2 Probabilistic EOQ Model

f(x) = pdf of demand during lead time L
D = expected demand per unit time
h = holding cost per unit
p = shortage cost per unit
k = setup cost per order

Determine R* and y*, such that TCU(R, y) is minimal (R is the reorder point, y is the ordered amount).


We make some calculations :) and obtain

y* = √(2D(k + pS)/h), ∫_{R*}^∞ f(x) dx = hy*/(pD),

where

S = ∫_R^∞ (x − R)f(x) dx

is the expected shortage per cycle. Note that for R < ∫₀^∞ xf(x) dx = E(x) we have

∫₀^∞ (x − R)f(x) dx = E(x) − R > 0.

Assume that

ŷ = pD/h ≥ √(2D(k + pE(x))/h),

which guarantees a unique solution. Note that R = 0 gives S = E(x). Starting with S0 = 0, we get y1 = √(2kD/h) and R1 from the second condition, and then we determine

S1 → y2 → R2 → S2 → y3 → R3 → S3 → ...

and stop when yi ≈ yi+1.

If f(x) is uniform on (a, b) (i.e. constant), then S takes a simple form: S = ∫_R^b ((x − R)/(b − a)) dx.

If f(x) is exp(α), then S = ∫_R^∞ (x − R)αe^{−αx} dx = e^{−αR}/α.
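For the exponential case the iteration can be carried out explicitly: S(R) = e^{−αR}/α, and the reorder condition e^{−αR} = hy/(pD) can be inverted for R. A sketch with illustrative data (my addition):

```python
# Sketch: the y -> R -> S iteration with exponential lead-time demand.
import math

k, D, h, p, alpha = 100.0, 1000.0, 2.0, 10.0, 0.05

y = math.sqrt(2 * k * D / h)               # start with S0 = 0
for _ in range(100):
    R = -math.log(h * y / (p * D)) / alpha # from exp(-alpha*R) = h*y/(p*D)
    S = math.exp(-alpha * R) / alpha       # expected shortage per cycle
    y_new = math.sqrt(2 * D * (k + p * S) / h)
    if abs(y_new - y) < 1e-9:
        break                              # stop when y_i ≈ y_{i+1}
    y = y_new

print(y, R)   # both optimality conditions now hold simultaneously
```

For these numbers the loop converges in a handful of steps; the fixed point satisfies the two conditions above to within the stopping tolerance.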

9.3 Single-Period without Setup Cost

f(D) = pdf of demand
D = random demand during the period
h = holding cost per unit during the period
p = shortage cost per unit during the period
y = order quantity
x = inventory on hand before an order is placed

C(y) = Total cost = Holding cost + Shortage cost, and for its expectation we obtain

E[C(y)] = h ∫₀^y (y − D)f(D) dD + p ∫_y^∞ (D − y)f(D) dD

∂E[C(y)]/∂y = h ∫₀^y f(D) dD − p ∫_y^∞ f(D) dD = 0

h ∫₀^y f(D) dD + p ∫₀^y f(D) dD − p = 0

∫₀^{y*} f(D) dD = p/(p + h)

Remark. If D is discrete,

P(D ≤ y* − 1) ≤ p/(p + h) ≤ P(D ≤ y*).


9.4 Single-Period with Setup Cost (s-S Policy)

E[Ĉ(y)] = k + E[C(y)].

First calculate y*, which minimizes E[C(y)], and let S = y*. E[Ĉ(y)] is minimal for the same y*. Determine s < S such that E[C(s)] = E[Ĉ(S)].
If x < s, order S − x.
If x ≥ s, do not order.

FINISH
