Faults and fault-tolerance
One of the selling points of a distributed system is that the
system will continue to perform even if some components / processes fail.
Cause and effect
• Study what causes what.
• We view the effect of failures at our level of abstraction, and then try to mask it, or recover from it.
• Be familiar with the terms MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair)
Classification of failures
• Crash failure
• Omission failure
• Transient failure
• Byzantine failure
• Software failure
• Temporal failure
• Security failure
Crash failures
Crash failure is irreversible. How can we distinguish between a process that has crashed and a process that is running very slowly?
In a synchronous system, it is easy to detect a crash failure (using heartbeat signals and timeout), but in asynchronous systems, detection is never accurate.
Some failures may be complex and nasty. Arbitrary deviation from program execution is a form of failure that may not be as “nice” as a crash. Fail-stop failure is a simple abstraction that mimics crash failure when program execution becomes arbitrary: the faulty processor halts, and implementations of fail-stop help detect which processor has failed. If a system cannot tolerate fail-stop failure, then it cannot tolerate crash failure either.
Omission failures
A message is lost in transit. This may happen due to various causes, like
– Transmitter malfunction
– Buffer overflow
– Collisions at the MAC layer
– Receiver out of range
Transient failure
(Hardware) Arbitrary perturbation of the global state. May be induced by power surges, weak batteries, lightning, radio-frequency interference, etc.
(Software) Heisenbugs are a class of temporary internal faults and are intermittent. They are essentially permanent faults whose conditions of activation occur rarely or are not easily reproducible, so they are harder to detect during the testing phase.
Over 99% of the bugs in IBM DB2 production code are non-deterministic and transient.
Byzantine failure
Anything goes! Includes every conceivable form of erroneous behavior.
Numerous possible causes. Includes malicious behaviors (like a process executing a different program instead of the specified one) too.
Most difficult kind of failure to deal with.
Software failures
• Coding error or human error
• Design flaws
• Memory leak
• Incomplete specification (example: Y2K)
Many failures (like crash, omission, etc.) can be caused by software bugs too.
Specification of faulty behavior

program example1;
define x : boolean (initially x = true);
{a, b are messages}
do
    {S}: x → send a       {specified action}
    {F}: true → send b    {faulty action}
od

A possible run: a a a a b a a a b b a a a a a a a …
Specifying Byzantine Faults

program example2;
define j : integer, flag : boolean;
{a, b are messages}, x : buffer;
initially j = 0, flag = false;
do
    ~flag ∧ message = a → x := a; flag := true
    (j < N) ∧ flag → send x to j; j := j + 1
    j = N → j := 0; flag := false
od
F : (Byzantine) flag → x := b   {b ≠ a}
Specifying Byzantine Faults

program example3;
define k : integer, x : boolean;
initially k = 0, x = true;
S: do
    k < 2 → send k; k := k + 1
    x ∧ (k = 2) → send k; k := k + 1
    k ≥ 3 → k := 0
F:  x → x := false
    ~x ∧ (k = 2) → send 9; k := k + 1
od
Specifying Temporal Failures

program example4 {for process j};
define f[i] : boolean {initially f[i] = false};
S: do
    ~f[i] ∧ message received from process i → skip
F:  timeout (i, j) → f[i] := true
od
Fault-tolerance
F-intolerant vs F-tolerant systems
Four types of tolerance:
- Masking
- Non-masking
- Fail-safe
- Graceful degradation
[Figure: a system that tolerates failures of type F]
Fault-tolerance
P is the invariant of the original fault-free system. Q represents the worst possible behavior of the system when failures occur; it is called the fault span. Q is closed under S or F.
Fault-tolerance
Masking tolerance: P = Q (neither safety nor liveness is violated).
Non-masking tolerance: P ⊂ Q (the safety property may be temporarily violated, but not liveness; eventually the safety property is restored).
Classifying fault-tolerance
Masking tolerance. The application runs as is. The failure does not have a visible impact. All properties (both liveness and safety) continue to hold.
Non-masking tolerance. The safety property is temporarily affected, but not liveness.
Example 1. Clocks lose synchronization, but recover soon thereafter.
Example 2. Multiple processes temporarily enter their critical sections, but thereafter, normal behavior is restored.
Backward vs. forward error recovery
Backward error recovery: when the safety property is violated, the computation rolls back and resumes from a previous correct state.
Forward error recovery: the computation does not care about getting the history right, but moves on, as long as the safety property is eventually restored. This is true for stabilizing systems.
Classifying fault-tolerance
Fail-safe tolerance. The given safety predicate is preserved, but liveness may be affected.
Example. Due to failure, no process can enter its critical section for an indefinite period. In a traffic crossing, a failure turns the lights in both directions red.
Graceful degradation. The application continues, but in a “degraded” mode. Much depends on what kind of degradation is acceptable.
Example. Consider message-based mutual exclusion. Processes will enter their critical sections, but not in timestamp order.
Failure detection
The design of fault-tolerant systems becomes easier if failures can be detected. Detection depends on
1. the system model, and 2. the type of failure.
Asynchronous systems are trickier. We first focus on synchronous systems only.
Detection of crash failures
Failures can be detected using heartbeat messages (periodic “I am alive” broadcasts) and timeouts, provided that
- the largest time to execute a step is known, and
- channel delays have a known upper bound.
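The heartbeat-and-timeout scheme can be sketched as follows. This is a minimal illustrative sketch (the class name and the explicit simulated clock are assumptions, not from the slides); note that the diagnosis is sound only under the synchronous bounds listed above.

```python
class HeartbeatDetector:
    """Suspects a process has crashed when no heartbeat arrives in time.

    Accurate only in a synchronous system: the timeout must exceed the
    heartbeat period plus the maximum channel delay. In an asynchronous
    system a slow process is indistinguishable from a crashed one.
    """

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}          # process id -> time of last heartbeat

    def heartbeat(self, pid, now):
        """Record an 'I am alive' broadcast from process pid."""
        self.last_seen[pid] = now

    def suspected(self, pid, now):
        """True if pid has been silent longer than the timeout."""
        return now - self.last_seen.get(pid, float("-inf")) > self.timeout
```

Passing the clock value `now` explicitly keeps the sketch deterministic and testable; a real detector would read a local clock.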
Detection of omission failures
For FIFO channels: use sequence numbers with messages.
For non-FIFO channels with bounded propagation delay: use timeout.
What about non-FIFO channels for which the upper bound of the delay is not known? Use unbounded sequence numbers and acknowledgments. But acknowledgments may be lost too, causing unnecessary re-transmission of messages :-(
Let us look at how a real protocol deals with omission …
Tolerating crash failures
Triple modular redundancy (TMR) masks any single failure. N-modular redundancy masks up to m failures, when N = 2m + 1.
[Figure: input x feeds three replicas B0, B1, B2, each computing f(x); unit C takes a vote on the three outputs]
What if the voting unit fails?
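The majority vote behind TMR and N-modular redundancy can be sketched as follows (an illustrative sketch; the function names are not from the slides):

```python
from collections import Counter

def vote(outputs):
    """Majority vote over the replica outputs. Requires a strict
    majority: more than half the replicas must agree."""
    value, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise ValueError("no majority: too many faulty replicas")
    return value

def n_modular(replicas, x):
    """Run all N replicas of f on input x and vote on the results.
    With N = 2m + 1 replicas, up to m faulty outputs are masked."""
    return vote([f(x) for f in replicas])
```

For example, with one faulty replica out of three (TMR), the two correct outputs outvote the bad one. The voter itself remains a single point of failure, which is exactly the question the slide raises.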
Tolerating omission failures
A central theme in networking: routers may drop messages, but reliable end-to-end transmission is an important requirement. This implies that the communication must tolerate loss, duplication, and re-ordering of messages.
[Figure: hosts A and B connected through routers]
Stenning’s protocol

{program for process S}
define ok : boolean; next : integer;
initially next = 0, ok = true, both channels are empty;
do
    ok → send (m[next], next); ok := false
    (ack, next) is received → ok := true; next := next + 1
    timeout (R, S) → send (m[next], next)
od

{program for process R}
define r : integer;
initially r = 0;
do
    (m[ ], s) is received ∧ s = r → accept the message; send (ack, r); r := r + 1
    (m[ ], s) is received ∧ s ≠ r → send (ack, r - 1)
od
Observations on Stenning’s protocol
Both messages and acks may be lost.
Q. Why is the last ack reinforced by R when s ≠ r?
A. It is needed to guarantee progress.
Progress is guaranteed, but the protocol is inefficient due to its low throughput.
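Stenning’s stop-and-wait behavior can be simulated directly. The sketch below is illustrative (the lossy-channel model, loss rate, and seed are assumptions); it shows that every message is delivered exactly once, in order, despite losses.

```python
import random

def stenning(msgs, loss=0.3, seed=1):
    """Simulate Stenning's protocol over lossy channels.
    Each loop iteration is one send attempt; a missing ack plays
    the role of the sender's timeout."""
    rng = random.Random(seed)
    channel = lambda pkt: pkt if rng.random() > loss else None  # lossy link

    next_, r, delivered = 0, 0, []
    while next_ < len(msgs):
        pkt = channel((msgs[next_], next_))        # send (m[next], next)
        ack = None
        if pkt is not None:
            m, s = pkt
            if s == r:                             # expected number: accept
                delivered.append(m)
                ack = channel(("ack", r))
                r += 1
            else:                                  # duplicate: re-ack r-1
                ack = channel(("ack", r - 1))
        if ack is not None and ack[1] == next_:    # ok := true; advance
            next_ += 1
        # otherwise the timeout fires and the loop retransmits
    return delivered
```

The re-ack on a duplicate (`s ≠ r`) is what rescues the run where the data arrived but the ack was lost, matching the progress argument above.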
Sliding window protocol
[Figure: sender S and receiver R; the sender’s window spans sequence numbers last + 1 through last + w]
The sender continues the send action without receiving the acknowledgements of at most w messages (w > 0); w is called the window size.
Sliding window protocol
{program for process S}
define next, last, w : integer;
initially next = 0, last = -1, w > 0
do last + 1 ≤ next ≤ last + w →
        send (m[next], next); next := next + 1
    (ack, j) is received →
        if j > last → last := j
           j ≤ last → skip
        fi
    timeout (R, S) → next := last + 1
        {retransmission begins}
od
{program for process R}
define j : integer;
initially j = 0;
do (m[next], next) is received →
    if j = next → accept message;
                  send (ack, j);
                  j := j + 1
       j ≠ next → send (ack, j - 1)
    fi
od
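The go-back-N flavor of this window protocol can be sketched as a round-based simulation. This is an illustrative sketch under simplifying assumptions (the channel may lose messages but preserves order within a round; loss rates and seeds are arbitrary):

```python
import random

def sliding_window(msgs, w=4, loss=0.3, seed=2):
    """Round-based go-back-N sketch of the slide's pseudocode:
    sender keeps `last`, receiver keeps the cumulative counter j
    and acknowledges the highest in-order message so far."""
    rng = random.Random(seed)
    drop = lambda: rng.random() < loss
    last, j, delivered = -1, 0, []
    while last < len(msgs) - 1:
        acks = []
        # Sender: transmit the whole window m[last+1 .. last+w].
        for s in range(last + 1, min(last + w + 1, len(msgs))):
            if drop():
                continue                   # message lost in transit
            if s == j:                     # receiver: expected number
                delivered.append(msgs[s])
                j += 1
                if not drop():
                    acks.append(j - 1)     # acknowledge the accepted number
            elif not drop():
                acks.append(j - 1)         # out of order: re-ack (ack, j-1)
        # Sender: an ack j advances the window when j > last.
        for a in acks:
            if a > last:
                last = a
        # On timeout, next resets to last + 1 and the loop retransmits.
    return delivered
```

As in the pseudocode, a stale or duplicate ack (`j ≤ last`) is simply skipped, and every message is accepted exactly once and in order.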
Why does it work?
Lemma. Every message is accepted exactly once.
Lemma. m[k] is always accepted before m[k+1].
(Argue that these are true.)
Observation. This protocol uses unbounded sequence numbers. This is bad. Can we avoid it?
Theorem
If the communication channels are non-FIFO, and the message propagation delays are arbitrarily large, then using bounded sequence numbers, it is impossible to design a window protocol that can withstand the (1) loss, (2) duplication, and (3) reordering of messages.
Why unbounded sequence numbers?
Suppose the sender transmits (m[k], k), then (m’, k), a retransmitted version of m, and later (m’’, k), a new message using the same sequence number k. We want to accept m’’ but reject m’. How is that possible?
Alternating Bit Protocol
ABP is a link-layer protocol. It works on FIFO channels only, and guarantees reliable message delivery with a 1-bit sequence number (this is the traditional version with window size = 1). Study how this works.
[Figure: S sends m[0] with bit 0, receives (ack, 0), then sends m[1] with bit 1, m[2] with bit 0, …]
Alternating Bit Protocol

program ABP;
{program for process S}
define sent, b : 0 or 1; next : integer;
initially next = 0, sent = 1, b = 0, and channels are empty;
do
    sent ≠ b → send (m[next], b); next := next + 1; sent := b
    (ack, j) is received →
        if j = b → b := 1 - b
           j ≠ b → skip
        fi
    timeout (R, S) → send (m[next - 1], b)
od

{program for process R}
define j : 0 or 1; {initially j = 0}
do
    (m[ ], b) is received →
        if j = b → accept the message; send (ack, j); j := 1 - j
           j ≠ b → send (ack, 1 - j)
        fi
od
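The one-bit mechanism can be exercised in a small simulation. This is an illustrative sketch (the loss/duplication model and seeds are assumptions); the FIFO channel is modeled by delivering the copies of each frame in order, and it shows exactly-once, in-order delivery.

```python
import random

def abp(msgs, loss=0.3, dup=0.2, seed=3):
    """Alternating Bit Protocol sketch: stop-and-wait with a one-bit
    sequence number, tolerating loss and duplication on a FIFO channel."""
    rng = random.Random(seed)
    b, j, delivered = 0, 0, []              # sender's bit b, receiver's bit j
    for m in msgs:
        sent = False
        while not sent:                     # retransmit (m, b) until acked
            acks = []
            copies = 0 if rng.random() < loss else 1 + (rng.random() < dup)
            for _ in range(copies):         # channel may duplicate the frame
                if j == b:                  # expected bit: accept exactly once
                    delivered.append(m)
                    acks.append(j)
                    j = 1 - j
                else:                       # duplicate: re-ack, don't accept
                    acks.append(1 - j)
            for a in acks:
                if rng.random() < loss:     # acks can be lost too
                    continue
                if a == b:                  # matching ack: flip bit, advance
                    b = 1 - b
                    sent = True
    return delivered
```

Note that in both branches the receiver acknowledges the bit it just received, so a duplicated frame produces a harmless duplicate ack that the sender ignores once its bit has flipped.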
How TCP works
TCP supports an end-to-end logical connection between any two computers on the Internet. The basic idea is the same as that of sliding window protocols. But TCP uses bounded sequence numbers!
It is safe to re-use a sequence number when it is unique. With high probability, a random 32- or 64-bit number is unique. Also, old sequence numbers are flushed out of the system after a time 2d, where d is the round-trip delay.
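The “unique with high probability” claim can be quantified with the birthday bound. The function below is an illustrative approximation (not part of TCP): it estimates the chance that n randomly drawn b-bit numbers collide.

```python
import math

def collision_prob(n, bits):
    """Approximate probability that n uniformly random `bits`-bit
    numbers are not all distinct (birthday bound:
    p ~= 1 - exp(-n(n-1)/(2N)) with N = 2**bits)."""
    N = 2 ** bits
    return 1 - math.exp(-n * (n - 1) / (2 * N))
```

Roughly 77,000 random 32-bit numbers already collide with probability about one half, while a million random 64-bit numbers stay distinct with overwhelming probability, which is why larger random sequence numbers are considered unique w.h.p.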
How TCP works
Three-way handshake, followed by data transfer:
Sender → Receiver: SYN, seq = x
Receiver → Sender: SYN, seq = y, ack = x + 1
Sender → Receiver: ACK, ack = y + 1
Then send (m, y + 1) is answered by ack (y + 2).
How TCP works
• Three-way handshake. Sequence numbers are unique w.h.p.
• Why is the knowledge of the roundtrip delay important?
• What if the window is too small / too large?
• What if the timeout period is too small / too large?
• Adaptive retransmission: the receiver can throttle the sender and control the window size to save its buffer space.
Distributed Consensus
Reaching agreement is a fundamental problem in distributed computing. Some examples are:
• Leader election / mutual exclusion
• Commit or abort in distributed transactions
• Reaching agreement about which process has failed
• Clock phase synchronization
• Air traffic control systems: all aircraft must have the same view
If there is no failure, then reaching consensus is trivial: an all-to-all broadcast, followed by applying a choice function. Consensus in the presence of failures can however be complex.
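The failure-free case can be sketched in a few lines (an illustrative sketch; the function name and the choice of minimum as the choice function are assumptions):

```python
def consensus_without_failures(inputs):
    """Failure-free consensus: an all-to-all broadcast gives every
    process the same view, then each applies the same deterministic
    choice function (here: the minimum input value)."""
    # All-to-all broadcast: every process ends up with an identical view.
    views = {p: dict(inputs) for p in inputs}
    # Identical views + identical choice function => identical decisions.
    return {p: min(view.values()) for p, view in views.items()}
```

Agreement and validity follow immediately: everyone holds the same view and applies the same deterministic function, and the minimum is always one of the proposed inputs. Crashes break the first step, which is where the difficulty begins.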
Problem Specification
[Figure: processes p0 … p3 with input lines u0 … u3; every output line carries the same value v]
Here, v must be equal to the value at some input line. Also, all outputs must be identical.
Problem Specification
Termination. Every non-faulty process must eventually decide.
Agreement. The final decision of every non-faulty process
must be identical.
Validity. If every non-faulty process begins with the same
initial value v, then their final decision must be v.
Asynchronous Consensus
Seven members of a busy household decided to hire a cook, since they do not have time to prepare their own food. Each member separately interviewed every applicant for the cook’s position. Depending on how it went, each member voted "yes" (means “hire”) or "no" (means “don't hire”).
These members will now have to communicate with one another to reach a
uniform final decision about whether the applicant will be hired. The process will be repeated with the next applicant, until someone is hired.
Consider various modes of communication…
Asynchronous Consensus
Theorem. In a purely asynchronous distributed system, the consensus problem is impossible to solve if even a single process crashes.
This famous result is due to Fischer, Lynch, and Paterson (commonly known as FLP 85).
Proof
Bivalent and Univalent states
A decision state is bivalent if, starting from that state, there exist two distinct executions leading to two distinct decision values (0 or 1). Otherwise it is univalent.
A univalent state may be either 0-valent or 1-valent.
Proof
Lemma. No execution can lead from a 0-valent to a 1-valent state or vice versa.
Proof. Follows from the definition of 0-valent and 1-valent states.
Proof
Lemma. Every consensus protocol must have a bivalent initial state.
Proof by contradiction. Suppose not; then every initial state is univalent. Consider the following chain of initial states, where adjacent states differ in one input:

s[0]      0 0 0 0 0 0 … 0 0 0    {0-valent}
          0 0 0 0 0 0 … 0 0 1    s[j] is 0-valent
          0 0 0 0 0 0 … 0 1 1    s[j+1] is 1-valent
          … … … …                (s[j] and s[j+1] differ in the jth position)
s[n-1]    1 1 1 1 1 1 … 1 1 1    {1-valent}

Somewhere in this chain, a 0-valent s[j] is adjacent to a 1-valent s[j+1]. What if process (j+1) crashes at the first step? Then the two runs are indistinguishable to every other process, yet they must reach different decisions: a contradiction.
Lemma. In a consensus protocol, starting from any initial bivalent state I, there must exist a reachable bivalent state T, such that every action taken by some process p in state T leads to either a 0-valent or a 1-valent state.
[Figure: a chain of bivalent states Q, S, R, U leading to T; from T, action 0 leads to a 0-valent state T0 and action 1 leads to a 1-valent state T1]
Actions 0 and 1 from T must be taken by the same process p. Why?
Proof. The adversary tries to prevent the system from reaching consensus.
Proof of FLP (continued)
Assume shared-memory communication. Also assume that p ≠ q. Various cases are possible.

Case 1. p reads and q writes: from T, p’s read leads to the 0-valent T0 (decision 0) and q’s write leads to the 1-valent T1 (decision 1).
• Starting from T, let e1 be a computation that excludes any step by p. Such a computation must exist, since p can crash at any time.
• Let p crash after reading. Then e1 is a valid computation from T0 too.
To all non-faulty processes, these two computations are identical, but the outcomes are different! This is not possible!
Proof (continued)
Case 2. Both p and q write on the same variable, and p writes first: from T, p’s write leads to the 0-valent T0 (decision 0) and q’s write leads to the 1-valent T1 (decision 1).
• From T, let e1 be a computation that excludes any step by p.
• Let p crash after writing. Then e1 is a valid computation from T0 too, since q’s later write overwrites p’s.
To all non-faulty processes, these two computations are identical, but the outcomes are different!
Proof (continued)
Case 3. Both p and q write, but on different variables. Then regardless of the order of these writes, both computations lead to the same intermediate global state Z. Is Z 1-valent or 0-valent? Z is reachable through the 0-valent T0 as well as through the 1-valent T1, so neither answer is possible: a contradiction.
Proof (continued)
Similar arguments can be made for communication usingthe message passing model too (See Lynch’s book). Theselead to the fact that p, q cannot be distinct processes, and p = q. Call p the decider process.
What if p crashes in state T? No consensus is reached!
Conclusion
• In a purely asynchronous system, there is no solution to the consensus problem if a single process crashes.
• Note that this is true for deterministic algorithms only. Solutions do exist for the consensus problem using randomized algorithms, or using the synchronous model.
Byzantine Generals Problem
This problem describes and solves consensus on the synchronous model of communication:
- Processor speeds have lower bounds and communication delays have upper bounds.
- The network is completely connected.
- Processes undergo Byzantine failures, the worst possible kind of failure.
Byzantine Generals Problem
• n generals {0, 1, 2, ..., n-1} decide about whether to "attack" or to "retreat" during a particular phase of a war. The goal is to agree upon the same plan of action.
• Some generals may be "traitors" and therefore either send no input, or send conflicting inputs to prevent the "loyal" generals from reaching an agreement.
• Devise a strategy, by which every loyal general eventually agrees upon the same plan, regardless of the action of the traitors.
Byzantine Generals
Every general will broadcast his judgment to everyone else. These are inputs to the consensus protocol. The traitor may send out conflicting inputs.
[Figure: four generals 0–3. Generals 0 and 1 propose Attack = 1; generals 2 and 3 propose Retreat = 0. Because the traitor sends conflicting inputs, some generals collect the vector {1, 1, 0, 0} while another collects {1, 1, 0, 1}.]
Byzantine Generals
We need to devise a protocol so that every peer
(call it a lieutenant) receives the same value from
any given general (call it a commander). Clearly,
the lieutenants will have to use secondary information.
Note that the roles of the commander and the
lieutenants will rotate among the generals.
Interactive consistency specifications
IC1. Every loyal lieutenant receives the same order from the commander.
IC2. If the commander is loyal, then every loyal lieutenant receives the order that the commander sends.
The Communication Model: Oral Messages
• Messages are not corrupted in transit.
• Messages can be lost, but the absence of a message can be detected.
• When a message is received (or its absence is detected), the receiver knows the identity of the sender (or the defaulter).
OM(m) represents an interactive consistency protocol in the presence of at most m traitors.
An Impossibility Result
Using oral messages, no solution to the Byzantine Generals problem exists with three or fewer generals and one traitor. Consider the two cases:
[Figure: (a) commander 0 is loyal and sends 1 to both lieutenants, but the traitor lieutenant 2 relays 0 to lieutenant 1; (b) commander 0 is a traitor and sends 1 to lieutenant 1 and 0 to lieutenant 2. Lieutenant 1 sees the same messages in both cases and cannot decide correctly in both.]
Impossibility result
Using oral messages, no solution to the Byzantine Generals problem exists with 3m or fewer generals and m traitors (m > 0).
Hint. Divide the 3m generals into three groups of m generals
each, such that all the traitors belong to one group. This scenario
is no better than the case of three generals and one traitor.
The OM(m) algorithm
OM(m) is recursive: OM(m) invokes OM(m-1), which invokes OM(m-2), and so on, down to OM(0). OM(0) = direct broadcast.
The OM(m) algorithm
1. Commander i sends out a value v (0 or 1).
2. If m > 0, then every lieutenant j ≠ i, after receiving v, acts as a commander and initiates OM(m-1) with everyone except i.
3. Every lieutenant collects (n-1) values: (n-2) values sent by the lieutenants using OM(m-1), and one direct value from the commander. Then he picks the majority of these values as the order from i.
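The three steps above can be sketched as a recursive function. This is an illustrative sketch, not the definitive algorithm: in particular, the traitor model here is an assumption (a traitorous sender deterministically sends lieutenant i the value i % 2), whereas real Byzantine processes may behave arbitrarily.

```python
from collections import Counter

def majority(values):
    # Sort first so ties break deterministically and identically everywhere.
    return Counter(sorted(values)).most_common(1)[0][0]

def om(m, commander, lieutenants, value, traitors):
    """One run of OM(m); returns {lieutenant: value finally adopted}."""
    # Step 1: the commander sends a value to every lieutenant.
    sent = {i: (i % 2 if commander in traitors else value)
            for i in lieutenants}
    if m == 0:
        return sent                                  # OM(0): direct value
    collected = {i: [sent[i]] for i in lieutenants}  # the direct values
    for i in lieutenants:
        rest = [j for j in lieutenants if j != i]
        # Step 2: lieutenant i acts as commander in OM(m-1) for the others.
        relayed = om(m - 1, i, rest, sent[i], traitors)
        for j in rest:
            collected[j].append(relayed[j])
    # Step 3: each lieutenant takes the majority of the collected values.
    return {i: majority(vals) for i, vals in collected.items()}
```

With n = 4 and one traitor (n > 3m for m = 1), the loyal lieutenants agree (IC1), and when the commander is loyal they adopt his value (IC2).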
Example of OM(1)
[Figure: two runs of OM(1) with four generals. (a) A traitorous commander sends conflicting values to lieutenants 1, 2, 3; the loyal lieutenants exchange what they received and each takes the majority, so they still agree. (b) The commander is loyal but one lieutenant is a traitor; the majority vote still delivers the commander’s value to every loyal lieutenant.]
Example of OM(2)
[Figure: OM(2) with seven generals: commander 0 and lieutenants 1–6. The commander broadcasts v to all six lieutenants (OM(2)); each lieutenant then acts as a commander and relays v to the other five (OM(1)); each of those relays in turn triggers direct OM(0) broadcasts among the remaining four.]
Proof of OM(m)
Lemma. Let the commander be loyal, and let n > 2m + k, where m = maximum number of traitors. Then OM(k) satisfies IC2.
[Figure: a loyal commander sends values to m traitors and n - m - 1 loyal lieutenants, who exchange them via OM(r)]
Proof of OM(m)
Proof. If k = 0, then the result trivially holds. Let it hold for k = r (r > 0), i.e. OM(r) satisfies IC2. We have to show that it holds for k = r + 1 too.
Since n > 2m + r + 1, we have n - 1 > 2m + r, so OM(r) holds for the lieutenants in the bottom row. Each loyal lieutenant will collect n - m - 1 identical good values and m bad values. So the bad values are voted out (n - m - 1 > m + r implies n - m - 1 > m).
The final theorem
Theorem. If n > 3m, where m is the maximum number of traitors, then OM(m) satisfies both IC1 and IC2.
Proof. Consider two cases:
Case 1. The commander is loyal. The theorem follows from the previous lemma (substitute k = m).
Case 2. The commander is a traitor. We prove it by induction. Base case: m = 0, trivial. (Induction hypothesis) Let the theorem hold for m = r. We have to show that it holds for m = r + 1 too.
Proof (continued)
There are n > 3(r + 1) generals and r + 1 traitors. Excluding the commander, there are > 3r + 2 generals, of which r are traitors. So > 2r + 2 lieutenants are loyal. Since 3r + 2 > 3r, OM(r) satisfies IC1 and IC2.
Proof (continued)
In OM(r + 1), a loyal lieutenant chooses the majority from:
(1) the > 2r + 1 values obtained from the other loyal lieutenants via OM(r),
(2) the r values from the traitors, and
(3) the value received directly from the commander.
The values collected in parts (1) and (3) are the same for all loyal lieutenants – it is the same value that these lieutenants received from the commander. Also, by the induction hypothesis, in part (2) each loyal lieutenant receives identical values from each traitor. So every loyal lieutenant collects the same set of values.
Acknowledgements
This part relies heavily on Dr. Sukumar Ghosh’s University of Iowa Distributed Systems course 22C:166.