

©2002 Denise Ecklund, Vera Goebel DDBS-1

Distributed Database Systems: Architectures & Key Problems

Dr. Denise Ecklund
2 October 2002

©2002 Denise Ecklund, Vera Goebel DDBS-2

Contents
• Review: Layered DBMS Architecture
• Distributed DBMS Architectures
  – DDBMS Taxonomy
• Client/Server Models
• Key Problems of Distributed DBMS
  – Distributed data modeling
  – Distributed query processing & optimization
  – Distributed transaction management
  – Failure recovery

See the Pensum (required reading) and Anbefalt (recommended reading) on the class webpage.


©2002 Denise Ecklund, Vera Goebel DDBS-3

Functional Layers of a DBMS

[Figure: the layered architecture between the Applications (top) and the Database (bottom); Transaction Management spans all layers.]

Layer                                          Function
User Interfaces / View Management              Interface
Semantic Integrity Control / Authorization     Control
Query Processing and Optimization              Compilation
Storage Structures                             Execution
Buffer Management                              Data Access
Concurrency Control / Logging                  Consistency

©2002 Denise Ecklund, Vera Goebel DDBS-4

Dependencies among DBMS components

[Figure: dependency graph (an arrow indicates a dependency) among the Transaction Mgmt, Access Path Mgmt, Sorting component, Log component (with savepoint mgmt), Lock component, and System Buffer Mgmt, built around the Central Components.]


©2002 Denise Ecklund, Vera Goebel DDBS-5

Centralized DBS
• logically integrated
• physically centralized

[Figure: terminals T1–T5 connected via a network to a single DBS.]

Traditionally: one large mainframe DBMS + n “stupid” terminals

©2002 Denise Ecklund, Vera Goebel DDBS-6

Distributed DBS
• Data logically integrated (i.e., access based on one schema)
• Data physically distributed among multiple database nodes
• Processing is distributed among multiple database nodes

[Figure: terminals T1–T3 connected via a network to database nodes DBS1–DBS3.]

Traditionally: m mainframes for the DBMSs + n terminals

Why a Distributed DBS?
• Performance via parallel execution: more users, quick response
• More data volume: max disks on multiple nodes


©2002 Denise Ecklund, Vera Goebel DDBS-7

DBMS Implementation Alternatives

[Figure: taxonomy cube with three axes: Distribution, Heterogeneity, and Autonomy. Labeled points include Centralized Homogeneous DBMS, Distributed Homogeneous DBMS, Centralized Heterogeneous DBMS, Distributed Heterogeneous DBMS, Client/Server Distribution, the Federated DBMS variants (Centralized Homogeneous, Distributed Homogeneous, Centralized Heterogeneous, Distributed Heterogeneous), and the Multi-DBMS variants (Centralized Homogeneous, Distributed, Centralized Heterogeneous, Distributed Heterogeneous).]

©2002 Denise Ecklund, Vera Goebel DDBS-8

Common DBMS Architectural Configurations

No Autonomy: fully integrated nodes (N1–N4) with complete cooperation on data schema and transaction mgmt; each node is fully aware of all other nodes in the DDB.

Federated: independent DBMSs (D1–D4) that implement some “cooperation functions”: transaction mgmt and schema mapping; each DBMS is aware of the other DBs in the federation.

Multi: fully independent DBMSs (D1–D4) under a global transaction manager and schema manager (GTM & SM); the independent global manager tries to coordinate the DBMSs using only existing DBMS services; the DBMSs are unaware of the GTM and of the other DBs in the MultiDB.


©2002 Denise Ecklund, Vera Goebel DDBS-9

Parallel Database Platforms
(A form of non-autonomous distributed DBMS)

Shared Everything: processors Proc1 … ProcN and memories Mem1 … MemM access disks Disk 1 … Disk p over a fast interconnection network. Typically a special-purpose hardware configuration, but this can be emulated in software.

Shared Nothing: each processor Proc-i has its own memory Mem-i and its own disk Disk-i; the nodes communicate over a fast interconnection network. Can be a special-purpose hardware configuration, or a network of workstations.

©2002 Denise Ecklund, Vera Goebel DDBS-10

Distributed DBMS
• Advantages:
  – Improved performance
  – Efficiency
  – Extensibility (addition of new nodes)
  – Transparency of distribution
    • Storage of data
    • Query execution
  – Autonomy of individual nodes
• Problems:
  – Complexity of design and implementation
  – Data consistency
  – Safety
  – Failure recovery


©2002 Denise Ecklund, Vera Goebel DDBS-11

Client/Server Database Systems

The “Simple” Case of Distributed Database Systems?

©2002 Denise Ecklund, Vera Goebel DDBS-12

Client/Server Environments
• objects stored and administered on the server
• objects processed (accessed and modified) on workstations [sometimes on the server too]
• CPU-time-intensive applications on workstations
  – GUIs, design tools
• use the client systems’ local storage capabilities
• combination with distributed DBS services => distributed server architecture

data (object) server + n smart clients (workstations)


©2002 Denise Ecklund, Vera Goebel DDBS-13

Clients with Centralized Server Architecture

[Figure: workstations attach over a local communication network to a single Data Server (interface + database functions) that manages the database on disks Disk 1 … Disk n.]

©2002 Denise Ecklund, Vera Goebel DDBS-14

Clients with Distributed Server Architecture

[Figure: workstations attach over a local communication network to Data Server 1 … Data Server m; each server runs an interface, local data mgmt functions, and a distributed DBMS layer, and manages its own database on local disks (Disk 1 … Disk n, Disk 1 … Disk p).]


©2002 Denise Ecklund, Vera Goebel DDBS-15

Clients with Data Server Approach

[Figure: “3-tier client/server”: the workstation handles the user interface and query parsing and talks via the application server interface over a communication channel to an Application Server; the Application Server talks via the data server interface to the Data Server, which provides the database functions over the database on disks Disk 1 … Disk n.]

©2002 Denise Ecklund, Vera Goebel DDBS-16

In the Client/Server DBMS Architecture, how are the database services organized?

There are several architectural options!


©2002 Denise Ecklund, Vera Goebel DDBS-17

Client/Server Architectures

[Figure: two variants. Relational: the Application (client process) ships SQL to the DBMS (server process), with cursor management at the boundary. Object-Oriented: the Application plus object/page cache management (client process) exchanges objects, pages, or files with the DBMS (server process).]

©2002 Denise Ecklund, Vera Goebel DDBS-18

Object Server Architecture

[Figure: client process: Application on top of an Object Manager with an Object Cache. Server process: Object Manager (Object Cache and Page Cache), File/Index Manager, Page Cache Manager, Storage Allocation and I/O, and a Log/Lock Manager over the Database. Client and server exchange objects and object references, queries and method calls, and locks and log records.]


©2002 Denise Ecklund, Vera Goebel DDBS-19

Object Server Architecture - summary
• Unit of transfer: object(s)
• Server understands the object concept and can execute methods
• Most DBMS functionality is replicated on client(s) and server(s)
• Advantages:
  - server and client can run methods; the workload can be balanced
  - simplifies concurrency control design (centralized in the server)
  - implementation of object-level locking
  - low cost for enforcing “constraints” on objects
• Disadvantages/problems:
  - remote procedure calls (RPC) for object references
  - complex server design
  - client cache consistency problems
  - page-level locking, objects get copied multiple times, large objects

©2002 Denise Ecklund, Vera Goebel DDBS-20

Page Server Architecture

[Figure: client process: Application, Object Manager (Object Cache), File/Index Manager, Page Cache Manager (Page Cache). Server process: Page Cache Manager (Page Cache), Storage Allocation and I/O, and a Log/Lock Manager over the database. Client and server exchange pages and page references, plus locks and log records.]


©2002 Denise Ecklund, Vera Goebel DDBS-21

Page Server Architecture - summary
• Unit of transfer: page(s)
• Server deals only with pages (understands no object semantics)
• Server functionality: storage/retrieval of page(s), concurrency control, recovery
• Advantages:
  - most DBMS functionality is at the client
  - page transfer -> server overhead minimized
  - more clients can be supported
  - object clustering improves performance considerably
• Disadvantages/problems:
  - method execution only on clients
  - object-level locking
  - object clustering
  - large client buffer pool required

©2002 Denise Ecklund, Vera Goebel DDBS-22

File Server Architecture

[Figure: client process: Application, Object Manager (Object Cache), File/Index Manager, Page Cache Manager (Page Cache). Pages and page references flow between client and database via NFS, bypassing the server; locks and log records still go to the server process, which runs the Log/Lock Manager and Space Allocation over the database.]


©2002 Denise Ecklund, Vera Goebel DDBS-23

File Server Architecture - summary
• Unit of transfer: page(s)
• Simplification of the page server: clients use a remote file system (e.g., NFS) to read and write DB page(s) directly
• Server functionality: I/O handling, concurrency control, recovery
• Advantages:
  - same as for the page server
  - NFS: user-level context switches can be avoided
  - NFS is widely used -> stable SW, will be improved
• Disadvantages/problems:
  - same as for the page server
  - NFS writes are slow
  - read operations bypass the server -> no request combination
  - object clustering
  - coordination of disk space allocation

©2002 Denise Ecklund, Vera Goebel DDBS-24

Cache Consistency in Client/Server Architectures

Detection-Based Algorithms:
– Synchronous: client sends an object status query to the server on each access; client waits; server replies.
– Asynchronous: client sends 1 msg per lock to the server; client continues; after commit, the server sends updates to all cached copies.
– Deferred: client sends all write lock requests to the server at commit time; client waits; server replies based on W-W conflicts only.

Avoidance-Based Algorithms:
– Synchronous: client sends 1 msg per lock to the server; client waits; server replies with ACK or NACK.
– Asynchronous: client sends 1 msg per lock to the server; client continues; server invalidates the cached copies at the other clients.
– Deferred: client sends all write lock requests to the server at commit time; client waits; server replies when all cached copies are freed.

[The slide flags the best-performing algorithms among these six.]


©2002 Denise Ecklund, Vera Goebel DDBS-25

Comparison of the 3 Client/Server Architectures

Page & File Server:
• Simple server design
• Complex client design
• Fine-grained concurrency control difficult
• Very sensitive to client buffer pool size and clustering

Object Server:
• Complex server design
• “Relatively” simple client design
• Fine-grained concurrency control
• Reduces data movement; relatively insensitive to clustering
• Sensitive to client buffer pool size

Conclusions:
• No clear winner
• Depends on object size and the application’s object access pattern
• File server ruled out by poor NFS performance

©2002 Denise Ecklund, Vera Goebel DDBS-26

Problems in Distributed DBMS Services
• Distributed Database Design
• Distributed Directory/Catalogue Mgmt
• Distributed Query Processing and Optimization
• Distributed Transaction Mgmt
  – Distributed Concurrency Control
  – Distributed Deadlock Mgmt
  – Distributed Recovery Mgmt

[Figure: “influences” graph: distributed DB design influences query processing and directory management; transaction management ties together concurrency control (lock), deadlock management, and reliability (log).]


©2002 Denise Ecklund, Vera Goebel DDBS-27

Distributed Storage in Relational DBMSs
• horizontal fragmentation: distribution of “rows”, selection
• vertical fragmentation: distribution of “columns”, projection
• hybrid fragmentation: “projected columns” from “selected rows”
• allocation: which fragment is assigned to which node?
• replication: multiple copies at different nodes; how many copies?
• Evaluation criteria:
  – Cost metrics for: network traffic, query processing, transaction mgmt
  – A system-wide goal: maximize throughput or minimize latency
• Design factors:
  – Most frequent query access patterns
  – Available distributed query processing algorithms
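To make the three fragmentation styles concrete, here is a minimal Python sketch (not from the slides; the toy `employees` table, the predicates, and the site names are invented for illustration):

```python
# Toy relation held as a list of dicts; a real DDBMS fragments stored tables.
employees = [
    {"id": 1, "name": "Ada",   "dept": "R&D",   "salary": 90},
    {"id": 2, "name": "Grace", "dept": "Sales", "salary": 70},
]

def horizontal(rows, pred):
    """Horizontal fragmentation = selection: whole rows that match."""
    return [r for r in rows if pred(r)]

def vertical(rows, cols):
    """Vertical fragmentation = projection: chosen columns of all rows.
    The key ('id') is kept in every fragment so rows can be rejoined."""
    return [{c: r[c] for c in ("id", *cols)} for r in rows]

def hybrid(rows, pred, cols):
    """Hybrid fragmentation: projected columns from selected rows."""
    return vertical(horizontal(rows, pred), cols)

site1 = horizontal(employees, lambda r: r["dept"] == "R&D")      # rows for one node
site2 = vertical(employees, ["name"])                            # columns for another
site3 = hybrid(employees, lambda r: r["salary"] > 80, ["salary"])
print(site1, site2, site3)
```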

©2002 Denise Ecklund, Vera Goebel DDBS-28

Distributed Storage in OODBMSs
• Must fragment, allocate, and replicate object data among nodes
• Complicating factors:
  – Encapsulation hides object structure
  – Object methods (bound to the class, not the instance)
  – Users dynamically create new classes
  – Complex objects
  – Effects on the garbage collection algorithm and its performance
• Approaches:
  – Store objects in relations and use RDBMS strategies to distribute data
    • All objects are stored in binary relations [OID, attr-value]
    • One relation per class attribute
    • Use nested relations to store complex (multi-class) objects
  – Use object semantics
• Evaluation criteria:
  – Affinity metric (maximize)
  – Cost metric (minimize)


©2002 Denise Ecklund, Vera Goebel DDBS-29

Horizontal Partitioning using Object Semantics

• Divide the instances of a class into multiple groups
  – Based on selected attribute values
    [Figure: instances of Class C with Attr A < 100 on Node 1, with Attr A >= 100 on Node 2]
  – Based on subclass designation
    [Figure: instances of Subclass X on Node 1, instances of Subclass Y on Node 2]
• Fragment an object’s complex attributes from its simple attributes
• Fragment an object’s attributes based on the typical method invocation sequence
  – Keep sequentially referenced attributes together

©2002 Denise Ecklund, Vera Goebel DDBS-30

Vertical Partitioning using Object Semantics

• Fragment object attributes based on the class hierarchy
  [Figure: instances of Class X, of Subclass subX, and of Subclass subsubX map to Fragment #1, Fragment #2, and Fragment #3, respectively.]

Breaks object encapsulation?


©2002 Denise Ecklund, Vera Goebel DDBS-31

Path Partitioning using Object Semantics

• Use the object composition graph for complex objects
• A path partition is the set of objects corresponding to the instance variables in the subtree rooted at the composite object
  – Typically represented in a “structure index”:
    Composite Object OID -> {SubC#1-OID, SubC#2-OID, … SubC#n-OID}
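A minimal Python sketch of such a structure index (the composition graph and the OIDs below are invented for the example):

```python
composition = {            # OID -> OIDs of directly referenced sub-objects
    "car1": ["engine1", "body1"],
    "engine1": ["piston1", "piston2"],
    "body1": [], "piston1": [], "piston2": [],
}

def path_partition(root, graph):
    """Collect every object reachable from `root` (the subtree's objects)."""
    seen, stack = set(), [root]
    while stack:
        oid = stack.pop()
        if oid not in seen:
            seen.add(oid)
            stack.extend(graph.get(oid, []))
    return seen

# Structure index entry: composite OID -> set of sub-object OIDs.
structure_index = {"car1": path_partition("car1", composition) - {"car1"}}
print(structure_index)   # e.g. {'car1': {'engine1', 'body1', 'piston1', 'piston2'}}
```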

©2002 Denise Ecklund, Vera Goebel DDBS-32

More Issues in OODBMS Fragmentation

• Replication options:
  – Objects
  – Classes (collections of objects)
  – Methods
  – Class/Type specifications

Method execution vs. data placement:
– Local object data, local methods: good performance. Issue: replicate methods so they are local to all instances?
– Local object data, remote methods: send the data to the remote site, execute, and return the result, OR fetch the method and execute locally. Issues: time/cost for two transfers? Ability to execute locally?
– Remote object data, local methods: fetch the remote data and execute the methods. Issue: time/cost for the data transfer?
– Remote object data, remote methods: send the additional input values via RPC, execute remotely, and return the result. Issues: RPC time? Execution load on the remote node?


©2002 Denise Ecklund, Vera Goebel DDBS-33

Distributed Directory and Catalogue Management

• Directory information:
  – Description and location of records/objects
    • Size, special data properties (e.g., executable, DB type, user-defined type, etc.)
    • Fragmentation scheme
  – Definitions for views, integrity constraints
• Options for organizing the directory:
  – Centralized (issues: bottleneck, unreliable)
  – Fully replicated (issues: consistency, storage overhead)
  – Partitioned (issues: complicated access protocol, consistency)
  – Combination of partitioned and replicated, e.g., zoned, replicated zoned

©2002 Denise Ecklund, Vera Goebel DDBS-34

Distributed Query Processing and Optimization
• Construction and execution of query plans, query optimization
• Goals: maximize parallelism (response time optimization);
         minimize network data transfer (throughput optimization)
• Basic approaches to distributed query processing:

Pipelining (functional decomposition):
  Node 1 scans relation A and applies Select A.x > 100, streaming the qualifying tuples to Node 2, which joins them with relation B on y.
  [Figure: the processing timelines of Node 1 and Node 2 overlap.]

Parallelism (data decomposition):
  Relation A is fragmented into Frag A.1 on Node 1 and Frag A.2 on Node 2; both apply Select A.x > 100 in parallel, and Node 3 unions the results.
  [Figure: processing timelines for Node 1 and Node 2 (in parallel), then Node 3 (union).]
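A toy Python sketch of the two decomposition styles (in-memory lists stand in for relations A and B; a real DDBMS streams tuples between nodes over the network):

```python
A = [{"x": 150, "y": 1}, {"x": 50, "y": 2}, {"x": 200, "y": 2}]
B = [{"y": 1, "z": "a"}, {"y": 2, "z": "b"}]

# Pipelining (functional decomposition): "node 1" selects, "node 2" joins
# the stream of selected tuples as they arrive.
def select_stream(rows):
    for r in rows:
        if r["x"] > 100:
            yield r

def join_stream(stream, inner):
    for a in stream:
        for b in inner:
            if a["y"] == b["y"]:
                yield {**a, **b}

pipelined = list(join_stream(select_stream(A), B))

# Parallelism (data decomposition): A is fragmented; each node runs the same
# select on its own fragment; a union merges the partial results.
frag1, frag2 = A[:2], A[2:]
parallel = [r for r in frag1 if r["x"] > 100] + [r for r in frag2 if r["x"] > 100]
print(pipelined, parallel)
```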


©2002 Denise Ecklund, Vera Goebel DDBS-35

Creating the Distributed Query Processing Plan

• Factors to be considered:
  - distribution of data
  - communication costs
  - lack of sufficient locally available information
• 4 processing steps:
  (1) query decomposition  \
  (2) data localization     | control site (uses global information)
  (3) global optimization  /
  (4) local optimization      local sites (use local information)

©2002 Denise Ecklund, Vera Goebel DDBS-36

Generic Layered Scheme for Planning a Distributed Query

At the control site:
  calculus query on distributed objects
    -> query decomposition (uses the global schema)
  algebraic query on distributed objects
    -> data localization (uses the fragment schema)
  fragment query
    -> global optimization (uses statistics on fragments)
  optimized fragment query with communication operations

At the local sites:
    -> local optimization (uses the local schema)
  optimized local queries

This approach was initially designed for distributed relational DBMSs.
It also applies to distributed OODBMSs, Multi-DBMSs, and distributed Multi-DBMSs.


©2002 Denise Ecklund, Vera Goebel DDBS-37

Distributed Query Processing Plans in OODBMSs

• Two forms of queries (both reduce to logical algebraic expressions):
  – Explicit queries written in OQL
  – Object navigation/traversal
• Planning and optimizing:
  – Additional rewriting rules for sets of objects
    • Union, Intersection, and Selection
  – Cost function
    • Should consider object size, structure, location, indexes, etc.
    • Breaks object encapsulation to obtain this info?
    • Objects can “reflect” their access cost estimate
  – Volatile object databases can invalidate optimizations
    • Dynamic plan selection (compile N plans; select 1 at runtime)
    • Periodic replan during long-running queries

©2002 Denise Ecklund, Vera Goebel DDBS-38

Distributed Query Optimization
• Information needed for optimization (fragment statistics):
  - size of objects, image sizes of attributes
  - transfer costs
  - workload among nodes
  - physical data layout
  - access paths, indexes, clustering information
  - properties of the result (objects); formulas for estimating the cardinalities of operation results
• Execution cost is expressed as a weighted combination of I/O, CPU, and communication costs (the latter mostly dominant):

  total-cost = C_CPU * #insts + C_I/O * #I/Os + C_MSG * #msgs + C_TR * #bytes

Optimization goals: response time of a single transaction, or system throughput
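The cost formula can be applied directly; a small Python sketch follows (all weights and counts are made-up numbers, purely to show how two candidate plans would be compared):

```python
def total_cost(c_cpu, n_insts, c_io, n_ios, c_msg, n_msgs, c_tr, n_bytes):
    """The slide's weighted cost: CPU + I/O + messages + bytes transferred."""
    return c_cpu * n_insts + c_io * n_ios + c_msg * n_msgs + c_tr * n_bytes

# Two hypothetical plans: A ships more bytes, B does more local I/O.
plan_a = total_cost(1e-6, 5e5, 0.01, 40,  0.1, 8, 1e-6, 2_000_000)
plan_b = total_cost(1e-6, 5e5, 0.01, 120, 0.1, 4, 1e-6, 200_000)
print("choose", "A" if plan_a < plan_b else "B", plan_a, plan_b)
```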


©2002 Denise Ecklund, Vera Goebel DDBS-39

Distributed Transaction Management

• Transaction Management (TM) in a centralized DBS
  – achieves the transaction ACID properties by using:
    • concurrency control (CC)
    • recovery (logging)
• TM in a DDBS
  – achieves the transaction ACID properties by using:
    • Concurrency Control Protocol (Isolation)
    • Commit/Abort Protocol (Atomicity)
    • Recovery Protocol (Durability)
    • Replica Control Protocol (Mutual Consistency)

[Figure: the Transaction Manager drives these four protocols on top of the Log Manager and the Buffer Manager; strong algorithmic dependencies exist among them.]

©2002 Denise Ecklund, Vera Goebel DDBS-40

Classification of Concurrency Control Approaches (Isolation)

CC approaches:
• Pessimistic
  – Locking: Centralized, Primary Copy, Distributed
  – Timestamp Ordering: Basic, Multi-Version, Conservative
  – Hybrid
• Optimistic
  – Locking
  – Timestamp Ordering

Phases of pessimistic transaction execution:  validate, read, compute, write
Phases of optimistic transaction execution:   read, compute, validate, write


©2002 Denise Ecklund, Vera Goebel DDBS-41

Two-Phase-Locking Approach (2PL) (Isolation)

[Figure: 2PL lock graph: over the transaction duration from BEGIN to END, the number of locks held grows (obtain lock) until the LOCK POINT and then shrinks (release lock); data items are used between acquisition and release.]

[Figure: strict 2PL lock graph: locks are obtained during the period of data item use, and the number of locks held drops to zero only at END, where all locks are released.]
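A minimal Python sketch of the two-phase rule (a toy, single-process lock object; the class and names are invented, and strict 2PL is modeled by releasing everything only at commit):

```python
class Transaction2PL:
    def __init__(self, strict=True):
        self.locks = set()
        self.shrinking = False      # becomes True at the first release
        self.strict = strict

    def lock(self, item):
        # 2PL rule: no lock may be acquired after any lock was released.
        assert not self.shrinking, "2PL violated: lock after unlock"
        self.locks.add(item)        # growing phase

    def unlock(self, item):
        assert not self.strict, "strict 2PL releases locks only at commit"
        self.shrinking = True       # the growing phase is over
        self.locks.discard(item)

    def commit(self):
        self.shrinking = True
        self.locks.clear()          # strict 2PL: release everything here

t = Transaction2PL(strict=True)
t.lock("a"); t.lock("b")            # the number of locks grows ...
t.commit()                          # ... and drops to zero only at END
```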

©2002 Denise Ecklund, Vera Goebel DDBS-42

Communication Structure of Centralized 2PL (Isolation)

Messages flow among the Coordinating TM, the Central Site LM, and the Data Processors at the participating sites:
  (1) Lock Request:      Coordinating TM -> Central Site LM
  (2) Lock Granted:      Central Site LM -> Coordinating TM
  (3) Operation:         Coordinating TM -> Data Processors
  (4) End of Operation:  Data Processors -> Coordinating TM
  (5) Release Locks:     Coordinating TM -> Central Site LM


©2002 Denise Ecklund, Vera Goebel DDBS-43

Communication Structure of Distributed 2PL (Isolation)

Messages flow among the Coordinating TM, the Participating LMs, and the Data Processors at the participating sites:
  (1) Lock Request:      Coordinating TM -> Participating LMs
  (2) Operation:         Participating LMs -> Data Processors
  (3) End of Operation:  Data Processors -> Coordinating TM
  (4) Release Locks:     Coordinating TM -> Participating LMs

©2002 Denise Ecklund, Vera Goebel DDBS-44

Distributed Deadlock Management (Isolation)

Example:
  Site X: T1_x holds lock Lc; T2_x waits for T1_x to release Lc. T1_x needs item a, so it waits for T1 on site Y to complete.
  Site Y: T2_y holds lock Ld; T1_y waits for T2_y to release Ld. T2_y needs item b, so it waits for T2 on site X to complete.
  (Locks La and Lb are also held at the two sites.) Neither local waits-for graph has a cycle, but the global one does: T2_x -> T1_x -> T1_y -> T2_y -> T2_x.

Distributed waits-for graph:
- requires many messages to update the lock status
- combine the “wait-for” tracing message with the lock status update messages
- still an expensive algorithm
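A small Python sketch of global deadlock detection on the unioned waits-for graph (the edges reproduce the example above at transaction granularity; site and transaction names are illustrative):

```python
from collections import defaultdict

local_edges = {
    "siteX": [("T2", "T1")],   # on X, T2 waits for T1 to release Lc
    "siteY": [("T1", "T2")],   # on Y, T1 waits for T2 to release Ld
}

def has_cycle(edges):
    """DFS cycle detection over waiter -> holder edges."""
    graph = defaultdict(list)
    for waiter, holder in edges:
        graph[waiter].append(holder)
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(n):
        color[n] = GREY
        for m in graph[n]:
            if color[m] == GREY or (color[m] == WHITE and visit(m)):
                return True               # back edge found: a cycle
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

global_wfg = [e for edges in local_edges.values() for e in edges]
print("deadlock!" if has_cycle(global_wfg) else "no deadlock")
```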


©2002 Denise Ecklund, Vera Goebel DDBS-45

Communication Structure of the Centralized 2-Phase Commit Protocol (Atomicity)

Coordinating TM <-> Participating Sites:
  (1) Prepare (TM -> participants)
  (2) Vote-Commit or Vote-Abort (participants -> TM);
      each participant makes a local commit/abort decision and writes a log entry
  (3) Global-Abort or Global-Commit (TM -> participants);
      the TM counts the votes: if any vote is missing or any vote is Vote-Abort,
      then Global-Abort, else Global-Commit
  (4) ACK (participants -> TM); each participant writes a log entry

Who participates? Depends on the CC algorithm.
Other communication structures are possible: linear, distributed.
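The coordinator's decision rule in step 3 is easy to state in code; a minimal Python sketch (site names are invented; timeouts, logging, and the ACK round are elided):

```python
def global_decision(participants, votes):
    """votes: dict site -> 'commit' | 'abort'; a missing key = no vote."""
    # Abort on any missing vote or any Vote-Abort, otherwise commit.
    if any(votes.get(p) != "commit" for p in participants):
        return "Global-Abort"
    return "Global-Commit"

print(global_decision(["s1", "s2", "s3"],
                      {"s1": "commit", "s2": "commit", "s3": "commit"}))
print(global_decision(["s1", "s2", "s3"],
                      {"s1": "commit", "s2": "abort"}))   # s3's vote is missing too
```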

©2002 Denise Ecklund, Vera Goebel DDBS-46

State Transitions for the 2P Commit Protocol (Atomicity)

[Figure: coordinator states: Initial -> (send Prepare) -> Wait -> Commit or Abort; participant states: Initial -> (send Vote-Commit or Vote-Abort) -> Ready -> Commit or Abort; driven by the messages (1) Prepare, (2) Vote-Commit/Vote-Abort, (3) Global-Abort/Global-Commit, (4) ACK.]

Advantages:
- Preserves atomicity
- All processes are “synchronous within one state transition”

Disadvantages:
- Many, many messages
- If a failure occurs, the 2PC protocol blocks!

Attempted solutions for the blocking problem:
  1) Termination Protocol
  2) 3-Phase Commit
  3) Quorum 3P Commit


©2002 Denise Ecklund, Vera Goebel DDBS-47

Failures in a Distributed System

Types of failure:
- Transaction failure
- Node failure
- Media failure
- Network failure
  • partitions, each containing 1 or more sites

Who addresses the problem?
- Termination protocols
- Modified concurrency control & commit/abort protocols
- Recovery protocols, termination protocols, & replica control protocols

Issues to be addressed:
- How to continue service
- How to maintain ACID properties while providing continued service
- How to ensure ACID properties after recovery from the failure(s)

©2002 Denise Ecklund, Vera Goebel DDBS-48

Termination Protocols (Atomicity, Consistency)

[Figure: the 2PC state diagram again: coordinator Initial -> Wait -> Commit or Abort; participant Initial -> Ready -> Commit or Abort, driven by the messages (1) Prepare, (2) Vote-Commit/Vote-Abort, (3) Global-Abort/Global-Commit, (4) ACK.]

Use timeouts to detect potential failures that could block protocol progress.

Timeout states:
- Coordinator: wait, commit, abort
- Participant: initial, ready

Coordinator termination protocol:
- Wait: send Global-Abort
- Commit or Abort: BLOCKED!

Participant termination protocol:
- Ready: query the coordinator;
  if timeout, then query the other participants;
  if Global-Abort | Global-Commit, then proceed and terminate;
  else BLOCKED!


©2002 Denise Ecklund, Vera Goebel DDBS-49

Replica Control Protocols (Consistency)

• Update propagation of committed write operations

[Figure: a strict 2PL lock graph from BEGIN to END: locks are obtained during the period of data use, 2-Phase Commit runs at the COMMIT POINT, and the locks are released afterwards. STRICT UPDATE propagates the new values before the commit point; LAZY UPDATE propagates them after it.]

©2002 Denise Ecklund, Vera Goebel DDBS-50

Strict Replica Control Protocol (Consistency)

• Read-One-Write-All (ROWA)
• Part of the Concurrency Control Protocol and the 2-Phase Commit Protocol:
  – CC locks all copies
  – 2PC propagates the updated values with the 2PC messages (or an update propagation phase is inserted between the wait and commit states for those nodes holding an updateable value)
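A minimal Python sketch of ROWA reads and writes (in-memory dicts stand in for replica nodes; the locking of all copies and the 2PC propagation are elided, and all names are invented):

```python
replicas = {"x": ["n1", "n2", "n3"]}                    # item -> nodes holding a copy
stores = {"n1": {"x": 0}, "n2": {"x": 0}, "n3": {"x": 0}}

def read_one(item):
    node = replicas[item][0]        # any single copy suffices for a read
    return stores[node][item]

def write_all(item, value):
    # A write must reach every copy (locks and 2PC messages elided here).
    for node in replicas[item]:
        stores[node][item] = value

write_all("x", 42)
print(read_one("x"))                # 42; every replica now agrees
```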


©2002 Denise Ecklund, Vera Goebel DDBS-51

Lazy Replica Control Protocol (Consistency)

• Propagates updates from a primary node
• The Concurrency Control algorithm locks the primary copy node (the same node as the primary lock node)
• To preserve single-copy semantics, it must be ensured that a transaction reads a current copy:
  – Changes the CC algorithm for read-locks
  – Adds an extra communication cost for reading data
• Extended transaction models may not require single-copy semantics

©2002 Denise Ecklund, Vera Goebel DDBS-52

Recovery in Distributed Systems (Atomicity, Durability)

Select COMMIT or ABORT (or blocked) for each interrupted subtransaction.

Implementation requires knowledge of:
- Buffer manager algorithms for writing updated data from volatile storage buffers to persistent storage
- the Concurrency Control algorithm
- the Commit/Abort protocols
- the Replica Control protocol

Abort approaches:
- Undo: use the undo/redo log to back out all the writes that were actually performed
- Compensation: use the transaction log to select and execute “reverse” subtransactions that semantically undo the write operations

Commit approaches:
- Redo: use the undo/redo log to perform all the write operations again
- Retry: use the transaction log to redo the entire subtransaction (R + W)
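A toy Python sketch of the Undo and Redo approaches over an in-memory log of before/after images (the transaction, items, and values are invented for illustration):

```python
db = {"a": 1, "b": 2}
log = [  # (transaction, item, before_image, after_image)
    ("T1", "a", 1, 10),
    ("T1", "b", 2, 20),
]

def undo(tid):
    # Back out the writes: apply before-images in reverse log order.
    for _, item, before, _ in reversed([r for r in log if r[0] == tid]):
        db[item] = before

def redo(tid):
    # Perform the writes again: apply after-images in log order.
    for _, item, _, after in (r for r in log if r[0] == tid):
        db[item] = after

redo("T1"); print(db)   # {'a': 10, 'b': 20}
undo("T1"); print(db)   # {'a': 1, 'b': 2}
```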


©2002 Denise Ecklund, Vera Goebel DDBS-53

Network Partitions in Distributed Systems

Issues:
- Termination of interrupted transactions
- Partition integration upon recovery from a network failure
- Data availability while the failure is ongoing

[Figure: nodes N1 … N8 on a network, split into three partitions (Partition #1, Partition #2, Partition #3).]

©2002 Denise Ecklund, Vera Goebel DDBS-54

Data Availability in Partitioned Networks

• The Concurrency Control model impacts data availability.
• ROWA: data replicated in multiple partitions is not available for reading or writing.
• Primary Copy Node CC: transactions can execute only if the primary copy nodes for all of the read-set and all of the write-set are in the client’s partition.

Availability is still very limited . . . We need a new idea!


©2002 Denise Ecklund, Vera Goebel DDBS-55

Quorums (ACID)

• Quorum: a special type of majority
• Use quorums in the Concurrency Control, Commit/Abort, Termination, and Recovery Protocols
  – CC uses a read-quorum & a write-quorum
  – C/A, Termination, & Recovery use a commit-quorum & an abort-quorum
• Advantages:
  – More transactions can be executed during site failures and network failures (and still retain the ACID properties)
• Disadvantages:
  – Many messages are required to establish a quorum
  – The necessity for a read-quorum slows down read operations
  – Not quite sufficient (failures are not “clean”)

©2002 Denise Ecklund, Vera Goebel DDBS-56

Read-Quorums and Write-Quorums (Isolation)

• The Concurrency Control algorithm serializes the valid transactions in a partition. It must obtain:
  – a read-quorum for each read operation
  – a write-quorum for each write operation
• Let N = the total number of nodes in the system.
• Define the size of the read-quorum, Nr, and the size of the write-quorum, Nw, as follows:
  – Nr + Nw > N
  – Nw > N/2
• When failures occur, it is possible to have a valid read-quorum and no valid write-quorum.

Simple example: N = 8, Nr = 4, Nw = 5
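The quorum conditions check directly in code; a minimal Python sketch (the second call is an invented counter-example):

```python
def valid_rw_quorums(n, nr, nw):
    """Nr + Nw > N: every read quorum overlaps every write quorum.
    Nw > N/2: no two write quorums can exist concurrently."""
    return nr + nw > n and nw > n / 2

print(valid_rw_quorums(8, 4, 5))   # True  -- the slide's example
print(valid_rw_quorums(8, 3, 5))   # False -- a read could miss the last write
```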


©2002 Denise Ecklund, Vera Goebel DDBS-57

Commit-Quorums and Abort-Quorums (ACID)

• The Commit/Abort Protocol requires votes from all participants to commit or abort a transaction:
  – Commit a transaction if the accessible nodes can form a commit-quorum
  – Abort a transaction if the accessible nodes can form an abort-quorum
  – Elect a new coordinator node (if necessary)
  – Try to form a commit-quorum before attempting to form an abort-quorum
• Let N = the total number of nodes in the system.
• Define the size of the commit-quorum, Nc, and the size of the abort-quorum, Na, as follows:
  – Na + Nc > N;  0 ≤ Na, Nc ≤ N

Simple examples: N = 7, Nc = 4, Na = 4; or N = 7, Nc = 5, Na = 3
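And the matching check for commit/abort quorums, again as a minimal Python sketch using the slide's two examples:

```python
def valid_ca_quorums(n, nc, na):
    """Na + Nc > N: a commit-quorum and an abort-quorum always intersect,
    so the accessible nodes can never decide both ways."""
    return na + nc > n and 0 <= na <= n and 0 <= nc <= n

print(valid_ca_quorums(7, 4, 4))   # True -- slide example 1
print(valid_ca_quorums(7, 5, 3))   # True -- slide example 2
```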

©2002 Denise Ecklund, Vera Goebel DDBS-58

Conclusions
• Nearly all commercial relational database systems offer some form of distribution
  – Client/server at a minimum
• Only a few commercial object-oriented database systems support distribution beyond N clients and 1 server
• Future research directions:
  – Distributed data architecture and placement schemes (application-influenced)
  – Distributed databases as part of Internet applications
  – Continued service during disconnection or failure
    • Mobile systems accessing databases
    • Relaxed transaction models for semantically correct continued operation