
Page 1:

A Tailorable Environment for Assessing the Quality of Deployment Architectures in Highly Distributed Settings

Sam Malek, Marija Mikic-Rakic, Nels Beckman, Nenad Medvidovic

CD 2004

University of Southern California

Page 2:

Outline
- Motivation
- Background
  - Problem description
  - Algorithms
- DeSi
- Under the hood
  - Architecture
  - Implementation
- Future work and concluding remarks

Page 3:

Motivation

Deployment architecture: distribution (i.e., assignment) of software components onto hardware nodes.

How good is this deployment architecture?

What are its properties?

How should it be modified to ensure higher availability?

Page 4:

Outline
- Motivation
- Background
  - Problem description
  - Algorithms
- DeSi
- Under the hood
  - Architecture
  - Implementation
- Future work and concluding remarks

Page 5:

System model

Given:

(1) A set C of n components (|C| = n), two relations

    freq : C \times C \rightarrow \mathbb{R}, \quad evt\_size : C \times C \rightarrow \mathbb{R}

and a function mem\_comp : C \rightarrow \mathbb{R}, where

    freq(c_i, c_j) =
      \begin{cases}
        0 & \text{if } c_i = c_j \\
        \text{frequency of communication between } c_i \text{ and } c_j & \text{if } c_i \neq c_j
      \end{cases}

    evt\_size(c_i, c_j) =
      \begin{cases}
        0 & \text{if } c_i = c_j \\
        \text{average size of the data } c_i \text{ and } c_j \text{ exchange} & \text{if } c_i \neq c_j
      \end{cases}

    mem\_comp(c) = \text{memory required for } c

(2) A set H of k hardware nodes (|H| = k), two relations

    rel : H \times H \rightarrow [0, 1], \quad bw : H \times H \rightarrow \mathbb{R}

and a function mem\_host : H \rightarrow \mathbb{R}, where

    rel(h_i, h_j) =
      \begin{cases}
        1 & \text{if } h_i = h_j \\
        0 & \text{if } h_i \text{ is not connected to } h_j \\
        \text{reliability of the link between } h_i \text{ and } h_j & \text{if } h_i \neq h_j
      \end{cases}

    bw(h_i, h_j) =
      \begin{cases}
        0 & \text{if } h_i \text{ is not connected to } h_j \\
        \text{bandwidth of the link between } h_i \text{ and } h_j & \text{if } h_i \neq h_j
      \end{cases}

    mem\_host(h) = \text{available memory on host } h

(3) Two relations that restrict the locations of software components

    loc : C \times H \rightarrow \{0, 1\}, \quad colloc : C \times C \rightarrow \{-1, 0, 1\}

where

    loc(c_i, h_j) =
      \begin{cases}
        1 & \text{if } c_i \text{ can be deployed onto } h_j \\
        0 & \text{if } c_i \text{ cannot be deployed onto } h_j
      \end{cases}

    colloc(c_i, c_j) =
      \begin{cases}
        -1 & \text{if } c_i \text{ cannot be on the same host as } c_j \\
        1 & \text{if } c_i \text{ has to be on the same host as } c_j \\
        0 & \text{if there are no restrictions on the collocation of } c_i \text{ and } c_j
      \end{cases}
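As a minimal illustration only (the class and field names below are my own, not DeSi's API), the model above maps naturally onto plain Java arrays; the later sketches in this transcript reuse this class:

// Sketch of the system model as plain Java data; names are illustrative, not DeSi's API.
public class DeploymentModel {
    public int n;               // number of components
    public int k;               // number of hosts
    public double[][] freq;     // freq[i][j]: communication frequency between c_i and c_j (0 on the diagonal)
    public double[][] evtSize;  // evtSize[i][j]: average size of data exchanged between c_i and c_j
    public double[] memComp;    // memComp[i]: memory required by c_i
    public double[][] rel;      // rel[i][j]: link reliability (1 on the diagonal, 0 if unconnected)
    public double[][] bw;       // bw[i][j]: link bandwidth (0 if unconnected)
    public double[] memHost;    // memHost[i]: available memory on h_i
    public boolean[][] loc;     // loc[i][j]: true if c_i may be deployed onto h_j
    public int[][] colloc;      // colloc[i][j] in {-1, 0, 1}: collocation constraint between c_i and c_j
}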

Page 6:

Problem – formal definition

Find a function f : C \rightarrow H such that the system's overall availability A, defined as

    A = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} freq(c_i, c_j) \cdot rel(f(c_i), f(c_j))}
             {\sum_{i=1}^{n} \sum_{j=1}^{n} freq(c_i, c_j)}

is maximized, and the following four conditions are satisfied:

(1) Memory: \forall i \in [1, k] : \sum_{j \in [1, n],\, f(c_j) = h_i} mem\_comp(c_j) \leq mem\_host(h_i)

(2) Location: \forall j \in [1, n] : loc(c_j, f(c_j)) = 1

(3) Collocation: \forall a \in [1, n], \forall b \in [1, n] :
    (colloc(c_a, c_b) = 1) \Rightarrow (f(c_a) = f(c_b)) \quad \text{and} \quad (colloc(c_a, c_b) = -1) \Rightarrow (f(c_a) \neq f(c_b))

(4) Bandwidth: \forall l \in [1, k], \forall m \in [1, k] :
    \sum_{i, j \in [1, n],\, f(c_i) = h_l,\, f(c_j) = h_m} data\_vol(c_i, c_j) \leq effective\_bw(h_l, h_m)

where

    data\_vol(c_x, c_y) = freq(c_x, c_y) \cdot evt\_size(c_x, c_y)
    effective\_bw(h_x, h_y) = rel(h_x, h_y) \cdot bw(h_x, h_y)

Note that the number of possible functions f is k^n.
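To make the availability measure concrete, here is a small worked example with invented numbers (not from the slides): two components with freq(c_1, c_2) = freq(c_2, c_1) = 10 are deployed on two hosts whose link has reliability 0.9. Since freq is zero on the diagonal, only the cross terms contribute:

    A = \frac{10 \cdot 0.9 + 10 \cdot 0.9}{10 + 10} = 0.9

Collocating the two components on one host (rel(h, h) = 1) would raise A to 1, which is why the algorithms tend to place frequently interacting components on the same or reliably connected hosts.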

Page 7:

Problem breakdown

1) Lack of knowledge about runtime system parameters
   - System model parameters are not known at the time of initial deployment
   - System model parameters change at runtime (reliability of links, frequencies of interaction, etc.)
   - Middleware with monitoring support: Prism-MW

2) Exponentially complex problem
   - n components and k hosts = k^n possible deployments
   - Polynomial-time approximation algorithms

3) Environment for assessing deployments
   - Comparison of different solutions and algorithms: performance vs. complexity, sensitivity analysis, etc.
   - DeSi's analysis and visualization utilities

4) Effecting the selected solution
   - Redeploying components requires an automated solution
   - Middleware with deployment support: Prism-MW

Page 8:

Suite of algorithms
- Exact – finds the optimal solution, O(k^n)
- Biased/unbiased stochastic – random selection, O(n^2)
- Avala – greedy approximation, O(n^3)
- DecAp – decentralized, auction-based, O(n^3)
- Clustering – decreases complexity

[Charts: time taken (in ms, logarithmic scale up to 1,000,000) and achieved availability (0 to 1) for the Exact, Stochastic, Avala, and DecAp algorithms, compared against the original availability, on configurations ranging from 10 components / 4 hosts to 1000 components / 100 hosts.]
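The charts above compare the algorithms, but the slides do not show the algorithms themselves. The following is a minimal sketch of the unbiased stochastic approach under my own assumptions (it reuses the hypothetical DeploymentModel sketch from the system-model slide and, for brevity, ignores the memory, loc, colloc, and bandwidth constraints): generate random component-to-host assignments and keep the one with the highest availability A.

import java.util.Random;

// Sketch only, not DeSi's implementation: unbiased stochastic search over deployments.
public class StochasticDeployment {

    // Availability A = sum(freq[i][j] * rel[f(i)][f(j)]) / sum(freq[i][j]).
    static double availability(int[] f, DeploymentModel m) {
        double num = 0, den = 0;
        for (int i = 0; i < m.n; i++) {
            for (int j = 0; j < m.n; j++) {
                num += m.freq[i][j] * m.rel[f[i]][f[j]];
                den += m.freq[i][j];
            }
        }
        return den == 0 ? 0 : num / den;
    }

    // Try 'trials' random assignments of the n components to the k hosts; return the best one found.
    static int[] search(DeploymentModel m, int trials) {
        Random rnd = new Random();
        int[] best = null;
        double bestA = -1;
        for (int t = 0; t < trials; t++) {
            int[] f = new int[m.n];
            for (int i = 0; i < m.n; i++) {
                f[i] = rnd.nextInt(m.k);   // assign component i to a uniformly random host
            }
            double a = availability(f, m);
            if (a > bestA) { bestA = a; best = f; }
        }
        return best;
    }
}

With a fixed number of trials, each evaluation costs O(n^2), which matches the complexity the slide lists for the stochastic algorithms.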

Page 9:

DeSi – Deployment simulation environment
- Specification and generation of deployment architectures
- Visualization and analysis of distributed systems
- Estimation of the quality of a deployment
- Facilitation of rapid development and comparison of algorithms

Page 10:

Page 11:

Outline
- Motivation
- Background
  - Problem description
  - Algorithms
- DeSi
- Under the hood
  - Architecture
  - Implementation
- Future work and concluding remarks

Page 12:

DeSi's architecture

[Architecture diagram; arrows distinguish data flow from control flow.]
- DeSi Model: SystemData, AlgoResultData, GraphViewData
- DeSi View: TableView, GraphView
- DeSi Controller: Generator, Modifier, AlgorithmContainer, MiddlewareAdapter (Monitor, Effector)
- Middleware Platform

Page 13:

Prism middleware

[Class diagram of Prism-MW: Brick, Architecture (IArchitecture), Component (IComponent), Connector (IConnector), Port (IPort), Event (Serializable), Scaffold, AbstractDispatcher (RoundRobinDispatcher), AbstractScheduler (FifoScheduler), ExtensibleComponent with #mutualPort, AbstractMonitor (DisconnectionRate, EvtFrequency), AbstractAdmin (Admin, Deployer).]

- An architectural middleware
- Enables implementation and deployment of distributed systems in terms of their architectural elements
- Support for monitoring and redeployment

Page 14:

Integration with middleware

[Figure: DeSi (Model with SystemData, AlgoResultData, GraphViewData; View with TableView, GraphView; Controller with Generator, Modifier, AlgorithmContainer, and the MiddlewareAdapter's Monitor and Effector) connected to a distributed system running on the middleware. Each host's skeleton configuration contains an Admin component alongside the numbered application components, and one host additionally runs the Deployer. Monitoring data flows from the distributed system to DeSi, and redeployment data flows back. Legend: event frequency monitor, network reliability monitor, skeleton configuration, Deployer/Admin, pointer to architecture, component.]

Page 15:

DeSi's implementation
- Eclipse plug-in, GEF
- Tailorability: simple API, model specialization
- Scalability, efficiency, exploration capability
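The slides advertise tailorability through a "simple API" and an AlgorithmContainer in DeSi's architecture, but do not show that API. As a hedged illustration only (the interface name and signature are my own, not DeSi's), a pluggable redeployment algorithm could be captured like this, again reusing the hypothetical DeploymentModel sketch from earlier:

// Hypothetical plug-in point for redeployment algorithms; illustrative sketch, not DeSi's actual API.
public interface RedeploymentAlgorithm {
    // Given the current system model, return a new mapping: result[i] is the host index for component c_i.
    int[] computeDeployment(DeploymentModel model);
}

An AlgorithmContainer in this spirit could simply hold a list of RedeploymentAlgorithm implementations and run each against the same model to compare the availability and running time they achieve.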

Page 16:

Conclusion and future work
- Quality of deployment architecture
- An environment that assists with:
  - Development of redeployment algorithms
  - Analysis of real systems
  - Effecting the results
- Scalability, efficiency, tailorability, explorability
- Ongoing/future work:
  - Modeling other system properties
  - Creating new visualizations
  - Integrating DeSi with other platforms

Page 17:

Questions?

Page 18:

Approach - overview

Enabling the system to:
- Monitor its operation
- Estimate its new deployment architecture
- Effect the estimated architecture

[Chart: system availability over time, with availability levels A1-A4 and time intervals TM, TR, TE, TO (and their primed counterparts in a second cycle) marking the phases of the approach.]

Page 19:

Automatic algorithm selection

[Chart: availability over time, comparing a greedy algorithm (A_G, starting at T0_G) and the exact algorithm (A_E, starting at T0_E) against the current availability A_C; additional labels include A_S and T0_S.]

T_E * A_C + (T_AVG - T_E) * A_EXP
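The slide gives the selection expression without defining its symbols; one hedged reading (my interpretation, not stated on the slide) is a time-weighted availability estimate: while a candidate algorithm runs and its result is being effected (time T_E) the system stays at the current availability A_C, and for the remainder of the interval T_AVG it runs at the expected post-redeployment availability A_EXP:

    \text{score} = T_E \cdot A_C + (T_{AVG} - T_E) \cdot A_{EXP}

Under that reading, the algorithm with the highest score would be selected automatically.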

Page 20:

Architecture - DEMO

Using Prism-MW:

class DemoArch {
    static public void main(String argv[]) {
        Architecture arch = new Architecture("DEMO");

        // create components
        ComponentA a = new ComponentA("A");
        ComponentB b = new ComponentB("B");
        ComponentD d = new ComponentD("D");

        // create connectors
        Connector conn = new Connector("Conn");

        // add components and connectors
        arch.addComponent(a);
        arch.addComponent(b);
        arch.addComponent(d);
        arch.addConnector(conn);

        // establish the interconnections
        arch.weld(a, conn);
        arch.weld(b, conn);
        arch.weld(conn, d);
    }
}

[Diagram alongside the code: Component A, Component B, and Component D connected through Connector C.]

Page 21:

Architecture - DEMO

Using Prism-MW:

Component D sends an event:

Event e = new Event("Event_D");
e.addParameter("param_1", p1);
send(e);

Component B handles the event and sends a response:

public void handle(Event e) {
    if (e.equals("Event_D")) {
        ...
        Event e1 = new Event("Response_to_D");
        e1.addParameter("response", resp);
        send(e1);
    }
    ...
}

[Diagram: Component A, Component B, and Component D connected through Connector C; arrows mark send(e) from Component D and send(e1) from Component B.]