Server-Storage Virtualization: Integration and Load Balancing in Data Centers

Post on 15-Jan-2016




Server-Storage Virtualization: Integration and Load Balancing in Data Centers

Aameek Singh, Madhukar Korupolu (IBM Almaden)

Dushmanta Mohapatra (Georgia Tech)

Overview

• Motivation
  – Virtualization is common in datacenters
    • Both compute and storage
  – New degrees of freedom for load balancing
  – Integrating compute & storage mgmt is important
  – Multiple resource dimensions complicate the solution
  – Hierarchical data flows must be considered

Harmony

• A system for virtual server and storage monitoring and control
  – Monitoring & migration are off-the-shelf
• Employs VectorDot, a heuristic algorithm for load balancing systems with multidimensional and hierarchical constraints
  – Inspired by the Toyoda method for the multidimensional knapsack problem

Harmony overview

• Architecture (from the slide diagram): the managed servers and storage are overseen by Server Virtualization Mgmt and Storage Virtualization Mgmt components; a Configuration and Performance Manager feeds the Virtualization Orchestrator, whose Trigger Detection and Optimization Planning (VectorDot) stages produce migration recommendations that Harmony executes

Cluster testbed

Load balancing input

• Record system state: utilizations, capacities, and thresholds

• Any node with any utilization above its threshold is called a trigger

Node/Item      Resource type   Parameters
Server node    CPU             cpuU, cpuCap, cpuT
               Memory          memU, memCap, memT
               Net BW          netU, netCap, netT
               IO BW           ioU, ioCap, ioT
Storage node   Space           spaceU, spaceCap, spaceT
               IO Rate         ioU, ioCap, ioT
Switch node    IO Rate         ioU, ioCap, ioT
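The trigger test above amounts to a per-dimension threshold check; a minimal sketch (function and argument names are hypothetical):

```python
def is_trigger(utilizations, capacities, thresholds):
    """A node is a trigger if any resource's usage fraction exceeds its threshold."""
    return any(u / cap > t
               for u, cap, t in zip(utilizations, capacities, thresholds))

# Hypothetical server: CPU at 1.2/2.0 GHz, memory at 200/512 MB
print(is_trigger([1.2, 200], [2.0, 512], [0.5, 0.75]))   # True: CPU fraction 0.6 > 0.5
print(is_trigger([1.2, 200], [2.0, 512], [0.75, 0.75]))  # False: both fractions within threshold
```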

Multidimensionality and hierarchy constraints

• Vitems and nodes have multidimensional resources
  – E.g., a VM requires 100 MHz CPU, 50 MB RAM, 0.5 Mbps network, 0.2 Mbps of storage IO
  – A server offers 2 GHz of CPU, 512 MB RAM, 2 Mbps network, 2 Mbps storage

• VMs also use switch resources, determined by their paths to the root switch
  – Path vectors encode the path from a node to the root

• What if a flow doesn’t go all the way to the root?

Node load and virtual-item fraction vectors

• Usage fraction & threshold for each resource
  – For a server: <cpuU/cpuCap, memU/memCap, netU/netCap, ioU/ioCap>, <cpuT, memT, netT, ioT>
  – For a storage node: <spaceU/spaceCap, ioU/ioCap>, <spaceT, ioT>
  – For a switch: <ioU/ioCap>, <ioT>

• Requirements for VMs and vdisks
  – VM: <cpuU, memU, netU, ioU>
  – Vdisk: <spaceU, ioU>
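A sketch of how a server's load-fraction and threshold vectors could be assembled from the quantities above (the numbers are made up):

```python
# Hypothetical server state: (used, capacity, threshold) per dimension
server = {
    "cpu": (1.0, 2.0, 0.75),   # GHz
    "mem": (256, 512, 0.75),   # MB
    "net": (0.5, 2.0, 0.75),   # Mbps
    "io":  (0.4, 2.0, 0.75),   # Mbps
}

load_frac_vec = tuple(u / cap for (u, cap, _) in server.values())
threshold_vec = tuple(t for (_, _, t) in server.values())
print(load_frac_vec)   # (0.5, 0.5, 0.25, 0.2)
print(threshold_vec)   # (0.75, 0.75, 0.75, 0.75)
```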

Imbalance scores

• Imbalance score penalizes nodes for being above threshold

• IBscore(f, T) = 0 if f < T, e^((f − T)/T) otherwise
  – Exponential weighting penalizes nodes that are further over threshold
  – E.g., distinguishes between (3T, T) and (2T, 2T)

• Sum scores over all dimensions and all nodes
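The scoring rule can be sketched as follows, assuming the exponent is (f − T)/T as the example suggests, and that the score is zero at or below the threshold:

```python
import math

def ib_score(f, T):
    """Exponential penalty for a usage fraction f above threshold T; zero otherwise."""
    return 0.0 if f <= T else math.exp((f - T) / T)

def total_imbalance(nodes):
    """Sum IBscore over every (fraction, threshold) dimension of every node."""
    return sum(ib_score(f, T) for dims in nodes for (f, T) in dims)

# Distinguishing (3T, T) from (2T, 2T) with T = 0.5:
hot = total_imbalance([[(1.5, 0.5), (0.5, 0.5)]])   # one dim far over: e^2 ≈ 7.39
warm = total_imbalance([[(1.0, 0.5), (1.0, 0.5)]])  # two dims mildly over: 2e ≈ 5.44
print(hot > warm)  # → True
```

The exponential weighting is what makes one badly overloaded dimension score worse than two mildly overloaded ones.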

Path vectors

• FlowPath(u) for a node u is the path from the node to the storage virtualizer

VectorDot

• Score candidate mappings of virtual items to nodes

• Start with a simple dot product of PathLoadFracVec(u) (Au) and ItemPathLoadFracVec(vi, u) (Bu(vi))
  – Example:
    • Au = <0.4, 0.2, 0.4, 0.2, 0.2>
    • Av = <0.2, 0.4, 0.2, 0.4, 0.2>
    • Bu(vi) = Bv(vi) = <0.2, 0.05, 0.2, 0.05, 0.2>
    • Av · Bv(vi) < Au · Bu(vi), so assign vi to v (the lower product means less contention along the dimensions vi needs)
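Working the example through (a hypothetical helper; the smaller product wins):

```python
def dot(a, b):
    """Plain dot product of two load-fraction vectors."""
    return sum(x * y for x, y in zip(a, b))

A_u = [0.4, 0.2, 0.4, 0.2, 0.2]    # PathLoadFracVec of node u
A_v = [0.2, 0.4, 0.2, 0.4, 0.2]    # PathLoadFracVec of node v
B   = [0.2, 0.05, 0.2, 0.05, 0.2]  # ItemPathLoadFracVec of vi (same on both paths)

print(round(dot(A_u, B), 2))  # 0.22
print(round(dot(A_v, B), 2))  # 0.16 — smaller, so vi is placed on v
```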

Extended vector product (EVP)

• Extensions to account for thresholds and imbalance scores, and to avoid oscillations

• First: smooth PathLoadFracVec(u) with respect to PathThresholdVec(u)
  – Similar to the exponential penalization of imbalance
  – E.g., a component at utilization 0.6 with a threshold of 0.4 gets a higher value than one at 0.6 with a threshold of 0.8

• Second: avoid oscillations by considering post-move load vectors
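A plausible sketch of the first extension; the exact smoothing function isn't given in these notes, so this assumes the same exponential form as the imbalance score:

```python
import math

def smooth(f, T):
    """Inflate a load fraction the further it sits above its threshold (assumed form)."""
    return f * math.exp(max(0.0, (f - T) / T))

print(smooth(0.6, 0.4))  # ≈ 0.99: over threshold, inflated
print(smooth(0.6, 0.8))  # 0.6: under threshold, unchanged
```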

Using EVP

• Identify trigger nodes: those whose load fraction exceeds the threshold along any dimension
  – Search among trigger nodes in descending IBScore order

• Four selection criteria for the search, traversing destination nodes in a static order (i.e., by name):
  – FirstFit
  – BestFit
  – WorstFit
  – RelaxedBestFit
    • Visits nodes in random order until N feasible nodes are found, then chooses the one with minimum EVP
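The RelaxedBestFit search can be sketched as below; `evp` and `feasible` are hypothetical callbacks standing in for the extended vector product and capacity checks:

```python
import random

def relaxed_best_fit(item, nodes, evp, feasible, N=4):
    """Scan candidate nodes in random order, stop after N feasible ones,
    and return the feasible node with the lowest extended vector product."""
    found = []
    for node in random.sample(nodes, len(nodes)):  # random visiting order
        if feasible(item, node):
            found.append(node)
            if len(found) == N:
                break
    return min(found, key=lambda n: evp(item, n)) if found else None
```

N trades placement quality for search time: small N approaches FirstFit, while N covering all nodes degenerates to BestFit.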

Migration overheads

• Simple experiment: live migration of a VM running the PostMark benchmark, and of its vdisk

• Migration incurs some overhead

State            Throughput (trans/s)   Duration (s)   Overhead
Normal           1436                   —              —
VM migration     1265                   50             11.9%
Vdisk migration  1327                   549            7.6%

Evaluation: Simulation

• Built a simulator to generate topologies and system and node configurations

• Simple ratios between the numbers of components
  – E.g., 500 VMs with 1 vdisk per VM mapped onto 100 physical hosts, 33 storage nodes, 10 edge switches, 4 core switches
  – No details on what these ratios are beyond the example

• Load capacities and resource requirements drawn from Normal distributions
  – No details on the parameters, other than the defaults for α and β being 0.55, although they claim to vary them…

• Generate VMs, vdisks, servers, switches, and storage nodes; do an initial mapping; then balance with VectorDot

Results: Imbalance

• BestFit and RelaxedBestFit achieve low imbalance scores

Results: Moves from initial state

• BestFit and RelaxedBestFit require fewest moves to reach balance

• At no point does the # of triggers or imbalance score increase

Results: Convergence

• ??

Results: Running time

• Basic allocation takes at most 35 seconds

• Better initial placement = faster load balancing

(Figures: time for initial placement alone; initial placement + load balancing)

Evaluation: Real data center

• Figure 1
  – 3 servers, 3 switches, 3 storage nodes
  – 6 VMs, 6 storage volumes
  – Disabled caching?

• Workload generators: lookbusy, IOMeter

Results: Single server overload

• Figure 11b

Results: Multi-server overload

Results: Server+storage overload

Results: Switch overload

Summary

• Virtual server + virtual storage load balancing together

• Harmony: a system for monitoring, planning, and executing server & storage load balancing
  – They just use off-the-shelf software…

• VectorDot: heuristics for multidimensional and hierarchical load balancing
  – Does this generalize back to other problems?

• Evaluation with simulated & “real” datacenters
  – The “real” evaluation seems too dinky
