
Page 1: Talk Slides

Towards Virtual Networks for Virtual Machine Grid Computing

Ananth I. Sundararaj

Peter A. Dinda

Prescience Lab

Department of Computer Science

Northwestern University

http://virtuoso.cs.northwestern.edu

Page 2: Talk Slides

Outline
• Virtual machine grid computing
• Virtuoso system
• Networking challenges in Virtuoso
• Enter VNET
• VNET: Adaptive virtual network
• Related Work
• Conclusions
• Current Status

Page 3: Talk Slides

Aim

Deliver arbitrary amounts of computational power to perform distributed and parallel computations.

Traditional paradigm: grid computing.
Problem 1: complexity from the resource user's perspective.
Problem 2: complexity from the resource owner's perspective.

Virtual machines: what are they? Resource multiplexing using an OS-level mechanism. How to leverage them?

Solution (new paradigm): grid computing using virtual machines.

[Diagram: numbered arrows relating the aim, the two problems, virtual machines, and the solution.]

Page 4: Talk Slides

Virtual Machines

Virtual machine monitors (VMMs)
• The raw machine is the abstraction
• A VM is represented by a single image
• Example: VMware GSX Server

Page 5: Talk Slides

Virtual machine grid computing

• Approach: lower the level of abstraction
  – Raw machines, not processes, jobs, or RPC calls

R. Figueiredo, P. Dinda, J. Fortes, "A Case For Grid Computing on Virtual Machines", ICDCS 2003

• Mechanism: virtual machine monitors
• Our focus: middleware support to hide complexity
  – Ordering, instantiation, and migration of machines
  – Virtual networking and remote devices
  – Connectivity to remote files and machines
  – Information services
  – Monitoring and prediction
  – Resource control

Page 6: Talk Slides

The Simplified Virtuoso Model

• The user orders a raw machine, specifying hardware and performance
• A basic software installation is available
• Virtual networking ties the machine (VM) back to the user's home network (the user's LAN)
• Virtuoso continuously monitors and adapts

Page 7: Talk Slides

User's View in Virtuoso Model

[Diagram: to the user, the VM appears as just another machine on the user's LAN.]

Page 8: Talk Slides

Outline
• Virtual machine grid computing
• Virtuoso system
• Networking challenges in Virtuoso
• Enter VNET
• VNET: Adaptive virtual network
• Related Work
• Conclusions
• Current Status

Page 10: Talk Slides

Why VNET? A Scenario

A machine is suddenly plugged into a foreign network. What happens?
• Does it get an IP address?
• Is it a routeable address?
• Does the firewall let its traffic through? To any port?

[Diagram: a virtual machine on a host in a foreign, hostile LAN; its traffic would otherwise go out on the foreign LAN. VNET, a bridge with long wires, carries it across the IP network to a proxy on the user's friendly LAN.]

Page 11: Talk Slides

Outline
• Virtual machine grid computing
• Virtuoso system
• Networking challenges in Virtuoso
• Enter VNET
• VNET: Adaptive virtual network
• Related Work
• Conclusions
• Current Status

Page 12: Talk Slides

A Layer 2 Virtual Network for the User's Virtual Machines

• Why layer 2?
  – Protocol agnostic
  – Mobility
  – Simple to understand
  – Ubiquity of Ethernet on end-systems
• What about scaling?
  – Number of VMs is limited (~1024 per user)
  – One VNET per user
  – Hierarchical routing is possible because MAC addresses can be assigned hierarchically (see the sketch below)
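To make the last point concrete, here is a minimal Python sketch of hierarchical MAC assignment; the prefix layout and helper names are illustrative assumptions, not VNET's actual allocation scheme.

```python
# Hypothetical sketch: assign MACs hierarchically so a VNET daemon can
# route on a per-user prefix instead of keeping one entry per VM.

def user_mac(user_id: int, vm_id: int) -> str:
    """Encode the user ID in the middle octets and the VM ID in the low
    octets; 0x02 in the first octet marks a locally administered address."""
    assert 0 <= user_id < 2**16 and 0 <= vm_id < 2**16
    octets = [0x02, 0x00,
              (user_id >> 8) & 0xFF, user_id & 0xFF,
              (vm_id >> 8) & 0xFF, vm_id & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)

def route_key(mac: str) -> str:
    """A forwarding table only needs the 4-octet user prefix."""
    return mac[:11]

print(user_mac(5, 1))              # 02:00:00:05:00:01
print(route_key(user_mac(5, 2)))   # 02:00:00:05 (same key for all of user 5's VMs)
```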

Page 13: Talk Slides

VNET operation

Traffic outbound from the user's LAN:

[Diagram: the client sits on the client LAN; the VM runs on a "host only" network (vmnet0) on the host. VNET servers run on the proxy and the host. An Ethernet packet is captured by a promiscuous packet filter, tunneled over a TCP/SSL connection across the IP network between the two VNET servers, and injected directly into the VM's interface. A Python sketch of this capture/tunnel/inject path follows.]
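A minimal sketch of that path, assuming Linux AF_PACKET raw sockets and a simple length-prefixed framing; the interface names, framing, and function names are assumptions for illustration, not VNET's actual wire format.

```python
# Hypothetical sketch of a VNET-like forwarding path: capture Ethernet
# frames on one side, length-prefix them over a TCP tunnel, and inject
# them into an interface on the other side. Linux-only; requires root.
import socket, struct

ETH_P_ALL = 0x0003  # capture frames of every protocol

def capture_and_tunnel(capture_if: str, peer_host: str, peer_port: int):
    raw = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                        socket.htons(ETH_P_ALL))
    raw.bind((capture_if, 0))  # true promiscuous capture also requires
                               # putting the interface in promiscuous mode
    # TCP tunnel to the VNET daemon on the other side; for the SSL variant
    # measured in the talk, wrap with ssl.SSLContext(...).wrap_socket(...).
    tun = socket.create_connection((peer_host, peer_port))
    while True:
        frame = raw.recv(65535)                              # one frame
        tun.sendall(struct.pack("!H", len(frame)) + frame)   # prefix + frame

def receive_and_inject(tun: socket.socket, inject_if: str):
    out = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    out.bind((inject_if, 0))
    while True:
        hdr = tun.recv(2, socket.MSG_WAITALL)
        (length,) = struct.unpack("!H", hdr)
        frame = tun.recv(length, socket.MSG_WAITALL)
        out.send(frame)  # inject the frame directly into the interface
```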

Page 14: Talk Slides

Performance Evaluation

Main goal: convey the network management problem induced by VMs to the home network of the user. However, VNET's performance should be:
• in line with the physical network
• comparable to other options
• sufficient for its scenarios

Metrics:
• Latency. Why: small transfers, interactivity. How: ping, over hour-long intervals.
• Bandwidth. Why: large transfers, throughput. How: ttcp, with tuned socket buffers, transferring 1 GB of data.

A ttcp-style measurement sketch follows.
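For concreteness, a minimal ttcp-style sender in Python; ttcp is the tool the talk actually used, and the host, port, and buffer sizes here are illustrative assumptions.

```python
# Minimal ttcp-style throughput test: stream 1 GB over TCP, report MB/s.
import socket, time

def measure_throughput(host: str, port: int,
                       total: int = 1 << 30, bufsize: int = 1 << 16) -> float:
    data = b"x" * bufsize
    with socket.create_connection((host, port)) as s:
        # Enlarged socket buffer, as in the talk's ttcp runs.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)
        start = time.monotonic()
        sent = 0
        while sent < total:
            s.sendall(data)
            sent += len(data)
        elapsed = time.monotonic() - start
    return sent / elapsed / 1e6  # MB/s

# e.g. print(measure_throughput("vm.example.edu", 5001))  # hypothetical host
```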

Page 15: Talk Slides

VNET test configuration

[Diagram, local area configuration: the Client connects through a 100 Mbit switch and Firewall 1 to a router; the Host (running the VM) connects through a 100 Mbit switch and Firewall 2 to the same router; the Proxy hangs off 100 Mbit switches. Everything is local.]

[Diagram, wide area configuration: the Client, Proxy, their 100 Mbit switches, and Firewall 1 are at Northwestern University, IL; the Host and VM, behind a 100 Mbit switch and router, are at Carnegie Mellon University, PA; the two sites are connected by the IP network, 14 hops via Abilene.]

Page 16: Talk Slides

Average latency over WAN

Setup: Client and Proxy at Northwestern University, IL; Host and VM at Carnegie Mellon University, PA, connected by the IP network (physical network).

[Bar chart, y-axis in milliseconds (0 to 40): average Client<->VM latency over the physical network, over VNET, and over VNET+SSL, with each bar broken into Host-VM, Client-Proxy, and Proxy-Host components.]

Page 17: Talk Slides

Standard deviation of latency over WAN

What: VNET increases variability in latency.
Why: the TCP connection between VNET servers trades packet loss for increased delay.

[Bar chart, y-axis in milliseconds (0 to 80): standard deviation of Client<->VM latency over the physical network, over VNET, and over VNET+SSL.]

Page 18: Talk Slides

Bandwidth over WAN

Expectation: VNET achieves throughput comparable to the physical network.
What we see: VNET achieves lower than expected throughput.
Why: VNET's TCP tunnel is tricking TTCP's TCP connection (TCP over TCP).

[Bar chart, y-axis in MB/s (0 to 2): bandwidth for Host<->Client, Client<->VM over VNET, and Client<->VM over VNET+SSL.]

Page 19: Talk Slides

Outline
• Virtual machine grid computing
• Virtuoso system
• Networking challenges in Virtuoso
• Enter VNET
• VNET: Adaptive virtual network
• Related Work
• Conclusions
• Current Status

Page 20: Talk Slides

VNET Overlay

[Diagram: VMs 1 through 4 run on Hosts 1 through 4, each host running VNET and each sitting on a different foreign, hostile LAN. Their VNET daemons connect across the IP network to the Proxy (also running VNET) on the user's friendly LAN, forming an overlay.]

Page 21: Talk Slides

Bootstrapping the Virtual Network

• A star topology is always possible: each host's VNET daemon (vnetd) connects to the proxy's vnetd
• Topology may change: links can be added or removed on demand
• Forwarding rules can change: rules can be added or removed on demand (see the sketch below)
• Virtual machines can migrate

[Diagram: VMs on hosts running vnetd, connected in a star to the Proxy's vnetd.]
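A minimal sketch of such an on-demand rule table, assuming a rule simply maps a destination MAC to an overlay link; the class and method names are hypothetical, not vnetd's actual interface.

```python
# Hypothetical sketch of a vnetd-like forwarding table: rules map a
# destination MAC to the overlay link to forward on, and can be added
# or removed as links come and go or VMs migrate.

class ForwardingTable:
    def __init__(self, default_link: str):
        self.rules: dict[str, str] = {}
        self.default = default_link        # star topology: fall back to the proxy

    def add_rule(self, dst_mac: str, link: str) -> None:
        self.rules[dst_mac] = link         # e.g. a direct host-to-host link

    def remove_rule(self, dst_mac: str) -> None:
        self.rules.pop(dst_mac, None)

    def lookup(self, dst_mac: str) -> str:
        return self.rules.get(dst_mac, self.default)

table = ForwardingTable(default_link="proxy")
table.add_rule("02:00:00:05:00:02", "host3")   # VM moved to host 3
print(table.lookup("02:00:00:05:00:02"))       # host3
print(table.lookup("02:00:00:05:00:01"))       # proxy (default star route)
```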

Page 22: Talk Slides

[Diagram, three layers:
• VM layer: application communication topology and traffic load; application processor load
• VNETd layer: can collect all of this information as a side effect of packet transfers and invisibly act (a sketch of such passive measurement follows)
• Physical layer: network bandwidth and latency; sometimes topology]

Possible adaptations:
• Reservation
• Routing change
• VM migration
• Topology change
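As an illustration of collecting the VM-layer information passively, a minimal sketch; counting bytes per (source, destination) MAC pair while forwarding is the idea, though the accounting scheme here is an assumption.

```python
# Hypothetical sketch: a vnetd-like daemon can infer the application's
# communication topology for free by counting bytes per (src, dst) MAC
# pair on every frame it forwards.
from collections import defaultdict

class TrafficMatrix:
    def __init__(self) -> None:
        self.byte_counts = defaultdict(int)

    def observe(self, frame: bytes) -> None:
        dst, src = frame[0:6], frame[6:12]    # Ethernet destination, source
        self.byte_counts[(src.hex(":"), dst.hex(":"))] += len(frame)

    def heaviest_pairs(self, n: int = 5):
        """The busiest pairs are candidates for direct overlay links."""
        return sorted(self.byte_counts.items(), key=lambda kv: -kv[1])[:n]
```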

Page 23: Talk Slides

Outline
• Virtual machine grid computing
• Virtuoso system
• Networking challenges in Virtuoso
• Enter VNET
• VNET: Adaptive virtual network
• Related Work
• Conclusions
• Current Status

Page 24: Talk Slides

Related Work

• Collective / Capsule Computing (Stanford)
  – VMM, migration/caching, hierarchical image files
• Denali (U. Washington)
  – Highly scalable VMMs (1000s of VMMs per node)
• SODA and VIOLIN (Purdue)
  – Virtual Server, fast deployment of services
• VPNs
• Virtual LANs (IEEE)
• Overlay networks: RON, spawning networks, Overcast
• Ensim
• Virtuozzo (SWsoft)
  – Ensim competitor
• Available VMMs: IBM's VM, VMware, Virtual PC/Server, Plex/86, SIMICS, Hypervisor, VM/386

Page 25: Talk Slides


Conclusions

• There exists a strong case for grid computing using virtual machines

• Challenging network management problem induced by VMs in the grid environment

• Described and evaluated a tool, VNET, that solves this problem

• Discussed the opportunities that the combination of VNET and VMs presents for exploiting an adaptive overlay network

Page 26: Talk Slides


Current Status

• Application traffic load measurement and topology inference [Ashish Gupta]

• Support for arbitrary topologies and forwarding rules

• Dynamic adaptation to improve performance

Page 27: Talk Slides

Current Status: Snapshots

[Screenshots: pseudo proxy]

Page 28: Talk Slides

For More Information

• Prescience Lab (Northwestern University)
  – http://plab.cs.northwestern.edu
• Virtuoso: Resource Management and Prediction for Distributed Computing using Virtual Machines
  – http://virtuoso.cs.northwestern.edu
• VNET is publicly available from
  – http://virtuoso.cs.northwestern.edu

Page 29: Talk Slides

Isn't It Going to Be Too Slow?

Application        Resource             ExecTime (10^3 s)   Overhead
SpecHPC Seismic    Physical             16.4                N/A
(serial, medium)   VM, local            16.6                1.2%
                   VM, Grid virtual FS  16.8                2.0%
SpecHPC Climate    Physical             9.31                N/A
(serial, medium)   VM, local            9.68                4.0%
                   VM, Grid virtual FS  9.70                4.2%

Experimental setup. Physical: dual Pentium III 933 MHz, 512 MB memory, RedHat 7.1, 30 GB disk. Virtual: VMware Workstation 3.0a, 128 MB memory, 2 GB virtual disk, RedHat 2.0. NFS-based grid virtual file system between UFL (client) and NWU (server).

Small relative virtualization overhead for these compute-intensive workloads; relative overheads < 5%.
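The overhead column is the relative slowdown versus the physical run, overhead = (T_VM - T_physical) / T_physical; for example, (16.6 - 16.4) / 16.4 ≈ 1.2%.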

Page 30: Talk Slides

Isn't It Going To Be Too Slow?

[Bar chart, y-axis 0 to 3: task execution times on the physical machine versus on the virtual machine, under no load, light load, and heavy load.]

Synthetic benchmark: exponentially distributed arrivals of compute-bound tasks; background load provided by playback of traces from PSC.

Relative overheads < 10%

Page 31: Talk Slides

Isn't It Going To Be Too Slow?

• Virtualized NICs have very similar bandwidth, slightly higher latencies
  – J. Sugerman, G. Venkitachalam, B.-H. Lim, "Virtualizing I/O Devices on VMware Workstation's Hosted Virtual Machine Monitor", USENIX 2001
• Disk-intensive workloads (kernel build, web service): 30% slowdown
  – S. King, G. Dunlap, P. Chen, "Operating System Support for Virtual Machines", USENIX 2003
• However: may not scale with faster NICs or disks

Page 32: Talk Slides

Average latency over WAN

[Bar chart, y-axis in milliseconds (0 to 40): the physical network components (C-P = 0.345, P-H = 36.993, H-VM = 0.189), the VMware options (NAT = 35.622, bridged = 37.436), and the VNET options (37.535; 35.524 with SSL). C = Client, P = Proxy, H = Host.]

Comparison with options:
VNET = 37.535 ms, or 35.524 ms with SSL
VMware = 35.622 ms (NAT), or 37.436 ms (bridged)

In line with physical?
Physical = C-P + P-H + H-VM = 0.345 + 36.993 + 0.189 = 37.527 ms
VNET = 37.535 ms, or 35.524 ms with SSL

Page 33: Talk Slides

Standard deviation of latency over WAN

[Bar chart, y-axis in milliseconds (0 to 80): standard deviation of latency for the physical network components (C-P = 1.105, P-H = 18.702, H-VM = 0.095), the VMware options, and the VNET options (77.287; 40.763 with SSL). C = Client, P = Proxy, H = Host.]

In line with physical?
Physical = C-P + P-H + H-VM = 1.105 + 18.702 + 0.095 = 19.902 ms
VNET = 77.287 ms, or 40.763 ms with SSL

What: VNET increases variability in latency.
Why: the TCP connection between VNET servers trades packet loss for increased delay.

Page 34: Talk Slides

Bandwidth over WAN

[Bar chart, y-axis in MB/s (0 to 2): bandwidth for Local<->Host, Client<->Proxy, Host<->Proxy, Host<->Client, Host<->Host, and Host<->VM on the physical network; Client<->VM with VMware bridged networking and NAT; Client<->VM over VNET and VNET+SSL; and Host<->Client over SSH. Values shown include 1.97, 1.93, 1.63, 1.22, 0.94, 0.72, and 0.4 MB/s.]

Expectation: VNET achieves throughput comparable to the physical network.

In line with physical?
Physical = 1.93 MB/s
VNET = 1.22 MB/s, or 0.94 MB/s with SSL

What: VNET achieves lower than expected throughput.
Why: VNET's TCP tunnel is tricking TTCP's TCP connection.