


PlanetLab: A Distributed Test Lab for Planetary Scale Network Services

Opportunities

• Emerging “Killer Apps”:
– CDNs and P2P networks are the first examples
– The application spreads itself over the Internet

• Vibrant Research Community:
– Distributed Hash Tables: Chord, CAN, Tapestry, Pastry (a minimal lookup sketch follows below)
– Distributed Storage: OceanStore, Mnemosyne, PAST
– Lack of a viable testbed for these ideas
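The slide only name-drops these systems, so a concrete illustration may help: the sketch below shows the consistent-hashing lookup idea that Chord-style DHTs share, where keys and node IDs live in one circular ID space and a key belongs to its successor node on the ring. The node names and the 32-bit ID space are illustrative assumptions, not details of any of the cited systems.

```python
# Minimal sketch of Chord-style consistent hashing (illustrative only).
import hashlib
from bisect import bisect_right

def ring_id(name: str, bits: int = 32) -> int:
    """Hash a node name or key onto a 2^bits circular ID space."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

class ToyDHT:
    def __init__(self, nodes):
        # Sort node IDs so a lookup is a binary search around the ring.
        self.ring = sorted((ring_id(n), n) for n in nodes)
        self.ids = [i for i, _ in self.ring]

    def lookup(self, key: str) -> str:
        """Return the node responsible for `key` (its successor on the ring)."""
        idx = bisect_right(self.ids, ring_id(key)) % len(self.ring)
        return self.ring[idx][1]

dht = ToyDHT([f"node{i}.example.org" for i in range(8)])
print(dht.lookup("some-object"))  # deterministic owner for this key
```

Real DHTs add routing tables so each node only knows O(log n) peers; the point here is just the shared key-to-node mapping that all four cited systems implement.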

People and Organization

• Project Owner: Timothy Roscoe (acting)
• Hans Mulder (sponsor)
• David Culler
• Larry Peterson
• Tom Anderson
• Milan Milenkovic
• Earl Hines

[Organization diagram: concentric teams. Center: Core Engineering & Design Team (IR, DSL). Middle ring: Distributed Design Team (Washington, Princeton). Outer ring: Seed Research & Design Community (MIT, Utah, Duke, UCSD, Rice, Berkeley, CMU Pittsburgh). Outermost: the larger academic, industry, government, and non-profit organization.]

The Hard Problems

• “Slice-ability”: multiple experimental services sharing many nodes (see the slice sketch below)
• Security and Integrity
• Management
• Building Blocks and Primitives
• Instrumentation
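To make "slice-ability" concrete, here is a minimal sketch of the abstraction: one experimental service gets a resource-capped virtual machine on each of many nodes, so several services can share the same hardware without starving each other. All class and function names here are hypothetical; this is not PlanetLab's actual interface.

```python
# Toy model of a "slice": a set of per-node, resource-capped VMs.
from dataclasses import dataclass, field

@dataclass
class VMQuota:
    cpu_share: float  # fraction of one CPU reserved for the slice's VM
    mem_mb: int

@dataclass
class Slice:
    name: str
    quota: VMQuota
    vms: dict = field(default_factory=dict)  # hostname -> VM handle

class Node:
    def __init__(self, hostname: str, cpus: float = 2.0, mem_mb: int = 4096):
        self.hostname = hostname
        self.free_cpu = cpus
        self.free_mem = mem_mb

    def create_vm(self, slice_name: str, quota: VMQuota) -> str:
        # Admission control: reject the slice rather than oversubscribe.
        if quota.cpu_share > self.free_cpu or quota.mem_mb > self.free_mem:
            raise RuntimeError(f"{self.hostname}: insufficient resources")
        self.free_cpu -= quota.cpu_share
        self.free_mem -= quota.mem_mb
        return f"vm:{slice_name}@{self.hostname}"

def instantiate(slc: Slice, nodes) -> Slice:
    """Give the slice one resource-capped VM on every requested node."""
    for node in nodes:
        slc.vms[node.hostname] = node.create_vm(slc.name, slc.quota)
    return slc

nodes = [Node(f"plab{i}.example.edu") for i in range(4)]
a = instantiate(Slice("chord-expt", VMQuota(0.25, 256)), nodes)
b = instantiate(Slice("storage-expt", VMQuota(0.25, 256)), nodes)
print(sorted(a.vms))  # both slices now coexist on the same four nodes
```

The hard problems in the list above are exactly what this toy glosses over: enforcing the quotas, keeping slices from attacking each other, and managing thousands of such VMs remotely.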

Overall Timeline

Phase 0 (Q2 ’02 – 2H ’02):
– Phase 0 Design Spec
– Silk: Safe Raw Sockets
– Mission Control System Rev 0
– Centralized Mgmt Service
– 100 Nodes online

Phase 1 (1H ’03 – 2H ’03):
– thinix API in Linux
– Native thinix Mgmt Services
– PlanetLab Summit Aug-20
– 300 Nodes online

Phase 2 (1H ’04 – 2H ’04):
– Secure thinix
– Continued development of Self-serviced Infrastructure
– Broader Consortium to take to 1000 Nodes
– Transition Ops Mgmt to Consortium

Target Phase 0 Sites

Intel Berkeley, ICIR, MIT, Princeton, Cornell, Duke, UT, Columbia, UCSB, UCB, UCSD, UCLA, UW, Intel Seattle, KY, Melbourne, Cambridge, Harvard, GIT, Uppsala, Copenhagen, CMU, UPenn, WI, Chicago, Utah, Intel OR, UBC, WashU, ISI, Intel, Rice, Beijing, Tokyo, Barcelona, Amsterdam, Karlsruhe, St. Louis

• 33 sites, ~3 nodes each:
– Located with friends and family
– Running first design code by 9/01/2002
– 10 of 100 nodes located in managed colo centers in Phase 0 (40 of 300 in Phase 1)
– Disciplined roll-out

Synopsis

• Open, planetary-scale research and experimental facility
• Dual role: research testbed AND deployment platform
• >1000 viewpoints (compute nodes) on the Internet
• 10-100 resource-rich sites at network crossroads
• Analysis of infrastructure enhancements; experimentation with new applications and services
• Typical use involves a slice across a substantial subset of nodes

New Ideas / Opportunities

• Service-centric Virtualization
– Re-emergence of virtual machine technology (VMware, …)
– Sandboxing to provide virtual servers (Ensim, UML, Vservers)
– Network services require fundamentally simpler virtual machines, making them more scalable (more VMs per physical machine) and focused on service requirements (see the density sketch after this list)
– Instrumentation and management become further virtualized “slices”
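A back-of-envelope sketch of the scalability claim above: if the per-VM overhead shrinks, the number of services one node can host grows proportionally. The overhead figures below are illustrative assumptions, not measurements from VMware, Ensim, UML, or Vservers.

```python
# Illustrative VM-density arithmetic; all figures are assumptions.
PHYS_MEM_MB = 4096  # memory on one PlanetLab-class node (assumed)

overheads_mb = {
    "full virtual machine": 256,  # guest kernel + emulated devices (assumed)
    "service-centric VM": 8,      # shared kernel, per-service state (assumed)
}

for kind, per_vm in overheads_mb.items():
    # Integer division: how many such VMs fit before memory runs out.
    print(f"{kind}: ~{PHYS_MEM_MB // per_vm} VMs per physical machine")
```

Under these assumed numbers the simpler VM is roughly 30x denser, which is why the deck argues for stripping the virtual machine down to what network services actually need.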

• Restricted API => Simple Machine Monitor
– A very simple monitor => push complexity up to where it can be managed
– Ultimately, only a very tiny machine monitor can be made truly secure
– The SILK effort (Princeton) captures the most valuable part of the Active Networks (ANets) nodeOS in Linux kernel modules
– The API should self-virtualize: deploy the next PlanetLab within the current one
– Host the V.1 planetSILK within a V.2 thinix VM (a toy monitor sketch follows below)
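The sketch below illustrates the "restricted API => tiny, auditable monitor" argument and the self-virtualization idea in miniature. The four operation names are invented for illustration; they do not reflect the actual thinix or SILK interfaces.

```python
# Toy machine monitor with a deliberately tiny API surface.
class TinyMonitor:
    ALLOWED = {"send", "recv", "spawn", "yield_"}

    def syscall(self, op: str, *args):
        # A handful of operations is small enough to audit line by line,
        # which is the security argument for a very tiny monitor.
        if op not in self.ALLOWED:
            raise PermissionError(f"{op!r} is outside the monitor API")
        return ("ok", op, args)

# Self-virtualization in miniature: because guests only ever see the
# narrow API, a new monitor can itself run as a guest of the old one and
# forward its own guests' calls downward ("deploy the next PlanetLab
# within the current one").
class NestedMonitor(TinyMonitor):
    def __init__(self, host: TinyMonitor):
        self.host = host

    def syscall(self, op: str, *args):
        if op not in self.ALLOWED:            # enforce our policy first
            raise PermissionError(f"{op!r} rejected by nested monitor")
        return self.host.syscall(op, *args)   # then defer to the host

outer = TinyMonitor()
inner = NestedMonitor(outer)  # the V.(n+1) platform hosted inside V.n
print(inner.syscall("send", b"hello"))
```

Because the nested monitor needs nothing beyond the narrow API itself, hosting V.1 planetSILK inside a V.2 thinix VM is structurally the same move as hosting any other service.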

• Planned Obsolescence of Building Block Services
– Community-driven service definition and development
– Service components on a node run in just another VM
– The team develops bootstrap ‘global’ services

Project Strategy

2003 – V.0: seed 100 nodes
– Minimal VM / Auth. requirements
– Applications mostly self-sufficient
– Core team manages platform

2004 – V.1: get the API & interfaces right; grow to 300 nodes
– Linux variant with constrained API
– Bootstrap services hosted in VMs
– Outsource operational support

2005 – V.2: get the underlying architecture and implementation right; grow to 1000+ nodes
– Thin, event-driven monitor
– Host new services directly
– Host Phase 1 as a virtual OS
– Replace bootstrap services

What will PlanetLab enable?

• The open infrastructure that enables the next generation of planetary scale services to be invented

• Post-cluster, post-Yahoo, post-CDN, post-P2P, ...

• Potentially, the foundation on which the next Internet can emerge
• A different kind of testbed
• Focus and mobilize the Network / Systems Research Community
• Position Intel to lead in the emerging Internet