Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios Ben Basler, VMware Roberto Mari, VMware TEX5350 #TEX5350


DESCRIPTION

VMworld 2013. Ben Basler, VMware; Roberto Mari, VMware. Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare

TRANSCRIPT

Page 1: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

Designing Network Virtualization for Data-Centers:

Greenfield Design and Migration Scenarios

Ben Basler, VMware

Roberto Mari, VMware

TEX5350

#TEX5350

Page 2: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

2 Confidential

Disclaimer

This presentation may contain product features that are currently under development.

This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.

Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.

Technical feasibility and market demand will affect final delivery.

Pricing and packaging for any new technologies or features discussed or presented have not been determined.

Page 3: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

3

Session Objectives

Discuss network virtualization design principles and introduce the VMware NSX network virtualization product

Discuss the different data-center design options and the benefits that network virtualization brings

Analyze in more detail the design aspects of VMware NSX network virtualization for vSphere/ESXi deployments

Discuss migration scenarios with a mix of physical and virtual workloads

Page 4: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

4

Reference Sessions

NET5716 – Advanced NSX Architecture

NET5266 – Bringing Network Virtualization to VMware Environments with NSX

NET5270 – Virtualized Network Services Model with NSX

NET5521 – vSphere Distributed Switch: Design and Best Practices

SEC5582 – Multi-site Deployments with VMware NSX

NET5796 – Network Virtualization Concepts for Network Administrators

NET5184 – Designing Your Next Generation Datacenter for Network Virtualization

See NSX in action in labs HOL-SDC-1319 and HOL-SDC-1303

Page 5: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

5

Agenda

Network Virtualization design Advantages

VMware NSX Solution for Datacenter Network Virtualization

Datacenter virtualization Design principles

NSX and vSphere Design Considerations

Data Center Migration scenarios

Key Takeaways

Page 6: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

6

Agenda

Network Virtualization design Advantages

• Network Virtualization Design Advantages

• Why Network Virtualization?

VMware NSX Solution for Datacenter Network Virtualization

Datacenter virtualization Design principles

NSX and vSphere Design Considerations

Data Center Migration scenarios

Key Takeaways

Page 7: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

7

Network Virtualization Design Advantages

Network virtualization leverages existing datacenter networks

Overlay technology can be deployed over a common datacenter network design

Flexible and rapid deployment options via adoption of virtual networking and logical networks

SOFTWARE-DEFINED DATACENTER: All infrastructure is virtualized and delivered as a service, and the control of this datacenter is entirely automated by software.

Page 8: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

8

Why Network Virtualization?

Physical Network: A Barrier to the Software-Defined Data Center

[Diagram: software-defined datacenter services (VDC) on a compute virtualization abstraction layer, sitting directly on the physical infrastructure]

• Provisioning is slow
• Placement is limited
• Mobility is limited
• Hardware dependent
• Operationally intensive

Page 9: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

9

Why Network Virtualization?

Solution: Virtualize the Network

[Diagram: a network virtualization abstraction layer added between the physical infrastructure and the software-defined datacenter services (VDC), alongside the compute virtualization abstraction layer]

• Programmatic provisioning
• Place any workload anywhere
• Move any workload anywhere
• Decoupled from hardware
• Operationally efficient

Page 10: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

10

Agenda

Network Virtualization design Advantages

VMware NSX Solution for Datacenter Network Virtualization

• VMware solutions and VMware NSX

• VMware NSX Functional System Overview

Datacenter virtualization Design principles

NSX and vSphere Design Considerations

Data Center Migration scenarios

Key Takeaways

Page 11: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

11

VMware NSX – the Network Virtualization Platform

[Diagram: VMware NSX sits between a multi-cloud management platform and multi-hypervisor software-defined datacenter services (VDCs), running over any physical infrastructure; compute, storage and network hardware independent]

Page 12: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

12

VMware NSX Functional System Overview

[Diagram of the NSX planes:]

• Data plane: vSwitches on vSphere, KVM, XenServer and Hyper-V* hosts, plus NSX Gateways
• Control plane: NSX Controller holding run-time state (HA, scale-out)
• Management plane: NSX Manager (API, configuration)
• Consumption: tenant UI / cloud management platform (CMP) driving the API
• Operations: UI, logs and statistics

Page 13: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

13

Agenda

Network Virtualization design Advantages

VMware NSX Solution for Datacenter Network Virtualization

Datacenter virtualization Design principles

• Virtual Networking and Physical Fabrics

• Terminology, Design Considerations and Features

• Virtualization and Distributed Service placement

NSX and vSphere Design Considerations

Data Center Migration scenarios

Key Takeaways

Page 14: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

14

Virtualization improves efficiency of compute clusters

Transformation from less than 40% asset utilization to 90% asset utilization

Page 15: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

15

L2/L3 Traditional Fabrics - Physical Fabric Resource Utilization

Before NV vs. with NV: MAC addresses, ARP entries, VLAN usage and STP load on the physical fabric all grow with the number of VMs and tenants.

[Animated diagram: VMs added onto an L2/L3 fabric]

Page 16: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

16

L2/L3 Traditional Fabrics - Physical Fabric Resource Utilization

(Continuation of the previous animated slide: the same comparison with many more VMs and tenants, and correspondingly higher MAC, ARP, VLAN and STP consumption on the fabric before network virtualization.)

Page 17: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

17

Traditional Physical Networking Configuration Tasks

Initial configuration (L2/L3 fabric devices):

• Multi-chassis LAG
• Routing configuration
• SVIs/RVIs
• VRRP/HSRP
• STP (instances/mappings, priorities, safeguards)
• LACP

Recurring configuration:

• SVIs/RVIs
• VRRP/HSRP
• Advertise new subnets
• Access lists (ACLs)
• VLANs
• Adjust VLANs on trunks
• VLAN-to-STP/MST mapping
• Add VLANs on uplinks
• Add VLANs to server ports

Configuration consistency!

Page 18: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

18

Fabric Technology Trends

From 2- or 3-tier designs to spine/leaf

Density and bandwidth jump

ECMP for layer 3 (and for layer 2, e.g. TRILL)

Reduced network oversubscription

Wire and configure once

Uniform configurations

[Diagrams: traditional L2/L3 tiers vs. a spine/leaf fabric, both connecting to the WAN/Internet]

Page 19: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

19

L3 Leaf-Spine Fabrics Simplified Configuration Tasks

(Animated slide: the traditional initial and recurring task lists from the previous slide are pared down.)

With network virtualization on a simplified 2-tier L3 fabric, the remaining fabric configuration tasks are:

• Routing configuration
• SVIs/RVIs

Page 20: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

20

Simplified Physical Infrastructure

Compute Racks

• Repeatable rack design
• No VLANs for VMs

Infra/Storage Racks

Edge Racks (on/off ramp to the physical world and WAN/Internet)

• Work with existing devices
• Multi-CMP
• Multi-VC
• Scalable

Higher Speed, Lower Cost, Non-Disruptive

Page 21: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

21

Spine Nodes

The spine is L3 only: route table entries and ARP entries, no MAC address consumption.

[Diagram: spine node with routed point-to-point /31 links down to leaf 1 … leaf N, e.g. 10.99.1.0/31 with .1 and .2 on the two ends]

The spine connects to the leaf switches:

• Interfaces are configured as routed point-to-point L3 links (a small addressing sketch follows)
• Links between spine switches are not required
• On a spine-to-leaf link failure, the routing protocol reroutes traffic over the alternate paths
• The spine aggregates the leaf nodes and provides connectivity between racks
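To make the point-to-point addressing concrete, here is a minimal sketch (not from the deck) that carves /31 link subnets for every spine-leaf pair, in the spirit of the 10.99.1.0/31 example above; the supernet, device names and counts are assumptions for illustration only.

```python
# Illustrative only: carve /31 point-to-point subnets for spine-to-leaf links.
# The supernet, spine/leaf counts and names are assumptions, not deck values.
import ipaddress

def p2p_links(supernet: str, num_spines: int, num_leaves: int):
    """Yield (spine, leaf, spine_ip, leaf_ip) for every spine-leaf link."""
    subnets = ipaddress.ip_network(supernet).subnets(new_prefix=31)
    for s in range(1, num_spines + 1):
        for l in range(1, num_leaves + 1):
            net = next(subnets)                 # one /31 per physical link
            spine_ip, leaf_ip = net[0], net[1]  # the two addresses of the /31
            yield f"spine{s}", f"leaf{l}", f"{spine_ip}/31", f"{leaf_ip}/31"

for spine, leaf, s_ip, l_ip in p2p_links("10.99.1.0/24", num_spines=2, num_leaves=4):
    print(f"{spine} <-> {leaf}: {s_ip} - {l_ip}")
```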

Page 22: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

22

Leaf Nodes

Server-facing ports have minimal configuration

May use Link Aggregation Control Protocol (LACP)

• Applies to same-speed interfaces
• Active/active bandwidth
• Fast failover times

802.1Q trunks with a small set of VLANs

• VTEP (VXLAN), management, storage and vMotion traffic

[Diagram: leaf switch with L3 uplinks; the VLAN boundary sits at the leaf, with 802.1Q trunks down to hypervisor 1 … hypervisor n]

Page 23: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

23

Leaf Nodes

[Diagram: leaf switch with L3 uplinks and 802.1Q trunks to hypervisor 1 … hypervisor n; the VLAN boundary is at the leaf. Example subnets: Management 10.66.1.x/26, vMotion 10.77.1.x/26, VXLAN 10.88.1.x/26, Storage 10.99.1.x/26]

L3 ToR designs run a dynamic routing protocol between leaf and spine:

• BGP, OSPF or IS-IS can be used
• The rack advertises a small set of prefixes (one per VLAN/subnet)
• Equal-cost paths to the other racks' prefixes
• The switch provides the default gateway service for each VLAN subnet

Page 24: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

24

QoS in virtualized Data Center Designs

[Diagram: hypervisor connected to a leaf over 802.1Q; the hypervisor trusts or sets QoS markings, the leaf trusts them, and no marking or reclassification happens in the spine]

Virtualized environments carry different types of traffic.

The hypervisor is a trusted boundary and sets the respective QoS values.

The physical switching infrastructure should trust these values; no reclassification is necessary at the server-facing port of a leaf.

Under congestion, the QoS values are used to decide which traffic is queued (and potentially dropped) and which is prioritized.

Page 25: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

25

Scalable Bandwidth Growth

[Diagram: spine/leaf fabric connecting compute racks (1x10 Gbps or 2x10 Gbps uplinks), infra racks (1x10 Gbps), storage racks (2x10 Gbps) and edge racks (1x10 Gbps) to the WAN/Internet; leaf uplinks are individual L3 links or L3 port-channels (LACP)]

Scale grows incrementally with the number of racks, the number of paths from a leaf, the number of spines, and the number of ports on each hypervisor, leaf or spine node.

Different racks have different bandwidth requirements; choose link speed and number of uplinks to satisfy the bandwidth demand (a quick sizing sketch follows).
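As a companion to the sizing guidance above, here is a back-of-the-envelope sketch (an illustration under assumed port counts and speeds, not a VMware tool or recommendation) for the oversubscription ratio at a leaf, i.e. server-facing bandwidth divided by uplink bandwidth to the spine.

```python
# Back-of-the-envelope leaf oversubscription: downlink bandwidth / uplink bandwidth.
# All port counts and speeds below are made-up examples for illustration.
def leaf_oversubscription(servers: int, nics_per_server: int, nic_gbps: int,
                          uplinks: int, uplink_gbps: int) -> float:
    downlink = servers * nics_per_server * nic_gbps   # total server-facing Gbps
    uplink = uplinks * uplink_gbps                     # total Gbps toward the spine
    return downlink / uplink

# e.g. a compute rack with 2x10 Gbps per hypervisor and 4x10 Gbps toward the spine
print(leaf_oversubscription(servers=20, nics_per_server=2, nic_gbps=10,
                            uplinks=4, uplink_gbps=10))   # -> 10.0 (10:1)
```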

Page 26: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

26

VMware NSX Components

[Diagram: compute racks, infrastructure racks and edge racks attached to an L2/L3 fabric with WAN/Internet connectivity]

Hypervisor service modules (compute racks)

• Distributed network services (switching, routing)
• Load balancer, switch, firewall, router

Controllers/infrastructure management (infrastructure racks)

• CMP, vCenter, syslog, network and storage management
• Massive scale

Gateway/edge software (edge racks)

• Integration with existing physical infrastructure
• V-to-V / V-to-P

Page 27: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

27

Network Virtualization – Networking functions in Virtual Space

Networking functions in virtual space: routing/NAT, security/firewalling, QoS, port mirroring, counters

[Animated diagram: NSX Controllers programming NSX vSwitches, with VMs attached across the physical fabric]

• VM-aware networking services
• Networking services distributed at the edge
• Scale out with the number of hypervisors
• No choke points

Page 28: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

28

Centralized vs. Distributed Services

[Diagram: the same set of VMs served by a centralized appliance versus by services distributed across the hypervisors]

Page 29: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

29

Distributed Services with Network Virtualization

[Diagram: per-tenant logical router instances (1 … n), each connecting Web, App and DB logical switches, behind a perimeter NSX Edge that peers with the external networks]

2 tiers of routing:

• Distributed routing for east-west traffic
• Perimeter routing for north-south traffic

Dynamic routing (OSPF, BGP) to advertise the logical networks toward the external networks

Page 30: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

30

Agenda

Network Virtualization design Advantages

VMware NSX Solution for Datacenter Network Virtualization

Datacenter virtualization Design principles

NSX and vSphere Design Considerations

• vSphere Networking considerations

• Network Addressing and Management consideration

• vSphere scalability considerations

Data Center Migration scenarios

Key Takeaways

Page 31: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

31

VMkernel Networking

Multi-instance TCP/IP stack

• Introduced with vSphere 5.5 and leveraged by VXLAN and the NSX vSwitch transport network
• Separate routing table, ARP table and default gateway per stack instance
• Provides increased isolation and reservation of networking resources such as sockets, buffers and heap memory
• Enables VXLAN VTEPs to use a gateway independent from the default TCP/IP stack
• Management, vMotion, FT, NFS and iSCSI leverage the default TCP/IP stack in 5.5

Page 32: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

32

VMkernel Networking

Static Routing

• VMkernel VLANs do not extend beyond the rack in an L3 fabric design, or beyond the cluster with an L2 fabric, so static routes are required for management, storage and vMotion traffic (a minimal sketch follows below)
• Host Profiles reduce the overhead of managing static routes and ensure persistence
• Follow the RPQ (Request for Product Qualification) process for official support of routed vMotion; routing of IP storage traffic also has some caveats
• A number of customers went through RPQ and use routed vMotion with full support today
• Future enhancements will simplify ESXi host routing and enable greater support for L3 network topologies
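As an illustration of the static-route requirement, here is a minimal sketch that emits the standard `esxcli network ip route ipv4 add` commands a host in one rack might use so its vMotion and storage VMkernel interfaces can reach the equivalent subnets in other racks. The aggregate prefixes and gateway addresses reuse this deck's example addressing and are assumptions, not a prescribed design.

```python
# Minimal sketch: per-host static routes for an L3 fabric design.
# Aggregates assume the 10.<vlan>.<rack>.0/26 example scheme from this deck,
# with the rack's local SVI (.1) as the next hop.
static_routes = [
    # (destination covering all racks' subnets, local gateway on this rack's ToR)
    ("10.77.0.0/16", "10.77.1.1"),   # vMotion subnets of other racks via local vMotion SVI
    ("10.99.0.0/16", "10.99.1.1"),   # IP storage subnets via local storage SVI
]

for network, gateway in static_routes:
    # Print the command to run on the host (or push it via a Host Profile / script)
    print(f"esxcli network ip route ipv4 add --network {network} --gateway {gateway}")
```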

Page 33: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

33

VMkernel Networking

Teaming Recommendations

• LACP (802.3ad) is a good option for optimal use of available bandwidth and quick convergence
• Load Based Teaming is recommended to simplify configuration and reduce dependencies on the physical network while using multiple uplinks
• NSX supports multiple VTEPs per host with VXLAN
• 2x 10GbE NIC adapters per server is most common
• Proprietary network partitioning technologies increase complexity and dependencies on 3rd-party drivers

Overlay networks are used for VMs

• Have only one VMkernel port per IP subnet
• Have a dedicated VLAN for each VMkernel interface

Considerations for vSphere Auto Deploy

• DHCP relay and IP helper to provision boot images

[Diagram: ESXi host with NSX vSwitch uplinked to the physical switch]

See NET5521 – VDS Design and Best Practices

Page 34: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

34

vSphere Host (ESXi)

VMkernel Networking

[Animated diagram: ESXi host with a VLAN trunk (802.1Q) to the L3 ToR switch and routed uplinks (ECMP) from the ToR; the span of VLANs ends at the ToR]

Example VMkernel addressing on the host:

• VLAN 66, Management: 10.66.1.25/26, GW 10.66.1.1
• VLAN 77, vMotion: 10.77.1.25/26, GW 10.77.1.1
• VLAN 88, VXLAN: 10.88.1.25/26, GW 10.88.1.1
• VLAN 99, Storage: 10.99.1.25/26, GW 10.99.1.1

Matching SVIs on the ToR switch: SVI 66: 10.66.1.1/26, SVI 77: 10.77.1.1/26, SVI 88: 10.88.1.1/26, SVI 99: 10.99.1.1/26

Page 35: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

35

vSphere Network Addressing

[Diagram: compute racks 1 … N, each with its own ToR switch, plus infrastructure and edge racks, connected up to the WAN/Internet]

Compute Rack - IP Address Allocations and VLANs

Function     Global VLAN ID   IP Address
Management   66               10.66.R_id.x/26
vMotion      77               10.77.R_id.x/26
VXLAN        88               10.88.R_id.x/26
Storage      99               10.99.R_id.x/26

Note: Values for VLANs, IP addresses and masks are provided as an example; R_id is the rack or cluster number (expanded in the sketch below).
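The sketch below (illustrative only) expands the example scheme from the table for a given rack; the function-to-VLAN mapping and the gateway-on-.1 convention come from this deck's examples, not from product defaults.

```python
# Expand the example per-rack addressing plan: 10.<vlan>.<rack id>.0/26 per function,
# with the ToR SVI at .1. Values mirror the deck's example table, which is itself
# labelled as an example.
FUNCTIONS = {"Management": 66, "vMotion": 77, "VXLAN": 88, "Storage": 99}

def rack_addressing(rack_id: int) -> dict:
    """Return {function: (vlan, subnet, gateway)} for one compute rack."""
    plan = {}
    for function, vlan in FUNCTIONS.items():
        subnet = f"10.{vlan}.{rack_id}.0/26"
        gateway = f"10.{vlan}.{rack_id}.1"   # SVI on the rack's ToR switch
        plan[function] = (vlan, subnet, gateway)
    return plan

print(rack_addressing(1))   # rack 1: Management 10.66.1.0/26, vMotion 10.77.1.0/26, ...
```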

Page 36: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

36

Management and Edge Rack Requirements

[Diagram: management racks and edge racks, each pair of leaf switches carrying VMkernel VLANs plus the VLANs for the management or edge VMs; edge racks also connect to the WAN/Internet]

Management Racks

• L2 between racks is needed for management workloads such as vCenter Server, NSX Controllers, NSX Manager and IP storage
• VLANs carried: VMkernel VLANs and VLANs for the management VMs

Edge Racks

• L2 between racks is needed for the external 802.1Q VLANs
• VLANs carried: VMkernel VLANs and VLANs connecting edge VMs to the physical network

Page 37: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

37

vSphere Scalability

Cluster Sizing

• vSphere 5.x HA: 32 hosts

Storage

• VAAI ATS removes SCSI reservation constraints on datastore sizing

Virtual Machines

• 10,000 powered-on VMs per vCenter Server

Concurrent Operations

• Eight concurrent operations per vCenter Server

Networking

• NSX allows scaling of the network independently of vCenter Server
• NSX for vSphere supports similar scale; logical networks are bound to a vCenter Server at present

See NET5184 - Designing Your Next Generation Datacenter for Network Virtualization

Page 38: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

38

Agenda

Network Virtualization design Advantages

VMware NSX Solution for Datacenter Network Virtualization

Datacenter virtualization Design principles

NSX and vSphere Design Considerations

Data Center Migration scenarios

• Integrate Physical workloads with x86 NSX Gateways

• Integrate Physical workloads with 3rd party NSX Gateways

Key Takeaways

Page 39: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

39

Migration scenarios: Connecting Physical and Virtual Worlds

Physical network (port or VLAN)

NSX L2 Gateway

Logical network (VNI)

Page 40: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

40

The VMware NSX Gateway

[Diagram: virtual networks of VMs on hypervisors (e.g. cloud servers) bridged through the NSX Gateway to VLANs on the physical side (e.g. hosted servers)]

The gateway can be bare metal, a physical switch, or a virtual appliance.

Page 41: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

41

NSX Gateway – What is the NSX Gateway for?

Connect physical workloads to logical networks via an L2 service

Physical workloads are reachable on a specific VLAN or physical port

[Logical view: DNS-DHCP and database workloads attached to the same logical switch as the VMs. Physical view: the NSX L2 gateway (service node) bridges VLAN 10 on the physical network into the logical network (VNI)]

Page 42: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

42

Connecting virtual and Physical: NSX x86 Gateways

Physical servers are part of the compute infrastructure

NSX x86 gateways with 10 Gbps performance

L2 bridging and L3 routing services

P-to-V and P-to-P connectivity (STT or VXLAN)

Migrating/virtualizing legacy services (same design)

[Diagram: compute racks with hypervisors and virtual workloads, infra/storage racks and edge racks; physical servers with hosted services attach through NSX x86 gateways; edge racks connect to the WAN/Internet]

Page 43: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

43

How do 3rd Party NSX Gateways work?

1. Registration of the hardware appliance with the NSX Controller (one time)

2. Creation of an L2 Gateway Service that includes the hardware appliance's physical port

3. NSX API calls to connect a physical port/VLAN to a Logical Switch (see the sketch below)

[Diagram: a 3rd-party gateway bridging an L2 segment into a VXLAN virtual network, controlled by VMware NSX; the numbers mark the three steps above]
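The sketch below shows the shape of that three-step workflow as REST-style calls. The URL paths, payload fields and credentials are placeholders invented for illustration, not the documented NSX API; consult the NSX API guide for the real resource names.

```python
# Sketch of the three-step flow as REST-style calls. Endpoints and fields are
# PLACEHOLDERS to show the workflow shape, not the documented NSX API.
import requests

NSX = "https://nsx-controller.example.com"   # hypothetical controller endpoint
session = requests.Session()
session.auth = ("admin", "password")         # illustrative credentials only

# 1. Register the hardware gateway appliance with the controller (one time)
node = session.post(f"{NSX}/transport-nodes",
                    json={"display_name": "hw-gw-01", "mgmt_ip": "192.0.2.10"}).json()

# 2. Create an L2 Gateway Service that includes the appliance's physical port
gw_service = session.post(f"{NSX}/l2-gateway-services",
                          json={"display_name": "rack7-l2gw",
                                "node_id": node["id"], "port": "eth2"}).json()

# 3. Attach a physical port/VLAN to a logical switch via the gateway service
session.post(f"{NSX}/logical-switches/ls-web/attachments",
             json={"type": "l2_gateway", "service_id": gw_service["id"], "vlan": 10})
```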

Page 44: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

44

Connecting virtual and Physical: NSX 3rd Party HW Gateways

3rd-party gateways controlled by VMware NSX

High-throughput L2 bridging service

P-to-V and P-to-P connectivity (VXLAN)

Same design and operational model

• Troubleshooting tools and traffic statistics

[Diagram: compute racks with hypervisors and virtual workloads, infra/storage racks and edge racks; physical servers with hosted services attach through NSX 3rd-party hardware gateways; edge racks connect to the WAN/Internet]

Page 45: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

45

Agenda

Network Virtualization design Advantages

VMware NSX Solution for Datacenter Network Virtualization

Datacenter virtualization Design principles

NSX and vSphere Design Considerations

Data Center Migration scenarios

Key Takeaways

Page 46: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

46

VMware NSX: Key Takeaways…

… works great on top of existing networks and new fabrics

… network virtualization for vSphere and 3rd party Hypervisors

… unleashes the power of x86-based networking

… integrates physical compute with same provisioning/operations

See our NSX Demos at the main VMware Booth

and check out the NSX Hands-on Labs!

Page 47: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

47

Reference Sessions

NET5716 – Advanced NSX Architecture

NET5266 – Bringing Network Virtualization to VMware Environments with NSX

NET5270 – Virtualized Network Services Model with NSX

NET5521 – vSphere Distributed Switch: Design and Best Practices

SEC5582 – Multi-site Deployments with VMware NSX

NET5796 – Network Virtualization Concepts for Network Administrators

NET5184 – Designing Your Next Generation Datacenter for Network Virtualization

See NSX in action in labs HOL-SDC-1319 and HOL-SDC-1303

Page 48: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

48

TAP Membership Renewal – Great Benefits

• TAP Access membership includes:

New TAP Access NFR Bundle

• Access to NDA Roadmap sessions at VMworld, PEX and Onsite/Online

• VMware Solution Exchange (VSX) and Partner Locator listings

• VMware Ready logo (ISVs)

• Partner University and other resources in Partner Central

• TAP Elite includes all of the above plus:

• 5X the number of licenses in the NFR Bundle

• Unlimited product technical support

• 5 instances of SDK Support

• Services Software Solutions Bundle

• Annual Fees

• TAP Access - $750

• TAP Elite - $7,500

• Send email to [email protected]

Page 49: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

49

TAP Resources

TAP

• TAP support: 1-866-524-4966

• Email: [email protected]

• Partner Central: http://www.vmware.com/partners/partners.html

TAP Team

• Kristen Edwards – Sr. Alliance Program Manager

• Sheela Toor – Marketing Communication Manager

• Michael Thompson – Alliance Web Application Manager

• Audra Bowcutt –

• Ted Dunn –

• Dalene Bishop – Partner Enablement Manager, TAP

VMware Solution Exchange

• Marketplace support –

[email protected]

• Partner Marketplace @ VMware

booth pod TAP1

Page 50: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

THANK YOU

Page 51: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios
Page 52: VMworld 2013: Designing Network Virtualization for Data-Centers: Greenfield Design and Migration Scenarios

Designing Network Virtualization for Data-Centers:

Greenfield Design and Migration Scenarios

Ben Basler, VMware

Roberto Mari, VMware

TEX5350

#TEX5350