TRANSCRIPT
Spark the future.
May 4–8, 2015, Chicago, IL
Deploying Hyper-V Network Virtualization
Greg Cusanza, Scott Napolitan
BRK3492
Session Objectives:
• Define the requirements for Hyper-V Network Virtualization (HNV)
• Provide a conceptual understanding of HNV
• Walk through a deployment end to end
Session Objectives and Takeaways
Key takeaway: SCVMM is a critical component of the Hyper-V infrastructure for network virtualization.
Physical Infrastructure
Datacenter components:
• Compute/storage and TOR switches
• Spine switches/routers
• Fixed-function physical appliances
• Edge routers
Physical infrastructure requirements (L2 and L3)
• Must have a static IP address pool and VLANs for:
• Provider Address (PA) network: the network on which NVGRE-encapsulated packets are sent. All subnets in the PA network must be routable to each other.
• Front-end network(s): the network(s) onto which packets are routed as they enter/exit the virtual network.
• Routing must be configured to forward to virtual subnets if forwarding gateways are used.
Logical networks in SCVMM
Used to model the physical network so SCVMM can allocate IP addresses. A minimum of three logical networks must be defined:
• Management: Can be DHCP or static. VMM, the Hyper-V host OS, and the HNV gateways must be able to communicate with each other over this network.
• HNV PA: Must have a static IP pool. Must be "one connected network": it can contain multiple sites/VLANs, but all must be routable to each other. Must check the box in the UI that says "Allow Hyper-V Network Virtualization on this logical network".
• Front-end: Must have a static IP pool. May have multiple front-end networks depending on the security zones/VLANs you will route onto.
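The three logical networks above are created in the VMM console or with the VMM PowerShell module. A minimal sketch for the HNV PA network, assuming VMM 2012 R2 cmdlets; names, subnet, and VLAN are placeholders taken from the demo values later in this deck:

```powershell
# Sketch, not from the session: create the HNV PA logical network in VMM.
# Verify cmdlet parameters against your VMM version.

# "One connected network" with network virtualization (NVGRE) enabled --
# this corresponds to the "Allow Hyper-V Network Virtualization" checkbox.
$lnPA = New-SCLogicalNetwork -Name "HNV PA" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true

# One site/VLAN; the PA network may contain several, all routable to each other.
$subnetVlan = New-SCSubnetVLan -Subnet "172.21.0.0/16" -VLanID 300
$lnd = New-SCLogicalNetworkDefinition -Name "HNV PA - Site1" -LogicalNetwork $lnPA -SubnetVLan $subnetVlan -VMHostGroup (Get-SCVMHostGroup)

# The PA network must have a static IP address pool.
New-SCStaticIPAddressPool -Name "PA Pool" -LogicalNetworkDefinition $lnd -Subnet "172.21.0.0/16" -IPAddressRangeStart "172.21.0.10" -IPAddressRangeEnd "172.21.255.250"
```

The Management and Front-end logical networks are created the same way, minus the network virtualization switches.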
Physical switch requirements for NIC teaming
• Physical switches must expose the same L2 networks to each physical adapter in the team (e.g. VLANs 100, 101, 2000 presented identically on each switch port in the team)
• Technologies like MC-LAG are required when LACP teaming spans different physical switches
Hyper-V host physical requirements
• NVGRE task offload and VMQ strongly advised for 10 Gb NICs and up
Network Performance Enhancements
[Chart: throughput in Gbps by scenario and configuration]
• East <-> West: 4 (without NVGRE task offload), 12 (Sandy Bridge, NVGRE), 18 (Ivy Bridge, NVGRE)
• N <-> S Forwarding: 2 / 5.8 / 10
• N <-> S NAT: 2 / 5.3 / 8
• N <-> S S2S: 0.8 / 1.3 / 1.8
Maximum performance configuration:
• LACP teaming
• RSS enabled on host and guest
• VMQ enabled
• NVGRE task offload
• Processor power plan: Maximum Performance
VM settings for optimum performance
• Enable vRSS in the guest: Get-NetAdapterRss | Set-NetAdapterRss -Enabled $true
• Enable VMQ on the port profile in SCVMM
[Chart: receive vs. send throughput, annotated with the point where RSS kicks in]
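The tuning checklist above can be applied on a Windows Server 2012 R2 host with the in-box NetAdapter cmdlets; the adapter names here are placeholders:

```powershell
# Sketch, assumed adapter names: apply the performance settings listed above.
Enable-NetAdapterVmq -Name "PA-NIC1","PA-NIC2"                            # VMQ
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "PA-NIC1","PA-NIC2"  # NVGRE task offload
Get-NetAdapterRss | Set-NetAdapterRss -Enabled $true                      # RSS (run in the guest too for vRSS)

# Processor power plan: High performance (well-known plan GUID)
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c
```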
• Hyper-V Network Virtualization: use a VM Network in SCVMM
• Hyper-V Virtual Switch: use a Logical Switch in SCVMM for team and vSwitch configuration
HNV Core Services
Physical infrastructure and core services in the datacenter: Windows Azure Pack (WAP) and Virtual Machine Manager (VMM).
Connecting VMs and host functions to NICs
On a Hyper-V host, the VMs (VM1, VM2, …) and the host functions (Clustering, Live Migration, Replica, Storage, Management) must all reach the network: VM and host traffic goes out through NICs (including RDMA-capable NICs), while SAN storage attaches through HBAs.
Hyper-V Host Detail: Fully Converged
VMs and host vNICs share one virtual switch (VS) on an LBFO team of physical NICs.
Recommended teaming modes: switch independent or LACP.
Load balancing mode: Hyper-V Port (Hyper-V 2012) or Dynamic (Hyper-V 2012 R2).
Host vNICs on the converged switch: Management, Cluster, Live Migration, SMB1–SMB4, Replica.
• Different functions require different qualities of service (QoS)
• QoS is defined per adapter and assigned to a function
• Multiple vNICs are required for SMB Multichannel
• Teaming must be configured to handle all traffic types
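In production the Logical Switch in VMM builds this configuration, but the converged design above can be sketched by hand with the in-box cmdlets. NIC, switch, and vNIC names are placeholders and the bandwidth weights are illustrative:

```powershell
# LACP team with Dynamic load balancing (the Hyper-V 2012 R2 recommendation above)
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

# Converged virtual switch with weight-based QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "HostTeam" -AllowManagementOS $false -MinimumBandwidthMode Weight

# One host vNIC per function, each with its own QoS weight
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "LiveMigration"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
```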
Hyper-V Host Detail: SAN
Same converged design, but storage attaches through HBAs to a SAN instead of over vNICs. Host vNICs: Management, Cluster, Live Migration, Replica.
• Different functions require different qualities of service (QoS)
• QoS is defined per adapter and assigned to a function
• Teaming must be configured to handle all traffic types
Hyper-V Host Detail: RDMA
• All host traffic (Clustering, Live Migration, Replica, Storage, Management) goes over dedicated RDMA NICs
• Live Migration and Storage benefit from RDMA
• All VM traffic goes over the "regular" NICs behind the LBFO team and virtual switch
• Microsoft Cloud Platform System (CPS) uses this configuration
Configuring host networking in VMM
A Logical Switch combines:
• Virtual switch settings
• Physical adapter settings (uplink port profiles)
• Virtual adapter settings (port profiles: Management, Cluster, SMB, Replica, Live Migration, and high-performance VM traffic)
VMM applies the Logical Switch to each host, creating the LBFO team, the virtual switch, and the host vNICs (Mgmt, Cluster, LM, SMB1–SMB4, Replica).
Edge Services
Physical infrastructure, core services, and edge services in the datacenter, managed by Windows Azure Pack (WAP) and Virtual Machine Manager (VMM).
• Multi-tenant gateway
• Forwarding gateway
Both are deployed by VMM using a service template. The service template requires:
• A scale-out file server
• The logical networks created previously
• A dedicated host node (two for high availability)
A highly available configuration requires a 2-node Hyper-V cluster hosting VMs in a 2-node guest cluster.
Gateway, highly available configuration:
• Hyper-V Node 1 (10.0.0.6) hosts VM Node 1 (active): 157.16.1.4 (VIP), 157.16.1.5
• Hyper-V Node 2 hosts VM Node 2 (standby): 157.16.1.6
• Management network: 157.16.55.0/24
• Back end: PA network; front end: external network
• The 2-node Hyper-V cluster hosts the 2-node VM guest cluster
After failover:
• Hyper-V Node 1 hosts VM Node 1 (standby): 157.16.1.5
• Hyper-V Node 2 (10.0.0.6) hosts VM Node 2 (active): 157.16.1.4 (VIP), 157.16.1.6
• Management network: 157.16.55.0/24; back end: PA network; front end: external network
Multi-tenancy in the gateway and firewall requirements
The active gateway's back-end vNIC is a VSID trunk, with one network compartment per tenant VM network:
• Compartment 1: VSID 5001, 10.254.254.2
• Compartment 2: VSID 5002, 10.254.254.2
• Compartment N: VSID 5003, 10.254.254.2
The default compartment (10.0.0.2, 10.0.0.3) carries the S2S and NAT functions out the front-end vNIC.
The firewall/NAT at the datacenter edge must pass, for each gateway address (2.2.2.2 in this example):
• 2.2.2.2 UDP 500
• 2.2.2.2 UDP 4500
• 2.2.2.2 ESP *
NAT at the edge maps public addresses to front-end addresses: 2.2.2.2 -> 172.16.1.2, 2.2.2.3 -> 172.16.1.3.
Deployed Components: POC minimum
• Compute/management cluster: Hyper-V hosts (teamed NICs and a virtual switch) running AD/DNS, DHCP, WAP, VMM, SQL, and tenant VMs
• Edge cluster: a Hyper-V host running the NVGRE gateway
• Storage cluster: storage nodes
• All host NICs carry a VLAN trunk
VLANs:
• Front-end: 131.107.156.0/24, VLAN 100
• Management: 172.16.0.0/16, VLAN 200
• PA: 172.21.0.0/16, VLAN 300
Deployed with reliability
• Redundant edge cluster: two Hyper-V hosts each running an NVGRE gateway
• Compute/management cluster(s): Hyper-V hosts running redundant core services (AD/DNS, DHCP, WAP1/WAP2, VMM1/VMM2, SQL1/SQL2), HA VMs and application-HA guest clusters behind a load balancer, plus tenant VMs
• SOFS cluster for storage
• All host NICs carry VLAN trunks
VLANs:
• Front-end: 131.107.156.0/24, VLAN 100
• Management: 172.16.0.0/16, VLAN 200
• PA: 172.21.0.0/16, VLAN 300
Perimeter deployment with untrusted domains
• Redundant edge: gateway hosts GW-H01 and GW-H02 (cluster GW-HV-CL01) running NVGRE gateways, with a dedicated SOFS cluster and dedicated AD/DNS infrastructure for the untrusted domain
• Behind the firewall/NAT: Hyper-V hosts HV-H01 through HV-H03 (cluster HV-CL01) running AD/DNS, DHCP, WAP1/WAP2, VMM1/VMM2, SQL1/SQL2, and tenant VMs, with their own SOFS cluster
• All host NICs carry VLAN trunks
Networks:
• Internet: 131.107.156.0/24, VLAN 100
• Datacenter: 172.16.0.0/16, VLAN 200
• DMZ: 10.0.0.0/24, VLAN 300
Virtual Network object model: VMNetwork, VMSubnet, NetworkGateway/VMNetworkGateway, VPNConnection, NATConnection
VM Network Routing: Single Subnet
Packets travel in the Provider Address space to the Windows Server multi-tenant gateway (S2S, NAT, GRE).
• Front-end interface: compartment: default; IP: 23.221.10.57; default GW: 23.221.10.1
• Firewall/router toward the Internet (where Client1 sits): 23.221.10.1
• Front-end IP 23.221.10.5
• One interface per compartment/VMNetwork within the back-end adapter
VM Network Routing: Single Subnet + NAT
• "Green" VMNetwork interface: compartment 4; IP 10.254.254.2; route 192.168.0.0/24 via 10.254.254.1
• Front-end interface: compartment default; IP 23.221.10.57; default GW 23.221.10.1 (the firewall/router toward the Internet and Client1)
• "Green" VM Network: VM subnet 192.168.0.0/24; CA route 0.0.0.0/0 via 10.254.254.2; 192.168.0.0/24 is on-link (next hop 0.0.0.0)
• Inbound packets arrive with destination IP 23.221.10.5, and after the gateway NATs them into the VM network they travel in the Provider Address space
VM Network Routing: Multiple Subnet + NAT
• "Green" interface: compartment 4; IP 10.254.254.2; routes 192.168.0.0/24 and 192.168.1.0/24 via 10.254.254.1
• Front-end interface: compartment default; IP 23.221.10.57; default GW 23.221.10.1 (the firewall/router toward the Internet and Client1)
• "Green" VM Network: VM subnets 192.168.0.0/24 and 192.168.1.0/24; CA route 0.0.0.0/0 via 10.254.254.2; both subnets on-link (next hop 0.0.0.0)
• Inbound packets arrive with destination IP 23.221.10.5, and after the gateway NATs them into the VM network they travel in the Provider Address space
VM Network Routing: Multiple Subnet + Forwarding Gateway
• "Green" interface on the Windows Server forwarding gateway: compartment default; IP 10.254.254.2; routes 192.168.0.0/24 and 192.168.1.0/24 via 10.254.254.1, and 0.0.0.0/0 via 192.168.3.1
• Physical-side interface: compartment default; IP 192.168.3.2; default GW 192.168.3.1
• The firewall/router (192.168.3.1) routes 192.168.0.0/24 and 192.168.1.0/24 back via 192.168.3.2, and many more
• "Green" VM Network: VM subnets 192.168.0.0/24 and 192.168.1.0/24 with gateways 192.168.0.1 and 192.168.1.1; CA route 0.0.0.0/0 via 10.254.254.2
• Internet-bound packets (e.g. destination IP 8.8.8.8) travel in the Provider Address space to the gateway, which forwards them unencapsulated
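On the hosts, the CA routes drawn in these diagrams become HNV policy records. VMM programs them automatically; a hand-rolled sketch with the in-box NetWNV cmdlets, using hypothetical RDID/VSID values, looks like:

```powershell
# Sketch: customer routes for the "Green" VM network. The routing domain ID
# and virtual subnet IDs are hypothetical; VMM normally maintains these records.
$rdid = "{11111111-2222-3333-4444-555555555555}"

# On-link routes for each VM subnet, plus the default route to the gateway CA
New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5001 -DestinationPrefix "192.168.0.0/24" -NextHop "0.0.0.0"
New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5002 -DestinationPrefix "192.168.1.0/24" -NextHop "0.0.0.0"
New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5001 -DestinationPrefix "0.0.0.0/0" -NextHop "10.254.254.2"
```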
Deployment Walkthrough
Scott Napolitan
Demo Config
• Edge cluster GW-HV-CL01: hosts GW-H01 and GW-H02 running NVGRE gateways, backed by a SOFS cluster
• Management cluster MANAGEMENT-CL01: hosts HV-H01 and HV-H02 running AD/DNS, DHCP, VMM1/VMM2, SQL1/SQL2, HA VMs, and an application-HA guest cluster behind a load balancer
• Compute cluster HV-CL01: hosts HV-H01 and HV-H02 running tenant VMs
• All host NICs carry VLAN trunks (VLAN management)
Networks:
• Internet: 131.107.156.0/24, VLAN 100
• Datacenter: 172.16.0.0/16, VLAN 200
Deploying HNV Networks
1. Create logical networks
2. Create IP pools
3. Create uplink port profiles
4. Create logical switches
5. Deploy logical switches to hosts
6. Deploy the Scale-Out File Server (SOFS)
7. Configure the service template and deployment
8. Deploy the gateway service
9. Onboard the gateway service
10. Create a tenant VM network
11. Deploy VMs
12. Validate the configuration
HNV Networking Deployment Gotchas
Logical switch deployment:
• Are your host network adapters teamed? Make sure your logical switch settings match your host adapter configuration.
• Use consistent naming for your host adapters! If you can, leverage the Consistent Device Naming (CDN) feature available in Windows Server 2012 R2.
• If you are using a teamed logical switch, make sure you select the adapter that is connected to the same network as VMM (the management adapter) first, then add the other host adapters.
• If you are adding your management adapter to a teamed logical switch, your management traffic MUST be using the default VLAN. Otherwise you will lose connectivity with the host and the deployment will fail.
HNV gateway deployment:
• Use the service template!
• Your SOFS servers must NOT also be running Hyper-V.
• Your gateway VMs should ONLY have a default gateway configured on the front-end network.
Additional Resources
• Windows Server 2012 R2 HA Gateway Template
• Adopting Network Virtualization
• HNV Gateway Diagnostics Script
Recommended follow-up sessions:
• BRK2499: Deep Dive into Microsoft Cloud Platform System Networking. Speaker: Greg Cusanza. When: Thursday 1:30–2:45. Where: E352
• BRK2463: Hyper-V Network Virtualization: 100+ Customer Service Provider Deployments. Speaker: Ricardo Machado. When: Friday 12:30–1:45. Where: S104
Diagnostics download: HNVDiagnostics.zip
Learn more with free IT Pro resources
Expand your Modern Infrastructure knowledge:
• Free technical training, on-demand: http://aka.ms/moderninfrastructure
• Free ebooks:
  Deploying Hyper-V with Software-Defined Storage & Networking: http://aka.ms/deployinghyperv
  Microsoft System Center: Integrated Cloud Platform: http://aka.ms/cloud-platform-ebook
• Join the IT Pro community on Twitter: @MS_ITPro
• Get hands-on with free virtual labs:
  Microsoft Virtualization with Windows Server and System Center: http://aka.ms/virtualization-lab
  Windows Azure Pack: Install and Configure: http://aka.ms/wap-lab
Visit Myignite at http://myignite.microsoft.com or download and use the Ignite Mobile App with the QR code above.
Please evaluate this session. Your feedback is important to us!
© 2015 Microsoft Corporation. All rights reserved.