TRANSCRIPT
Windows Server 8 Hyper-V Networking
Aidan Finn, MVP (Virtual Machine) @joe_elway http://www.aidanfinn.com
About Aidan Finn
• MVP (Virtual Machine)
• Technical Sales Lead at MicroWarehouse
• Working in IT since 1996
• Experienced with Windows Server/Desktop, System Center, virtualisation, and IT infrastructure.
• Blog: http://www.aidanfinn.com
• Twitter: @joe_elway
Writing
Just Announced
WARNING!
• All content in this presentation is subject to change
• We have not even reached beta release
– Currently Developer Preview Release
• A lot of material to cover
– More in this sub-topic than in all of W2008 R2 Hyper-V
Agenda
• NIC Teaming
• Storage optimisation
• Workload mobility
• Performance & optimisations
• Extensible Hyper-V Switch
• Security
• Fabric convergence
• Host network architectures
Windows Server 8 Hyper-V Plans
• Great Big Hyper-V Survey 2011:
– Conducted by me, Hans Vredevoort, and Damian Flynn in August 2011 (before Win 8 Dev Prev)
– Who’s deploying it:
• 27.21% interested
• 62.01% planning
• 8.09% undecided
• 2.7% not interested
NIC Teaming & Windows 2008 R2
• KB968703: No support from Microsoft
– Use HP/Dell/Broadcom/Intel drivers/software
– Complicates deployment & support
• Great Big Hyper-V Survey of 2011
– 27.94% found NIC teaming to be biggest challenge in Hyper-V deployment
– 27.21% said networking was their biggest issue
• One of the last objections by VMware enthusiasts
NIC Teaming & Windows Server 8
• Built into the OS and supported
– Simplified deployment & support
• Load balancing and failover (LBFO)
• Aggregate bandwidth
• Use different model & vendor NICs!
• Opens up interesting opportunities
• One more VMware wall knocked down
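As an illustration of the built-in teaming above, here is a minimal sketch using the NIC teaming cmdlets that shipped in the finished product; cmdlet names could differ in the Developer Preview, and the adapter and team names are hypothetical.

```powershell
# Minimal sketch: build an LBFO team from two NICs (hypothetical adapter names;
# cmdlets are from the released product and may differ in pre-beta builds).
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2"

# Inspect the team and its members
Get-NetLbfoTeam -Name "VMTeam"
Get-NetLbfoTeamMember -Team "VMTeam"
```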
NIC Teaming Hyper-V Extensible Switch
[Diagram: NIC teaming architecture. Kernel mode: physical NICs (NIC 1-3) bound to an LBFO provider (protocol edge, IM MUX with frame distribution/aggregation, failure detection, and control protocol implementation), exposed as a virtual miniport and as ports on the Hyper-V Extensible Switch. User mode: LBFO configuration DLL and admin GUI, communicating over WMI and IOCTL.]
Scaling File Sharing Traffic
• CPU utilisation is a challenge for high I/O SMB traffic
• Solution: Remote Direct Memory Access (RDMA)
– A secure way to enable a DMA engine to transfer buffers
– Built into Windows Server 8
• Why care about SMB? More to come …
SMB 2.2
• Used by File Server and Cluster Shared Volumes
• Scalable, fast and efficient storage access
• Minimal CPU utilisation for I/O
• High throughput with low latency
• Multi-channel
• NIC Teaming
• Much greater I/O speeds
• Required hardware:
– InfiniBand
– 10 GbE with RDMA
And SMB 2.2 Enables
• Storage of VMs on file shares without performance compromise
• Affordable, scalable & continuously available storage
– Active/Active file share cluster
– VMs stored on UNC paths (sketch below)
• Live Migration between non-clustered hosts
– VMs on file shares
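As a concrete example of the "VMs stored on UNC paths" point, the sketch below creates a VM whose files live on an SMB share. It assumes the Hyper-V cmdlets of the finished product; the UNC path and VM name are hypothetical.

```powershell
# Minimal sketch: place a new VM and its virtual disk on an SMB file share
# (hypothetical UNC path and names; released-product cmdlets assumed).
New-VM -Name "VM01" -MemoryStartupBytes 2GB `
       -Path "\\FileServer1\VMs" `
       -NewVHDPath "\\FileServer1\VMs\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB
```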
Multi-Tenant Cloud Flexibility & Security
• Great Big Hyper-V Survey of 2011
– 28.68% considering hybrid cloud deployment
• A public cloud (hosting) or large private cloud (centralisation) has lots of hosted organisations
– Trust issues
– Compliance & regulations
• Hosting company requires flexibility & mobility of virtual workloads
– Virtualisation is mobile
– But network addresses are not
Network Virtualisation
[Diagram: Woodgrove and Contoso VMs on a shared physical server, and Woodgrove and Contoso virtual networks on a shared physical network]
• Hyper-V machine virtualisation
– Run multiple virtual servers on a physical server
– Each VM has the illusion it is running as a physical server
• Hyper-V network virtualisation
– Run multiple virtual networks on a physical network
– Each virtual network has the illusion it is running as a physical fabric
Network Virtualisation Benefits
• No need to re-address virtual workloads
– For example, keep 192.168.1.0/24 rather than renumbering to 10.100.25.0/24 (see the mapping sketch below)
– Retain communications and LOB app SLA
• Enable easy migration of private cloud to multi-tenant public cloud
• Enable Live Migration mobility of workloads within the data centre
– Move virtual workloads between network footprints
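Purely for illustration, here is how the customer-to-provider address mapping might look with the network virtualisation cmdlets that eventually shipped; the Developer Preview mechanism and cmdlet names may well differ, and every address, subnet ID, and VM name here is invented.

```powershell
# Hypothetical sketch of mapping a tenant's customer address onto a provider
# address (released-product cmdlets assumed; all values invented).

# The Woodgrove VM keeps its 192.168.1.0/24 address; the fabric routes it over
# the data centre's 10.100.25.0/24 provider network.
New-NetVirtualizationLookupRecord -CustomerAddress "192.168.1.10" `
    -ProviderAddress "10.100.25.5" -VirtualSubnetID 5001 `
    -MACAddress "00155D010105" -Rule "TranslationMethodEncap"

# Tag the VM's vNIC with the tenant's virtual subnet
Set-VMNetworkAdapter -VMName "Woodgrove-VM" -VirtualSubnetId 5001
```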
Virtual Machine Queue
• Static (non VMQ) networking can become overloaded during high I/O loads
• Virtual Machine Queue (VMQ)
– Added in Windows Server 2008 R2
– Offloads burden from the parent to the network controller, to accelerate network I/O throughput
• Can overload CPU cores
Dynamic Virtual Machine Queue (DVMQ)
• Adaptive network processing across CPUs to provide optimal power and performance across changing workloads
[Diagram: three panels (no VMQ, static VMQ, Windows Server 8 Dynamic VMQ) comparing how the root partition spreads the physical NIC's traffic processing across CPU 0-3]
Single Root I/O Virtualization (SR-IOV)
[Diagram: network I/O path without SR-IOV (virtual NIC -> Hyper-V Switch in the root partition, which performs routing, VLAN filtering, and data copy -> physical NIC) versus with SR-IOV (the VM's virtual function maps directly onto the SR-IOV physical NIC, bypassing the root partition's switch)]
Hyper-V Live Migration Policy
• No new features that prevent Live Migration
• For example, an SR-IOV enabled VM being live migrated to a host without SR-IOV
– Switches from SR-IOV virtual function to Hyper-V switch on original host
– Live Migration then takes place
– Zero downtime
More Optimisations
• Receive Side Scaling (RSS)
– Share network I/O across many processors
– Incompatible with VMQ on the NIC
• Receive Side Coalescing (RSC)
– Consolidate network-generated interrupts
• IPsec Task Offload (IPsecTO)
– Moves the workload from the host's CPU to a dedicated processor on the network adapter
Virtual Network -> Virtual Switch
• In 2008/R2:
– A VM has a vNIC
– The vNIC connects to a virtual network (aka virtual switch)
• Remember that we have something new called Network Virtualisation to abstract IP addresses
– The virtual network connects to a pNIC in the host
• In Windows Server 8:
– The Extensible Hyper-V Virtual Switch
– Supports unified tracing for network diagnostics
Extensible Hyper-V Virtual Switch
[Diagram: in the root partition, the Hyper-V Switch connects the physical NIC, the host NIC, and the VM NICs; capture extensions, filtering extensions (WFP), and a forwarding extension sit between the extension protocol and extension miniport layers; extensions can be certified]
Cloud & Security
• Great Big Hyper-V Survey 2011:
– 42.65% concerned about private cloud security
• You cannot trust tenants in a multi-tenant cloud
– Tenant vs hosting company
– Tenant vs tenant
• We've been using physical security:
– Firewalls
• Require centralised skills & slow to configure
• Get complicated
– VLANs
• Never intended for security
• Restricted number per physical network
Windows Server 8 & Security
• Software easier & quicker to configure – Automate with provisioning
• Port ACLs – Define allowed communication paths between virtual
machines based on IP range or MAC address.
• PVLAN (Private VLAN) – VLAN-like domains created in Hyper-V
• DHCP Guard – Isolate rogue virtual DHCP servers
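To make the Port ACL bullet concrete, here is a one-line sketch using the ACL cmdlet from the finished Hyper-V module (pre-beta names may differ); the VM name and address range are hypothetical.

```powershell
# Hypothetical sketch: block a tenant VM from reaching a private address range
# via a port ACL on its virtual NIC (released-product cmdlet assumed).
Add-VMNetworkAdapterAcl -VMName "Tenant1-VM" `
    -RemoteIPAddress "10.0.0.0/8" -Direction Both -Action Deny
```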
Cloud & Network Performance
• Can aggregate bandwidth with NIC teaming
• Hosting company must control network bandwidth utilisation:
– "Give him enough rope and he'll hang himself"
– Prioritise important applications
– Limit tenants based on fees paid
– Guarantee SLAs
• Network Quality of Service (QoS)
QoS
• Configured using PowerShell (sketch below)
• Minimum bandwidth policy:
– Enforce bandwidth allocation – SLA
– Redistribute unused bandwidth – efficiency & consolidation
• Maximum bandwidth policy:
– Cross-charge for expensive bandwidth
• Possibly combine with network resource metering
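A minimal sketch of the minimum and maximum bandwidth policies above, assuming the QoS parameters of the Hyper-V module that shipped in the finished product; the switch name, VM names, weights, and cap are hypothetical.

```powershell
# Minimal QoS sketch (released-product cmdlets assumed; all names/values invented).

# Create a virtual switch that allocates minimum bandwidth by relative weight
New-VMSwitch -Name "TenantSwitch" -NetAdapterName "10GbE-1" -MinimumBandwidthMode Weight

# Minimum bandwidth policy: guarantee an SLA share for an important tenant VM
Set-VMNetworkAdapter -VMName "Tenant1-VM" -MinimumBandwidthWeight 40

# Maximum bandwidth policy: cap a lower-fee tenant at roughly 1 Gbps (bits per second)
Set-VMNetworkAdapter -VMName "Tenant2-VM" -MaximumBandwidth 1000000000
```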
A 2008 R2 Clustered Host
• 6 NICs:
– Parent
– VM
– Redirected I/O
– Live Migration
– 2 * iSCSI
• NIC teaming?
• Backup?
• Lot$ of NIC$. Consider costs of 10 GbE
Physical Isolation
• Traditional
• Multiple physical NICs
• ACLs for guests
[Diagram: a host with separate physical NICs for VM traffic (via the Hyper-V Extensible Switch), Live Migration, Cluster/Storage, and Management]
Data Center Bridging (DCB)
[Diagram: DCB traffic classification sitting beneath the Windows network stack (LAN miniport) and the Windows storage stack (iSCSI miniport), managed via PowerShell and WMI]
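To make the traffic classification concrete, here is a sketch using the DCB cmdlets that shipped in the finished product (pre-beta names may differ); the adapter name and bandwidth percentage are hypothetical.

```powershell
# Hypothetical DCB sketch (released-product cmdlets assumed).

# Tag SMB traffic with 802.1p priority 3
New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

# Reserve 50% of the link for that priority class using ETS
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Enable priority flow control for that priority and DCB on the NIC
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "10GbE-1"
```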
Converged Fabric
• A new possibility
• Consolidate all those NICs onto a simpler network
• Take advantage of:
– 10 GbE/InfiniBand networking: bandwidth & VM density
– NIC Teaming: aggregation and fault tolerance, e.g. lots of 1 GbE NICs
– DCB: converge very different protocols
– QoS: guarantee performance SLAs
• Lots of variations (one is sketched below)
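One of those variations, sketched with the teaming and Hyper-V cmdlets of the finished product; every adapter, team, switch, and weight value here is hypothetical, and pre-beta cmdlet names may differ.

```powershell
# Hypothetical converged-fabric sketch (released-product cmdlets assumed).

# Team two 10 GbE NICs for aggregation and failover
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "10GbE-1","10GbE-2"

# Build the extensible switch on the team, with weight-based minimum bandwidth
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add Management OS virtual NICs for host traffic, each with a QoS weight
Add-VMNetworkAdapter -ManagementOS -Name "Manage" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Manage" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
```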
Management and Guest Isolation
• 10 GbE NIC for parent partition
• ACLs for guests
• DCB to converge protocols
• QoS for SLA
[Diagram: VM traffic on the Hyper-V Extensible Switch, with parent partition traffic (Live Migration, Cluster/Storage, Management) converged onto a dedicated 10 GbE NIC using DCB and QoS]
Using Network Offloads for Increased Scale
• Scalability offloads take advantage of all CPU cores
– Receive Side Scaling for the native path
– Virtual Machine Queue for the Hyper-V Switch path
[Diagram: RSS accelerating the parent partition's native traffic (Live Migration, Cluster/Storage, Management) and VMQ accelerating the Hyper-V Extensible Switch path for VM traffic]
Converged Fabrics (1 NIC)
• ACLs for all switch ports
• QoS for Management OS traffic
[Diagram: a single NIC carrying the Hyper-V Extensible Switch, with VM, Live Migration, Cluster/Storage, and Management traffic all attached as switch ports]
Converged Fabrics (2 NICs)
• ACLs for all switch ports
• QoS for Management OS traffic
• NIC Teaming for LBFO
[Diagram: two NICs teamed beneath the Hyper-V Extensible Switch, with VM, Live Migration, Cluster/Storage, and Management traffic attached as switch ports on the team]
Sample Documented Configuration
• No network legacy concerns (green field)
• Hyper-V clustered
• Converged 10 GbE with DCB for QoS
• File Server clustered with scale-out
[Diagram: a clustered, scale-out Windows File Server (SAN-attached via HBA) with teamed 10 GbE NICs (DCB, RSS) and a clustered Hyper-V server with teamed 10 GbE NICs (DCB, RSS) plus teamed 1 GbE NICs, linked through a DCB-capable 10 GbE switch and a 1 GbE switch; the Hyper-V Extensible Switch carries VM, Live Migration, Cluster/Storage, and Management traffic with QoS applied]
For More Information
• The original Build Windows 2011 sessions:
– http://channel9.msdn.com/events/BUILD/BUILD2011
– SAC-439T
– SAC-437T
– SAC-430T
The End
Thanks to Hyper-V.nu
Aidan Finn
• @joe_elway
• http://www.aidanfinn.com