Hyper-V R2 Deep Dive
Post on 05-Dec-2014
1. Hyper-V R2 High-Availability Deep Dive! Greg Shields, MVP, vExpert; Head Geek, Concentrated Technology; www.ConcentratedTech.com
2. This slide deck was used in one of our many conference presentations. We hope you enjoy it, and invite you to use it within your own organization however you like. For more information on our company, including information on private classes and upcoming conference appearances, please visit our Web site, www.ConcentratedTech.com. For links to newly-posted decks, follow us on Twitter: @concentrateddon or @concentratdgreg. This work is copyright Concentrated Technology, LLC.
3. Agenda
- Part I
- Understanding Live Migration's Role in Hyper-V HA
- Part II
- The Fundamentals of Windows Failover Clustering
- Part III
- Building a Two-Node Hyper-V Cluster with iSCSI Storage
- Part IV
- Walking through the Management of a Hyper-V Cluster
- Part V
- Adding Disaster Recovery with Multi-Site Clustering
4. Part I: Understanding Live Migration's Role in Hyper-V HA
5. Do You Really Need HA?
- High-availability adds dramatically greater uptime for virtual machines.
- Protection against host failures
- Protection against resource overuse
- Protection against scheduled/unscheduled downtime
- High-availability also adds much greater cost
- Shared storage between hosts
- Higher (and more expensive) software editions
- Not every environment needs HA!
6. What Really Is Live Migration? Part 1: Protection from Host Failures
7. What Really Is Live Migration? Part 2: Load Balancing of VM/Host Resources
8. Comparing Quick w/ Live Migration
- Simply put: migration speed is the difference.
- In Hyper-V's original release, a virtual machine could be relocated with a minimum of downtime.
- This downtime was directly related to:
- the amount of memory assigned to the virtual machine
- the connection speed between virtual hosts and shared storage.
- Virtual machines with more assigned virtual memory, or on slower networks, took longer to complete a migration from one host to another.
- Those with less memory completed the migration more quickly.
- With Quick Migration, a VM with 2 GB of vRAM could take 32 seconds or longer to migrate! Downtime ensues.
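That 32-second figure is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a ~1 Gbps (~125 MB/s) storage link and models Quick Migration as one full write of the VM's memory to shared storage plus one full read by the target host; the link speed and the two-pass model are assumptions for illustration, not figures from the deck.

```python
def quick_migration_downtime(vram_mb: float, link_mb_per_s: float = 125.0) -> float:
    """Rough Quick Migration downtime in seconds: one full write of the
    VM's memory to shared storage, plus one full read by the target host."""
    one_pass = vram_mb / link_mb_per_s
    return 2 * one_pass

# A 2 GB (2048 MB) VM over a ~1 Gbps link:
print(f"{quick_migration_downtime(2048):.0f} s")  # ~33 s
```

This lines up with the slide's "32 seconds or longer" claim, and shows why downtime scales linearly with assigned memory.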
9. Comparing Quick w/ Live Migration
- Down-and-dirty details
- During a Quick Migration, the virtual machine is immediately put into a Saved state.
- This state is not a power down, nor is it the same as the Paused state.
- In the Saved state (unlike when paused), the virtual machine releases its memory reservation on the host machine and stores the contents of its memory pages to disk.
- Once this has completed, the target host can take ownership of the virtual machine and bring it back into operation.
10. Comparing Quick w/ Live Migration
- Down-and-dirty details
- This saving of virtual machine state consumes most of the time involved with a Quick Migration.
- Reducing this delay required a mechanism to pre-copy the virtual machine's memory from source to target host.
- While the pre-copy runs, changes to memory pages are logged as they occur.
- These changes tend to be relatively small in quantity, making the delta copy significantly smaller and faster than the original copy.
- Once the initial copy has completed, Live Migration then
- pauses the virtual machine
- copies the memory deltas
- transfers ownership to the target host.
- Much faster. Effectively zero downtime.
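The copy-then-recopy-deltas loop described above can be sketched as a toy simulation. This is an illustration of the iterative pre-copy idea, not Hyper-V's actual algorithm; the page count, dirty fraction, and stop threshold are invented for the example.

```python
def live_migrate(total_pages: int = 10_000,
                 dirty_fraction: float = 0.1,
                 threshold: int = 50) -> tuple[int, int]:
    """Simulate iterative pre-copy: send all pages, then keep re-sending
    only the pages dirtied during the previous round, until the dirty set
    is small enough for one final pause-and-copy step."""
    copied = 0
    dirty = total_pages          # round 1 copies everything
    rounds = 0
    while dirty > threshold:
        copied += dirty          # send this round's pages
        rounds += 1
        # While copying, the still-running VM dirties a fraction of memory.
        dirty = int(dirty * dirty_fraction)
    # Final step: pause the VM, copy the last small delta, transfer ownership.
    copied += dirty
    return rounds, copied

rounds, pages_sent = live_migrate()
print(f"converged after {rounds} pre-copy rounds, sent {pages_sent} pages")
```

Because each delta is roughly an order of magnitude smaller than the last, total pages sent stay close to the VM's size while the final pause covers only a tiny residue, which is why the blackout window effectively disappears.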
11. Part II: The Fundamentals of Windows Failover Clustering
12. Why Clustering Fundamentals?
- Isn't this, after all, a workshop on Hyper-V?
- It is, but the only way to do highly-available Hyper-V is atop Windows Failover Clustering.
- Many people have given clustering a pass due to early difficulties with its technologies.
- Microsoft did us all a disservice by making every previous version of Failover Clustering ridiculously painful to implement.
- Most IT pros have no experience with clustering.
- but clustering doesn't have to be hard. It just feels like it does!
- Doing clustering badly means doing HA Hyper-V badly!
13. Clustering's Sordid History
- Windows NT 4.0
- Microsoft Cluster Service ("Wolfpack")
- High-availability service that reduced availability
- "As the corporate expert in Windows clustering, I recommend you don't use Windows clustering."
- Windows 2000
- Greater availability and scalability. Still painful.
- Windows 2003
- Added iSCSI storage to traditional Fibre Channel
- SCSI Resets still used as method of last resort (painful)
- Windows 2008
- Eliminated use of SCSI Resets
- Eliminated full-solution HCL requirement
- Added Cluster Validation Wizard and pre-cluster tests
- First version truly usable by IT generalists
14. What's New & Changed in 2008
- x64 EE gets up to 16 nodes.
- Backups get VSS support.
- Disks can be brought online without taking dependencies offline. This allows disk extension without downtime.
- GPT disks are supported.
- Cluster self-healing. No longer reliant on disk signatures. Multiple paths for identifying lost or failed disks.
- IPv6 & DHCP support.
- Network Name resource now uses DNS instead of WINS.
- Network Name resource more resilient. Loss of an IP address need not bring the Network Name resource offline.
- Geo-clustering! a.k.a. cross-subnet clustering. Cluster communications use TCP unicast and can span subnets.
15. So, What IS a Cluster?
16. So, What IS a Cluster? Quorum Drive & Storage for Hyper-V VMs
17. Cluster Quorum Models
- Ever been to a Kiwanis meeting?
- A cluster exists because it has quorum between its members. That quorum is achieved through a voting process.
- Different Kiwanis clubs have different rules for quorum.
- Different clusters have different rules for quorum.
- If a cluster loses quorum, the entire cluster shuts down and ceases to exist until quorum is regained.
- This is very different from a resource failover, which is what clusters are implemented for in the first place.
- Multiple quorum models exist, for different reasons.
18. Node & Disk Majority
- Node & Disk Majority eliminates Win2003's quorum disk as a point of failure. Works on a voting system.
- A two-node cluster gets three votes.
- One for each node and one for the quorum.
- Two votes are needed for quorum.
- Because of this model, the loss of the quorum disk only results in the loss of one vote.
- Used when an even number of nodes is in the cluster.
- Most-deployed model in production.
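The voting rules above reduce to simple arithmetic. The sketch below illustrates Node & Disk Majority quorum math only; the function name and its simplified majority rule are this example's own, not Microsoft's implementation.

```python
def has_quorum(nodes_up: int, total_nodes: int, disk_witness_up: bool) -> bool:
    """Node & Disk Majority: each node gets one vote, the witness disk
    gets one more; the cluster keeps quorum while a strict majority of
    all votes survives."""
    total_votes = total_nodes + 1                   # nodes + quorum disk
    votes_up = nodes_up + (1 if disk_witness_up else 0)
    return votes_up > total_votes // 2              # strict majority required

# Two-node cluster: three votes total, two needed.
print(has_quorum(1, 2, True))    # one node up + witness disk -> True
print(has_quorum(1, 2, False))   # one node up, witness lost  -> False
print(has_quorum(2, 2, False))   # both nodes up, witness lost -> True
```

The last case is the slide's point: losing the quorum disk costs only one vote, so a healthy two-node cluster survives it.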
19. Node Majority
- Only nodes get votes; storage, whether shared or replicated, does not.
- Requires 3+ votes, so a minimum of three nodes is needed.
- Used when the number of cluster nodes is odd.
- Can use replicated storage instead of shared storage.
- Handy for stretch clusters.
20. File Share Witness Model
- Clustering without the nasty (expensive) shared storage!
- (Sort of. OK, not really.)
- One file server can serve as witness for multiple clusters.
- Can be used for non-production Hyper-V clusters (eval/demo only).
- Most flexible model for stretch clusters. Eliminates issues of complete site outage.
21. Witness Disk Model
- Nodes get no votes. Only the