VMMs / Hypervisors - Intel Corporation, 21 July 2008


TRANSCRIPT

  • Slide 1
  • VMMs / Hypervisors Intel Corporation 21 July 2008
  • Slide 2
  • INTEL CONFIDENTIAL Agenda: Xen internals (high-level architecture, paravirtualization, HVM); Others (KVM, VMware, OpenVZ)
  • Slide 3
  • INTEL CONFIDENTIAL Xen Overview
  • Slide 4
  • INTEL CONFIDENTIAL Xen Project bio: The Xen project was created in 2003 at the University of Cambridge Computer Laboratory, in what is known as the Xen Hypervisor project, led by Ian Pratt with team members Keir Fraser, Steven Hand, and Christian Limpach. This team, along with Silicon Valley technology entrepreneurs Nick Gault and Simon Crosby, founded XenSource, which was acquired by Citrix Systems in October 2007. The Xen hypervisor is an open source technology, developed collaboratively by the Xen community and engineers from AMD, Cisco, Dell, HP, IBM, Intel, Mellanox, Network Appliance, Novell, Red Hat, SGI, Sun, Unisys, Veritas, Voltaire, and of course, Citrix. Xen is licensed under the GNU General Public License. Xen supports Linux 2.4 and 2.6, Windows, and NetBSD 2.0.
  • Slide 5
  • INTEL CONFIDENTIAL Xen Components: A Xen virtual environment consists of several modules that provide the virtualization environment: the Xen Hypervisor (VMM); Domain 0; Domain Management and Control; and Domain U guests, each of which can be either a Paravirtualized Guest (the kernel is aware of virtualization) or a Hardware Virtual Machine (HVM) Guest (the kernel runs natively). (Diagram: the hypervisor sits beneath Domain 0, which hosts Domain Management and Control, and beneath multiple Domain U paravirtual and HVM guests.)
  • Slide 6
  • INTEL CONFIDENTIAL Xen Hypervisor - VMM: The hypervisor is Xen itself. It sits between the hardware and the operating systems of the various domains. The hypervisor is responsible for checking page tables, allocating resources for new domains, scheduling domains, and booting the machine far enough that it can start dom0. It presents the domains with a virtual machine that looks similar, but not identical, to the native architecture. Just as applications interact with an OS by issuing syscalls, domains interact with the hypervisor by issuing hypercalls. The hypervisor responds by sending the domain an event, which fulfills the same function as an IRQ on real hardware. A hypercall is to a hypervisor what a syscall is to a kernel.
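The hypercall/syscall analogy can be made concrete with a short sketch. The following is not from the slides: it shows, in guest-kernel style C (it will not link as a standalone program), how a paravirtualized x86-64 guest jumps through the hypercall page that Xen maps into it. The hypercall number and register convention come from Xen's public headers; the helper names are illustrative.

    /* Hedged sketch: issuing a Xen hypercall from a PV x86-64 guest.
     * Entry i of the hypercall page is the trampoline for hypercall i,
     * 32 bytes apart; arguments go in rdi, rsi, rdx, r10, r8. */
    #define __HYPERVISOR_xen_version 17   /* number from Xen's public xen.h */
    #define XENVER_version            0   /* sub-op: return major/minor     */

    extern char hypercall_page[];         /* installed by the guest at boot */

    static long hypercall2(unsigned int nr, unsigned long a1, unsigned long a2)
    {
        long ret;
        asm volatile("call *%[entry]"
                     : "=a" (ret)
                     : [entry] "r" (hypercall_page + nr * 32),
                       "D" (a1), "S" (a2)
                     : "memory", "rcx", "r11");
        return ret;
    }

    /* Example: ask Xen for its version, roughly the uname(2) of hypercalls. */
    static long xen_version(void)
    {
        return hypercall2(__HYPERVISOR_xen_version, XENVER_version, 0);
    }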
  • Slide 7
  • INTEL CONFIDENTIAL Restricting operations with Privilege Rings: The hypervisor executes privileged instructions, so it must run at the right privilege level. The x86 architecture provides 4 privilege levels (rings); most OSs were created before this scheme, so only 2 levels are used. Xen provides 2 modes: on plain x86 (paravirtual), applications run at ring 3, the guest kernel is moved to ring 1, and Xen runs at ring 0; on x86 with VT-x (HVM), applications run at ring 3, the guest kernel stays at non-root ring 0, and the hypervisor is moved to root mode (effectively ring -1). (Diagram: ring layout for native x86, paravirtual x86 and HVM x86, showing applications, the guest kernel for dom0 and domU, and the hypervisor.)
  • Slide 8
  • INTEL CONFIDENTIAL Domain 0: Domain 0 is a Xen-required virtual machine running a modified Linux kernel, with special rights to access physical I/O devices and to interact with the other virtual machines (Domain U). It provides the command line interface for the Xen daemons. Due to its importance, only the minimum functionality should be provided and it should be properly secured; some Domain 0 responsibilities can be delegated to a Domain U (isolated driver domain). Two backend drivers are included in Domain 0 to serve requests from Domain U PV or HVM guests: the network backend driver, which communicates directly with the local networking hardware to process all virtual machine requests, and the block backend driver, which communicates with the local storage disk to read and write data based upon Domain U requests. For HVM guests, Qemu-DM handles networking and disk access requests.
  • Slide 9
  • INTEL CONFIDENTIAL Domain Management and Control - Daemons: Domain Management and Control is composed of Linux daemons and tools: Xm, the command line tool, which passes user input to Xend through XML-RPC; Xend, a Python application that acts as the system manager for the Xen environment; Libxenctrl, a C library that allows Xend to talk to the Xen hypervisor via Domain 0 (the privcmd driver delivers the request to the hypervisor); Xenstored, which maintains a registry of information including memory and event channel links between Domain 0 and all other domains; and Qemu-dm, which supports HVM guests for networking and disk access requests.
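As an illustration of the path this slide describes (a management tool going through libxenctrl and the privcmd driver to the hypervisor), here is a hedged sketch against the Xen 3.x libxenctrl API; the exact function signatures and structure fields should be checked against the tools/libxc headers of the Xen version in use.

    /* Hedged sketch: enumerating domains through libxenctrl, the same C
     * library Xend uses underneath (requests travel via /proc/xen/privcmd). */
    #include <stdio.h>
    #include <xenctrl.h>

    int main(void)
    {
        int xc = xc_interface_open();          /* Xen 3.x: returns an fd/handle */
        if (xc < 0) {
            perror("xc_interface_open");
            return 1;
        }

        xc_dominfo_t info[64];
        int n = xc_domain_getinfo(xc, 0, 64, info);   /* start at Domain 0 */
        for (int i = 0; i < n; i++)
            printf("domid %u  pages %lu  %s\n",
                   (unsigned)info[i].domid,
                   (unsigned long)info[i].nr_pages,
                   info[i].paused ? "paused" : "runnable");

        xc_interface_close(xc);
        return 0;
    }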
  • Slide 10
  • INTEL CONFIDENTIAL Domain U Paravirtualized guests: A Domain U PV Guest is a modified Linux, Solaris, FreeBSD or other UNIX system that is aware of virtualization. It has no right to access hardware resources directly, unless specifically granted, and instead reaches hardware through front-end drivers using the split device driver model. It usually contains XenStore, console, network and block device drivers: the network front-end driver communicates with the network backend driver in Domain 0, the block front-end driver communicates with the block backend driver in Domain 0, and the XenStore driver exposes a registry-like store. There can be multiple Domain U guests in a Xen configuration.
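The split device driver model is essentially a producer/consumer protocol over a page of memory shared between front end and back end, plus an event channel for notifications. The sketch below is conceptual only; it does not use Xen's real ring.h macros or grant-table calls, and the structure names are invented for illustration.

    /* Conceptual sketch of a split-driver shared ring (names are illustrative,
     * not Xen's real ring layout). The front end in Domain U produces requests,
     * the back end in Domain 0 produces responses; in real Xen the page is
     * shared via grant tables and each side kicks the other over an event
     * channel after advancing its producer index. */
    #include <stdint.h>

    #define RING_SIZE 32

    struct blk_request  { uint64_t id; uint64_t sector; uint8_t write; };
    struct blk_response { uint64_t id; int16_t status; };

    struct shared_ring {
        volatile uint32_t   req_prod, req_cons;
        volatile uint32_t   rsp_prod, rsp_cons;
        struct blk_request  req[RING_SIZE];
        struct blk_response rsp[RING_SIZE];
    };

    /* Front end (Domain U): publish a request, then notify Domain 0. */
    static void frontend_submit(struct shared_ring *r, struct blk_request rq)
    {
        r->req[r->req_prod % RING_SIZE] = rq;
        __sync_synchronize();              /* make the entry visible first */
        r->req_prod++;
    }

    /* Back end (Domain 0): drain pending requests and queue responses. */
    static void backend_poll(struct shared_ring *r)
    {
        while (r->req_cons != r->req_prod) {
            struct blk_request  rq = r->req[r->req_cons++ % RING_SIZE];
            struct blk_response rp = { .id = rq.id, .status = 0 };
            r->rsp[r->rsp_prod % RING_SIZE] = rp;
            __sync_synchronize();
            r->rsp_prod++;
        }
    }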
  • Slide 11
  • INTEL CONFIDENTIAL Domain U HVM guests: A Domain U HVM Guest is a native OS with no notion of virtualization (it is unaware that it is sharing CPU time and that other VMs are running). Since an unmodified OS doesn't support the Xen split device driver model, Xen emulates devices by borrowing code from QEMU. HVM guests begin in real mode and get their configuration information from an emulated BIOS: the Xen virtual firmware simulates the BIOS for the unmodified operating system to read during startup. For an HVM guest to use Xen features, it must use CPUID and then access the hypercall page.
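The CPUID step can be sketched as follows: an HVM guest probes the hypervisor CPUID leaves starting at 0x40000000, checks for Xen's "XenVMMXenVMM" signature, and learns from leaf 0x40000002 which MSR to write to have the hypercall page installed. This is a user-space style illustration, not from the slides; the actual MSR write has to happen in the guest kernel, and on some setups the base leaf is shifted.

    /* Sketch: detecting Xen from inside an HVM guest via CPUID. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static void cpuid(uint32_t leaf, uint32_t *a, uint32_t *b, uint32_t *c, uint32_t *d)
    {
        asm volatile("cpuid"
                     : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                     : "a"(leaf), "c"(0));
    }

    int main(void)
    {
        uint32_t a, b, c, d;
        char sig[13];

        cpuid(0x40000000, &a, &b, &c, &d);      /* hypervisor signature leaf */
        memcpy(sig + 0, &b, 4);
        memcpy(sig + 4, &c, 4);
        memcpy(sig + 8, &d, 4);
        sig[12] = '\0';

        if (strcmp(sig, "XenVMMXenVMM") != 0) {
            printf("not running on Xen (signature: %s)\n", sig);
            return 1;
        }

        cpuid(0x40000001, &a, &b, &c, &d);      /* Xen version in eax */
        printf("Xen %u.%u detected\n", a >> 16, a & 0xffff);

        cpuid(0x40000002, &a, &b, &c, &d);      /* hypercall page info */
        printf("hypercall pages: %u, install via MSR 0x%x\n", a, b);
        return 0;
    }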
  • Slide 12
  • INTEL CONFIDENTIAL Pseudo-Physical to Memory Model: In an operating system with protected memory, each application has its own address space. A hypervisor has to do something similar for guest operating systems. The triple indirection model is not strictly required, but it is more convenient in terms of performance and of the modifications needed in the guest kernel. If the guest kernel needs to know anything about the machine pages, it has to use the translation table provided by the shared info page (rare). (Diagram: applications work with virtual addresses, the kernel with pseudo-physical addresses, and the hypervisor with machine addresses.)
  • Slide 13
  • INTEL CONFIDENTIAL Pseudo-Physical to Memory Model: There are variables at various places in the code identified as MFN, PFN, GMFN and GPFN. PFN (Page Frame Number): some kind of page frame number; the exact meaning depends on the context. MFN (Machine Frame Number): the number of a page in the (real) machine's address space. GPFN (Guest Page Frame Number): page frames in the guest's address space; these page addresses are relative to the local page tables. GMFN (Guest Machine Frame Number): refers to either an MFN or a GPFN, depending on the architecture.
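As a rough illustration of how the PFN/MFN terminology is used, here is a guest-kernel style sketch of the two lookup directions in a PV guest. The array names are illustrative, not the exact symbols of the Linux Xen port; in real Xen the machine-to-physical table is mapped read-only into every guest by the hypervisor, while each PV kernel maintains its own physical-to-machine table.

    /* Sketch: the two directions of the pseudo-physical <-> machine mapping. */
    typedef unsigned long pfn_t;   /* guest pseudo-physical frame number */
    typedef unsigned long mfn_t;   /* real machine frame number          */

    extern mfn_t guest_p2m[];        /* maintained by the PV guest kernel   */
    extern pfn_t machine_to_phys[];  /* provided read-only by Xen (shared)  */

    static inline mfn_t pfn_to_mfn(pfn_t pfn)
    {
        /* "Which real frame backs my pseudo-physical frame?"  Needed when
         * the guest builds page-table entries or hands addresses to Xen.  */
        return guest_p2m[pfn];
    }

    static inline pfn_t mfn_to_pfn(mfn_t mfn)
    {
        /* The reverse lookup, e.g. when interpreting a page-table entry
         * that holds machine addresses. */
        return machine_to_phys[mfn];
    }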
  • Slide 14
  • INTEL CONFIDENTIAL Virtual Ethernet interfaces: By default, Xen creates seven pairs of "connected virtual ethernet interfaces" for use by dom0. For each new domU, it creates a new pair of "connected virtual ethernet interfaces", with one end in domU and the other in dom0. Virtualized network interfaces in domains are given Ethernet MAC addresses (by default xend will select a random address). The default Xen configuration uses bridging (xenbr0) within domain 0 to allow all domains to appear on the network as individual hosts.
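For illustration, "xend will select a random address" amounts to picking the last three octets under the XenSource OUI 00:16:3e. A minimal sketch of that choice (the octet limits mirror what xend does when no MAC is given in the domain config, to the best of my knowledge):

    /* Sketch: generating a Xen-style random MAC for a virtual interface. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        srand((unsigned)time(NULL));
        printf("vif MAC: 00:16:3e:%02x:%02x:%02x\n",
               rand() & 0x7f, rand() & 0xff, rand() & 0xff);
        return 0;
    }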
  • Slide 15
  • INTEL CONFIDENTIAL The Virtual Machine lifecycle: (State diagram: OFF, RUNNING, PAUSED and SUSPENDED states, with transitions Turn on/Turn off, Pause/Resume, Start (paused)/Stop, Sleep/Wake and Migrate.) Xen provides 3 mechanisms to boot a VM: booting from scratch (Turn on); restoring the VM from a previously saved state (Wake); cloning a running VM (only in XenServer).
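From Domain 0, the transitions in the state diagram map onto xm subcommands. A sketch follows; the domain name "demo" and the config/checkpoint paths are placeholders, not values from the slides.

    /* Sketch: driving the VM lifecycle with the xm command line tool. */
    #include <stdlib.h>

    int main(void)
    {
        system("xm create /etc/xen/demo.cfg");      /* OFF -> RUNNING (turn on)        */
        system("xm pause demo");                    /* RUNNING -> PAUSED               */
        system("xm unpause demo");                  /* PAUSED -> RUNNING (resume)      */
        system("xm save demo /var/tmp/demo.chk");   /* RUNNING -> SUSPENDED (sleep)    */
        system("xm restore /var/tmp/demo.chk");     /* SUSPENDED -> RUNNING (wake)     */
        system("xm shutdown demo");                 /* RUNNING -> OFF (turn off)       */
        return 0;
    }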
  • Slide 16
  • INTEL CONFIDENTIAL A project: provide VMs for instantaneous/isolated execution. Goal: determine a mechanism for instantaneous execution of applications in sandboxed VMs. Approach: analyze current capabilities in Xen, then implement a prototype that addresses the specified goal: the VM-Pool. Technical specification of the HW and SW used: Intel Core Duo T2400 @ 1.83 GHz, LENOVO 1952D89 motherboard, 2048 MB RAM. Software: Linux Fedora Core 8 (kernel 2.6.3.18), Xen 3.1; for the Windows images, Windows XP SP2.
  • Slide 17
  • INTEL CONFIDENTIAL Analyzing Xen spawning mechanisms: Booting from scratch - HVM WinXP, varying the # of CPUs (1 CPU: 93.5 sec; 2 CPUs: 79 sec); PV Fedora 8, varying the # of CPUs (1 CPU: 19.5 sec; 2 CPUs: 22 sec). Restoring from a saved state - HVM WinXP, 4 GB disk / 1 CPU, by VM RAM size with the image on hard disk vs. RAM disk (256 MB: 16 sec / 13 sec; 512 MB: 21 sec / 15 sec); PV Fedora 8, by VM RAM size with the image on HDD vs. RAM disk (256 MB: 15 sec / 9 sec; 512 MB: 23 sec / 16 sec; 1024 MB: 37 sec / 29 sec). Cloning a running VM - HVM WinXP, 4 GB disk / 1 CPU, by image size (2 GB: 145 sec; 4 GB: 220 sec; 8 GB: 300 sec).
  • Slide 18
  • INTEL CONFIDENTIAL Dynamic Spawning with a VM-Pool: The idea is to have a pool of virtual machines already booted and ready for execution, but kept in a paused state. These virtual machines keep their RAM but they don't use processor time, interrupts or other resources. A simple interface is defined: get retrieves and unpauses a virtual machine from the pool; release gives back a virtual machine to the pool and restarts it.
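A hedged sketch of the get/release interface described above, using the Xen 3.x libxenctrl pause/unpause calls (verify signatures against your tools/libxc headers). The domain IDs in pool[] are placeholders for guests that have already been booted and paused; as the slide notes, a real release() would also restart the guest to return it to a clean state.

    /* Sketch: a minimal VM-Pool built on pre-booted, paused domains. */
    #include <stdio.h>
    #include <xenctrl.h>

    #define POOL_SIZE 4

    static uint32_t pool[POOL_SIZE] = { 3, 4, 5, 6 };  /* placeholder domids */
    static int      in_use[POOL_SIZE];

    /* get: hand out a VM from the pool and unpause it */
    static int pool_get(int xc)
    {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (!in_use[i]) {
                in_use[i] = 1;
                xc_domain_unpause(xc, pool[i]);
                return (int)pool[i];
            }
        }
        return -1;                                     /* pool exhausted */
    }

    /* release: pause the VM again and return it to the pool
     * (omitted here: restarting it to a clean state) */
    static void pool_release(int xc, int domid)
    {
        for (int i = 0; i < POOL_SIZE; i++) {
            if ((int)pool[i] == domid && in_use[i]) {
                xc_domain_pause(xc, pool[i]);
                in_use[i] = 0;
                return;
            }
        }
    }

    int main(void)
    {
        int xc = xc_interface_open();
        int d = pool_get(xc);
        printf("got domain %d from the pool\n", d);
        pool_release(xc, d);
        xc_interface_close(xc);
        return 0;
    }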
  • Slide 19
