Building an Oracle Grid with Oracle VM on Dell Blade Servers and EqualLogic iSCSI Storage
Kai Yu, Sr. System Engineer Consultant, Dell Global Solutions Engineering
David Mar, Engineering Manager, Dell Global Solutions Engineering
ABOUT THE AUTHORS
Kai Yu is a Senior System Engineer in the Dell Oracle Solutions Lab. He has over 14 years of Oracle DBA and solutions engineering experience, specializing in Oracle RAC, Oracle EBS, and OVM. He is an avid author of Oracle technology articles, a frequent presenter at Oracle OpenWorld and Collaborate, and the Oracle RAC SIG US Event Chair.
David Mar is an Engineering Manager for the Dell | Oracle Solutions group. He directs a global team that creates Oracle RAC reference configurations and builds best practices based upon tested and validated configurations. He has worked for Dell since 2000 and holds an MS in Computer Science and a BS in Computer Engineering from Texas A&M University.
COMPONENTS WITHIN THE GRID
[Diagram: physical and logical views of the Grid and its scalability — Oracle VM Servers (dom1-dom4) on scale-out blades hosting Databases 1-4, backed by storage Volumes 1-3]
Compute Virtualization   Roles / Components
Oracle VM                Partitioning a single server; consolidation
Oracle RAC               Scale-up through scale-out of hardware; database scalability only
[Diagram: virtual Grid vs. physical Grid — a VM Server pool hosting guest OS/DB/app virtual machines, alongside a RAC database (DB 1-3) spanning physical OS nodes]
OVM | VIRTUALIZATION'S ROLE
• Testing Oracle RAC infrastructure with minimal hardware
• Testing Oracle RAC pre-production areas
• App server testing and production
• Good for maximizing server utilization rates
• Consolidation of low-utilization servers
• Partitioning one application from another
OVERVIEW OF EQUALLOGIC
• 4 Gigabit network ports per controller
• 2 GB cache per controller
• Intelligent automated management tools
• Self-managing array
• Linear scalability
• Data protection software
[Diagram: an EqualLogic storage pool containing Volumes 1-3]
STORAGE LAYER ON EQUALLOGIC
• EqualLogic and ASM work well together — see "Performance Implications of Running Oracle ASM with Dell EqualLogic™ PS Series Storage" at http://www.dell.com/oracle
• EqualLogic ASM best practices for Oracle — see "Best Practices for Deployment of Oracle® ASM with Dell™ EqualLogic™ PS iSCSI Storage System" at http://www.dell.com/oracle
• Peer Storage architecture
• Quick setup
• Linear performance improvements
• Three redundant fabrics; 16 half-height blade servers
GRID Reference Configuration POC Project
1 – PHYSICAL GRID
Physical Grid design: the physical Grid provides a consolidated 11g RAC database infrastructure that hosts multiple high-volume databases. Each database on this Grid can be configured to run on one to eight database instances, depending on its transaction volume. This infrastructure also allows each database to be dynamically migrated to different Grid nodes based on the workload of its instances. The physical Grid can be scaled out by adding nodes to meet workload demand; the empty slots in the blade chassis allow additional blade servers to join the physical Grid. The physical Grid uses ASM to provide storage virtualization for the databases on the cluster.
2 – THE VIRTUAL GRID
Virtual Grid design: the virtual Grid is based on virtual machines hosted within the VM server pool. The reference design started with two servers, the minimum needed to provide some level of high availability: two M610 blade servers, each with 24 GB of memory and two sockets of 4-core Nehalem processors. With blades it is very easy to scale this structure; users may add more M610 blade servers to scale out the VM server pool. Guest VMs were then created on top of the VM Servers, each using 2 GB of virtual memory and 2 virtual CPUs. As a best practice, the total number of virtual CPUs on a VM Server should be less than or equal to the total number of CPU cores in that server; with three Oracle VM Servers in the example reference design, that limits us to 24 virtual CPUs. Likewise, the total virtual memory of all the virtual machines on a server should be less than or equal to that server's RAM; each server in the reference design has 24 GB of memory, so the aggregate memory of its guests should not exceed 24 GB.
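As a quick sanity check of these sizing limits, the physical resources of each VM Server can be read from dom0. This is a minimal sketch using the Xen xm tool shipped with Oracle VM Server 2.1.2; the output values shown are illustrative:

# Run in dom0 on each Oracle VM Server:
xm info | grep -E 'nr_cpus|total_memory'
# nr_cpus                : 8      <- cap the sum of guest vCPUs at this value
# total_memory           : 24575  <- keep the sum of guest memory (MB) below this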
3 – MANAGEMENT
• Oracle Grid Control to manage both the physical Grid and the virtual Grid
• Consider HA for production management
• Manage from outside the blade cluster, to avoid a single point of failure
• Dell servers enable health monitoring from EM
Grid Control management infrastructure configuration:
– OEL 4.7 64-bit was installed for the Grid Control server
– Oracle Enterprise Manager Grid Control 10.2.0.3 was installed, with the OMS server, the repository database, and the Grid Control agent
– Upgraded to Grid Control 10.2.0.5 with patch 3731593_10205_x86_64, applied to the OMS server, the repository, and the agent
– Enabled the Virtual Management Pack with patch 8244731 on the OMS, and installed the TightVNC Java Viewer on the OMS server
– Restarted the OMS server
GRID IMPLEMENTATION: EQUALLOGIC STORAGE
EqualLogic shared storage configuration
– Storage volumes for the physical Grid:

Volume       Size   RAID  Used for         OS Mapping
blade_crs    2GB    10    OCR/voting disk  Yes
blade_data1  100GB  10    Data for DB1     ASM diskgroup1
blade_data2  100GB  10    Data for DB2     ASM diskgroup2
blade_data3  150GB  10    Data for DB3     ASM diskgroup3
blade_data5  150GB  10    Data for DB4     ASM diskgroup5

– Storage volumes for the virtual Grid:

Volume       Size      RAID  Used for         OS Mapping
blade_data4  400GB     10    VM repositories  /OVS
blade_data6  500GB     10    VM repositories  /OVS/9A87460A7EDE43EE92201B8B7989DBA6
vmcor1-5     1GB each  10    OCR/voting disk  2x OCRs / 3x voting disks
– iSCSI storage connections:
• Open-iSCSI administration utility to configure host access to the storage volumes: eth2-iface on eth2, eth3-iface on eth3
• Linux Device Mapper to establish multipath devices with storage aliases (a sketch follows)
• Check the multipath devices in /dev/mapper, e.g. /dev/mapper/ocr-crsp1
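A minimal sketch of those two steps, assuming the standard open-iscsi and device-mapper-multipath tools on OEL 5; the EqualLogic group IP (10.16.7.100) and the volume WWID/alias are illustrative placeholders:

# Bind one open-iSCSI interface to each iSCSI NIC:
iscsiadm -m iface -I eth2-iface --op=new
iscsiadm -m iface -I eth2-iface --op=update -n iface.net_ifacename -v eth2
iscsiadm -m iface -I eth3-iface --op=new
iscsiadm -m iface -I eth3-iface --op=update -n iface.net_ifacename -v eth3
# Discover the volumes through both interfaces, then log in:
iscsiadm -m discovery -t st -p 10.16.7.100 -I eth2-iface -I eth3-iface
iscsiadm -m node -l
# In /etc/multipath.conf, give each volume a stable alias, e.g.:
#   multipath { wwid <wwid-of-blade_crs>  alias ocr-crs }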
Network Interface  IO Modules  Connections             IP Address
eth0               A1          Public network          155.16.9.71-78
eth2               B1          iSCSI connection        10.16.7.241-255 (odd)
eth3               B2          iSCSI connection        10.16.7.240-254 (even)
eth4, eth5         C1, C2      Bonded to bond0         192.168.9.71-78
VIP                            Virtual IP for 11g RAC  155.16.9.171-178
GRID IMPLEMENTATION: PHYSICAL GRID
– Use block devices for 11g Clusterware and RAC databases:
• OEL 5 no longer supports raw devices
• Use block devices for the 11g Clusterware and RAC database files
• Use /etc/rc.local to set the proper ownership and permissions (a sketch follows)
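A minimal sketch of the /etc/rc.local entries, assuming the multipath aliases and p1 partition names used above; the exact owners and modes should follow the 11g Clusterware and database documentation:

# Block-device ownership/permissions do not persist across reboots,
# so re-apply them at boot for the Clusterware and ASM devices:
chown oracle:oinstall /dev/mapper/ocr-crsp*
chmod 660 /dev/mapper/ocr-crsp*
chown oracle:dba /dev/mapper/blade_data*p1
chmod 660 /dev/mapper/blade_data*p1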
– ASM instances provide the storage virtualization
– 11g RAC software provides the database service: ORA_HOME=/opt/oracle/product/11.1.0/db_1
– Grid Control agent 10.2.0.5 connects to the OMS server
– Consolidate multiple databases
• Size the DB requirements: CPU, I/O, and memory
• Determine the storage needs and the number of DB instances
• Determine which nodes will run the database
• Provision the storage volumes and create the ASM diskgroup
• Use DBCA to create the database on the ASM diskgroups
• For some ERP applications, convert the installed DB to RAC
– Examples of Pre-created Databases
– Scale out the physical Grid infrastructure:
• Consolidate more databases onto the Grid
• Increase the capacity of the existing databases
• Scale out the storage (see the sketch after this list):
a. Add an additional EqualLogic array to the storage group
b. Expand existing volumes or create new ones
c. Make the new volumes accessible to the servers
d. Create an ASM diskgroup, add a partition to an existing diskgroup, or resize a partition of a diskgroup [1]
• Scale the Grid by adding servers: use the 11g RAC add-node procedure to add a new node to the cluster
• Expand the database to additional RAC nodes: use the Enterprise Manager add-instance procedure to add a new database instance to the database [2]
• Dynamically move a database to a less busy node: use the Enterprise Manager add-instance and remove-instance procedures [2]
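A minimal sketch of step (d) above, run from the ASM instance on one Grid node; the diskgroup names and device paths are hypothetical:

# Create a new diskgroup from a freshly provisioned multipath partition,
# or grow an existing diskgroup with an additional disk:
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP diskgroup6 EXTERNAL REDUNDANCY
  DISK '/dev/mapper/blade_data7p1';
ALTER DISKGROUP diskgroup1 ADD DISK '/dev/mapper/blade_data8p1';
EOF

ASM rebalances the diskgroup online after the ALTER, so the databases stay available while the storage grows.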
GRID IMPLEMENTATION: VIRTUAL GRID
Implementation tasks:
– Virtual server installation
– Virtual server network and storage configuration
– Connect the VM servers to Grid Control
– Manage the VM infrastructure through Grid Control
– Create guest VMs using a VM template
– Manage the resources of the guest VMs
Virtual server installation:
– Prepare the local disk and enable virtualization in the BIOS
– Install Oracle VM Server 2.1.2
– Change the Dom0 memory in /boot/grub/menu.lst by editing the kernel line: kernel /xen-64bit.gz dom0_mem=1024M (see the sketch below)
– Ensure the VM agent is working: # service ovs-agent status
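A sketch of the resulting /boot/grub/menu.lst stanza; the kernel and initrd file names vary by release and are illustrative here:

title Oracle VM Server
    root (hd0,0)
    kernel /xen-64bit.gz dom0_mem=1024M
    module /vmlinuz-2.6.18-8.el5xen ro root=LABEL=/
    module /initrd-2.6.18-8.el5xen.img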
– eth0: public network in dom0, presented to domU through xenbr0
– eth2, eth3: iSCSI connections
– eth4 and eth5 bonded to bond0 for the private interconnect between the VMs for the 11g RAC configuration, through xenbr1
– Ensure the OVM agent is running: service ovs-agent status
Presenter notes: Oracle VM offers high-performance server virtualization, built with a server-centric focus and capable of handling mission-critical production environments in data centers. It is designed around PV kernels and HV hardware: it uses a paravirtualized (PV) OS kernel for the best performance available today, hardware virtualization (HV) for unmodified images, and an open driver model based on native (e.g., Linux) device drivers. VMware, by contrast, uses emulation/translation, which leads to poor scalability, especially under I/O load, since it carries the extra burden of an emulation layer; overhead rises rapidly as the workload scales up, whether through additional VMs or additional load inside each VM. With OVM, the PV kernel is virtualization-aware, so performance approaches that of bare-metal hardware, and memory incurs no overhead, giving excellent scalability: the hypervisor cooperates more efficiently on the remaining burden, and the virtualization-aware PV kernel generates less of it. For I/O, thin virtual device drivers in the PV kernel communicate with back-end drivers in dom0. Because the dom0 drivers are essentially normal Linux drivers (the dom0 kernel), the latest hardware is supported extensively and rapidly, support is built into the guest OS, and there is far less overhead than with a "real" driver for an emulated device; PV therefore scales better, staying closer to bare-metal performance under load. Emulation/translation must instead emulate the I/O hardware, proprietary drivers may not be available, and using the real device driver adds emulation overhead.
– Customize the default Xen bridge configuration:
a. Stop the default bridges: /etc/xen/scripts/network-bridges stop
b. Edit /etc/xen/xend-config.sxp: replace the line (network-script network-bridges) with (network-script network-bridges-dummy)
c. Edit /etc/xen/scripts/network-bridges-dummy so that it contains only: /bin/true
d. Create the ifcfg-bond0, ifcfg-xenbr0, and ifcfg-xenbr1 scripts (a sketch follows); # brctl show then reports:

bridge name  bridge id          STP enabled  interfaces
xenbr0       8000.002219d1ded0  no           eth0
xenbr1       8000.002219d1ded2  no           bond0
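A sketch of two of those scripts, assuming the usual /etc/sysconfig/network-scripts location on OEL 5; the IP address is illustrative, taken from the public range listed earlier:

# ifcfg-xenbr0 -- the bridge carries dom0's public IP:
DEVICE=xenbr0
TYPE=Bridge
BOOTPROTO=static
IPADDR=155.16.9.71
NETMASK=255.255.255.0
ONBOOT=yes

# ifcfg-eth0 -- the physical NIC is enslaved to the bridge:
DEVICE=eth0
BRIDGE=xenbr0
ONBOOT=yes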
– Configure shared storage on dom0 for the VM servers:
• For the OVM repositories: blade_data4 (400GB) and blade_data6 (500GB)
• For the 11g RAC shared disks: OCRs, voting disks, ASM diskgroups
• Configure the iSCSI and multipath devices on dom0: use the iSCSI admin tool and Linux Device Mapper as on the physical servers, then check the multipath devices: ls /dev/mapper/*
g) Mount the OCFS2 partitions: mount -a -t ocfs2
h) Create directories under /OVS: running_pool, seed_pool, sharedDisk
i) Repeat steps a-h on all the VM servers
• Add additional volumes to the OVS repositories (a sketch of the repository mount follows)
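A sketch of the /etc/fstab entry behind the mount -a -t ocfs2 step above; the p1 partition alias on the blade_data4 repository volume is an assumption:

# The shared OCFS2 repository is mounted identically on every VM server:
/dev/mapper/blade_data4p1  /OVS  ocfs2  _netdev,defaults  0 0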
– Connect the VM servers to Enterprise Manager Grid Control
• Meet the prerequisites:
a. An oracle user in the oinstall group
b. ssh user equivalence for the oracle user between dom0 and the OMS
c. Create /OVS/provxy
d. Grant the oracle user sudo privileges by editing /etc/sudoers with visudo:
   add the line: oracle ALL=(ALL) NOPASSWD: ALL
   comment out the line: Defaults requiretty
Create guest VMs using a VM template
– Each virtual machine serves as a node of the virtual Grid
– Methods of creating guest VMs: VM template, install media, or PXE boot
– Import a VM template:
• Download OVM_EL5U2_X86_64_11GRAC_PVM.gz to the repository
• Discover the VM template from Grid Control Virtual Central
• Disks are shown in the VM as virtual disk partitions
• Attach the storage to the guest VM, in vm.cfg:
a) file-backed: disk = ['file:/OVS/sharedDisk/racdb.img,xvdc,w!']
b) physical: disk = ['phy:/dev/mapper/vmracdbp1,xvdc,w!']
• Expose a Xen bridge for each virtual network interface, in vm.cfg:
vif = ['bridge=xenbr0,mac=00:16:3E:11:8E:CE,type=netfront',
       'bridge=xenbr1,mac=00:16:3E:50:63:25,type=netfront',]
xenbr0 maps to eth0 in the guest VM; xenbr1 maps to eth1 in the guest VM
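Putting those entries together, a minimal vm.cfg for one guest VM might look like the sketch below; the VM name, system image path, memory size, and MAC addresses are illustrative:

name   = 'racnode1'
memory = 2048
vcpus  = 2
disk   = ['file:/OVS/running_pool/racnode1/System.img,xvda,w',
          'phy:/dev/mapper/vmracdbp1,xvdc,w!']
vif    = ['bridge=xenbr0,mac=00:16:3E:11:8E:CE,type=netfront',
          'bridge=xenbr1,mac=00:16:3E:50:63:25,type=netfront']

The w! mode lets the shared RAC disk be attached writable to more than one guest at a time, which a plain w would refuse.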
• Storage/network configuration for 11g RAC on the VMs
Consolidate enterprise applications on the Grid
– Applications and middleware on the virtual Grid:
• Create a guest VM using the Oracle OEL 5.2 template
• Deploy the application on the guest VM
• Build a VM template from that VM
• Create new guest VMs based on the VM template
– Deploy the database service on the physical Grid (see the sketch after this list):
• Provision a storage volume of adequate size from the SAN
• Make the volume accessible to all the physical Grid nodes
• Create the ASM diskgroup
• Create the database service on the ASM diskgroup
• Create the application database schema in the database
• Establish the application database connections
– Deploy a DEV/Test application suite on the virtual Grid:
• Multi-tier nodes run on the VMs
• Fast deployment based on templates
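A minimal sketch of the "create the database service" step using srvctl; the database, service, and instance names are hypothetical:

# Define a service with two preferred RAC instances, then start it:
srvctl add service -d griddb -s appsvc -r griddb1,griddb2
srvctl start service -d griddb -s appsvc
# The application's connect string then references the 'appsvc' service.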
CASE STUDIES OF GRID HOSTING APPLICATIONS
Grid architecture to host applications
– Oracle E-Business Suite on RAC/VM:
• Three application-tier nodes on three VMs
• Two database-tier nodes on a two-node RAC
– Oracle E-Business Suite single node on the virtual Grid:
• Both the application-tier and database-tier nodes on VMs
– Banner applications on the physical/virtual Grid:
• Two application-tier nodes on two VMs
• Two database-tier nodes on a two-node RAC
– Provisioning Oracle 11g RAC on the virtual Grid:
• Two database RAC nodes on two VMs
• 11g RAC provisioned with the Grid Control Provisioning Pack's 11g RAC provisioning procedure
• An additional RAC node added on a new VM with the Grid Control Provisioning Pack's add-node procedure
[Screenshot: the virtual Grid shown in Grid Control Virtual Central]
SUMMARY
• Dell Grid POC project: a pre-built Grid with physical and virtual grids
• Grid Control as the unified management solution for the Grid
• Dell blades as the platform of the Grid: RAC nodes and VM servers
• EqualLogic provides the shared storage for the physical and virtual Grids
• The Grid provides the infrastructure to consolidate enterprise applications as well as RAC databases
• Acknowledgments for the support of Oracle engineers Akanksha Sheoran, Rajat Nigam, Daniel Dibbets, Kurt Hackel, Channabasappa Nekar, and Premjith Rayaroth, and Dell engineer Roger Lopez
• Related OpenWorld presentations:
– ID# S308185, Provisioning Oracle RAC in a Virtualized Environment Using Oracle Enterprise Manager, 10/11/09 13:00-14:00, Kai Yu and Rajat Nigam
– ID# S310132, Oracle E-Business Suite on Oracle RAC and Oracle VM: Architecture and Implementation, 10/14/09 10:15-11:15, Kai Yu and John Tao
References:
1. Best Practices for Deployment of Oracle® ASM with Dell™ EqualLogic™ PS iSCSI Storage System, Dell white paper
2. Oracle press release: Oracle® Enterprise Manager Extends Management to Oracle VM Server Virtualization