IBM Oracle Center | November 2011
Frédéric Dubois, IT Specialist
Alain Cyr, IT Architect
Oracle DB High Availability on IBM Power Systems
Share Best Practices for demanding IT environments
DOAG Conference 2011 (November 15th-17th) © 2011 IBM Corporation
Agenda
• IBM Power Technologies for Oracle
• High-Availability and Disaster Recovery
• IBM Live Partition Mobility with Oracle Database Demo
Scalability with IBM POWER Systems and Oracle
Power 770
Power 750
Power 780
PS Blades
Power 740 2S/4U, Power 720 1S/4U
Power 795
Power 730 2S/2U, Power 710 1S/2U
Oracle certifies for the O/S, and all Power servers run the same AIX: complete flexibility for Oracle workload deployment.
Consistency:
• Binary compatibility
• Mainframe-inspired reliability
• Support for virtualization
• AIX, Linux and IBM i OS
Power Systems with AIX deliver 99.997% uptime
[Chart: Corporate Enterprise Downtime (hours per year) by platform: IBM AIX/POWER, HP-UX 11/HP Integrity, HP-UX 11/PA-RISC, Sun Solaris/SPARC, Open Source Linux, Red Hat Enterprise, Windows Server 2008, Windows Server 2003, Apple MAC]
IBM’s Ten-Year March to UNIX Leadership: the largest shift of customer spending in UNIX history
[Chart: UNIX Server Rolling Four-Quarter Average Revenue Share, Q3 2000 to Q4 2010, for HP, Sun/Oracle and IBM. Source: IDC Server Tracker, Feb 2011]
POWER4: Dynamic LPARs
POWER5: Micro-Partitioning
POWER6: Live Partition Mobility, Shared Processor Pools, PowerVM Lx86, Active Memory Sharing
POWER7: Shared Storage Pools
Enablement: Product Roadmap Interlocks – Oracle and IBM
• A collaborative, continuous process between Oracle and IBM to ensure Oracle certification of IBM SWG and STG products at their most current releases with Oracle product releases *
  – Applications Unlimited (PSFT, JDE, Siebel CRM, E-Business Suite)
  – Fusion Applications
  – Business Intelligence and EPM (BI Apps, OBI EE, Hyperion EPM)
  – Retail GBU (Retek, 360Commerce, ProfitLogic)
  – Communications GBU (Portal Software, MetaSolv)
  – Insurance GBU (AdminServer, Skywire)
  – Edge Applications: G-Log OTM, Agile PLM, Demantra
  – Oracle Technology (DB and RAC, Fusion Middleware, Enterprise Mgr)
• Focus on currency and parity
• IBM cross-brand technology focus (IBM SWG and STG products): extended technical advocates from Dev Labs
* Continuous evaluation as new companies are acquired
Oracle DB Certification History
For 23+ years, IBM has delivered the best infrastructure components available in the market to support customers who have selected Oracle as their SW provider.
Fusion Apps is Available on Power/AIX (4Q’11)
• HUGE NEWS: Fusion Apps is available on AIX 6.1 concurrent with Oracle’s Base Development Platform!
Oracle Certifies Active Memory Expansion (AME) on IBM Power/AIX
Oracle Virtualization Matrix
Another Proof point that Oracle is committed to “red on blue”
Oracle Power Technology Adoption Roadmap
Mix Workload Types for Best Resource Usage
[Diagram: micro-partitioned server with a shared CPU pool hosting Storage, Dev & Test, Oracle Database, Oracle Middleware and Oracle Applications partitions]
• Reach 70 to 80% average CPU usage
• Consolidate different workload profiles and better optimize the resources
• Development environments can be consolidated into the production infrastructure; workloads can be isolated and resources can be shared
  – CPU resources for development can be given a lower priority than production, as needed by the business
  – Use a Shared Processor Pool, dynamic LPAR for memory, and a different VIO Server to isolate I/Os for the development environment
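As a rough, hypothetical illustration of the 70 to 80% target (the demand profiles below are invented numbers, not measured data): consolidating workloads whose peaks fall at different times raises average utilization while still needing fewer cores than two separately sized servers.

```python
# Hypothetical hourly CPU-core demand profiles (illustrative only).
prod = [4, 4, 10, 12, 12, 6]   # production peaks mid-day
dev  = [3, 3, 2, 1, 1, 2]      # dev/test load is small and off-peak

# Separate servers must each be sized for their own peak.
separate_capacity = max(prod) + max(dev)

# A consolidated, micro-partitioned server is sized for the combined peak.
combined = [p + d for p, d in zip(prod, dev)]
consolidated_capacity = max(combined)

# Average utilization of the consolidated server.
avg_util = sum(combined) / (len(combined) * consolidated_capacity)
print(separate_capacity, consolidated_capacity, round(avg_util, 2))  # 15 13 0.77
```

With these numbers the consolidated server needs 13 cores instead of 15 and runs at ~77% average utilization, inside the 70 to 80% band the slide recommends.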
Virtualize Network for Oracle with PowerVM virtualization
[Diagram: micro-partitioned server hosting Storage, Dev & Test, Oracle Database, Oracle Middleware and Oracle Applications partitions]
Virtual Ethernet
• Partitions can be interconnected through a Virtual Ethernet network
• Virtual Ethernet is easy, flexible and performant, and it is integrated at the lowest level of the system (microcode)
• It does not require any hardware for internal interconnection, and can be bridged to the outside network infrastructure through a VIO Server
Host Ethernet Adapter (also called IVE)
• The HEA switch can be configured and logical ports can be assigned to partitions for network interconnection
• The HEA (also called IVE) is a hardware switch adapter connected to the internal bus of the processor(s) (GX bus) and can connect to the external network infrastructure
• Oracle supports both Virtual Ethernet and Host Ethernet Adapter technologies
[Diagram: client LPARs connected through Virtual Ethernet devices and VLANs via a VIO Server to the Enterprise Network]
You can build heterogeneous network configurations and mix virtual and physical infrastructure at the client LPAR.
Virtualize Storage for Oracle with PowerVM virtualization
[Diagram: micro-partitioned server hosting Storage, Dev & Test, Oracle Database, Oracle Middleware and Oracle Applications partitions]
Virtual Fibre Channel (NPIV)
• Partitions can be connected with virtual Fibre Channel adapters to the SAN through a VIO Server and an NPIV-capable adapter
• Virtual Fibre Channel is easy, flexible and performant, and it is integrated at the lowest level of the system (microcode/Power Hypervisor). The VIO Server maps virtual to physical Fibre Channel adapters
• The NPIV protocol does not require any hardware for internal interconnection; SAN disks are directly assigned from the SAN
Virtual SCSI
• Physical storage is assigned to the VIO Server, and virtual mapping of the disks is done at the VIO Server level
• The VIOS provides storage management techniques (multipathing, Logical Volume Manager)
• Oracle supports both NPIV and Virtual SCSI protocols
[Diagram: Virtual SCSI or Fibre Channel devices mapped through the VIO Server]
The Benefits of PowerVM for Oracle environments
• Create logical partitions instead of using a server for each workload
  – Mix different workloads to smooth CPU peaks (production and test/dev)
  – Reduce the global number of CPUs
• Virtualize server resources
  – Optimize CPU usage using a Shared Processor Pool
  – Create a Virtual I/O Server partition to share physical resources and simplify the I/O infrastructure
+ many more PowerVM features to reduce costs and improve flexibility, scalability and reliability: SMT, MVSPP, dedicated shared processors, NPIV, LPM, AMS, AME, CUoD
Consolidate and virtualize the Oracle workloads to optimize the IT resources.
[Diagram: Power System with VIO Servers and micro-partitions (Oracle Applications, Oracle Middleware, Oracle Database, Test & Dev) sharing storage and network]
Processor cores for Oracle: use DLPAR and micro partitions
• During the day the database partition has workload peaks
  – The Node1 database is defined in uncapped mode and can get free capacity from the Shared Processor Pool (SPP)
    • Uncapped mode gets additional CPU up to the number of Virtual Processors
    • You can change the number of VPs using dynamic LPAR
    • The weight parameter defines the priority among uncapped partitions
  – Test and development partitions are capped and will not pick up CPU cycles from the Shared Processor Pool
• Minimize license cost on core usage: define a Virtual Shared Processor Pool with a CPU capacity entitlement
  – Host DB partitions in a Virtual Shared Processor Pool
  – Core licensing is based on the VSPP capacity
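The uncapped/weight rules above reduce to simple arithmetic. A sketch with hypothetical entitlements and weights (the real PowerVM dispatcher is more involved; this only models the two rules named on the slide):

```python
# Rule 1: an uncapped partition can consume up to its virtual processor
# count; a capped partition never exceeds its entitlement.
def max_consumable_cores(entitlement, virtual_processors, uncapped=True):
    return virtual_processors if uncapped else entitlement

# Rule 2: spare pool capacity is shared among contending uncapped
# partitions in proportion to their weight (simplified model).
def uncapped_share(spare_cores, weights):
    total = sum(weights.values())
    return {name: spare_cores * w / total for name, w in weights.items()}

# Hypothetical DB LPAR: 2.0 cores entitled, 6 VPs, uncapped.
print(max_consumable_cores(2.0, 6))                   # can burst to 6 cores
# 4 spare cores contended by a DB (weight 192) and a batch LPAR (weight 64):
print(uncapped_share(4.0, {"db": 192, "batch": 64}))  # db 3.0, batch 1.0
```

If both LPARs sit in a Virtual Shared Processor Pool capped at, say, 4 cores, the licensing basis is the pool capacity (4), not the sum of virtual processors.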
Power Technology for Oracle Building Block:IBM Power System, PowerVM Virtualization and Oracle Software
[Diagram: Power server with two VIO Servers (NPIV FC adapters, Shared Ethernet adapters), Shared Processor Pool(s), and AIX LPARs for Oracle DB and Oracle Apps]
The technology building block is based on an IBM Power system and PowerVM:
• Shared Processor Pool
  – Physical or multiple virtual pools
  – Uncapped partitions for PROD
  – Capped partitions for DEV
• VIO Server virtualization
  – VIO Server 2.1
  – 8 Gbps NPIV FC adapters for storage access
  – Ethernet adapters shared in VIO Servers for virtual network access, and/or Host Ethernet Adapters (IVE)
• AIX 7.1, 6.1, 5.3 or 5.2 (WPARs)
• Two VIO Servers for storage and network redundancy
• Oracle single instance or Real Application Clusters
  – CRS/ASM/RAC 11g (support for virtualized network and NPIV virtual disks)
• Consolidate DB server LPARs up to 70 to 80% of the server CPU capacity
Agenda
• IBM Power Technologies for Oracle
• High-Availability and Disaster Recovery
• IBM Live Partition Mobility with Oracle Database Demo
The Causes of Unavailability
• Hardware failures: CPU, memory, disk, electrical outage, …
• Planned operations: backup, upgrade, diagnostics, …
• Site disasters: flood, electrical problem, air-conditioning, …
Live Partition Mobility improves the first two categories.
99.999% = 5 minutes outage per year
99.99% = 1 hour outage per year
99.95% = 4 hours outage per year
99.5% = 43 hours outage per year
99% = 3 days outage per year
98% = 7 days outage per year
97% = 11 days outage per year
[Chart: availability levels from Standard Availability to Superior Availability to High Availability]
Where are you?
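The availability levels above follow from one formula: downtime = (1 - availability) x minutes per year. A quick check; the computed values round to the figures listed:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(downtime_minutes_per_year(99.999), 1), "min/yr")     # ~5.3 minutes
print(round(downtime_minutes_per_year(99.99) / 60, 1), "h/yr")   # ~0.9 hours
print(round(downtime_minutes_per_year(99.95) / 60, 1), "h/yr")   # ~4.4 hours
print(round(downtime_minutes_per_year(99.5) / 60, 1), "h/yr")    # ~43.8 hours
```

Note that 43 hours of outage corresponds to 99.5% availability, not 99.9% (which would allow only about 8.8 hours per year).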
Solution modes: Single | Active/Passive | Active/Active | Extended Solution (MAA)
Application tier:
• Single: OAS, WebLogic, IBM WebSphere
• Active/Passive: OracleAS Guard, IBM PowerHA
• Active/Active: Load Balancing Router
• Extended (MAA): Load Balancing + Failover mode
Database tier:
• Single: Oracle Database 10g/11g, with backup (RMAN, user backups, TSM) and recovery (RMAN, Oracle Flashback, media recovery, TSM)
• Active/Passive: Oracle Data Guard, IBM PowerHA (server clustering: CRS, PowerHA)
• Active/Active: Real Application Clusters (RAC), RAC + PowerHA, Oracle Streams, IBM InfoSphere
• Extended (MAA): RAC + Data Guard, Extended RAC
Storage:
• File/volume management per mode: ASM, JFS2, GPFS, CFS
• Copy services: FlashCopy, RAID, VolumeCopy, SnapMirror
• Remote replication: Metro & Global Mirroring (PPRC), Stretch & Fabric Metro Cluster
Infrastructure virtualization (all modes): SAN Volume Controller; PowerVM (micro-partitioning, VIOS, NPIV, AME, AMS, …)
High Availability: Provide the Right HA/DR Architecture
Cold Fail-Over Infrastructure
[Diagram: two micro-partitioned servers, each with a VIO Server, hosting Oracle Applications, Oracle Middleware and Database partitions protected by PowerHA or Oracle Grid Infrastructure]
Protect all components:
• Third-party applications
• Oracle Applications
• Oracle Middleware
• Oracle Database
Cold failover comes with downtime:
• PowerHA for any product, or Oracle Clusterware for Oracle products, or Data Guard
Provide Different Levels of Service
[Diagram: CPU% over time on the Production server (DB Instances 1 and 2) and the Passive server (standby LPARs/instances)]
Cold Fail-Over architecture is an HA solution with downtime.
IBM PowerHA or Oracle Data Guard:
• Requires a minimum downtime of the DB
• Data Guard replays the logs on a standby DB and uses a second copy of the original database
• PowerHA can use the same database storage volumes
• CPU and memory resources for the passive server are not wasted; they are reserved and can be automatically activated with Capacity on Demand
• Simple switch-over process to DR and recovery back to the production server
• Optimize the overall infrastructure by consolidating other workloads (e.g. development, test, …) and Capacity on Demand
• Workload balancing across the servers with PowerVM features: LPM, micro-partitioning, …
HA Step 1: Active/Passive is Cold Fail-over solution
Use PowerVM and Oracle Grid Infrastructure
[Diagram: two micro-partitioned servers with VIO Servers running clustered Oracle Applications and Oracle Middleware, and a Real Application Clusters database on Oracle Grid Infrastructure]
Provide high availability and scalable solutions for the entire stack:
• No downtime on node failure
• Rolling upgrade patching
• Increase workload capacity by adding nodes, with no downtime
• Combine RAC and PowerVM to scale right, in and out of the box!
Provide Different Levels of Service
Active/active mode with Real Application Clusters for high availability
[Diagram: Server A and Server B, each with a VIO Server hosting micro-partitions for Test, Dev, Oracle Real Application Cluster, Oracle Middleware and Oracle Applications; CPU usage charts]
• Oracle Real Application Clusters (RAC) is a flexible architecture
  – Workload balancing across the nodes (partitions) of the servers
  – Easy maintenance, as one node can be stopped without application disruption
• Combine RAC and PowerVM virtualization features
  – Define RAC nodes with micro-partitioning, uncapped mode and high priority
  – Define test and development partitions with micro-partitioning and low priority or capped mode
• More resources per application means higher average usage
Automated workload balancing
HA Step 2: Active/Active increases Application/DB availability
[Diagram: Production Site with Production Clusters 1 and 2 (DB Instances 1 and 2, each with Clusterware) and a DR Site with standby instances; CPU usage charts]
Reduce downtime and delay the fail-over process:
• Easy maintenance, as cluster nodes can be stopped with minimum disruption
• Define RAC nodes with micro-partitioning, uncapped mode and high priority
But it is still poor CPU usage:
• Workload peaks occurring at the same time cannot take full benefit of CPU virtualization
• Idle CPU is wasted, multiplied by the number of servers in the cluster
• The infrastructure could be more flexible for provisioning, maintenance and failover operations
• Free resources are reserved on the DR server to absorb the additional workload in case of node failure/maintenance
Do not host only RAC DBs in the server …
HA Step 3: Combine IBM Power, active/active Oracle RAC + DR
• Test and development have different workload profiles than production
• Mix production/DR and test environments to optimize resources
• Define test and development workloads with lower priority, without impact on production activities
• Fewer hardware resources
• Simplified and flexible IT infrastructure
• Less administration and maintenance
[Diagram: Production Site and DR Site, each with redundant VIOS pairs on LAN and SAN; Production Clusters 1 and 2 with standby instances at the DR Site, a Test/Dev cluster, another single DB, and storage services (e.g. FlashCopy, PPRC/Metro Mirror); CPU usage charts]
HA Step 3: the global picture
Provide Different Levels of SLA
Combine PowerVM Virtualization and Oracle Grid Infrastructure
[Diagram: micro-partitioned servers with VIO Servers: clustered Oracle Applications and Middleware plus Oracle Real Application Cluster on Oracle Grid Infrastructure, alongside cold-failover Oracle Applications, Middleware and Database partitions]
Provides:
• Cold failover
• Workload live migration
• High availability
• Scalability
Storage Virtualization is . . .
Technology that makes one set of resources look and feel like another set of resources, preferably with more desirable characteristics…
A logical representation of resources not constrained by physical limitations
– Hides some of the complexity
– Adds or integrates new function with existing services
– Can be nested or applied to multiple layers of a system
[Diagram: storage virtualization as a logical representation layered over physical resources]
Consolidate & Virtualize Storage
[Diagram: IBM SAN Volume Controller on the SAN presenting virtual disks from storage pools built on XIV, DS4000, DS5000, DS8000, EMC Symmetrix, HDS and HP arrays; host layers include Oracle ASM, IBM GPFS and rootvg, with Advanced Copy Services]
Use the IBM SAN Volume Controller to:
• Make changes to the storage without disrupting host applications
• Combine the capacity from multiple arrays into several storage pools
• Apply common copy services across the storage pools
• Manage the storage pools from a central point
[Diagram: full stack behind load balancers and an Oracle Single Client Access Name (my.cluster.com): clustered Oracle Applications and Middleware, Oracle Grid Infrastructure with Oracle Real Application Clusters and a RAC One Node DB on Oracle ASM or IBM GPFS, and SAN Volume Controller virtual disks with Advanced Copy Services over XIV, DS4000, DS5000, DS8000, EMC Symmetrix, HDS and HP storage]
Use Oracle & IBM technologies to:
• Optimize server resource usage!
• Add/remove server resources on the fly …
• Add/remove server(s) on the fly …
• Add/remove storage on the fly …
• Reallocate resources on the fly
→ No disruptions!
Provide a Flexible and Optimized Architecture
Agenda
• IBM Power Technologies for Oracle
• High-Availability and Disaster Recovery
• IBM Live Partition Mobility with Oracle Database Demo
[Diagram: Server A and Server B, each with a VIO Server and micro-partitions (Oracle Applications, Oracle Middleware, Oracle Database, Test & Dev); CPU usage charts]
Use IBM Live Partition Mobility (LPM): LPM migrates a partition from a source server to a target server without shutdown.
• LPM makes maintenance easy
• LPM is live workload management
Examples of LPM operations:
• The database partition runs a batch job and Server A is overloaded (CPU 100% busy) while Server B has free CPU capacity: migrate the Test & Dev partition to Server B and free the corresponding resources for the database partition on Server A
• You need to maintain Server A: migrate the Oracle database partition to Server B without disruption
Make a Flexible IT: Manage and Move the Workload
Provide Live Workload Flexibility with LPM: details
[Diagram: source and target systems, each with a hypervisor, VIOS and migration controller; LPAR definitions, memory pages, boot and data disks on shared SAN, connected over Ethernet]
Partition Mobility requires:
• POWER6 (or later)
• AIX 5.3 / 6.1 or Linux
• All resources must be virtualized (no dedicated physical resources)
• A SAN storage environment (SAN boot, temp space, same network)
Partition Mobility steps:
1. Validation
2. Copy memory pages from the host to the target system
3. Transfer: turn off host resources, activate target resources
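The exact memory-copy algorithm is internal to the POWER Hypervisor; as an illustration of why the partition can keep running during step 2, here is a generic live-migration pre-copy simulation with invented page counts and dirty rates (not PowerVM code):

```python
import random

def live_migrate(total_pages, dirty_rate, stop_threshold, seed=0):
    """Generic pre-copy live migration sketch (not the actual PowerVM code).

    Round 0 copies every page while the partition keeps running; each later
    round re-copies only the pages dirtied meanwhile. When the dirty set is
    small enough, the source is paused briefly for the final transfer.
    """
    rng = random.Random(seed)
    to_copy, rounds = total_pages, 0
    while to_copy > stop_threshold:
        rounds += 1
        # Pages dirtied while this round's copy was in flight.
        to_copy = int(to_copy * dirty_rate * rng.uniform(0.8, 1.2))
    # Final brief stop-and-copy of the small remaining dirty set.
    return rounds, to_copy

rounds, final_pages = live_migrate(1_000_000, dirty_rate=0.1, stop_threshold=1000)
print(rounds, final_pages)
```

The partition only pauses for the last, small copy, which is why the migration appears seamless to the workload.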
The number of Oracle cores needed does not change before and after the migration.
Oracle Live Partition Mobility (LPM) demo (a better explanation than just slides)
• An application server (Swingbench) simulates users and generates workload against the Oracle DB
• The Oracle DB runs as a Real Application Cluster (RAC) with 3 nodes on the POWER6 server
• You need to:
  – perform maintenance on the left server/location, or
  – run additional workload on the POWER6 server, or
  – migrate the production to a new POWER7 server
[Diagram: POWER6 server (RAC Nodes A, B and C, other apps, another DB) and a POWER7 target server, each with redundant VIOS pairs on LAN and SAN; Swingbench, DNS, a web server and the HMC]
This is just a few (11) clicks on the HMC GUI and a few minutes' process.
LPM the RAC node and make it fly!
The non-official demo scenario, just as an example
• Just try to migrate one RAC node without stopping the cluster on it
• In case of any disruption of the node during the migration:
  – the node will get out of the cluster and will reboot ;-(
  – the workload will run on the remaining node(s); this is regular RAC cluster behavior
• This scenario is not yet supported and certification tests are in progress, so you must not use it for production purposes (test only). This demo process, without a cluster stop on the migrated RAC node, is a non-official disclosure
• The infrastructure is based on PowerVM virtualization, using VIO Servers on both the source and target servers
• It is a basic installation, without any customization or tuning
• The SAN disks of the RAC nodes are mapped and shared to all VIO Servers. This is done using VSCSI, but it is easier to set up with NPIV and virtual Fibre Channel adapters
• ASM is set up to share the disks across the nodes of the Oracle cluster. It could have been GPFS, which is also supported
• The RAC interconnect network is trunked with other networks to the physical network infrastructure through a single physical adapter of the VIOS
LPM the RAC scenario
LPM is certified for Oracle single instance; certification tests for RAC are in progress.
• Stop the Oracle cluster on the node, keeping AIX alive:
  crsctl stop crs
• Click on the HMC GUI and LPM the node; the remaining RAC nodes run the workload and get CPU cycles (uncapped micro-partitions). You can also add RAM using DLPAR
• Restart the Oracle cluster on the migrated node:
  crsctl start crs
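The three steps can be scripted per node. This sketch only builds the command strings so the sequencing is easy to review; the node, LPAR and managed-system names are hypothetical, and you should verify the `migrlpar` options against your HMC release before running anything:

```python
def rolling_lpm_commands(node, lpar, src_sys, dst_sys, hmc="hmc1"):
    """Build the ordered command list to LPM one RAC node (sketch only).

    Per the procedure above: stop CRS on the node (AIX stays up), migrate
    the LPAR via the HMC, then restart CRS on the migrated node.
    Host/LPAR/system names are placeholders, not real infrastructure.
    """
    return [
        f"ssh root@{node} 'crsctl stop crs'",
        f"ssh hscroot@{hmc} 'migrlpar -o m -m {src_sys} -t {dst_sys} -p {lpar}'",
        f"ssh root@{node} 'crsctl start crs'",
    ]

cmds = rolling_lpm_commands("racnode1", "oradb_lpar1", "Power6-Src", "Power7-Dst")
for c in cmds:
    print(c)
```

Repeating this sequence node by node gives a rolling migration of the whole cluster with the workload always served by the remaining nodes.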
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101965
Thanks for your attention