TRANSCRIPT
What’s New in 12c High Availability
Aman Sharma
@amansharma81 http://blog.aristadba.com
Who Am I?
Sangam14-2014 2
Aman Sharma
About 12+ years using Oracle Database
Oracle ACE
Frequent contributor to the OTN Database forum (Aman….)
Oracle Certified
Sun Certified
@amansharma81
http://blog.aristadba.com
Agenda
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
(Actual) Agenda
Pre 12c Oracle RAC-Database Tier
• Software-based clustering using Grid Infrastructure software
• Cluster nodes contain only database and ASM instances
• Homogeneous configuration
• Dedicated access to the shared storage for the cluster nodes
• Applications/users connect via nodes outside the cluster
• Reflects a point-to-point model
Database Tier
Database Tier
Application Tier
Pre 12c Oracle RAC-Application Tier
Pre-12.1 Cluster vs 12c Flex Cluster
Oracle RAC Using Point-to-Point System
• Requires a lot of resources
• Each node is connected to every other node via the interconnect for node-to-node heartbeat
• Each node is connected to the storage directly
• Possible paths for an N-node cluster:
  – N*(N-1)/2 interconnect paths for node heartbeat
  – N connection paths for storage
• For a 16-node RAC:
  – Heartbeat paths: 16*(16-1)/2 = 120
  – Storage paths: 16
Let’s Talk Big!
• Recap:
– N*(N-1)/2 Node Heartbeat paths
– N Storage paths
• For a 16-node RAC
  – 120 interconnects, 16 storage paths
• What about a 500-node cluster?
  – 124,750 heartbeat connections
  – 500 storage paths
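The path counts quoted above can be reproduced with a short calculation. This is a sketch; the function name is mine, but the formulas are the ones stated on the slides:

```python
# Connection counts for a pre-12c point-to-point RAC cluster,
# following the slide formulas: N*(N-1)/2 heartbeat interconnect
# paths plus N direct storage paths.

def point_to_point_paths(n_nodes: int) -> tuple[int, int]:
    """Return (heartbeat_paths, storage_paths) for an N-node cluster."""
    heartbeat = n_nodes * (n_nodes - 1) // 2  # every node pairs with every other
    storage = n_nodes                          # each node connects to storage directly
    return heartbeat, storage

print(point_to_point_paths(16))   # (120, 16)
print(point_to_point_paths(500))  # (124750, 500)
```

The quadratic growth of the heartbeat term is exactly why the point-to-point model stops scaling.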
Introducing 12c Flex Cluster
[Diagram: database tier with four Hub Nodes (ORCL1/+ASM1, ORCL2/+ASM2, ORCL3, ORCL4/+ASM3) connected to Flex ASM storage and GNS; application tier with Leaf Nodes, each node running Oracle Clusterware]
12c Flex Clusters-Overview
• Based on a hub-and-spoke topology
• Two categories of cluster nodes:
  – Hub Nodes: run database and ASM instances
  – Leaf Nodes: loosely coupled; run applications; connect to a Hub node
• Flex ASM: required for a Flex Cluster; Hub nodes connect to Flex ASM-based storage
11.2 RAC vs 12c Flex Cluster
11.2 standard cluster:
• 16-node cluster: 120 interconnects, 16 storage paths
• 500-node cluster: 124,750 interconnects, 500 storage paths
12c Flex Cluster:
• 5 Hub + 16 Leaf nodes: 10 interconnects (5*(5-1)/2), 5 storage paths, 16 Hub-Leaf connections
• 500-node cluster (25 Hub + 475 Leaf): 300 interconnects, 25 storage paths, 475 Hub-Leaf connections (775 heartbeat plus Hub-Leaf connections in total)
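The hub-only mesh arithmetic can be checked with a short sketch. The function and dictionary keys are my own names; the counts follow the hub-and-spoke layout the slides describe, and they match the 25-hub/475-leaf example:

```python
# Connection counts for a 12c Flex Cluster: only Hub nodes form the
# interconnect mesh and touch storage; each Leaf node holds a single
# connection to one Hub node (hub-and-spoke).

def flex_cluster_paths(hubs: int, leaves: int) -> dict:
    return {
        "interconnects": hubs * (hubs - 1) // 2,  # hub-to-hub heartbeat mesh
        "storage_paths": hubs,                    # only hubs access shared storage
        "hub_leaf_links": leaves,                 # one link per leaf
    }

# The 500-node example: 25 hubs + 475 leaves.
print(flex_cluster_paths(25, 475))
# {'interconnects': 300, 'storage_paths': 25, 'hub_leaf_links': 475}
```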
Flex Cluster Benefits
Much lower resource requirements
Far greater scalability: the number of nodes can now be up to 2,000
Higher availability for the application tier (previously, application HA depended on the application code)
Application nodes can now also use Server Pools
Better management of dependency mapping for applications
Say Hello to Leaf & Hub Nodes
• Each Leaf node connects to exactly one Hub node
• Leaf nodes don't talk to each other (nor do they need to)
• Leaf nodes choose their Hub node when they join the cluster
• Applications running on Leaf nodes connect to the database through the Hub nodes
• Far fewer inter-node interactions are required (hub-and-spoke model)
• Lightweight
• Loosely coupled
• Work as spokes
• Each Leaf node connects to one Hub node
• Heartbeat only to the Hub node
• Intended to run applications and clients
• No direct access to the storage managed by Flex ASM (it is accessible only to Hub nodes)
Leaf Nodes-Closer Look
• Require GNS to discover the Hub nodes
• No private interconnect between the Leaf nodes, i.e., no inter-leaf node communication
• Use the same public and private networks as the Hub nodes
• If a Hub node goes down, the connected Leaf node(s) get evicted
• An evicted Leaf node can be added back by restarting the Clusterware on it
Leaf Nodes-Closer Look(contd.)
• Much lower than for the Hub nodes
• Carry only the application-specific workload
• Do not contain:
  – Database instances
  – ASM instances
  – VIPs
• Can be either virtual or physical
• Contain no voting disk or OCR
• Can be converted into Hub nodes if they have access to the storage
Leaf Nodes-Resource Requirements
• GNS is mandatory for enabling Flex Cluster mode
• GNS runs on one of the Hub nodes
• Leaf Nodes use GNS as a naming service to locate the Hub nodes
• Applications and services running on Leaf nodes require GNS to locate the resources they need in order to function
• Leaf nodes use GNS only when they join the cluster for the first time
• As in 11.2, GNS requires a static IP (GNS VIP)
Grid Naming Service(GNS) & Flex Cluster
• In previous versions, only one GNS per cluster was allowed
• Multiple clusters therefore required multiple GNS VIPs
• This increased resource requirements
• In 12c, a GNS configuration can be shared among clusters
• The GNS configuration needs to be exported before being shared with other clusters
• Use the option USE SHARED GNS during the next cluster installation
12c-Shared GNS Configuration
$ srvctl export gns -clientdata /tmp/gnsconfig
• Just like the cluster nodes in pre-12c clusters
• Have access to the ASM-managed storage
• Run database instances, (Flex) ASM instances, and resources for applications
• The maximum number of Hub nodes is 64 in 12.1 (HUBSIZE)
So What Are Hub Nodes?
To convert a Standard Cluster:
• Check the current cluster mode:
  $ crsctl get cluster mode status
• Check whether GNS is enabled:
  # srvctl status gns
• If GNS is not added, add it:
  # srvctl add gns -vip 192.168.10.12 -domain cluster01.example.com
• Set Flex Cluster mode:
  # crsctl set cluster mode flex
• Stop and start the Clusterware on each node:
  # crsctl stop crs
  # crsctl start crs
• Note: a Flex Cluster can't be converted back to a Standard Cluster
Enabling Flex Cluster Mode
• Show the current role of the node:
  $ crsctl get node role status -node rac01
  Node 'rac01' active role is 'hub'
• Change the node role (requires a CRS restart on the node):
  $ crsctl set node role -node rac01 leaf
• Check the maximum number of Hub nodes allowed (HUBSIZE):
  $ crsctl get cluster hubsize
Flex Cluster Administration-Example Commands
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
Agenda
• A feature available since 11.2
• Offers the traditional facility of logical division of the cluster
• Nodes are allocated to the pools
• Resources are hosted on the pools
• A resource can be an application, a database, or a process
• Policy-managed interface
• Resource allocation is based on priority
Server Pools- Recap
• Server Pools are now available for both Hub and Leaf nodes
• Provide better resource management by isolating workloads
• Leaf and Hub nodes can never be in the same server pool
• Server pool management for Leaf nodes is independent of server pools containing Hub Nodes
Hub & Leaf Node Server Pools
[Diagram: OLTP_SP (MIN_SIZE=1, MAX_SIZE=3, IMPORTANCE=3) and DSS_SP (MIN_SIZE=2, MAX_SIZE=2, IMPORTANCE=2) server pools on the Hub nodes; Apache and Siebel running on Leaf nodes]
Flex Cluster – Server Pool Enhancements
• Enhances the concept of Server Pools introduced in 11.2
• Previously, only server pool attributes determined node placement in server pools
• 12c Flex Clusters add two new concepts:
  – Server Categorization: extended node attributes for servers that decide the allocation in server pools
  – Cluster Configuration Policy Sets: workload-based management of servers in the server pools
Flex Cluster – Policy Based Cluster Administration
A server pool (e.g., OLTP_SP) references a SERVER_CATEGORY.
Server configuration attributes:
  ACTIVE_CSS_ROLE: HUB|LEAF
  CONFIGURED_CSS_ROLE: HUB|LEAF
  CPU_CLOCK_RATE (MHz)
  CPU_COUNT
  CPU_EQUIVALENCY
  CPU_HYPERTHREADING
  MEMORY_SIZE
  NAME
  RESOURCE_USE_ENABLED: 1|0
  SERVER_LABEL
Server category attributes:
  NAME
  ACTIVE_CSS_ROLE: HUB|LEAF
  EXPRESSION, built from the operators:
    =: equal
    eqi: equal, case insensitive
    >: greater than
    <: less than
    !=: not equal
    co: contains
    coi: contains, case insensitive
    st: starts with
    en: ends with
    nc: does not contain
    nci: does not contain, case insensitive
Flex Cluster – Server Categorization
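The operator mnemonics can be read as simple predicates. The following is an illustrative rendering of their semantics in Python, not a reimplementation of Oracle's actual expression engine in crsctl; function and variable names are mine:

```python
# Illustrative semantics for the server-category EXPRESSION operators
# listed on the slide. Attribute values are treated as strings, with
# numeric comparison for > and <.

OPERATORS = {
    "=":   lambda a, b: a == b,
    "eqi": lambda a, b: a.lower() == b.lower(),
    ">":   lambda a, b: float(a) > float(b),
    "<":   lambda a, b: float(a) < float(b),
    "!=":  lambda a, b: a != b,
    "co":  lambda a, b: b in a,
    "coi": lambda a, b: b.lower() in a.lower(),
    "st":  lambda a, b: a.startswith(b),
    "en":  lambda a, b: a.endswith(b),
    "nc":  lambda a, b: b not in a,
    "nci": lambda a, b: b.lower() not in a.lower(),
}

def matches(attr_value: str, op: str, operand: str) -> bool:
    return OPERATORS[op](attr_value, operand)

print(matches("1997", ">", "1900"))  # True  (e.g., MEMORY_SIZE > 1900)
print(matches("rac0", "st", "rac"))  # True
print(matches("HUB", "eqi", "hub"))  # True
```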
[root@rac0 ~]# crsctl status server rac0 -f
NAME=rac0
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Generic ora.ORCL
STATE_DETAILS=AUTOSTARTING RESOURCES
ACTIVE_CSS_ROLE=hub
[root@rac0 ~]# crsctl status server rac3 -f
NAME=rac3
MEMORY_SIZE=1997
CPU_COUNT=1
CPU_CLOCK_RATE=3
CPU_HYPERTHREADING=0
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=leaf
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=Free
STATE_DETAILS=AUTOSTART QUEUED
ACTIVE_CSS_ROLE=leaf
Flex Cluster – Server Categorization in Action
[root@rac0 ~]# crsctl status category
NAME=ora.hub.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=
NAME=ora.leaf.category
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=leaf
EXPRESSION=
[root@rac0 ~]# crsctl status server -category ora.hub.category
NAME=rac0
STATE=ONLINE
NAME=rac1
STATE=ONLINE
NAME=rac2
STATE=ONLINE
Flex Cluster – Listing Server Categories
[root@rac0 ~]# crsctl add category testcat -attr "EXPRESSION='(MEMORY > 1900)'"
[root@rac0 ~]# crsctl status server -category ora.leaf.category
NAME=rac3
STATE=ONLINE
[root@rac0 ~]# crsctl status category testcat
NAME=testcat
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTIVE_CSS_ROLE=hub
EXPRESSION=( MEMORY > 1900 )
Flex Cluster – Creating Server Category
• Policy-based server pool assignment
• Default policy: CURRENT
• Managed by a policy set
• A policy set contains two attributes: SERVER_POOL_NAMES and LAST_ACTIVATED_POLICY
• A policy set may contain zero or more policies
• Each policy contains a definition for each server pool
Flex Cluster – Cluster Policy Set
[Diagram: a 4-node cluster with three server pools: POOL1 (MIN_SIZE=2, MAX_SIZE=2), POOL2 (MIN_SIZE=1, MAX_SIZE=1), POOL3 (MIN_SIZE=1, MAX_SIZE=1), all with IMPORTANCE=0]
Three applications (app1, app2, app3) share the cluster:
• Day time: app1 uses two servers; app2 and app3 use one server each
• Night time: app1 uses one server; app2 uses two servers; app3 uses one server
• Weekend: app1 is not running (0 servers); app2 uses one server; app3 uses three servers
Node allocation should follow the requirements at these different times.
Varying Times & Varying Workloads
SERVER_POOL_NAMES=Free pool1 pool2 pool3

POLICY NAME=DayTime
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=2 MIN_SIZE=2 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=

POLICY NAME=NightTime
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=2 MIN_SIZE=2 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=

POLICY NAME=Weekend
  SERVERPOOL NAME=pool1 IMPORTANCE=0 MAX_SIZE=0 MIN_SIZE=0 SERVER_CATEGORY=
  SERVERPOOL NAME=pool2 IMPORTANCE=0 MAX_SIZE=1 MIN_SIZE=1 SERVER_CATEGORY=
  SERVERPOOL NAME=pool3 IMPORTANCE=0 MAX_SIZE=3 MIN_SIZE=3 SERVER_CATEGORY=
Flex Cluster – Proposed Cluster Policy Set
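A quick sanity check on the proposed policies: each one should ask for exactly as many servers as the 4-node cluster has. A sketch, with the policy names and pool sizes taken from the definitions above (IMPORTANCE is 0 throughout and omitted):

```python
# The three proposed policies as plain data: for each policy, the
# MIN_SIZE/MAX_SIZE each pool gets under that policy.

POLICIES = {
    "DayTime":   {"pool1": 2, "pool2": 1, "pool3": 1},
    "NightTime": {"pool1": 1, "pool2": 2, "pool3": 1},
    "Weekend":   {"pool1": 0, "pool2": 1, "pool3": 3},
}

def servers_required(policy: str) -> int:
    """Total servers a policy asks for across all pools."""
    return sum(POLICIES[policy].values())

for name in POLICIES:
    print(name, servers_required(name))  # each policy needs 4 servers
```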
Modify the default policy set to manage the three server pools:

$ crsctl modify policyset -attr "SERVER_POOL_NAMES=Free pool1 pool2 pool3"

Add the required three policies:

$ crsctl add policy DayTime
$ crsctl add policy NightTime
$ crsctl add policy Weekend

Modify the server pools under each policy:

$ crsctl modify serverpool pool1 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy DayTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=0,MAX_SIZE=0" -policy Weekend
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy NightTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy Weekend
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=3,MAX_SIZE=3" -policy Weekend
Flex Cluster – Cluster Policy Set Creation
Activate the Weekend policy:

$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Weekend"

Server allocations after the policy is applied:

$ crsctl status resource -t
--------------------------------------------------------------------------------
Name      Target  State    Server       State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
app1
      1   ONLINE  OFFLINE               STABLE
      2   ONLINE  OFFLINE               STABLE
app2
      1   ONLINE  ONLINE   mjk_has3_1   STABLE
app3
      1   ONLINE  ONLINE   mjk_has3_0   STABLE
      2   ONLINE  ONLINE   mjk_has3_2   STABLE
      3   ONLINE  ONLINE   mjk_has3_3   STABLE
--------------------------------------------------------------------------------
Flex Cluster – Cluster Policy Set Creation
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
Agenda
• Multitenant databases contain a container database (CDB) and pluggable databases (PDBs)
• Supported with 12c RAC
• Each PDB runs as a service
• Each PDB service can run on one or more RAC instances
• Each PDB service can be deployed over server pool(s)
12c Multitenant Database & 12c RAC
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
Agenda
Flex Cluster – Bundled Agents(XAG)
[Diagram: the same Flex Cluster as before (four Hub Nodes with database/ASM instances, GNS, Flex ASM storage), with an XAG bundled agent running on each Leaf node in the application tier]
• Oracle Clusterware can be used to provide HA to applications
• HA for applications was available earlier through the application APIs and Services
• With 11.2.0.3, agents were available as standalone downloads (http://oracle.com/goto/clusterware)
• 12.1 introduced Bundled Agents (XAG), supplied with the GI software itself
• In 12c, XAG agents can reside on both Leaf and Hub nodes
http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/ogiba-2189738.pdf
Flex Cluster – Bundled Agents(XAG) Introduction
• GI provides a pre-configured public core network resource: ora.net1.network
• Applications bind application VIPs (APPVIP) to this network layer
• AGCTL is the interface to add an application resource to the GI, managed by the bundled agents
• Shared storage access: ACFS/NFS/DBFS
• Applications for which XAG agents are available:
  – Apache HTTP Server & Tomcat
  – GoldenGate
  – Siebel
  – JD Edwards
  – PeopleSoft
  – MySQL
GI & Bundled Agents
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
Agenda
• From 12c, DBAs can predict the impact of an operation before actually running it
• Can be used with both CRSCTL and SRVCTL commands
• Available for the following categories of events:
  – Resources: start, stop, relocate, add, modify
  – Server pools: add, remove, modify
  – Servers: add, remove, relocate
  – Policies: change the active policy
  – Server categories: modify
12c Cluster- What-If Command
[root@rac0 ~]# crsctl eval stop res ora.rac0.vip -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        Y         Resource 'ora.LISTENER.lsnr' (rac0) will be in state [OFFLINE]
     2        Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
--------------------------------------------------------------------------------
[root@rac0 ~]# crsctl eval start res ora.rac0.vip -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        N         Error code [223] for entity [ora.rac0.vip]. Message is
                        [CRS-5702: Resource 'ora.rac0.vip' is already running on 'rac0'].
--------------------------------------------------------------------------------
12c Cluster- What-If Command
[root@rac0 ~]# crsctl eval delete server rac0 -f
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number  Required  Action
--------------------------------------------------------------------------------
     1        Y         Resource 'ora.ASMNET1LSNR_ASM.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.DATA.dg' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.LISTENER.lsnr' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.LISTENER_SCAN1.lsnr' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.asm' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.gns.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.net1.network' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.ons' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.orcl.db' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.proxy_advm' (rac0) will be in state [OFFLINE]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [OFFLINE]
              Y         Resource 'ora.scan1.vip' (1/1) will be in state [OFFLINE]
              Y         Server 'rac0' will be removed from pools [Generic ora.ORCL]
     2        Y         Resource 'ora.gns.vip' (1/1) will be in state [ONLINE] on server [rac1]
              Y         Resource 'ora.rac0.vip' (1/1) will be in state [ONLINE|INTERMEDIATE] on server [rac1]
<<output abridged>>
--------------------------------------------------------------------------------
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
• Cloud File System
Agenda
• An outage at the database or application level can cause in-flight work to be lost
• A user's reattempt of the transaction may lead to logical errors, i.e., duplication of data
• Handling such exceptions at the application level is not easy
[Diagram: the application submits calls 1 to 5 to the database; an error leaves the transaction in doubt]
In-doubt Transaction Issues Before 12c
• Transaction Guard
  – Provides a generic protocol and API that applications can use for at-most-once execution across planned and unplanned outages and repeated submissions
• Application Continuity
  – Enables the replay of in-flight, recoverable transactions following a database outage
Solution: Transaction Guard & Appl. Continuity
• Part of both Standard and Flex clusters
• Returns the outcome of the last transaction after a recoverable error, using a Logical Transaction ID (LTXID)
• Used by Application Continuity (where it is automatically enabled)
• Can also be used independently
What Is Transaction Guard
• Database request: a unit of work submitted via SQL, PL/SQL, etc.
• Recoverable error: an error caused by an issue independent of the application, e.g., network, node, database, or storage failures
• Reliable commit outcome: the outcome of the last transaction (preserved by Transaction Guard using the LTXID)
• Session state consistency: describes how the application changes non-transactional state during a database request
• Mutable functions: functions that change their state with every execution
What Is Transaction Guard
• LTXID = Logical Transaction ID
• Used to fetch the commit outcome of the last transaction
• Via DBMS_APP_CONT.GET_LTXID_OUTCOME
• The client is supplied a unique LTXID at each authentication, and at each client-driver round trip for commit operations
• Both the client and the database hold the LTXID
• Transaction Guard ensures that each LTXID is unique
• The LTXID is kept after the commit for the retention period (24 hours by default)
• While the outcome is being obtained, the LTXID is blocked to ensure its integrity
What Is Logical TX ID(LTXID)?
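The at-most-once guarantee can be pictured with a toy model: the server records each commit against its LTXID, so a retried submission with the same LTXID sees that the work already committed instead of running it twice. This is a sketch of the idea only, not Oracle's LTXID_TRANS implementation; all names are mine:

```python
# Toy commit-outcome table keyed by LTXID, modeling at-most-once
# execution: duplicate submissions return the stored outcome instead
# of applying the work a second time.

class CommitOutcomeTable:
    def __init__(self):
        self._committed = {}  # ltxid -> recorded result

    def commit(self, ltxid, work):
        if ltxid in self._committed:       # duplicate submission detected
            return self._committed[ltxid]  # do not apply the work again
        result = work()
        self._committed[ltxid] = result
        return result

table = CommitOutcomeTable()
counter = {"n": 0}

def debit():
    counter["n"] += 1
    return counter["n"]

print(table.commit("ltxid-42", debit))  # 1  (work runs once)
print(table.commit("ltxid-42", debit))  # 1  (retry returns the same outcome)
```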
1. Receive a FAN down event (or a recoverable error)
2. FAN aborts the dead session
3. If the error is recoverable (new OCI attribute for OCI, isRecoverable for JDBC):
   a. Get the last LTXID from the dead session using getLTXID, or from your callback
   b. Obtain a new session
   c. Call GET_LTXID_OUTCOME with the last LTXID to obtain the COMMITTED and USER_CALL_COMPLETED status
   d. If COMMITTED and USER_CALL_COMPLETED:
        return the result
      Else if COMMITTED and not USER_CALL_COMPLETED:
        return the result with a warning (details such as out binds or row counts were not returned)
      Else (not COMMITTED):
        clean up and resubmit the request, or return the uncommitted result to the client
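The pseudo-workflow above can be rendered as plain control flow. The driver objects here (DeadSession, the outcome lookup, the resubmit callback) are hypothetical stand-ins for the real OCI/JDBC calls named on the slide; only the branching logic is the point:

```python
# Control-flow sketch of the Transaction Guard recovery workflow.
# get_ltxid_outcome stands in for DBMS_APP_CONT.GET_LTXID_OUTCOME and
# returns (committed, user_call_completed).

def handle_failure(dead_session, get_ltxid_outcome, resubmit):
    if not dead_session.is_recoverable:      # isRecoverable / OCI attribute
        raise RuntimeError("non-recoverable error: surface it to the user")
    ltxid = dead_session.ltxid               # getLTXID on the dead session
    committed, call_completed = get_ltxid_outcome(ltxid)
    if committed and call_completed:
        return "return result"
    if committed:
        return "return result with warning"  # out binds / row counts missing
    return resubmit()                        # safe: the work never committed

class DeadSession:
    def __init__(self, recoverable, ltxid):
        self.is_recoverable, self.ltxid = recoverable, ltxid

# Stub outcome table: pretend 'tx1' committed and completed, 'tx2' did not.
outcome = {"tx1": (True, True), "tx2": (False, False)}
print(handle_failure(DeadSession(True, "tx1"), lambda l: outcome[l],
                     lambda: "resubmitted"))  # return result
print(handle_failure(DeadSession(True, "tx2"), lambda l: outcome[l],
                     lambda: "resubmitted"))  # resubmitted
```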
• Supported
  – Local transactions
  – Parallel transactions
  – Distributed and remote transactions
  – DDL and DCL transactions
  – Auto-commit and commit-on-success
  – PL/SQL with embedded COMMIT
• Unsupported
  – Recursive transactions
  – Autonomous transactions
  – Active Data Guard with read/write DB links for forwarding transactions
  – GoldenGate and Logical Standby
• API supported for
  – 12c JDBC Type 4 driver
  – 12c OCI/OCCI client drivers
  – 12c ODP.NET
• Database release 12.1.0.1 or later
• GRANT EXECUTE ON DBMS_APP_CONT TO <user>;
• Configure Fast Application Notification (FAN)
• Locate and define the transaction history table (LTXID_TRANS)
• Configure the following parameters for the service:
  – COMMIT_OUTCOME=TRUE
  – FAILOVER_TYPE=TRANSACTION
  – RETENTION_TIMEOUT=<value>
Configuring Database for Transaction Guard
Adding an admin-managed service:

srvctl add service -database orcl -service GOLD -prefer inst1 -available inst2 -commit_outcome TRUE -retention 604800

Modifying a single-instance service:

DECLARE
  params dbms_service.svc_parameter_array;
BEGIN
  params('COMMIT_OUTCOME') := 'true';
  params('RETENTION_TIMEOUT') := 604800;
  dbms_service.modify_service('<service-name>', params);
END;
/
Sample Service Configuration for Transaction Guard
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
• Cloud File System
Agenda
• Masks outages from applications
• Replays in-flight transactions
• Uses Transaction Guard implicitly
What Is Application Continuity
Application Continuity-Workflow
Image courtesy-Oracle documentation
• For Java Client
– Increase memory for replay queues
– Additional CPU for garbage collection
• For Database Server
– Additional CPU for validation
• Transaction Guard
– Bundled with the kernel
– Minimal overhead
Application Continuity-Resource Requirements
• Use the disableReplay() API
• Check for uses of:
  – UTL_FILE, UTL_MAIL, UTL_FILE_TRANSFER, UTL_HTTP, UTL_TCP, UTL_SMTP, DBMS_ALERT
• Disable replay when the application:
  – Assumes that a location value doesn't change
  – Assumes that a ROWID value doesn't change
  – Uses autonomous transactions or external PL/SQL
Disabling Application Continuity
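As a convenience, the packages listed above can be flagged in PL/SQL source as a hint for where replay should be disabled. The scan below is purely illustrative (not an Oracle tool); the package list comes from the slide:

```python
# Flag side-effecting packages inside PL/SQL source, as a hint for
# where disableReplay() may be needed. Word boundaries prevent
# UTL_FILE from matching inside UTL_FILE_TRANSFER.
import re

SIDE_EFFECT_PACKAGES = [
    "UTL_FILE", "UTL_MAIL", "UTL_FILE_TRANSFER",
    "UTL_HTTP", "UTL_TCP", "UTL_SMTP", "DBMS_ALERT",
]

def flag_side_effects(plsql_source: str) -> list[str]:
    found = []
    for pkg in SIDE_EFFECT_PACKAGES:
        if re.search(rf"\b{pkg}\b", plsql_source, re.IGNORECASE):
            found.append(pkg)
    return found

src = "BEGIN utl_mail.send(sender=>'a', recipients=>'b'); END;"
print(flag_side_effects(src))  # ['UTL_MAIL']
```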
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
• Cloud File System
Agenda
• ASM instances run locally on each node
• ASM clients can access ASM only from the local node
• Loss of the local ASM instance makes the clients connected to it unavailable
ASM of Past Times
Image courtesy-Oracle documentation
• A 1:1 mapping of ASM instances to clients is no longer required
• Number of ASM instances = cardinality (default 3)
• Uses a dedicated network called the ASM network
• The ASM network is used exclusively for communication between ASM instances and their clients
• If the local ASM instance fails, clients fail over to another Hub node running an ASM instance
• Mandatory for a 12c Flex Cluster
12c’s Flex ASM
Image courtesy-Oracle documentation
[Diagram: three Hub Nodes running Oracle Clusterware with GNS, joined by the public network, the CSS network, and a dedicated ASM network; the Hub nodes reach ASM storage over the storage network]
Dedicated ASM Network in 12c Flex ASM
Dedicated ASM Network in 12c Flex ASM
Flex ASM- Failover
• Flex ASM can be managed using ASMCA, CRSCTL, SQL*Plus, and SRVCTL

$ asmcmd showclustermode
ASM cluster : Flex mode enabled

$ srvctl status asm -detail
ASM is running on mynoden02,mynoden01
ASM is enabled.

$ srvctl config asm
ASM instance count: 3

SQL> SELECT instance_name, db_name, status FROM v$asm_client;
INSTANCE_NAME   DB_NAME  STATUS
--------------- -------- ------------
+ASM1           +ASM     CONNECTED
orcl1           orcl     CONNECTED
orcl2           orcl     CONNECTED
Administering Flex ASM
• Pure 12c mode
  – Cardinality != number of nodes
  – Supports DB instance failover to other ASM instances
  – Supports any DB instance connecting to any ASM instance
  – Managed by cardinality
• Mixed mode
  – Flex ASM with cardinality = number of nodes
  – An ASM instance on every node
  – Allows 12c DB instances to connect to remote ASM instances
  – Pre-12c DB instances can connect to the local ASM instance
• Standard mode
  – Standard ASM installation and configuration
  – Can be converted to Flex ASM mode using ASMCA or converttoFlexASM.sh
12c ASM- Mixed Mode Configuration
• Flex Cluster
• Flex Cluster- Server Pool Enhancements
• Multitenant database with 12c RAC
• Bundled Agents(XAG)
• What-If command
• Transaction Guard
• Application Continuity
• Flex ASM
• Cloud File System
Agenda
• Next-generation file system
• The 12c Cloud File System integrates:
  – ASM Cluster File System (ACFS)
  – ASM Dynamic Volume Manager (ADVM)
• Cloud FS supports applications, databases, and storage in private clouds
12c Cloud File System(Cloud FS)
Overview of Cloud FS in 12c
Image courtesy- Google Images
Cloud FS-Advanced Data Services
• Support for all types of files
• Enhanced Snapshots(snap-of-snap)
• Auditing
• Encryption
• Tagging
Take Away
• 12c has revolutionized the HA stack, yet again
• Flex Cluster and Flex ASM are new paradigms
• Multitenancy is the solution for database consolidation
• Using Flex Cluster along with Multitenancy gives you a much better foundation for building a private cloud
• Cloud FS is the foundation for the next-generation storage solution for Oracle clusters
Thank You!
@amansharma81
http://blog.aristadba.com