
Oracle Database 12c New Features

Oracle Database 12c introduces significant new high availability (HA) capabilities that:
- Drastically cut down planned and unplanned downtime
- Eliminate compromises between HA and performance
- Tremendously boost operational productivity
These take availability to unprecedented new levels: a next-generation Maximum Availability Architecture (MAA), optimized for Oracle, delivering extreme availability.


Maximum Availability Architecture
- Active Data Guard: data protection, DR, query offload
- GoldenGate: active-active replication, heterogeneous platforms
- RMAN, Oracle Secure Backup: backup to tape / cloud
- Edition-based Redefinition, Online Redefinition, Data Guard, GoldenGate: minimal-downtime maintenance, upgrades, and migrations
- RAC: scalability, server HA
- Flashback: human error correction
- Application Continuity: application HA
- Global Data Services: service failover / load balancing
(Diagram: production site and active replica.)

Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Database outages can cause in-flight work to be lost, leaving users and applications in doubt. This often leads to user pains (duplicate submissions, rebooting of mid-tiers) and developer pains.

In-Flight Work: Dealing With Outages

Current situation: end user, application servers, database servers.

Customer pain points: database session outages (planned and unplanned) have a significant impact on the user experience.
- Doubtful outcome: users are left not knowing what happened to their funds transfers, orders, payments, bookings, and so on.
- Usability: users see an error, lose screens of uncommitted data, and need to log in again and re-enter or resubmit, sometimes leading to logical corruption.
- Disruption: DBAs sometimes need to reboot mid-tiers.
Developer pain points: the current approach to outages places the onus on developers to write exception handling in every possible place.
- Every code module needs exception code to know whether transactions committed.
- Exception handling must work for all transaction sources.
- Rebuilding non-transactional state is nearly impossible for an application that modifies state at runtime.

Solving Application Development Pains (new in Oracle Database 12c):
- Transaction Guard: a reliable protocol and API that returns the outcome of the last transaction
- Application Continuity: safely attempts to replay in-flight work following outages and planned operations
Let me introduce you to Transaction Guard and Application Continuity.

Transaction Guard is an API that applications use in their error handling. With Transaction Guard, the end-user experience can be improved dramatically: the application can tell the user whether or not the transaction just submitted was committed. This minimizes messages on the user's screen such as "do not resubmit", "do not reload your screen", or "call customer support". Users can know whether or not their purchase was processed.

Application Continuity is a feature that masks recoverable outages from end users and applications. It attempts to replay the transactional and non-transactional work of a database request in a non-disruptive manner. The replay is issued after a recoverable error has made the database session unavailable, so that the application can continue from the point at which it received the error.

Application Continuity improves the end-user experience because it masks many planned and unplanned outages. The application's error handling should be invoked less often. When replay is successful, the outage appears to the end user as if the execution was slightly delayed.

Here are some terms and concepts to understand in order to use Application Continuity or Transaction Guard:

Database Request: A database request is a unit of work submitted from the application. It typically corresponds to the SQL and PL/SQL, local calls, and database RPC calls of a single web request on a single database connection. It is generally demarcated by the calls made to check out and check in the database connection from a connection pool. For recoverable errors, Application Continuity re-establishes the database session and repeats the database request safely.

Recoverable Error (enhanced): A recoverable error is an error that arises due to an external system failure, independent of the application session logic that is executing. Recoverable errors occur following planned and unplanned outages of foregrounds, networks, nodes, storage, and databases. The application receives an error code that can leave it not knowing the status of the last operation submitted. Recoverable errors have been enhanced in Database 12c to include more errors, and there is now a public API for OCI. Applications should no longer list error numbers in their code.

Reliable Commit Outcome: In Oracle, a transaction is committed by updating its entry in the transaction table. Oracle generates a redo log record corresponding to this update and writes it out. Once this redo log record is written to the redo log on disk, the transaction is deemed committed at the database. From the client perspective, the transaction is committed when an Oracle message (termed the commit outcome), generated after that redo is written, is received by the client. However, the COMMIT message is not durable. Transaction Guard obtains the commit outcome reliably when it has been lost following a recoverable error.

Mutable Functions: Mutable functions are functions that can change their results each time they are called. Mutable functions can cause replay to be rejected because the results visible to the application change at replay. Consider sequence.NEXTVAL, which is often used in key values. If a primary key is built with a sequence value and this value is later used in foreign keys or other binds, the same function result must be returned at replay if the application could be using it.

Application Continuity provides mutable object value replacement at replay for granted Oracle function calls, to provide opaque bind-variable consistency. If the call uses database functions that are mutable, including sequence.NEXTVAL, SYSDATE, SYSTIMESTAMP, and SYS_GUID, then the original values returned from the function execution are saved and reapplied at replay. If an application decides not to grant mutables and results are returned to the client, replay of these requests may be rejected.

Session State Consistency: Non-transactional state is state such as NLS settings, cursors, events, and global PL/SQL package state. After a COMMIT statement has executed, if state was changed in that transaction, it is not possible to replay the transaction to re-establish that state if the session is lost. When configuring Application Continuity, almost all applications should use the default mode, DYNAMIC. In DYNAMIC mode, replay is disabled from COMMIT until the end of the request. This is not a problem for most applications, as almost all requests have zero or one commit, and the commit is most often the last statement in a database request.

Transaction Guard

Preserve and Retrieve the COMMIT Outcome
- API that supports a known commit outcome for every transaction
- Without Transaction Guard, transaction retry upon failures can cause logical corruption
- With Transaction Guard, applications can deal gracefully with error situations, vastly improving the end-user experience
- Used transparently by Application Continuity

Transaction Guard is a reliable protocol and API that applications use to obtain a reliable commit outcome. The API is embedded in error handling and should be called following recoverable errors. The outcome indicates whether or not the last transaction was committed and completed. Once the commit outcome is returned to the application, it persists: if Transaction Guard returns committed or uncommitted, the status stays that way. This enables the application or user to make a stable next decision.

Why use Transaction Guard?

The application uses Transaction Guard to return the known outcome, committed or uncommitted. The user or application can then decide the next action to take: for example, resubmit when the last transaction on the session has not committed, or continue when the last transaction has committed and the last call has completed.

Transaction Guard is used by Application Continuity and is automatically enabled by it, but it can also be enabled independently. Transaction Guard prevents the transaction being replayed by Application Continuity from being applied more than once. If the application has implemented its own application-level replay, that replay may need to be integrated with Transaction Guard to provide idempotence.

Understanding Transaction Guard

In the standard commit case, the database commits a transaction and returns a success message to the client. In the illustration shown in the slide, the client submits a commit statement and receives a message stating that communication failed. This type of failure can occur for several reasons, including a database instance failure or a network outage. In this scenario, the client does not know the state of the transaction.

Oracle Database solves the communication failure by using a globally unique identifier called a logical transaction ID. When the application is running, both the database and the client hold the logical transaction ID. The database gives the client a logical transaction ID at authentication and at each round trip from the client driver that executes one or more commit operations.

The logical transaction ID uniquely identifies the last database transaction submitted on the session that failed. For each round trip from the client in which one or more transactions are committed, the database persists a logical transaction ID. This ID can provide transaction idempotence for interactions between the application and the database for each round trip that commits data.

When a recoverable outage occurs, the exception handling is modified to get the logical transaction ID and call a new PL/SQL interface, DBMS_APP_CONT.GET_LTXID_OUTCOME, which returns the reliable commit outcome. (See the development and deployment section.)
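A minimal sketch of such an error handler, assuming the LTXID has already been captured on the client (for example with getLogicalTransactionId in JDBC) before a new session was obtained; names and handling here are illustrative only:

DECLARE
  l_ltxid               RAW(64);  -- LTXID of the dead session, supplied by the client driver
  l_committed           BOOLEAN;
  l_user_call_completed BOOLEAN;
BEGIN
  -- l_ltxid := <value captured via getLogicalTransactionId (JDBC) or OCI_ATTR_LTXID (OCI)>;
  DBMS_APP_CONT.GET_LTXID_OUTCOME(
    client_ltxid        => l_ltxid,
    committed           => l_committed,
    user_call_completed => l_user_call_completed);

  IF l_committed THEN
    NULL;  -- the last transaction committed: return its result, do not resubmit
  ELSE
    NULL;  -- the last transaction did not commit: safe to clean up and resubmit
  END IF;
END;
/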

Preserving the Commit Outcome
- The client receives a Logical Transaction ID (LTXID)
- The client is guaranteed the outcome of the last submission
- A global protocol blocks out-of-order flows
- Safe for applications to return success or resubmit
- Used by Application Continuity

Typical usage:
- Database crashes: (i) FAN aborts the dead session; (ii) the application gets an error; (iii) the connection pool removes the orphan connection from the pool
- If the error is recoverable:
  - Get the last LTXID of the dead session using getLogicalTransactionId or from your callback
  - Obtain a new session
  - Call DBMS_APP_CONT.GET_LTXID_OUTCOME
  - If committed, return the result; the application may continue
  - Else return uncommitted; the application cleans up and resubmits the request
  - If uncommitted, the transaction is prevented from eventually committing

Transaction Guard solution coverage:
- Clients: JDBC-thin, OCI, OCCI, ODP.NET
- Database: uses the logical transaction ID (LTXID)
- Commit models: local transactions; auto-commit and commit-on-success; commit embedded in PL/SQL; DDL, DCL, and parallel DDL; remote and distributed transactions
- Excludes XA in 12.1

Transaction Guard supports all listed transaction types. The primary exclusions in 12c are XA and read/write database links from Active Data Guard.

To configure Transaction Guard, set the service attribute COMMIT_OUTCOME. Values: TRUE and FALSE. Default: FALSE. Applies to new sessions.

Optionally change the service attribute RETENTION_TIMEOUT. Units: seconds. Default: 24 hours (86400). Maximum value: 30 days (2592000).
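For an Oracle RAC service these attributes are typically set with srvctl; a minimal sketch, where the database name orcl and service name sales are placeholders:

$ srvctl modify service -db orcl -service sales -commit_outcome TRUE -retention 86400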

Oracle Database 12c provides the Transaction Guard interface and APIs for JDBC thin, OCI, OCCI, and ODP.NET.


Application Continuity

Masks Unplanned and Planned Outages
- Replays in-flight work on recoverable errors
- Masks many hardware, software, network, and storage errors and outages when successful
- Improves end-user experience and productivity without requiring custom application development

Transaction Replayed

When replay is successful, Application Continuity masks many recoverable database outages from applications and users. It achieves this masking by restoring the database session (the full session, including session states, cursors, and variables) and the last in-flight transaction, if there is one.

Without Application Continuity, database recovery does not mask outages that are caused by network outages, instance failures, hardware failures, repairs, configuration changes, patches and so on.

If the database session becomes unavailable due to a recoverable error, Application Continuity attempts to rebuild the session and any open transactions to the correct states. If the transaction was successful and does not need to be re-executed, the successful status is returned to the application. If the replay is successful, the request continues safely without duplication. If the replay is not successful, the database rejects the replay and the application receives the original error. To be successful, the replay must return to the client exactly the same data that the client received previously in the request and may have made decisions on.

How Application Continuity works:

Here are the steps:
1. The client application sends a database request, which is received by the JDBC replay driver.
2. The replay driver sends the calls that make up the request to the database, receiving directions for each call from the database.
3. The replay driver receives a Fast Application Notification (FAN) event or a recoverable error.
4. The replay driver then:
- checks that the request has replay enabled and checks timeouts; assuming all is good,
- obtains a new database session and, if a callback is registered, runs this callback to initialize the session, and
- checks with the database to determine whether replay can progress, for example whether the last transaction was committed or rolled back.
5. If replay is required, the JDBC replay driver resubmits the calls, receiving directions for each call from the database. Each call must establish the same client-visible state. Before the last call is replayed, the replay driver ends the replay and returns to normal runtime mode.
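Replay is attempted only for requests using a database service configured for Application Continuity. A minimal configuration sketch with srvctl, where the database name orcl, service name sales, and timeout values are placeholders:

$ srvctl modify service -db orcl -service sales \
    -failovertype TRANSACTION -commit_outcome TRUE \
    -replay_init_time 900 -retention 86400 \
    -session_state DYNAMIC -notification TRUE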

Solution coverage:

- Client: JDBC-Thin driver, UCP, WebLogic Server, third-party Java apps

- Database: SQL, PL/SQL, and JDBC RPCs (SELECT, ALTER SESSION, DML, DDL, COMMIT/ROLLBACK/SAVEPOINT); transaction models: local, parallel, remote, distributed; mutable function support; hardware acceleration on current Intel and SPARC chips


Application Continuity is supported for thin JDBC, Universal Connection Pool (UCP), and WebLogic Server. It is included with Oracle Real Application Clusters (Oracle RAC), RAC One Node, and Oracle Active Data Guard.

Application Continuity recovers the database request, including any in-flight transaction and the database session states. The requests may include most SQL and PL/SQL, RPCs, and local JDBC calls.

Application Continuity uses Transaction Guard. Transaction Guard tags each database session with a logical transaction ID (LTXID), so that the database recognizes if a request committed the transaction before the outage.

Application Continuity offers the ability to keep the original values of some Oracle functions, such as sequence.NEXTVAL, that change their values each time they are called. This improves the likelihood that replay will succeed.

On current SPARC and Intel-based chips, the validation that Application Continuity uses is supported by firmware at the database server.

Benefit: Transaction Guard and Application Continuity provide intelligent, application-transparent fault tolerance.

Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Databases in Replicated Environments: Challenges
- No seamless way to efficiently use all the databases
- No automated load balancing and fault tolerance
(Diagram: primary, active standbys, GoldenGate.)

Challenges:
- Database utilization hampered by geographic fragmentation
- Load balancing and fault tolerance hard to automate globally
- Resource allocation and management dictated by geography
Resulting in:
- Sub-optimal resource utilization
- Hampered or no enterprise-wide data integration
- Unclear strategy for consolidation and distribution candidates


Global Data Services: Load Balancing and Service Failover for Replicated Databases
- Extends RAC-style service failover, load balancing (within and across data centers), and management capabilities to a set of replicated databases
- Takes into account network latency, replication lag, and service placement policies
- Achieves higher availability, improved manageability, and maximum performance

GDS is tested with WLS 12.1.2.

In an active/active configuration, a global service can be available on all the GoldenGate replicas. Different services can also be started on each replica, which is useful for conflict avoidance. Client connections and requests are transparently routed to the closest / best database, and runtime load balancing metrics give the client real-time information on which database to issue the next request. All Oracle connection pools are supported (UCP, WLS, OCI, ODP.NET). If a database fails, its global services are restarted on another replica.

Global Data Services: Active Data Guard Example
- Reporting client routed to the best database based on location, response time, data, and acceptable data lag; reports automatically run on the least loaded server
- Reporting client failover: if the preferred database is not available, the client is routed to another database in the same region or to a remote database
- Global service migration: services are automatically migrated on failover/switchover; if the primary database is down, the Call Center service is started on the new primary

Active Data Guard: Reporting Service

Call Center Service

The update service runs on the primary; the reporting service runs on the primary or on an Active Data Guard standby.

A global service may be started in another database based on policies (e.g., singleton service, minimum of 3 instances, and so on).

Global Data Services: GoldenGate Example
- Call Center client connections and requests transparently routed to the closest / best database
- Runtime load balancing metrics give the client real-time information on which database to issue the next request
- If a database fails, its global services are restarted on another replica

GoldenGate

Call Center Service


Global Data Services Use Case: Active Data Guard without GDS

(Diagram: Primary and Active Standby connected by Data Guard. Order Capture uses the Orders Service on the primary; the Order History View uses the History Service on the Active Standby.) A critical e-commerce application accesses the Active Data Guard standby: what happens when the Active Standby is down?

The Order Capture application runs on the primary. The Order History application is offloaded to the Active Standby.

Without GDS, to bring the History app online:
Steps:
- Change the properties (role definition) of the History Service via srvctl
- Manually start the History Service on the primary database
- Restart the apps
- Connect to the History Service on the primary database

Drawbacks:
- Unplanned application downtime
- Manual, time-consuming, error-prone

Global Data Services Use Case: Active Data Guard with GDS. When the Active Standby is down, GDS fails over the History Service to the primary and redirects connections through FAN/FCF.

(Diagram: Primary and Active Standby connected by Data Guard; Global Data Services routes Order Capture to the Orders Service and the Order History View to the History Service, now running on the primary.)

Global Data Services: Concepts
- GDS Region: a group of databases and clients in close network proximity, e.g., East, West
- GDS Pool: databases that offer a common set of global services, e.g., HR, Sales
- Global Service: a database service provided by multiple databases with replicated data; a local service plus region affinity, replication lag, and database cardinality
- Global Service Manager (GSM): provides the main GDS functionality, service management and load balancing; clients connect to a GSM instead of the database listener; at least one GSM per region, or multiple GSMs for high availability; all databases and services register with all GSM listeners
- GDS Catalog: stores all metadata and enables centralized global monitoring and management; the global service configuration is stored in the GDS Catalog
- GDSCTL: command-line interface to administer GDS

Notes:
- At least 2-3 GSMs per region are recommended
- One GSM per region is designated as the master; the master GSM is responsible for publishing FAN events to clients via the ONS server
- If the master GSM dies, another GSM in the region takes over; if all GSMs in the region die, the master GSM from another region takes over
- If all GSMs in all regions fail, clients can still connect to the local listeners
- The GDS Catalog database can be replicated for HA/DR

Global Data Services Summary: Globally Replicated, High Availability Architecture

The GDS framework dynamically balances user requests across multiple replicated sites, based on location, load, and availability. It provides global availability, supports automatic service failover, and integrates disparate databases into a unified data cloud.

GSM - Global Service Manager

(Diagram: Data Center #1 (APAC) and Data Center #2 (EMEA), each with a primary, local standbys, and Active Data Guard; a remote standby reader farm; Oracle GoldenGate between sites; a SALES pool (sales_reporting_srvc, sales_entry_srvc) and an HR pool (hr_apac_srvc, hr_emea_srvc); GDS Catalog primary and standby; Global Service Managers in each region, administered via GDSCTL. All GDS client databases are connected to all GSMs.)

Requirements:
- Ability to load balance across data centers
- Optimal resource utilization
- Global scalability and availability
- Capability to centrally manage global resources
Solution: Global Data Services (GDS)

Global Data Services (GDS) allows:

- Load balancing of application workloads across regions: extends RAC-like connect-time and run-time load balancing globally; addresses inter-region resource fragmentation so that underutilized resources in one region can be used to satisfy another region's workload, enabling optimal resource utilization

- Global scalability and availability: easy to elastically add or remove databases from the GDS infrastructure; supports seamless service failover

- Centralized management of global resources: easier management of globally distributed multi-database configurations

Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Zero Data Loss Challenge: the longer the distance, the larger the performance impact.

Synchronous communication leads to performance trade-offs. (Diagram: a commit on the primary waits for the network send to the standby and the network acknowledgement before the commit ack is returned.)


Data Guard ASYNC today: some data loss exposure upon disaster. (Diagram: primary ships redo ASYNC to the standby.)

Far Sync: a lightweight Oracle instance with a standby control file, standby redo logs, and archived redo logs, but no data files.
- Receives redo synchronously from the primary and forwards it asynchronously, in real time, to the standby
- Upon failover, the async standby transparently obtains the last committed redo from the Far Sync instance and applies it: a zero data loss failover
- A second Far Sync instance can be pre-configured to transmit in the reverse direction after a failover or switchover
- Terminal standbys are required to be Active Data Guard standbys
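As a minimal sketch of how a far sync instance is set up (the file path is a placeholder): its control file is created from the primary and is then used to mount the far sync instance, which has standby redo logs but no data files.

ALTER DATABASE CREATE FAR SYNC INSTANCE CONTROLFILE AS '/tmp/farsync.ctl';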

Active Data Guard Far Sync (new in 12.1): Zero Data Loss for Async Deployments

Active Data Guard Far Sync: Operational Flow
(Diagram: the primary sends redo SYNC to the Far Sync instance, which forwards it ASYNC to the standby.)

Active Data Guard Far Sync: Operational Flow (contd.). No compromise between availability and performance.
(Diagram: the primary sends redo SYNC to the Far Sync instance, which forwards it ASYNC to the standby: zero data loss.)
- Best data protection, least performance impact
- Low cost and complexity
- Best way to implement a near-DR plus far-DR model
- Relevant to existing Data Guard ASYNC configurations
- Data Guard failover? No problem, just do it: no data loss

Active Data Guard Far Sync: Benefits
Example: the Reserve Bank of India report "Working Group on Information Security, Electronic Banking, Technology Risk Management and Cyber Frauds" (http://www.rbi.org.in/scripts/PublicationReportDetails.aspx?UrlPage=&ID=609). Chapter 7 of this report, "Business Continuity Planning", has specific guidelines regarding RPO and RTO, and this particular guideline has sparked a lot of interest within the banking IT community:

Given the need for drastically minimizing the data loss during exigencies and enable quick recovery and continuity of critical business operations, banks may need to consider near site DR architecture. Major banks with significant customer delivery channel usage and significant participation in financial markets/payment and settlement systems may need to have a plan of action for creating a near site DR architecture over the medium term (say, within three years).

Active Data Guard Real-Time Cascading: Eliminates Propagation Delay
(Diagram: the primary sends redo SYNC or ASYNC to Standby 1, which cascades ASYNC to Standby 2.)
- In 11.2, Standby 1 waits until a log switch before forwarding redo from archived logs to Standby 2.
- In 12.1, Standby 1 forwards redo to Standby 2 in real time, as it is received: no waiting for a log switch.
- Standby 2 (an Active Data Guard standby) is up to date for offloading read-only queries and reports.

Data Guard Fast Sync: Reduced Primary Database Impact for Maximum Availability

(Diagram: on commit, the primary's LGWR writes to the online redo logs while NSS ships redo to the standby's RFS process, which writes to the standby redo logs. With Fast Sync, the acknowledgement is returned on receipt of the redo, before the standby redo log write, and only then is the commit acknowledged to the client.)

For SYNC transport, the remote site acknowledges the received redo before writing it to the standby redo logs.
- Reduces the latency of commit on the primary
- Better DR: increased SYNC distance
- If the network round-trip latency is less than the time for the local online redo log write, synchronous transport will not impact primary database performance

Fast Sync is set via the NOAFFIRM attribute of the redo transport destination.
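A minimal sketch, assuming a standby with DB_UNIQUE_NAME boston reachable through the net service name boston (both placeholders); with the Data Guard broker, the equivalent transport mode is LogXptMode='FASTSYNC':

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=boston SYNC NOAFFIRM DB_UNIQUE_NAME=boston' SCOPE=BOTH;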

Data Guard: Other New Features in Oracle Database 12c
- Rolling Upgrade with Active Data Guard: complexity is automated through a simple PL/SQL package, DBMS_ROLLING (12.1.0.1 onwards), with simple init, build, start, switchover, and finish procedures (see the sketch after this list)
- Additional data type support: XML OR, binary XML, Spatial, Image, Oracle Text, DICOM, ADTs (simple types, varrays)

- Validate Role Change Readiness: ensures the Data Guard configuration is ready for switchover with automated health checks that verify there are no log gaps, perform a log switch, detect any inconsistencies, and ensure online log files are cleared on the standby
- DML on Global Temporary Tables: temporary undo is not logged in the redo logs, enabling DML on global temporary tables on an Active Data Guard standby (more reporting support); set by default on an Active Data Guard standby
- Unique Sequences: the primary allocates a unique range of sequence numbers to each standby, enabling more flexible reporting choices for Active Data Guard
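A minimal sketch of a DBMS_ROLLING-driven rolling upgrade, assuming the future primary has the DB_UNIQUE_NAME boston (a placeholder); each call is made from the current primary:

EXEC DBMS_ROLLING.INIT_PLAN(future_primary => 'boston');
EXEC DBMS_ROLLING.BUILD_PLAN;
EXEC DBMS_ROLLING.START_PLAN;
-- upgrade the transient logical standby (boston) to the new release at this point
EXEC DBMS_ROLLING.SWITCHOVER;
EXEC DBMS_ROLLING.FINISH_PLAN;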

Notes:
1. DDL to create temporary tables must be issued on the primary database. Temporary undo enables more reporting applications to leverage Active Data Guard and is controlled by the new init.ora parameter TEMP_UNDO_ENABLED.

Global sequences
- Sequences created using the default CACHE and NOORDER options can be accessed from an Active Data Guard standby database
- The primary allocates a unique range of sequence numbers to each standby
- Enables more flexible reporting choices for Active Data Guard
Session sequences
- A unique range of sequence numbers exists only within a session
- Suitable for reporting applications that leverage global temporary tables

In an Active Data Guard environment, sequences created by the primary database with the default CACHE and NOORDER options can be accessed from standby databases as well. When a standby database accesses such a sequence for the first time, it requests that the primary database allocate a range of sequence numbers. The range is based on the cache size and other sequence properties specified when the sequence was created. The primary database then allocates those sequence numbers to the requesting standby database by adjusting the corresponding sequence entry in the data dictionary. When the standby has used all the numbers in the range, it requests another range.

The primary database ensures that each range request from a standby database gets a range of sequence numbers that do not overlap with the ones previously allocated for both the primary and standby databases. This generates a unique stream of sequence numbers across the entire Data Guard configuration.

Because the standby's requests for a range of sequences involve a round-trip to the primary, be sure to specify a large enough value for the CACHE keyword when you create a sequence that will be used on an Active Data Guard standby. Otherwise, performance could suffer.
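For example (the sequence names and cache size are arbitrary), a global sequence intended for use on an Active Data Guard standby and a session sequence for reporting with global temporary tables might be created as:

CREATE SEQUENCE order_seq CACHE 1000 NOORDER;
CREATE SEQUENCE report_row_seq SESSION;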

Restrictions: Sequences created with the ORDER or NOCACHE options cannot be accessed on an Active Data Guard standby

Supported types in 11.2: BINARY_DOUBLE, BINARY_FLOAT, BLOB, CHAR, CLOB and NCLOB, DATE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, LONG, LONG RAW, NCHAR, NUMBER, NVARCHAR2, RAW, TIMESTAMP, TIMESTAMP WITH LOCAL TIME ZONE, TIMESTAMP WITH TIME ZONE, VARCHAR2 and VARCHAR, XMLType stored as CLOB, LOBs stored as SecureFiles.

Additional data types supported in Oracle Database 12c: XML OR and binary XML; XDB repository operations and other commonly used XDB operations; ADTs with attributes of simple types and varrays, with inheritance and type evolution; 32K VARCHAR2; commonly used AQ operations; ANYDATA with non-opaque types; Spatial, Image, Oracle Text, DICOM; complete SecureFiles support; DBFS; Scheduler job definitions.

Still unsupported in 12.1: BFILE; collections (nested tables); ROWID, UROWID; user-defined types; ADTs with attributes of nested tables, refs, and BFILEs; top-level nested tables, varrays, refs, and BFILEs; cases where the primary key involves ADT columns; SecureFiles FRAGMENT_OPERATION (this was intended only for internal consumption but got documented; it is supported via EDS).

DGMGRL command: VALIDATE DATABASE
- Validates each database's current status
- Verifies there are no archive log gaps
- Performs a log switch on the primary to verify the log is applied on all standbys
- Shows any databases or RAC instances that are not discovered
- Detects inconsistencies between database properties and values stored in the database
- Ensures online redo log files have been cleared in advance of a role transition
- Checks for previously disabled redo threads
- Ensures the primary and all standbys are on the same redo branch
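A minimal sketch of checking role change readiness with the broker, assuming a standby named boston (a placeholder):

DGMGRL> VALIDATE DATABASE 'boston';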

Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Fine-Grained Table Recovery from Backup
- Simple RECOVER TABLE command to recover one or more tables (the most recent or an older version) from an RMAN backup
- Eliminates the time and complexity associated with a manual restore, recover, and export
- Enables fine-grained point-in-time recovery of individual tables instead of the contents of an entire tablespace

RMAN Backups

RMAN automatically creates an auxiliary instance on the target database host, where the relevant backups are restored and recovered. The recovered table(s) in the auxiliary instance are either imported directly into the target database or exported to a Data Pump dump file. This is useful in scenarios where Flashback cannot be used:
- Flashback Drop: the table has been purged out of the recycle bin
- Flashback Table: the point in time needed is older than UNDO_RETENTION
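A minimal sketch (the schema, table, time, and paths are placeholders):

RMAN> RECOVER TABLE hr.employees
        UNTIL TIME 'SYSDATE-1'
        AUXILIARY DESTINATION '/tmp/recover_aux'
        REMAP TABLE hr.employees:employees_recovered;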


Cross-Platform Backup & Restore
- Simplifies the procedure for platform migration
- Minimizes read-only impact with multiple incremental backups

Simplified platform migration: source database (AIX) -> backup to disk/tape (data files, optional endian conversion, metadata export) -> restore backup (optional endian conversion, metadata import) -> destination database (Solaris).

To create the backup set containing data that must be transported to the destination database, use the BACKUP command on the source database. To indicate that you are creating a cross-platform backup, the BACKUP command must contain either the FOR TRANSPORT or the TO PLATFORM clause.


You can transport an entire database from a source platform to a different destination platform. While creating the cross-platform backup to transport a database, you can convert the database either on the source database or the destination database.

Back up the source database using the FOR TRANSPORT or TO PLATFORM clause in the BACKUP command. Using either of these clauses creates a cross-platform backup that uses backup sets.

Example 285 creates a cross-platform backup of the entire database. This backup can be restored on any supported platform. Because the FOR TRANSPORT clause is used, the conversion is performed on the destination database. The source platform is Sun Solaris and the cross-platform database backup is stored in db_trans.bck in the /tmp/xplat_backups directory on the source host.

Example 285 Creating a Cross-Platform Database Backup for Restore on Any Supported Platform

BACKUP
  FOR TRANSPORT
  FORMAT '/tmp/xplat_backups/db_trans.bck'
  DATABASE;

Example 286 creates a cross-platform backup of the entire database that can be restored on the Linux x86 64-bit platform. Because the TO PLATFORM clause is used, conversion is performed on the source database. The backup is stored in the backup set db_trans_lin.bck in the /tmp/xplat_backups directory on the source host.

Example 286 Creating a Cross-Platform Database Backup for Restore on a Specific Platform

BACKUP
  TO PLATFORM = 'Linux x86 64-bit'
  FORMAT '/tmp/xplat_backups/db_trans_lin.bck'
  DATABASE;

Restore the backup sets that were transferred from the source by using the RESTORE command with the FOREIGN DATABASE clause.

Example 287 restores the cross-platform database backup created in Example 285. The FROM PLATFORM clause specifies the name of the platform on which the backup was created; this clause is required to convert backups on the destination. The backup set containing the cross-platform database backup is stored in the /tmp/xplat_restores directory on the destination host. The TO NEW option specifies that the restored foreign data files must use new OMF names in the destination database. Ensure that the DB_CREATE_FILE_DEST parameter is set.

Example 287 Restoring a Cross-Platform Database Backup on the Destination Database

RESTORE
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FOREIGN DATABASE TO NEW
  FROM BACKUPSET '/tmp/xplat_restores/db_trans.bck';

Example 288 restores the cross-platform database backup that was created in Example 286. The destination database is on the Linux x86 64-bit platform. The backup set containing the cross-platform backup that needs to be restored is stored in /tmp/xplat_restores/db_trans_lin.bck. The restored foreign data files are stored in the /oradata/datafiles directory using names that begin with df_.

Example 288 Restoring a Cross-Platform Database Backup that was Created for a Specific Platform

RESTORE
  ALL FOREIGN DATAFILES
  FORMAT '/oradata/datafiles/df_%U'
  FROM BACKUPSET '/tmp/xplat_restores/db_trans_lin.bck';


In 11.2.0.3, this capability was available only for Exadata targets; see MOS note 1389592.1, "Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups".

Minimize read-only impact with multiple incremental backups:
- Successive incrementals are converted and applied to the restored data files
- The final incremental is taken while the tablespace is in read-only mode, with a separate Data Pump metadata export and import

Create a cross-platform level 0 inconsistent backup of the tablespace my_tbs while the tablespace is in read/write mode. This backup is stored in a backup set named my_tbs_incon.bck in the directory /tmp/xplat_backups.

BACKUP
  FOR TRANSPORT
  ALLOW INCONSISTENT
  INCREMENTAL LEVEL 0
  TABLESPACE my_tbs
  FORMAT '/tmp/xplat_backups/my_tbs_incon.bck';

Create a cross-platform level 1 incremental backup of the tablespace my_tbs that contains the changes made after the backup in Step 2 was created. The tablespace is still in read/write mode. This incremental backup is stored in my_tbs_incon1.bck in the directory /tmp/xplat_backups.

BACKUP
  FOR TRANSPORT
  ALLOW INCONSISTENT
  INCREMENTAL LEVEL 1
  TABLESPACE my_tbs
  FORMAT '/tmp/xplat_backups/my_tbs_incon1.bck';

ALTER TABLESPACE my_tbs READ ONLY;

Create the final cross-platform level 1 incremental backup of the tablespace my_tbs. This backup contains changes made to the database after the backup that was created in Step 3. It must include the export dump file that contains the tablespace metadata.

BACKUP
  FOR TRANSPORT
  INCREMENTAL LEVEL 1
  TABLESPACE my_tbs
  FORMAT '/tmp/xplat_backups/my_tbs_incr.bck'
  DATAPUMP FORMAT '/tmp/xplat_backups/my_tbs_incr_dp.bck'
  DESTINATION '/tmp';

Move the backup sets and the export dump file generated in Steps 2, 3, and 5 from the source host to the desired directories on the destination host.

Restore the cross-platform level 0 inconsistent backup created in Step 2. Use the FOREIGN DATAFILE clause to specify the data files that must be restored. The FROM PLATFORM clause specifies the name of the platform on which the backup was created; this clause is required to convert the backups on the destination database.

RESTORE
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FOREIGN DATAFILE
    6  FORMAT '/tmp/aux/mytbs_6.df',
    7  FORMAT '/tmp/aux/mytbs_7.df',
    20 FORMAT '/tmp/aux/mytbs_20.df',
    10 FORMAT '/tmp/aux/mytbs_10.df'
  FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incon.bck';

Recover the foreign data files obtained in Step 8 by applying the first cross-platform level 1 incremental backup that was created in Step 3.

RECOVER
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FOREIGN DATAFILECOPY
    '/tmp/aux/mytbs_6.df',
    '/tmp/aux/mytbs_7.df',
    '/tmp/aux/mytbs_20.df',
    '/tmp/aux/mytbs_10.df'
  FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incon1.bck';

Recover the foreign data files obtained in Step 8 by applying the final cross-platform level 1 incremental backup that was created in Step 5. This backup was created with the tablespaces in read-only mode.

RECOVER
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FOREIGN DATAFILECOPY
    '/tmp/aux/mytbs_6.df',
    '/tmp/aux/mytbs_7.df',
    '/tmp/aux/mytbs_20.df',
    '/tmp/aux/mytbs_10.df'
  FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incr.bck';

Restore the backup set containing the export dump file. This dump file contains the tablespace metadata required to plug the tablespaces into the destination database.

RESTORE
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  DUMP FILE 'my_tbs_restore_md.dmp'
  DATAPUMP DESTINATION '/tmp/dump'
  FROM BACKUPSET '/tmp/xplat_restores/my_tbs_incr_dp.bck';

Back up and recover specific pluggable databases with the new PLUGGABLE DATABASE keywords:
RMAN> BACKUP PLUGGABLE DATABASE <pdb1>, <pdb2>;
The familiar BACKUP DATABASE command backs up the CDB, including all PDBs.
PDB complete recovery:
RMAN> RESTORE PLUGGABLE DATABASE <pdb1>;
RMAN> RECOVER PLUGGABLE DATABASE <pdb1>;
PDB point-in-time recovery:
RMAN> RUN {
        SET UNTIL TIME 'SYSDATE-3';
        RESTORE PLUGGABLE DATABASE <pdb1>;
        RECOVER PLUGGABLE DATABASE <pdb1>;
        ALTER PLUGGABLE DATABASE <pdb1> OPEN RESETLOGS;
      }
The familiar RECOVER DATABASE command recovers the CDB, including all PDBs.

Oracle Multitenant Backup & Restore: Fine-Grained Backup and Recovery to Support Consolidation
RMAN DUPLICATE leverages the restore process (from a backup or from the source database) to create a new clone or standby database. You can clone the entire CDB, or the ROOT plus selected PDBs.
Commands:
RMAN> DUPLICATE TARGET DATABASE TO <cdb_name>;
RMAN> DUPLICATE TARGET DATABASE TO <cdb_name> PLUGGABLE DATABASE <pdb1>, <pdb2>;
For in-place cloning or creating a new PDB within a CDB, use SQL:
SQL> CREATE PLUGGABLE DATABASE <new_pdb> FROM <source_pdb>;   (clone an existing PDB)
SQL> CREATE PLUGGABLE DATABASE <new_pdb> ...;

Better Performance: Other New Features in Oracle Database 12c
- Enhanced multi-section backup capability: now supports image copies and incremental backups
- More efficient synchronization of a standby database using a simple RMAN command: RECOVER DATABASE FROM SERVICE
- Enhanced active duplicate: the cloning workload is moved to the destination server via auxiliary channels, relieving resource bottlenecks on the source; cloning can now use RMAN compression and the multi-section capability to further increase performance

Previously, multi-section backups were possible only for full backup sets. Now:

BACKUP
  INCREMENTAL LEVEL 1
  SECTION SIZE 100M
  DATAFILE '/oradata/datafiles/users_df.dbf';

BACKUP AS COPY
  SECTION SIZE 500M
  DATABASE;

Rolling Forward a Physical Standby Database and Synchronizing It with the Primary Database

In this example, the DB_UNIQUE_NAME of the primary database is MAIN and that of the physical standby database is STANDBY. You want to refresh the physical standby database with the latest changes made to the primary database. You can use the RECOVER command with the FROM SERVICE clause to fetch an incremental backup from the primary database and then apply this backup to the physical standby database. The service name of the primary database is main_tns and the compression algorithm used is BASIC.

When the RECOVER command is executed, the incremental backup is created on the primary database and then transferred, over the network, to the physical standby database. RMAN uses the SCN from the standby data file header and creates the incremental backup starting from this SCN on the primary database. If block change tracking is enabled for the primary database, it will be used while creating the incremental backup.

To refresh a physical standby database with changes made to the primary database, use the following steps:

1. Connect to the physical standby database as a user with the SYSBACKUP privilege.
% rman
RMAN> CONNECT TARGET "sbu@standby AS SYSBACKUP";

Enter the password for the sbu user when prompted.

2. Specify that the compression algorithm used is BASIC.
RMAN> SET COMPRESSION ALGORITHM 'basic';

3. Ensure that the tnsnames.ora file in the source database contains an entry corresponding to the physical standby database. Also ensure that the password files on the source and physical standby database are the same.

4. Recover the data files on the physical standby database by using an incremental backup of the primary database. The following command creates a compressed, multisection incremental backup on the primary database to recover the standby database.

RECOVER DATABASE
  FROM SERVICE main_tns
  SECTION SIZE 120M
  USING COMPRESSED BACKUPSET;


RMAN can transfer the files required for active database duplication as image copies or backup sets.

When active database duplication is performed using image copies, after RMAN establishes a connection with the source database, the source database transfers the required database files to the auxiliary database. Using image copies may require additional resources on the source database. This method is referred to as the push-based method of active database duplication.

When RMAN performs active database duplication using backup sets, a connection is established with both the source database and the auxiliary database. The auxiliary database then connects to the source database through Oracle Net Services and retrieves the required database files from the source database. This method of active database duplication is also referred to as the pull-based method.

Using backup sets for active database duplication provides certain advantages. RMAN can employ unused block compression while creating backups, thus reducing the size of backups that are transported over the network. Backup sets can be created in parallel on the source database by using multisection backups. You can also encrypt backup sets created on the source database.

Factors That Determine Whether Backup Sets or Image Copies Are Used for Active Database Duplication

RMAN only uses image copies to perform active database duplication when no auxiliary channels are allocated or when the number of auxiliary channels allocated is less than the number of target channels.

RMAN uses backup sets to perform active database duplication when the connection to the target database is established using a net service name and any one of the following conditions is satisfied:

- The DUPLICATE ... FROM ACTIVE DATABASE command contains either the USING BACKUPSET, USING COMPRESSED BACKUPSET, or SECTION SIZE clause.

- The number of auxiliary channels allocated is equal to or greater than the number of target channels allocated.

Note:

Oracle recommends that you use backup sets to perform active database duplication.

Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Automatic Storage Management (ASM) Overview
(Diagram: ASM cluster pool of storage with shared disk groups, Disk Group A and Disk Group B.)

Wide File Striping

One to One Mapping of ASM Instances to Servers

(Legend: ASM instance, database instance, ASM disk, RAC cluster.)

Current state (diagram: Node1 through Node5, each running an ASM instance alongside database instances for DB A, DB B, and DB C).

From the user's view, ASM exposes a small number of disk groups. These disk groups consist of ASM disks, and files are striped across all the disks in a disk group. The disk groups are global in nature, and database instances running individually or in clusters have shared access to the disk groups and the files within them. This is illustrated in the picture: the green database has files in Disk Group A that are striped across all of its disks, and Disk Group A is shared by both the green database and the purple database. Notice the ASM instance on every server in the cluster; the ASM instances communicate among themselves and form an ASM cluster. These simple ideas deliver a powerful solution that eliminates many headaches DBAs and storage administrators once had with managing storage in an Oracle environment.

Flex ASM: Eliminate the 1:1 Server Mapping. New: ASM storage consolidation in Oracle Database 12c.
(Diagram: ASM cluster pool of storage with shared disk groups, Disk Group A and Disk Group B.)

Wide File Striping

Databases share ASM instances

(Legend: ASM instance, database instance, ASM disk, RAC cluster.)

(Diagram: Node1 through Node5; ASM instances run on only a subset of the nodes, and database instances on the remaining nodes run as ASM clients of ASM instances on other nodes.)

Oracle Flex ASM enables an Oracle ASM instance to run on a separate physical server from the database servers. With this deployment, larger clusters of Oracle ASM instances can support more ASM clients (database instances) while reducing the Oracle ASM footprint for the overall system.

With Oracle Flex ASM, as with standard ASM, you can consolidate all the storage requirements into a single set of disk groups. However, these disk groups are managed by a small set of Oracle Flex ASM instances running in the cluster. If a host running an ASM instance fails, ASM clients using that ASM instance fail over to a surviving ASM instance on a different host. You can specify the number of Oracle ASM instances with a cardinality setting; the default is three instances.

The configurations of Oracle ASM in Oracle database 12c are:

- Standard ASM: With this mode (Standard Oracle ASM cluster), Oracle ASM instances continue to support existing standard architecture in which database clients are running with Oracle ASM instances on the same host computer.

- Oracle Flex ASM: With this mode, database clients running on nodes in a cluster can access Oracle Flex ASM instances remotely for metadata, but perform block I/O operations directly to the Oracle ASM disks. All the nodes within the cluster must have direct access to the ASM disks.

You can choose the Oracle ASM deployment model during the installation of Oracle Grid Infrastructure and you can use Oracle ASM Configuration Assistant (ASMCA) to enable Oracle Flex ASM after the installation / upgrade was performed. This functionality is only available in an Oracle Grid Infrastructure configuration, not an Oracle Restart configuration.

Oracle Flex ASM is managed by ASMCA, CRSCTL, SQL*Plus, and SRVCTL. To determine whether an Oracle Flex ASM has been enabled, use the ASMCMD showclustermode command.

$ asmcmd showclustermode
ASM cluster : Flex mode enabled

You can also use SRVCTL to determine whether Oracle Flex ASM is enabled. If it is enabled, then srvctl config asm displays the number of Oracle ASM instances that have been specified for use with the Oracle Flex ASM configuration. For example:

$ srvctl config asm
ASM instance count: 3

Clients are automatically relocated to another instance if an Oracle ASM instance fails. If necessary, clients can be manually relocated using the ALTER SYSTEM RELOCATE CLIENT command. For example:

SQL> ALTER SYSTEM RELOCATE CLIENT 'client-id';

When you issue this statement, the connection to the client is terminated and the client fails over to the least loaded instance.

Every database user must have a wallet with credentials to connect to Oracle ASM. CRSCTL commands can be used by the database user to manage this wallet. All Oracle ASM user names and passwords are system generated.

Flex ASM: Supporting Oracle Database 11g. Previous database versions will host a local ASM instance.
(Diagram: ASM cluster pool of storage with shared disk groups, Disk Group A and Disk Group B.)

Wide File Striping

Databases share ASM instances

(Legend: ASM instance, database instance, ASM disk, RAC cluster.)

(Diagram: Node1 through Node5; every node hosts an ASM instance, and the 11.2 databases use the local ASM instance on their node.)

When consolidating pre-12c databases and Oracle 12c databases on the same system using a cluster with Oracle Flex ASM enabled, the administrator must ensure that a local ASM instance is running on each node in the cluster. This is achieved by issuing a post-installation SRVCTL command at the Oracle Clusterware level, increasing the number of Oracle Flex ASM instances to the number of servers in the cluster (srvctl modify asm -count ALL).

This setup preserves the Oracle Database 12c failure protection on local ASM instance failure and enables database consolidation across versions, maintaining the pre-12c behavior for pre-12c databases.


Scrubbing Disk Groups
Oracle ASM disk scrubbing improves availability and reliability by searching for data that may be less likely to be read. Disk scrubbing checks for logical data corruptions and repairs them automatically in normal- and high-redundancy disk groups. The scrubbing process repairs logical corruptions using the mirror disks. Disk scrubbing can be combined with disk group rebalancing to reduce I/O resources. The disk scrubbing process has minimal impact on the regular I/O in production systems.

You can perform scrubbing on a disk group, a specified disk, or a specified file of a disk group with the ALTER DISKGROUP SQL statement. For example, the following SQL statements show various options used when running the ALTER DISKGROUP disk_group SCRUB SQL statement.

SQL> ALTER DISKGROUP data SCRUB POWER LOW;

SQL> ALTER DISKGROUP data SCRUB FILE 'EXAMPLE.265.767873199' REPAIR POWER HIGH FORCE;

When using ALTER DISKGROUP with the SCRUB option, the following items apply:

- The optional REPAIR option automatically repairs disk corruptions. If the REPAIR option is not specified, then the SCRUB option only checks and reports logical corruptions of the specified target.
- The optional POWER value can be set to AUTO, LOW, HIGH, or MAX. If the POWER option is not specified, the power value defaults to AUTO and the power adjusts to the optimum level for the system.
- If the optional WAIT option is specified, the command returns after the scrubbing operation has completed. If the WAIT option is not specified, the scrubbing operation is added to the scrubbing queue and the command returns immediately.
- If the optional FORCE option is specified, the command is processed even if the system I/O load is high or scrubbing has been disabled internally at the system level.
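For instance, the options above can be combined; the disk group and disk names below are hypothetical, and the statements are only a sketch of the WAIT and disk-level forms:

SQL> ALTER DISKGROUP data SCRUB REPAIR POWER MAX WAIT;

SQL> ALTER DISKGROUP data SCRUB DISK DATA_0005 REPAIR POWER AUTO;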

36 Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Other HA Enhancements

Online Redefinition Enhancements
- Improved sync_interim_table performance
- Ability to redefine tables with VPD policies
- Improved resilience of finish_redef_table
- Better handling of multi-partition redefinition

Online Datafile Move
- Relocate a datafile while users are actively accessing its data: ALTER DATABASE MOVE DATAFILE (see the example below)
- Maintains data availability during storage migration
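As a sketch of the online datafile move, where the source path and target disk group name are hypothetical:

SQL> ALTER DATABASE MOVE DATAFILE '/u01/app/oracle/oradata/orcl/users01.dbf' TO '+DATA';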

Separation of Duties
- SYSDG / SYSBACKUP: Data Guard and RMAN specific administrative privileges (a grant example follows below)
- No access to user data: enforce security standards throughout the enterprise
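A minimal sketch of how these administrative privileges might be granted and used, assuming the user accounts already exist (the user names are hypothetical):

SQL> GRANT SYSDG TO dg_admin;
SQL> GRANT SYSBACKUP TO bkup_admin;
SQL> CONNECT dg_admin AS SYSDG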

Additional Online Operations
- Drop index online / Alter index unusable online / Alter index visible / invisible online
- Drop constraint online / Set unused column online
- Online move partition: ALTER TABLE MOVE PARTITION ONLINE
(see the statement sketches below)
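For illustration, the table, index, column, and partition names below are hypothetical; the statements sketch the online variants listed above:

SQL> DROP INDEX sales_ix ONLINE;
SQL> ALTER INDEX sales_ix2 UNUSABLE ONLINE;
SQL> ALTER INDEX sales_ix3 INVISIBLE;
SQL> ALTER TABLE sales DROP CONSTRAINT sales_fk ONLINE;
SQL> ALTER TABLE sales SET UNUSED (legacy_col) ONLINE;
SQL> ALTER TABLE sales MOVE PARTITION sales_q1_2013 ONLINE;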

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Online Redefinition
===========
- Improved sync_interim_table performance with optimized Materialized View Log processing
- Ability to redefine tables with VPD policies with a new parameter copy_vpd_opt in start_redef_table
- Improved resilience of finish_redef_table with better lock management
- Better handling of multi-partition redefinition:
  - Multiple partitions can be specified together in a single redefinition session
  - Better availability for partition redefinition with only partition-level locks
  - Improved performance by logging changes for only the specified partitions

Additional Details
==========
11.2 log handling enhancements:
- Commit-SCN based MV Log
- Deferred MV Log purge
- Remove MV Log setup and purge out of the refresh process

For a single MV refresh that depends solely on the MV Log, log handling could account for up to 2/3 of the total refresh execution time. Removing the log handling overhead could make the refresh 3X faster.

===================================

Before:
- No easy way to redefine multiple partitions; must launch a separate START_REDEFINITION and FINISH_REDEFINITION for each partition:
  - Takes a long time to get all partitions redefined
  - Difficult to redefine all partitions needed in a single maintenance window

Goals/Benefits:
- Pay the non-recurring overheads of START_REDEFINITION and FINISH_REDEFINITION once (creation of MV Log, metadata modification, etc.)
- Easy to move a large number of partitions to new tablespace(s) in an online manner
- Enables the atomic redefinition of more than one partition
- Support partial completion with new continue_after_errors parameter

Step 1: Start redefinition with multiple partitions:
DBMS_REDEFINITION.START_REDEF_TABLE('GROCERY', 'SALES',
  int_table => 'tbl1,tbl2,tbl3',
  part_name => 'sales_p1,sales_p2,sales_p3',
  continue_after_errors => TRUE);

Step 2: Synchronize interim tables for multiple partitions:
DBMS_REDEFINITION.SYNC_INTERIM_TABLE('GROCERY', 'SALES',
  int_table => 'tbl1,tbl2,tbl3',
  part_name => 'sales_p1,sales_p2,sales_p3',
  continue_after_errors => TRUE);

Step 3: Finish redefinition with multiple partitions:
DBMS_REDEFINITION.FINISH_REDEF_TABLE('GROCERY', 'SALES',
  int_table => 'tbl1,tbl2,tbl3',
  part_name => 'sales_p1,sales_p2,sales_p3',
  continue_after_errors => TRUE);

If any failures occur and continue_after_errors=TRUE, the error is recorded and the next partition is processed.
If any failures occur and continue_after_errors=FALSE, the operation is rolled back by exchanging back all successfully exchanged partition(s).

part_name is already used today, but it can't take a list.

===================================

Support for VPD policies with a new parameter copy_vpd_opt in start_redef_table (a call sketch follows the three options below):

Option 1: Not copied (default)
- No VPD policies
- Error when there are existing VPD policies on the original table

Option 2: Copy VPD policies automatically
- Column names and types unchanged

Option 3: Copy VPD policies manually
- Applicable when:
  - Column names or types are changed
  - Users want to modify the VPD policies
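A minimal sketch of the automatic-copy option, assuming the DBMS_REDEFINITION constants are named CONS_VPD_AUTO / CONS_VPD_NONE / CONS_VPD_MANUAL and using hypothetical schema and table names:

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'GROCERY',
    orig_table   => 'SALES',
    int_table    => 'SALES_INTERIM',
    copy_vpd_opt => DBMS_REDEFINITION.CONS_VPD_AUTO);  -- or CONS_VPD_NONE / CONS_VPD_MANUAL
END;
/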

===================================

Existing issues (in finish_redef_table):
- The execution can be unpredictably long and sometimes never finish, forcing an interrupt and abort
- Hard to get the DML lock before moving into the final operation
- The final refresh after acquiring locks could take very long, creating a wider blackout window blocking DMLs

Enhancements (see the sketch below):
- Provide a timeout to gracefully exit from finish_redef_table
- Utilize the dml-lock-wait timeout window to refresh the interim table
- Get the DML lock in wait mode for a better chance at acquiring the lock
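A sketch of what the timeout might look like in a call, assuming the parameter is exposed as dml_lock_timeout on FINISH_REDEF_TABLE; the names and the 120-second value are illustrative:

BEGIN
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(
    uname            => 'GROCERY',
    orig_table       => 'SALES',
    int_table        => 'SALES_INTERIM',
    dml_lock_timeout => 120);  -- wait up to 120 seconds for the DML lock before exiting gracefully
END;
/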

===================================

Before (redefining partition P locked the whole table T and logged changes for all partitions):
- DMLs cannot occur on other partitions
- Unnecessary change logging

12c (only the partition is locked and logged):
- DMLs allowed on other partitions
- Refresh with only the needed changes

======================================================

The privileges explicitly granted to SYSDG are (see admin/catadmprvs.sql):

< SYSTEM PRIVILEGES >
alter database
alter session
alter system
select any dictionary

< OBJECT PRIVILEGES >
execute on sys.dbms_drs
select on sys.dba_capture
select on sys.dba_logstdby_events
select on sys.dba_logstdby_log
select on sys.dba_logstdby_history
select on appqossys.wlm_classifier_plan
delete on appqossys.wlm_classifier_plan

Also, SYSDG is implicitly allowed to perform the following operations:

- STARTUP
- SHUTDOWN
- CREATE RESTORE POINT
- DROP RESTORE POINT
- FLASHBACK DATABASE
- SELECT fixed tables/views (e.g., X$ tables, GV$ and V$ views)

================================================

Additional Online Operations---------------------

- Drop index online (create/rebuild index online in 10g and 11g)
- Alter index unusable online
- Alter index visible / invisible
- Drop constraint online (create constraint online in 11g)
- Set unused column online (add column online in 11g)
- Add column with default is fast (metadata-only operation) and online (only not-null in 11g)
- Online move partition
- Edition-based redefinition simplification

=======================================

ONLINE MOVE Partition: you can move a partition while DMLs are ongoing on the partition you are moving.

Some DDL operations took subtle exclusive (X) locks here and there, leading to a pile-up of DMLs in systems like SAP. We fixed one or two of these cases for SAP in the 11.2 time frame and added the following DDLs in 12.1: CREATE/DROP INDEX, ADD/DROP CONSTRAINT, ADD/SET UNUSED COLUMN (all Beta 1).

Make it easier to use Edition-Based Redefinition:
- Can editions-enable a database with tables that depend on UDTs (such as AQ payloads), without schema reorganization
- Supports MVs, indexes, and virtual columns (based on PL/SQL or views) on editioned objects
- Greatly reduces the need to separate application objects into different schemas

-------------------------

Moving a Table to a New Segment or Tablespace

The ALTER TABLE...MOVE statement enables you to relocate data of a nonpartitioned table or of a partition of a partitioned table into a new segment, and optionally into a different tablespace for which you have quota. This statement also lets you modify any of the storage attributes of the table or partition, including those which cannot be modified using ALTER TABLE. You can also use the ALTER TABLE...MOVE statement with a COMPRESS clause to store the new segment using table compression.

Tables are usually moved either to enable compression or to perform data maintenance. For example, you can move a table from one tablespace to another. Most ALTER TABLE...MOVE statements do not permit DML against the table while the statement is executing. The exceptions are the following statements:
- ALTER TABLE ... MOVE PARTITION ... ONLINE
- ALTER TABLE ... MOVE SUBPARTITION ... ONLINE
These two statements support the ONLINE keyword, which enables DML operations to run uninterrupted on the partition or subpartition that is being moved. For operations that do not move a partition or subpartition, you can use online redefinition to leave the table available for DML while moving it.
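For example, a non-partitioned table could be relocated and compressed in one statement; the table and tablespace names below are hypothetical, and (unlike the partition and subpartition forms) this whole-table move does not permit concurrent DML in 12.1:

ALTER TABLE orders MOVE TABLESPACE archive_ts ROW STORE COMPRESS ADVANCED;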

---------------------------------------------------------

Moving a Table Partition or Subpartition Online

Use the ALTER TABLE...MOVE PARTITION statement or ALTER TABLE...MOVE SUBPARTITION statement to move a table partition or subpartition, respectively. When you use the ONLINE keyword with either of these statements, DML operations can continue to run uninterrupted on the partition or subpartition that is being moved. If you do not include the ONLINE keyword, then DML operations are not permitted on the data in the partition or subpartition until the move operation is complete.

When you include the UPDATE INDEXES clause, these statements maintain both local and global indexes during the move. Therefore, using the ONLINE keyword together with UPDATE INDEXES eliminates the time it would otherwise take to regain partition performance after the move, because global indexes are maintained and there is no need to rebuild indexes manually.

To move a table partition or subpartition online:

In SQL*Plus, connect as a user with the necessary privileges to alter the table and move the partition or subpartition.

Run the ALTER TABLE ... MOVE PARTITION or ALTER TABLE ... MOVE SUBPARTITION statement.

Example 20-9 Moving a Table Partition to a New Segment

The following statement moves the sales_q4_2003 partition of the sh.sales table to a new segment with advanced row compression and index maintenance included:

ALTER TABLE sales MOVE PARTITION sales_q4_2003 ROW STORE COMPRESS ADVANCED UPDATE INDEXES ONLINE;

Oracle Database 12c High Availability Key New Features: Application Continuity, Global Data Services, Data Guard Enhancements, RMAN Enhancements, Flex ASM, Other HA Enhancements, GoldenGate Update

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Oracle GoldenGate 12c*: Low-Impact, Real-Time Data Integration & Transactional Replication

[Slide diagram: Oracle GoldenGate 12c performs log-based change data capture from Oracle and non-Oracle databases and message buses, delivering to targets such as a new DB/HW/OS/APP, a fully active distributed DB, reporting databases and data warehouses (via a data integrator), an ODS, message buses, global data centers, and an exact copy of the primary for disaster recovery of non-Oracle databases. Use cases: zero downtime upgrade & migration, query & report offloading, data synchronization within the enterprise, real-time BI, operational reporting, MDM, event-driven architecture, SOA, and active-active high availability.]

*: GoldenGate 12c for Oracle Database 12c will be available in FY14

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#[Note for speakers information only: GoldenGate 12c for Oracle Database is planned to GA in summer 2013]

Oracle GoldenGate is a software product that offers real-time data integration and transactional data replication across heterogeneous systems. Its key capability is to move change data by reading transaction/redo logs and distributing those transactions to different targets in real time. There are many use cases for Oracle GoldenGate beyond point-to-point data replication.

As you can see in the diagram, Oracle GoldenGate can capture and deliver changes from both Oracle and non-Oracle databases, legacy systems, and JMS message buses. Additionally, it can do this in numerous deployment models to facilitate data synchronization across the enterprise. The Oracle GoldenGate 12c version (coming in calendar year 2013) will support capture and delivery for Oracle Database 12c.

By synchronizing old and new systems in real time, GoldenGate provides the means to implement zero-downtime migrations of systems (hardware/OS/database and applications). Also, because the product can replicate bidirectionally in real time and can be deployed in an active-active model, it allows seamless transitions during unplanned outages and also improves system performance by distributing load.

Another solution is query offloading. In this use case GoldenGate is used to create a complete replica database, or to move only a subset of the production data in real time to a separate system, so that read-only queries can be run on the replica and do not create a burden on the production system. Other use cases include real-time operational reporting, feeding data warehouses with real-time data, and distributing data across systems. GoldenGate can also publish and capture changed data to and from JMS-based messaging systems.

GoldenGate Zero Downtime Migration/Upgrade: Seamless Migration and Upgrades to Oracle Database 12c*
- Consolidate/migrate/maintain systems without downtime
- Minimize risk with a failback option
- Validate data before switchover
- Use active-active replication for phased user migration

[Slide diagram: real-time replication for migrations from existing systems (non-Oracle ERP, Oracle 10.2 CRM, Oracle 11.2 DW) to Oracle Database 12c, with switchover, an optional failback data flow, and compare & verify using Oracle GoldenGate Veridata. *: GoldenGate 12c for Oracle Database 12c will be available in FY14.]

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Oracle GoldenGate enables zero-downtime upgrade, migration, or consolidation by synchronizing Oracle Database 12c with the existing Oracle or non-Oracle databases in real time. During the synchronization the production systems can continue to support transaction processing. As soon as the new system on Oracle Database 12c is in sync with the legacy systems, users can switch over immediately, experiencing minimal to zero downtime.

While the target Oracle DB 12c is instantiated (via DB tools or ODI) with bulk data transfer, GoldenGate captures the change data committed in the production systems and stores them in its queue. Once the target system is ready it delivers the change data to the consolidated system to make sure they are in synch. After that point the target can be tested with production load. At this point Oracle GoldenGate Veridata can verify that there is no data discrepancy. When the new system is ready users can be switched over without any database downtime.

With bidirectional replication capabilities, GoldenGate can capture new transactions happening in the new consolidated environment and deliver them to the legacy systems to keep them up to date as a failback option. The other option is to run both the legacy and the new environment concurrently, with GoldenGate doing the bidirectional synchronization in real time. This allows a phased migration of users and a completely seamless transition into the new system with minimized risk. In addition to removing downtime and minimizing risk, GoldenGate allows the IT team to test the new environment without time pressure.

41

Oracle GoldenGate for Active-Active Databases: Increase ROI on Existing Servers & Synchronize Data
- Utilize secondary systems for transactions
- Enable continuous availability during unplanned & planned outages
- Synchronize data across global data centers
- Use intelligent conflict detection & resolution

*: GoldenGate 12c for Oracle Database 12c will be available in FY14

[Slide diagram: heterogeneous bi-directional real-time replication among Oracle Database 12c, Oracle 10.2 (App2), Oracle 11.2 (App3), and a non-Oracle database (App).]

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Active-active database replication is a key use case for GoldenGate when it comes to achieving high availability. GoldenGate's bidirectional real-time data replication works across heterogeneous systems.

Multi-master database replication with GoldenGate helps eliminate downtime, planned or unplanned, because the remaining databases keep working if one database fails. It also increases system performance by allowing transaction load to be distributed between completely parallel systems.

Data can be filtered to move only certain tables or rows. There are no distance limitations. GoldenGate offers out-of-the-box conflict management to handle the possible data collisions that come with multi-master replication.

42 In-Memory & Analytic Features

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Oracle Database In-Memory Option
- Leading edge in-memory technology, seamlessly integrated into Oracle Database
- Delivers extreme performance for:
  - Analytics and ad-hoc reporting on live data
  - Enterprise OLTP and Data Warehousing
  - Scale-up and scale-out
- Trivial to deploy for all applications and customers

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

44 Adaptive Query Optimization
- Adaptive Plans: adjust query plans at runtime based upon current data
- Adaptive Statistics: adapt optimizer statistics at runtime and learn for future queries

[Slide diagram: Adaptive Query Optimization comprises Adaptive Plans (join methods, parallel distribution methods) and Adaptive Statistics, with adaptations occurring at compile time and at run time.]

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

45 Adaptive Execution Plans: Good SQL execution without intervention

[Slide diagram: a plan with a nested loops join and index scan of T2 switches to a hash join with table scans of T1 and T2 once a row-count threshold is exceeded.]
- Plan decision deferred until runtime
- Final decision is based on statistics collected during execution
- If statistics prove to be out of range, sub-plans can be swapped
- Bad effects of skew eliminated

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

- Plan decision deferred until runtime
- Final decision is based on statistics collected during execution
- Alternate sub-plans are pre-computed and stored in the cursor
- Statistics collectors are inserted at key points in the plan
- Each sub-plan has a valid range for the statistics collected
- If the statistics prove to be out of range, sub-plans can be swapped
- Requires buffering near the swap point to avoid returning rows to the user
- Only join methods and the distribution method can change
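One way to see whether a statement used an adaptive plan is the ADAPTIVE format modifier of DBMS_XPLAN, which also reports the inactive sub-plan rows; this is only a sketch, run for the last statement executed in the session:

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'TYPICAL +ADAPTIVE'));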

46 Oracle In-Database Analytics
- Statistical Functions
- Data Mining & Predictive Analytics
- Text Search
- Text Mining
- Graph Analysis
- Spatial Analysis
- Semantic Analysis
- In-Database MapReduce

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

47 Why In-Database Analytics?
- Performance and Scalability: leverages the power and scalability of Oracle Database
- Fastest Way to Develop Applications: specialized APIs and flexible SQL access
- Lowest Total Cost of Ownership: no need for separate analytical servers
- Differentiating Features

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

These are some of the key differentiating features for Oracle NoSQL Database.

Flexibility: A key differentiator is that the combination of the simple key-value data model and configurable ACID transactions maximizes the flexibility and configurability of NoSQL DB. It allows more applications to use a common, enterprise-grade distributed storage technology for multiple NoSQL applications. Applications don't have to conform to an out-of-the-box transaction model, which is often limited or non-existent; instead they can specify the transaction semantics on a per-operation basis. Applications don't have to conform to a pre-defined document, columnar, or graph data model; they can use the NoSQL DB key-value pair model in whatever way is best suited to the application. Key-value pairs are the simplest and most flexible data model. Keys are simple structures or strings that encapsulate a record hierarchy. Values can be simple byte arrays, complex application structures, or JSON objects. Key-value pair records provide very simple and very fast (1-2 milliseconds per operation) data access. Key-value pairs can be used to model document storage (like Berkeley DB XML), columnar storage (like time-series vectors), and graphs (like RDF data). Oracle's key-value pairs utilize a flexible key definition that allows the application developer to leverage it for both data distribution and data clustering. Transactions and transactional consistency are a key element of every application. NoSQL DB supports ACID transactions in the storage layer (within a data partition) and allows the application to configure the transactional behavior.

Easy to use: Smart topology is about a) distributed topology awareness, b) automated configuration and load balancing, and c) automated failure detection and failover handling. This is a key differentiator because it makes NoSQL DB much easier to configure and manage. In a nutshell, Smart topology helps customers because it automatically allocates resources, guarantees an even distribution of master nodes, guarantees an HA distribution of replicas, minimizes the impact of storage node failures, avoids outages due to admin mistakes, and simplifies administration through automation. The NoSQL DB driver (linked into each client application) and the storage nodes both maintain a map of the current topology and its state. This allows NoSQL DB to optimize query operations and minimize the impact of storage node failures. NoSQL DB does NOT require complex topology planning and management. You simply tell NoSQL DB how many storage nodes are available, plus a couple of simple configuration parameters (replication factor and storage node capacity), and the system will optimally configure itself, ensuring proper load balancing and resilience to failure. If topology changes cause the system to become unbalanced (certain storage nodes become overloaded, for example), the system can automatically rebalance itself while remaining online the entire time. The storage nodes automatically detect and respond to storage node failure. If the master fails, a new master is elected. If a replica fails, its status is updated as offline and queries are served by the remaining replicas.

NoSQL Database comes integrated with:
- Oracle Database via External Tables
- Hadoop MapReduce via the KVInputFormat
- Oracle Event Processing: NoSQL DB can serve as a data source for data lookups
- Oracle Coherence: NoSQL DB can serve as the backing store for Coherence, faulting in objects that are not in the Coherence cache grid and writing out objects that have been modified
- RDF/Jena: NoSQL DB can store RDF graph data and perform SPARQL queries

This is a key differentiator because other NoSQL providers tend to have a single product or product silos. It is especially important when you consider that NoSQL applications function within an IT infrastructure ecosystem. Interaction and interoperability with the RDBMS, DW, application in-memory caches, business rules engines, etc., is a crucial characteristic in leveraging the value of the NoSQL data within an overall enterprise data management solution.

48 Key Features: Oracle Advanced Analytics
Fastest Way to Deliver Scalable Enterprise-wide Predictive Analytics

- In-database data mining algorithms and open source R algorithms
- SQL, PL/SQL, R languages
- Scalable, parallel in-database execution
- Workflow GUI and IDEs
- Integrated component of the Database
- Enables enterprise analytical applications

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

49 In-Database Analytics: The On-Going Evolution of SQL

[Slide timeline of SQL analytic features across releases:]
- Introduction of Window functions
- Enhanced Window functions (percentile, etc.)
- Rollup, grouping sets, cube
- Statistical functions
- SQL model clause
- Partition Outer Join
- SQL Pivot
- Recursive WITH
- ListAgg, Nth value window
- Pattern matching
- Top N clause
- Identity Columns
- Column Defaults

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Emphasize the long history of analytical functionality in SQL and the database.50

Oracle Database 12c Extreme Availability: Summary
- Oracle Database 12c offers a tremendously sophisticated set of high availability (HA) capabilities
- These capabilities:
  - Further reduce downtime
  - Significantly improve productivity
  - Eliminate traditional compromises

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#

Oracle Database 12c: Engineered for Clouds and Big Data

Copyright 2013, Oracle and/or its affiliates. All rights reserved.52

Big Data

Database as a Service / Cloud

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#52Private Database Cloud ArchitecturesOracle Database 11g

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Private Database Cloud ArchitecturesOracle Database 12c

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Oracle Database ArchitectureRequires memory, processes and database files

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#New Multitenant ArchitectureMemory and processes required at multitenant container level only

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Multitenant ArchitectureComponents of a Multitenant Container Database (CDB)

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Multitenant ArchitectureMultitenant architecture can currently support up to 252 PDBsA PDB feels and operates identically to a non-CDBYou cannot tell, from the viewpoint of a connected client, if youre using a PDB or a non-CDB

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Unplug / PlugSimply unplug from the old CDB and plug into the new CDB Moving between CDBs is a simple case of moving a PDBs metadataAn unplugged PDB carries with it lineage, opatch, encryption key info etc.

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Unplug / PlugExample

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Common Data DictionaryBefore 12.1: Oracle and user meta data intermingle over time

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Oracle Data and User Data

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Horizontally Partitioned Data Dictionary

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Multitenant Architecture - DynamicsPDBs share common SGA and background processesForeground sessions see only the PDB they connect to

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Manage Many as One with MultitenantBackup databases as one; recover at pluggable database level

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Manage Many as One with MultitenantOne standby database covers all pluggable databases

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Multitenant for Simplified UpgradesApply changes once, all pluggable databases updated

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Multitenant for PatchingFlexible choice when patching & upgrading databases

Copyright 2013, Oracle and/or its affiliates. All rights reserved.Insert Information Protection Policy Classification from Slide 12 of the corporate presentation template#Improved Agility with Changing WorkloadsExpand Cluster to support flexible cons