
Oracle Exadata Life Cycle & Services

Best Practises in the Exadata Life Cycle

A Whitepaper by avato consulting ag


© 2014 avato consulting ag

Authors: Ronny Egner, Lutz Fröhlich, Torsten Loof, Stefan Panek, Frank Schneede (Oracle Germany)

Siemensstr. 24-26, 63755 Alzenau/Germany | Phone: +49 6023 967490 | www.avato-consulting.com


Contents

1 Exadata Projects on Every Corner - Time for a First Conclusion
2 System Design and Project Planning
  2.1 Analysis and Documentation of the Existing System Landscape
  2.2 Workshop with Oracle
  2.3 Proof of Concept (PoC)
  2.4 Model Selection
  2.5 Preparation of the Data Centre
  2.6 Detailed Design
  2.7 Detailed Configuration by Means of the Exadata Deployment Assistant
  2.8 High Availability
  2.9 Backup/Restore
3 Installation and Migration
  3.1 Planning of the Installation Process
  3.2 Installation of the Hardware
  3.3 Installation of the Systems
  3.4 Database and Application Migration
    3.4.1 Migration via Export/Import
    3.4.2 Migration via Transportable Tablespace
    3.4.3 Migration with GoldenGate
  3.5 Fallback Scenario
  3.6 Backup/Recovery
4 Exadata Operations
  4.1 Integration into the Support Organisation
    4.1.1 Horizontal Versus Vertical Support Organisation
    4.1.2 Oracle DBA or Oracle DMA?
  4.2 Oracle Service & Support Offers
    4.2.1 Oracle Premier Support & Platinum Support
    4.2.2 Oracle ASR
    4.2.3 Further Oracle ACS Services
  4.3 Support by Oracle Partners
  4.4 Exadata Patch Management: Mystery or Predictable Service?
    4.4.1 What is Patched When and How?
    4.4.2 Bundle Patch, PSU, CPU, QDPE and QFSDP
    4.4.3 Time Involved for the Patch Transactions
    4.4.4 Patch Cycles in Practice
    4.4.5 Recommendations for Successful Patch Installation
5 Management & Support Tools
  5.1 Provisioning/Rapid Provisioning
    5.1.1 Provisioning
    5.1.2 Rapid Provisioning
  5.2 Configuration
  5.3 Monitoring
  5.4 Patching
6 Optimisation in the Life Cycle
  6.1 Cell Offloading/Smart Scanning
    6.1.1 Storage Index
    6.1.2 Column Projection
    6.1.3 Predicate Filtering
  6.2 HCC (Hybrid Columnar Compression) Architecture
  6.3 Encryption/Decryption
  6.4 Virtual Columns
  6.5 Parallelisation
  6.6 I/O Resource Manager
  6.7 Flash Cache
7 Conclusion
8 Appendix - Exadata Deployment Assistant Step by Step
9 References


1 Exadata Projects on Every Corner - Time for a First Conclusion

Whether as a basis for large, transaction-rich OLTP systems, as a data warehouse or as a consolidation platform: Exadata is increasingly used as a central building block of the IT infrastructure, and many Exadata projects have already been completed. The platform was developed with a focus on scalability, high-performance databases and mass data processing, so that the system can be used for data warehousing as well as for OLTP applications. In addition, Oracle positions Exadata as an excellent way to speed up database consolidation on a platform that is easy to implement.

From an Oracle point of view, these systems are Engineered Systems: with the Exadata Database Machine, an entire engineered IT infrastructure is supplied. The Oracle Database Machine impresses not only with excellent performance; among other things, it is designed to clearly optimise the deployment and operation of Oracle databases. It is, so to speak, an "all-round talent" and a solution for very different challenges. It is time to draw a first conclusion from a project and operational perspective. What are the requirements to ensure that applications use the platform in the best possible way? How is Exadata optimally adapted to existing operating environments?

In addition, the following questions are answered: Which activities result from the platform, and what do best-practice approaches look like? Where does Oracle take on tasks and responsibility in the life cycle? At which points, on the other hand, are customers and Oracle partners called upon? In which areas does an optimisation of services require a rethinking of the IT organisation in order to bring about the greatest possible benefit? Of course there are standard approaches - but at which points can and should one deviate from them?

In this document, the entire life cycle of an Oracle Exadata Database Machine is examined along its essential phases. This whitepaper accordingly presents experience gathered during the introduction, implementation, operation and optimisation of Oracle Exadata systems. In addition, it points out different application scenarios and looks at subjects that constitute special challenges on account of the technologies used.


2 System Design and Project Planning

Exadata systems are delivered as pre-assembled engineered systems and are afterwards installed and configured via script by Oracle ACS ("Advanced Customer Support"). The basic configuration comes from Oracle Engineering. All individual parameters are captured via parameter sheets or the Exadata Deployment Assistant. When filling out the parameter sheets, the customer is supported by Oracle ACS. In the next step, Oracle ACS automatically executes all required scripts and the post-check.

Oracle advertises that this method ensures an optimal configuration for applications, with the ideal mix of high performance, high availability and exceptional scalability. Furthermore, Oracle actively supports the design with on-site consultation. This support can extend up to a Proof of Concept.

Because an entirely new IT infrastructure is introduced here, numerous tasks still arise on the customer side so that an optimal result can be achieved.

2.1 Analysis and Documentation of the Existing System Landscape

At the beginning of every project which deals with the introduction of a new IT infrastructure, there is an extensive requirements analysis. One possible approach is to record the actual architecture and to prepare this for a common workshop with Oracle. The following key figures are to be considered here:

• Processor requirements
• Oracle database server memory and SGA planning
• Network planning
• Disk input/output operations
• Storage capacity planning including growth
• Consideration of ASM redundancy
• Disaster recovery planning
• Fast Recovery Area size and planning
• Number and size of the databases
• Transaction characteristics
• User numbers
• Current bottlenecks (AWR reports can serve as a basis)

In most cases, monitoring or capacity planning tools are available that simplify the provision of these key figures.
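Where no dedicated tooling is in place, several of these key figures can also be pulled directly from the AWR repository. The following sketch is our own illustration, not part of the whitepaper: it assumes an 11.2 database with the Diagnostics Pack licensed, and the snapshot IDs (100/200) are placeholders that must be adapted to the analysis window.

```bash
#!/bin/bash
# Sketch: derive average physical I/O rates per instance from AWR snapshots.
# Requires the Diagnostics Pack licence; snapshot IDs are placeholders.
# Note: the statistics are cumulative since instance startup, so pick a
# snapshot range without an instance restart in between.
sqlplus -s / as sysdba <<'EOF'
SET PAGESIZE 100 LINESIZE 200
SELECT sn.instance_number,
       sn.stat_name,
       ROUND((MAX(sn.value) - MIN(sn.value)) /
             ((MAX(CAST(s.end_interval_time AS DATE)) -
               MIN(CAST(s.begin_interval_time AS DATE))) * 86400)) AS per_second
FROM   dba_hist_sysstat sn
JOIN   dba_hist_snapshot s
       ON  s.snap_id         = sn.snap_id
       AND s.dbid            = sn.dbid
       AND s.instance_number = sn.instance_number
WHERE  sn.stat_name IN ('physical read total bytes',
                        'physical write total bytes',
                        'physical read total IO requests',
                        'physical write total IO requests')
AND    s.snap_id BETWEEN 100 AND 200   -- placeholder snapshot range
GROUP  BY sn.instance_number, sn.stat_name
ORDER  BY sn.instance_number, sn.stat_name;
EOF
```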


2.2 Workshop with Oracle

After conclusion of the as-is review and a sufficient capacity planning that takes the growth of the next years into account, a workshop is coordinated with Oracle. In the workshop, the key figures are validated, checked and completed if necessary. Oracle can answer many questions from experience with comparable Exadata customer projects. Finally, a decision template is worked out. On the one hand, this serves as a basis for economic calculations (TCO/ROI considerations) and on the other hand as a basis for a possible Proof of Concept.

2.3 Proof of Concept (PoC)

A Proof of Concept (PoC) is hard to avoid in the decision for a new IT infrastructure, because it substantially minimises the risk. Is the expected performance actually achieved? What is the procedure for a PoC in an Exadata project, and what has to be especially considered here?

The objectives of the Proof of Concept are defined in advance. Runtimes of load processes or batch jobs and the response-time behaviour of an OLTP application can play a part here. The tests are carried out either in a test centre or on a provided machine. Central points for the PoC, besides the performance tests mentioned above, are above all compression features as well as a possible migration of the application to Oracle 11.2. The installations are always based on the latest release; currently, 12.1 can also be used as a database version besides 11.2. For the Storage Server, there is a choice between versions 11.2.3.3 and 12.1.1.1.

Oracle provides the necessary Exadata infrastructure and support, for example in setting up the database and the application environment. Detailed test series are carried out only if all prerequisites and requirements are fulfilled on the part of the customer. The results provide the basis for the target architecture.


2.4 Model Selection

The next step is the target configuration. As already mentioned, resilient capacity planning and the consideration of possible reserves are necessary. If, for example, the Exadata storage is sized too scarcely, bottlenecks might result after a short time. Or, if the Fast Recovery Area was sized too small, the backup concept must be reworked. Experience shows that a reserve of at least 25% should be included in the plan.

Target architecture (figures of the current X4 model series):

| Computer system | Eighth Rack | Quarter Rack | Half Rack | Full Rack |
|---|---|---|---|---|
| Number of Compute Nodes | 2 | 2 | 4 | 8 |
| Number of Processor Cores | 24 | 48 | 96 | 192 |
| Total Memory in GB | 1024 | 1024 | 2048 | 4096 |
| Number of Storage Servers | 3 | 3 | 7 | 14 |
| Number of Disks | 18 | 36 | 84 | 168 |
| Capacity with High Performance Disks in TB | 21 | 43 | 100 | 200 |
| Capacity with High Capacity Disks in TB | 72 | 144 | 336 | 672 |
| InfiniBand Switches | 2 | 2 | 3 | 3 |

2.5 Preparation of the Data Centre

Besides the planning of the Exadata model (or models), thorough planning must be carried out in the data centre. An Oracle Support employee provides on-site support for this. Access to the data centre, electricity supply, air conditioning and other parameters are checked.


2.6 Detailed Design

To conclude the configuration, the following areas should be looked at in more detail:

Network:
• Domain name, IP addresses of the name server and network time server;
• Management network addresses;
• Client network addresses and client gateway network addresses;
• Additional network addresses for bonding;
• All preparations in coordination with the network team within the company (for example, to avoid double allocation of addresses);

Storage/ASM:
• There are two kinds of hard disks:
  * High Performance with 1.2 TB gross capacity;
  * High Capacity with 4 TB gross capacity;
  (Note: Different disk types cannot be installed in one Exadata system; a "mixed operation" with 1.2 TB and 4 TB disks within a system is not possible. Storage expansions with other disks can be used.)
  * Flash cache storage: Besides the hard disks, more than 44 TB of flash are additionally available for performance optimisation. Hence, bottlenecks in the I/O area are almost excluded.
• Recommendation: High Capacity disks with 4 TB capacity. The total capacity thereby increases enormously, even though there are I/O performance losses. A comparison of the gross capacity of 1.2 TB versus 4 TB disks is shown in the table below (it is important to note that, after the ASM configuration, only just under 50% of the gross storage space is available).
• Oracle ASM: Exadata uses Oracle ASM for storage administration. Hence, one should consider whether to work with "normal" or "high" redundancy in the disk groups. This also has an effect on the total capacity of the storage.

| Computer system | Quarter Rack | Half Rack | Full Rack |
|---|---|---|---|
| High Performance Disks in TB | 43 | 100 | 200 |
| High Capacity Disks in TB | 144 | 336 | 672 |
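To make the redundancy trade-off concrete, the following minimal sketch (our own illustration, not from the whitepaper) estimates usable capacity from gross capacity: ASM normal redundancy mirrors each extent twice (roughly gross/2), high redundancy three times (roughly gross/3), before any reserve for the Fast Recovery Area or disk-failure coverage is deducted.

```bash
#!/bin/bash
# Rough usable-capacity estimate for an ASM disk group (illustrative only).
# gross_tb: raw disk capacity; redundancy: normal (2-way) or high (3-way) mirroring.
gross_tb=${1:-144}          # e.g. Quarter Rack with High Capacity disks
redundancy=${2:-normal}

case "$redundancy" in
  normal) mirrors=2 ;;
  high)   mirrors=3 ;;
  *) echo "usage: $0 <gross_tb> [normal|high]" >&2; exit 1 ;;
esac

# Integer estimate; real deployments also reserve space for rebalancing
# after a disk or cell failure.
usable=$(( gross_tb / mirrors ))
echo "Gross ${gross_tb} TB with ${redundancy} redundancy -> approx. ${usable} TB usable"
```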


2.7 Detailed Configuration by Means of the Exadata Deployment Assistant

The detailed configuration is carried out in cooperation with the manufacturer. Oracle uses the "Exadata Deployment Assistant" for it. This tool offers a menu-driven, step-by-step configuration of the overall system. As a rule, a summary with the possibility of correction is displayed after every configuration step.

Essentially, the configuration process expects the same information as is necessary for standard Oracle databases during installation and configuration. Nevertheless, at some places, Exadata-specific information is required. Though many parameters can still be adapted after the installation, this can mostly only be done with considerable effort.

Details on the assistant and a step-by-step example can be found in chapter "8 Appendix - Exadata Deployment Assistant Step by Step".

• Disk Groups:
  * The names of the disk groups can be freely chosen - the standard at Oracle ACS is "+DATA" for user data and "+RECO" for archives and backup data.
  * Distribution of the total capacity among the disk groups: The distribution must be calculated - the total capacity is mostly implemented in the ratio of 40% DATA disk group and 60% RECO disk group. It has to be considered, for example, whether a backup is written to disk and which growth rates are to be expected.

Database server / operating system:
• Two possible operating system variants:
  * Oracle Enterprise Linux x86_64;
  * Oracle Solaris 11 x86_64;
  (Note: On an Exadata System X3-8, Solaris x86 cannot be installed.)
• The vast majority of installations were carried out on the basis of Oracle Enterprise Linux. Meanwhile, however, there are also isolated Exadata systems on the basis of Solaris x86. Among other things, the decision depends on which operating system know-how is present within the company.
• Recommendation: Oracle Enterprise Linux;

In addition, it should be mentioned that an Exadata purchase includes an Oracle ACS "Consulting Package" of 20 man-days. It is recommended to actively include this resource in the planning.


2.8 High Availability

Besides careful planning of the system configuration, the subject of disaster recovery plays an important part within the scope of an Exadata planning for many customers. However, solution approaches such as the configuration of stretched clusters are not feasible with an Exadata architecture: Exadata integrates all hardware and software components in one rack as a vertically integrated system (ViS). Hence, the only disaster recovery solution is to establish a second Exadata system in a remote data centre and to use Data Guard. With this solution, one achieves the maximum in security in the form of a Maximum Availability Architecture (MAA). Further information on this subject can be found here: Deploying Oracle Maximum Availability Architecture with Exadata Database Machine [1]. This second system is of double benefit, because it can additionally be used for development and test databases or as a reporting system - provided that Active Data Guard is used.

A cheaper solution is standard x86 hardware on the basis of Oracle Enterprise Linux with either SUN ZFS Storage or a Pillar Axiom storage system as the storage solution. This configuration can also be used with Data Guard as a disaster recovery solution. It is important to note that special features of Exadata, such as Smart Scan, Storage Index, Flash Cache and also IORM, are not present there. Therefore, this solution is ruled out for performance-critical implementations.

2.9 Backup/Restore

The implementation of a backup procedure is not part of the "Installation Package" of Oracle ACS. Hence, the customer must develop their own concept early on. Basically, the backup of Exadata does not differ from the backup of other Oracle databases; in the Exadata environment, too, RMAN (RMAN scripts) is mostly used. Nevertheless, one should pay attention to a few peculiarities:

• The compute nodes must be included in the backup strategy and procedures.
• When using an existing tape infrastructure, a media server must be used for Fibre Channel integration. Alternatively, one can resort to InfiniBand Fibre Channel gateways.
• If it is planned to back up the system via snapshots and to use HCC, a ZFS or Pillar Axiom storage system must be used.

For more details on this topic, see chapter "3.6 Backup/Recovery".


3 Installation and Migration

During the installation of Exadata systems, many questions arise. Aspects of the vertically integrated technology and the new features have to be considered:

• Smart Scan
• Storage Index
• Flash Cache
• Hybrid Columnar Compression
• I/O Resource Manager

The installation is divided into different phases that are described in this chapter.

3.1 Planning of the Installation Process

The installation process is divided into two areas:

1. Coordination/OEDA: Existing standards in the company are coordinated once more and the configuration is prepared with the help of the Oracle Exadata Deployment Assistant (OEDA). The output of the OEDA is an essential factor for the quality of the installation.

2. Installation: The installation is carried out by Oracle ACS. This is an automated, script-controlled process.

3.2 Installation of the Hardware

The systems are delivered by a shipping agency. In the first step, the systems are brought to their correct place in the data centre and erected. However, commissioning happens only after some days, since the system must "acclimatise". Afterwards, the system is commissioned by Oracle, and Oracle ACS can begin with the set-up.

3.3 Installation of the Systems

The installation of Exadata is reserved for Oracle ACS and proceeds as follows:

• Completion of the configuration assistant;
• Data transfer to the DBM configurator and generation of the parameter files;
• Upload of the files to the Exadata system;
• Start of the CheckIP script to check the network configuration, and first boot;
• Start of the OneCommand script;


Concerning the installation process, the following should also be noted:

• Database software installation
  * Oracle ACS installs the software that is current at the time on the Exadata system ("current" with respect to Grid Infrastructure and RDBMS software).
  * The software version can be found in MOS Note 888828.1. This MOS Note, besides the Oracle Information Center (Doc ID 1306791.2), is a good entry point for all information on the subject of Exadata.
  * In addition, the customer should transfer his existing OFA (Oracle Optimal Flexible Architecture) standards to the Exadata installation for the purpose of standardisation.

• Acceptance of the systems
  * Oracle ACS concludes the installation with an exachk performance test and hands the system over to the customer. It is recommended to repeat the exachk test after configuration changes (Oracle Exadata Database Machine exachk or HealthCheck, Doc ID 1070954.1).
  * A cluster acceptance test was not carried out by Oracle during the installations accompanied by avato. In the course of quality assurance, extensive cluster tests are absolutely recommendable; see the sketch after this list. Thus one ensures, for example, that no single point of failure is present and that the cluster is fully functional.

• Documentation
  * The system documentation handed over by Oracle should be supplemented by company-specific and customer-specific information.
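As a hedged illustration of such acceptance tests (our own sketch, not part of the Oracle ACS procedure; the Grid home path is an assumption), the following commands run a few standard Clusterware checks. Failover behaviour itself still has to be verified separately, for example by rebooting one node while the application keeps running.

```bash
#!/bin/bash
# Minimal sketch of post-installation cluster checks (assumes Grid Infrastructure 11.2).
export GRID_HOME=/u01/app/11.2.0/grid   # assumption: adapt to the actual Grid home

# Overall cluster and CRS health on all nodes
$GRID_HOME/bin/crsctl check cluster -all

# Status of all cluster resources (databases, listeners, VIPs, ASM, ...)
$GRID_HOME/bin/crsctl stat res -t

# Post-installation verification of the Clusterware stack on all nodes
$GRID_HOME/bin/cluvfy stage -post crsinst -n all -verbose
```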


3.4 Database and Application Migration

After completion of the installation, the Exadata system is configured and operational. In the next step, the databases are migrated onto the system.

The following overview introduces the essential methods for a migration and their boundary conditions:

| Requirements | Export/Import | Data Pump Export/Import | Transportable Tablespaces | Data Guard Physical Standby | Data Guard Logical Standby | Streams | GoldenGate |
|---|---|---|---|---|---|---|---|
| Oracle version | from 9i | from 9i | from 9i | from 9i | from 9i | from 9i | independent |
| Oracle edition | Standard Edition | Standard Edition | Standard Edition | Enterprise Edition | Enterprise Edition | Enterprise Edition | independent |
| Downtime | high | high | medium | low | low | low | low |
| System load (source) | high | high | low | low | low | medium | medium |
| Installation & configuration effort (database parameters, script parameters) | medium | medium | low | low | medium | high | high |
| Complexity | low | low | medium | medium | high | high | high |
| Invasive procedures (engaging in application required) | no | no | no | no | no | if applicable | if applicable |
| Manual development effort | low | low | high | low | medium | high | high |
| Fallback scenario effort * | low | low | medium | low | low | medium | medium |
| Depending on platform / processor architecture | no | no | no | yes ** | no | no | no |
| Data reorganisation | yes | yes | no | no | / | yes | yes |
| Adaptation of storage features / parameters | yes | yes | no | no | yes | yes | yes |
| Data type support | very good | very good | very good | very good | good | good | good |
| Additional licences | no | no | no | if applicable (EE) | if applicable (EE) | if applicable (EE) | yes |

* Fallback scenario that goes beyond the time of migration (e.g. 1 week)
** MOS Article ID: 413484.1


The subject of migration will not be discussed to its full extent in this whitepaper on account of its high individuality. In the table, for example, the migration from an external system was not considered. In practice, the following central aspects of migration have emerged:

• Handling of downtimes;
• Migration to an x86 architecture;
• Release level (Exadata is always based on the current release);

Our experience has shown that, in more than 90% of the migrations carried out, the database utility Data Pump is used, although a long downtime is required here. To minimise the downtime and to carry out a change to an x86 infrastructure at the same time, the export/import process has proven itself.

Further recommendations concerning migration:

• Drafting of detailed check lists following the step-by-step process;
• Involvement of the specialist department during the migration;
• Multiple testing of the migration process and a "dress rehearsal";

Below, some migration processes will be discussed in more detail.

3.4.1 Migration via Export/Import

In practice, a process has established itself with which the downtime can be minimised and the change to x86 carried out at the same time. Export/import (and, since Oracle 10g, Data Pump) is the "allrounder" for the migration of data volumes up to approx. 1 TB. Bigger data volumes should be migrated by means of Transportable Tablespace if possible.

The advantage of the export/import process is that it can be used for sources as far back as version 8. Moreover, it is independent of the previous operating system of the database. In addition, it offers the possibility to export and/or import only certain data and to carry out certain changes to the data during the import (for example, a compression of the data). The relatively low speed is disadvantageous; in particular, the necessary rebuilding of indexes and constraints can, without corresponding tuning, take longer than the data import itself.


In practice, the following possibilities have proven themselves to accelerate the export process:

• Use of the PARALLEL parameter:
A roughly linear increase of the export speed is achieved in practice by subdividing the tables to be exported among several worker processes. Nevertheless, this speed increase is only possible up to the physical maximum read speed of the source storage and/or the maximum write speed of the target storage.

• Separation of the export files from the data files:
It is recommended to place the export files to be written on a different storage than the data files to be read, so that neither storage needs to change repeatedly between read and write access. In this manner, a sequential read and write process is achieved.

• Depositing the export files on the Exadata system:
For a speedy import, it is advisable to deposit the dump file directly on the Exadata system, so that the import is not restricted by a network link limited to 1 Gbit/s. Alternatively, the dump file can be deposited on a storage that is integrated via a high-speed network (for example, InfiniBand or 10 Gigabit Ethernet).

• Compressed export:
If the time necessary for the export is too high in spite of parallelisation and separation of the files, the export should be executed compressed. This requires the Advanced Compression licence or a separate arrangement with Oracle in order to be able to use this feature for the migration. If neither is available, the source server can be equipped with a large number of network connections that are as fast as possible, and the dump files generated by the export can be distributed across these links.

According to experience, most performance problems are to be expected with the export of tables with big LOBs. This is above all because LOBs are exported sequentially - even if a corresponding PARALLEL degree was stated during export. The export of the tables with the LOBs then takes longer than the export of all other data and metadata, which causes delays. Hence, one should export all data and metadata without the LOB tables and, in parallel, start an individual export per LOB table. The export of the non-LOB tables can already be imported into the new database after completion, and the constraints/indexes can be created.
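A minimal sketch of this pattern follows; it is our own illustration, with a hypothetical directory object (mig_dir), schema (APP) and LOB tables (DOCS, IMAGES), and COMPRESSION=ALL assumes the Advanced Compression licence mentioned above.

```bash
#!/bin/bash
# Sketch: parallel Data Pump export with LOB-heavy tables split into own jobs.
# All names are placeholders; expdp prompts for the password interactively.

# A parameter file avoids shell escaping of the EXCLUDE clause
cat > bulk.par <<'EOF'
DIRECTORY=mig_dir
DUMPFILE=bulk_%U.dmp
LOGFILE=bulk.log
SCHEMAS=app
PARALLEL=8
COMPRESSION=ALL
EXCLUDE=TABLE:"IN ('DOCS','IMAGES')"
EOF

expdp system PARFILE=bulk.par &          # bulk export without the LOB tables

for t in DOCS IMAGES; do                 # one dedicated job per LOB table
  expdp system DIRECTORY=mig_dir DUMPFILE=${t}_%U.dmp LOGFILE=${t}.log \
        TABLES=app.${t} COMPRESSION=ALL &
done
wait                                     # wait for all export jobs to finish
```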

The separation of the data into "important" and "insignificant" also helps to save time. In this case, important data is data without which the application cannot function. Insignificant data is, for example, archive data whose migration is less time-critical.


The scaling of the import process across several nodes should be tested beforehand; an increase in throughput is not always achievable. Under the requirements already mentioned, the tables can be imported relatively fast - however, the generation of the indexes and constraints takes substantially longer.

Practice recommends generating the constraints and indexes manually and not with Data Pump. This step allows for a maximum degree of parallelisation.

Caution is required if the tables are created by the DBA before the import and contain many partitions. This combination leads to long import times on account of additional checks which Data Pump must carry out during the import. For every partition and/or subpartition to be imported, approx. ten seconds are needed for checking. This is only justifiable for a small number of partitions.
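One way to implement this split, sketched below under the same hypothetical names as in the export example, is to import the data without indexes and constraints and to extract their DDL into a SQL file, which can then be edited (for example, raising the PARALLEL degree) and run manually.

```bash
#!/bin/bash
# Sketch: import data first, then rebuild indexes/constraints manually.
# Placeholder names as in the export sketch above.

# 1. Import the data without indexes and constraints
impdp system DIRECTORY=mig_dir DUMPFILE=bulk_%U.dmp LOGFILE=imp.log \
      SCHEMAS=app PARALLEL=8 EXCLUDE=INDEX EXCLUDE=CONSTRAINT

# 2. Extract the index/constraint DDL into a script instead of executing it
impdp system DIRECTORY=mig_dir DUMPFILE=bulk_%U.dmp SCHEMAS=app \
      INCLUDE=INDEX INCLUDE=CONSTRAINT SQLFILE=idx_ddl.sql

# 3. Edit idx_ddl.sql (e.g. raise the PARALLEL clauses), then run it manually:
#    sqlplus system @idx_ddl.sql
```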

3.4.2 Migration via Transportable Tablespace

If the data volume is too big or the time is not sufficient, another process offers itself: Transportable Tablespace. Further information can be found here: Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backups (MOS Note 1389592.1).

With this process, the data files themselves are copied. In contrast to export/import, this process has the disadvantage that all data in one or several tablespace(s) are migrated, without the possibility to exclude data from the migration. The advantage is the higher speed since, for example, the time for rebuilding indexes and constraints is no longer needed.

Transportable tablespaces are, however, subject to restrictions - for example, the following ones:

• The database version of the target database may not be older than that of the source database.
• The source database must be at least version 10g Release 2.
• If the platforms of source and target differ in endian format, the data files must be converted with RMAN before the import.
• Certain object types, such as indexes based on user-defined functions, are not supported and must be deleted before the transport and created anew afterwards.
• The data in the tablespace(s) must be "self-contained", i.e. all objects of the schema must be covered by the tablespaces to be migrated, with no references to objects outside the tablespace set.


In spite of the restrictions listed, this process is well suited to the quick migration of big databases. The migration proceeds as follows:

1. At the source:
   a. Set the respective tablespace(s) to READ ONLY;
   b. Export the metadata with Data Pump (parameter TRANSPORT_TABLESPACES), specifying the tablespace(s);
   c. Copy the data files to the target;
   d. After conclusion of the copy process, set the tablespaces back to READ WRITE;

2. At the target:
   a. If necessary: convert the data files with RMAN (and deposit them in ASM);
   b. Import the metadata by means of Data Pump;
   c. Creation of the previously deleted, incompatible objects;
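The following sketch outlines steps 1a-1b and 2a-2b; it is our own illustration, the tablespace, directory and file names are hypothetical, and the RMAN CONVERT step is only needed when the endian formats differ.

```bash
#!/bin/bash
# Sketch: transportable tablespace export/import (placeholder names throughout).

# --- On the source ---
sqlplus -s / as sysdba <<'EOF'
ALTER TABLESPACE app_data READ ONLY;
EOF

expdp system DIRECTORY=mig_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_exp.log \
      TRANSPORT_TABLESPACES=app_data

# ... copy the data files and tts_meta.dmp to the target, then: ...
sqlplus -s / as sysdba <<'EOF'
ALTER TABLESPACE app_data READ WRITE;
EOF

# --- On the target (convert only if the endian format differs) ---
rman target / <<'EOF'
# placeholder source platform name and staging path
CONVERT DATAFILE '/stage/app_data01.dbf'
  FROM PLATFORM 'Solaris[tm] OE (64-bit)'
  FORMAT '+DATA';
EOF

impdp system DIRECTORY=mig_dir DUMPFILE=tts_meta.dmp LOGFILE=tts_imp.log \
      TRANSPORT_DATAFILES='+DATA/mydb/datafile/app_data01.dbf'
```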

The longest time is needed for copying the data files. This time, nevertheless, can be reduced to a few minutes, regardless of their size. To this end, the data files are backed up during operation as an RMAN image copy. An image copy is a 1:1 copy of the data files which can be used directly by the database. The changes can be applied to the image copy incrementally by RMAN, and hence very fast. If one stores this copy via NFS on the target system, the duration of the copy process is reduced to a minimum. However, the process described must be supplemented by another incremental backup after the tablespaces have been set to READ ONLY.
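A hedged sketch of this incremental image-copy roll-forward follows, using a placeholder tag and an NFS-mounted staging path; MOS Note 1389592.1 (cited above) describes the officially supported cross-platform variant of this procedure.

```bash
#!/bin/bash
# Sketch: keep an RMAN image copy current by incremental roll-forward.
# /nfs/stage is a placeholder mount exported to the target system.
rman target / <<'EOF'
# send image copies and increments to the NFS staging area
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/nfs/stage/%U';
RUN {
  # merge the previous increment into the image copy (no-op on the first run)
  RECOVER COPY OF DATABASE WITH TAG 'mig_copy';
  # create the level 0 image copy on the first run, a level 1 increment afterwards
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'mig_copy' DATABASE;
}
EOF
```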

The time for converting the data files cannot be reduced with version 11g Release 2 of the database. However, since the data to be read is already present locally or on storage integrated with high bandwidth (InfiniBand or 10 Gigabit Ethernet - but not plain Gigabit Ethernet), the conversion on the Exadata platform is very quick thanks to parallelisation.

Using this process, the authors have migrated a 20 TB database within five hours. A comparable speed is scarcely achievable with export/import.


3.4.3 Migration with GoldenGate

GoldenGate can be used as the third method of migration. GoldenGate is a replication technology by Oracle which allows migrations almost without downtime, independent of the data volume. With this technology, the data is continuously replicated from the source to the target. If a table is not yet present, it is copied first and then replicated approximately synchronously after completion of the initial copy. GoldenGate is capable of replicating not only between different Oracle versions but also between different database systems (for instance MS SQL → Oracle, Oracle → MySQL or MS SQL → DB2). On account of this feature, a migration can be implemented after initial synchronisation with minimum downtime - regardless of the size of the database. A disadvantage is that training in GoldenGate takes a lot of time and a licence is required for using it.

More information about GoldenGate in the Exadata environment can be found here: Oracle GoldenGate on Oracle Exadata Database Machine [2].

3.5 Fallback Scenario

Even the best preparation is no guarantee that, in the end, there will be no errors. Hence, one should work out a fallback scenario in advance.

If the production database already works with Oracle Flashback technology, one could set a so-called restore point at the beginning of the migration and, in the error case, "recover" back to it. Another method could be to activate a standby database with the initial state.
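As a brief illustration (our own sketch, with a placeholder restore point name), a guaranteed restore point can be created and used as follows; note that FLASHBACK DATABASE requires the database to be mounted, and either flashback logging or a guaranteed restore point to be in place before the migration starts.

```bash
#!/bin/bash
# Sketch: guaranteed restore point as a migration fallback (placeholder name).
sqlplus -s / as sysdba <<'EOF'
-- before the migration starts
CREATE RESTORE POINT before_mig GUARANTEE FLASHBACK DATABASE;
EOF

# ... in the error case, flash the database back:
sqlplus -s / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_mig;
ALTER DATABASE OPEN RESETLOGS;
EOF
```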

3.6 Backup/Recovery

The backup of Exadata does not differ from the backup of an Oracle RAC database. The difference lies rather in the location of the backup. There are the following possibilities:

• Within the Exadata rack:
  * In the Fast Recovery Area;
  * In an ASM disk group (non-Fast Recovery Area);
• Outside the Exadata rack:
  * In an ASM disk group;
  * On an NFS mount point;
  * Directly from the database to tape;
  * From disk backup to tape;


Considering the abovementioned boundary conditions and depending on the specific RPO/RTO requirements of the customer, a backup concept can be created and implemented.

For an effective backup, there are two scenarios:

Scenario 1:
• DB with physical standby;
• Backups are taken from the primary in the form of an image copy - very quick recovery by switching to the data file copy in the error case;
• The standby does NOT run on the Exadata storage but on the ZBA (ZFS Backup Appliance) - this is possible also with HCC. Backups are snapshots of the standby DB. These can be used
  * to generate UAT and DEV DBs;
  * for backing up the database to tape;
• Disadvantage: a ZBA is necessary;
• Advantage: very small RTO, and the construction of test databases is very quick; the individual test DB hardly needs any space, since snapshots are being used;

Scenario 2:
• DB with physical standby;
• Pure backups of the standby DB to tape with RMAN;
• Disadvantage: provides a larger RTO (depending very strongly on the DB size and the speed of the network and the tape drives). Besides, the construction of test databases is substantially slower and, accordingly, needs a lot of space (2 DBs = double space, 3 DBs = triple space, etc.);

Further information can be found here: Backup and Recovery Performance and Best Practices for Exadata Cell and the Oracle Exadata Database Machine [3] and Backup and Recovery Performance and Best Practices using Oracle Sun ZFS Storage Appliance and Oracle Exadata Database Machine [4].


4 Exadata Operations

Exadata is a vertically integrated system - with integrated storage and a coupling of the components via InfiniBand. Thus, new challenges arise for operations in many areas. This applies particularly to hardware support, but also to the organisation of the operations teams. In addition, questions arise with regard to support effort and the service offers of the manufacturer. Oracle advertises the platform with the argument that the support efforts are clearly lower. This, according to the manufacturer, is not only due to the standardisation of databases but above all due to the fact that the efforts in the areas of storage, OS, hardware and network are reduced. Hereinafter, this statement by Oracle will be examined in more detail.

4.1 Integration into the Support Organisation

As a rule, the classical operations and support organisation knows four to five "areas" that provide specialised support across applications: network, storage, operating system, database (backup) and application. Empirical values have shown that one can orient oneself well by the values stated by Oracle: 60% of the activities relate to the database, 20% to the administration of the storage cells and approx. 20% to Linux administration as well as other components (like InfiniBand and network).

With the exception of the application support, Exadata integrates all levels with special technologies in one system and hence requires extensive coordination and an intensive cooperation of different support units. At first glance, it looks aggravating for some support areas that, besides the new technology, the use of Oracle Enterprise Manager Cloud Control is basically also necessary. In addition to the support provided by Oracle and/or Oracle partners, two approaches come into question for the support during the introduction of Exadata:

• Oracle DMA team (Database Machine Administrator):
Instead of the classical horizontal support organisation, one changes over to a vertical structure in subareas. In the process, task areas are transferred to the DBA team, and this team is extended into an Oracle DMA team. Besides the standard subjects, the Oracle DMA team in future also deals with the storage (ASM), the operating system as well as the InfiniBand network.

• Tool introduction and skill construction:
In the classical horizontal support structure, Enterprise Manager Cloud Control is introduced across teams. In addition, the corresponding know-how with regard to the system and to the tool is built up in all teams.


4.1.1 Horizontal Versus Vertical Support Organisation

As already described, the introduction of Exadata requires a clear operating concept. Which services are obtained from Oracle and/or Oracle partners? How is one's own organisation structured? Quite essential points, besides the specifics of a vertically integrated system, are the software stack used as well as the management and monitoring tools.

Specifics of integrated systems:

• Requirements for the operation arise that, on the basis of the current structure and the service companies used, may not be resolvable. These include, for example:
  * Setup and installation: extensively performed by Oracle and/or Oracle partners (for further information, see chapter "3 Installation and Migration");
  * Hardware support (storage, InfiniBand), ASR (Auto Service Request): extensively performed by Oracle (in isolated cases also by Oracle partners);
  * Requirements for tools and processes on account of the special hardware (for example disk replacement, InfiniBand);

Software stack: DBMS & OS

• With Exadata, Oracle technologies are used that were perhaps not used in this form up to now. Hence, a support organisation must bring along extensive know-how and sufficient practical experience in the fields of Oracle Clusterware, ASM, RAC and Data Guard. Since OEL is used for almost all installations, it is advantageous if the support is familiar with OEL. Besides, experience with InfiniBand is also helpful for error analysis (reading and interpreting protocol files).

Management Tool:

• Enterprise Manager Cloud Control is the central tool for management and monitoring of the platform. In the event that it is not yet in use, it should be present at the latest at the time of the Exadata introduction.

In which areas do essential changes arise for a support organisation, and which models can be derived from this? Basically, there is a second model as an alternative to the established horizontal model (with clear separation between application, database, system and storage): a vertical support structure, in which the areas database, system and storage are consolidated into a DMA team (Database Machine Administrator).


The following table describes the main differences and the pros and cons of both models:

| | Horizontal Support Structure | Vertical Support Structure |
|---|---|---|
| Support organisation | Database Administration (DBA), System Administration (SA), Storage Administration | New support group for the appliance: DMA team |
| Coordination effort in appliance support | High: many activities require the participation of several groups; different teams have interfaces to Oracle Support. | Low: one team can cover (almost) all activities, and there is just one interface to Oracle. |
| Management tools | Variety of tools, or central tool application (SA team uses Oracle EM) | Complete management of the systems through one console (incl. storage) |
| Team skill | SA team must build up know-how about: Grid, ASM; Oracle Enterprise Linux; basis database; InfiniBand; Oracle storage | The primary know-how is database technology. Except for OEL, the required know-how is usually present. |
| Patching | Several teams are involved jointly. | Instead of several teams (DBA/SA/Storage) there is only one team (introduction of a four-eye principle). |
| Incident diagnosis | Cross-team diagnosis | Error diagnosis team-internal (in > 90% of cases) |
| Database provisioning | Cross-team activity | Completely via EM |


4.1.2 Oracle DBA or Oracle DMA?

If one decides in favour of the vertical support structure, a new role is introduced: the DBA (Database Administrator) becomes the DMA (Database Machine Administrator). What kind of know-how and experience is mandatory for operation?

• Oracle Enterprise Linux (plus, if necessary, Oracle Solaris x86)
• Database experience with the current version 11.2 and/or 12c
• RAC, possibly Data Guard
• Enterprise Manager Cloud Control 12c
• Backup software

What kind of “Exadata specialist knowledge” must be present?

• Exadata architecture
• Special Exadata software features (further information in chapter "6 Optimisation in the Life Cycle")
• InfiniBand, storage cells, flash disks, KVM switches, etc.

In order to administer an Exadata system, the know-how should be made up approximately as follows: 60% DBA, 20% system administration and 20% Exadata. A DBA with RAC, Clusterware (and possibly Data Guard) know-how is an excellent basis.

4.2 Oracle Service & Support Offers

The Oracle offers differ depending on the phase in the life cycle and the supporting organisation.

| Phase | Oracle offering |
|---|---|
| Design | Oracle Pre-Sales consults during the planning phase and with the design (for further information, see chapter "2 System Design and Project Planning") |
| Installation | Oracle ACS installs the systems (for further information, see chapter "3 Installation and Migration") |
| Migration | Oracle Partner / Oracle Consulting |
| Operations/Support | Oracle has two standard support options: Premier Support and Platinum Services. These can be expanded by further Oracle ACS options: Oracle Exadata Start-up Pack, Oracle Solution Support Center, Oracle Advanced Monitoring and Resolution, additional services |


4.2.1 Oracle Premier Support & Platinum Support

As standard, Oracle offers Premier Support as well as Platinum Support for the support of operations teams. The main element is 24/7 hardware and software support. What is the difference between Premier and Platinum Support, for whom and for which application scenario is the complimentary Platinum Support worthwhile, and what requirements must be met for the Platinum Service?


Oracle advertises the free Platinum Service with the following services and benefits (source: Oracle):

• Fault Monitoring
  * 24/7 fault monitoring;
  * Event filtering and qualification;
  * Reporting on event management;
  * A single global knowledge base, tool set and client portal;
• Respond and Restore
  * 24/7 response times:
    - 5 min fault notification
    - 15 min restoration or escalation to development
    - 30 min joint debugging
  * Escalation process and hotline with dedicated escalation managers; expert support staff available 24/7;
• Update and Patch
  * Assess and analyse: produce a quarterly patch plan;
  * Plan and deploy: proactively plan and deploy recommended patches every quarter across all systems and software components;


When is the Platinum Service helpful, and for which scenarios is it less useful? To decide this, one should consider precisely what the service can do and what it presupposes. Here, the patch service is in the foreground. To be able to offer such a service, Oracle needs, on the one hand, direct access to the systems via the Oracle Advanced Support Gateway. On the other hand, Oracle must be sure that systems on the customer side are kept on a uniform patch level. Besides, no individual maintenance windows can be agreed within the scope of such a service. If one now looks at potential application scenarios for Exadata, the following differences concerning the Platinum Service arise:

• Consolidation platform: The nature of every consolidation, and hence also of every consolidation platform, consists in a high degree of standardisation - combined with a strictly regulated process for maintenance and patching. In such an environment, the Oracle Platinum Service can absolutely make sense.

• OLTP, data warehouse: Applications with high transaction volume as well as data warehouses are generally complex environments, with individual maintenance agreements and application landscapes. Elaborate test procedures and coordination for maintenance/changes are generally necessary here. These costs are mostly the downside of a highly standardised service.

Characteristics and prerequisites:

• Hardware Oracle Advanced Support Gateway: recommended 8 cores, 48 GB RAM, 6 × 300 GB disk, firewall port enabled
• Certified configuration: Exadata X2, X3 and X4 with current patch level (OS, DB, Exadata SW) (further information can be found here: Certified Platinum Configurations [5])
• Number of DB homes and databases for patching/resolution: a maximum of 4 databases and 2 DB homes for Eighth, Quarter and Half Rack, as well as a maximum of 8 databases and 2 DB homes for Full Rack

Further information on the Premier/Platinum Service can be found here: Oracle Platinum Services Support Policies [6] or Oracle Website - Overview Platinum Services [7].


4.2.2 Oracle ASR

Oracle ASR is an essential component of a comprehensive support concept by Oracle ACS. ASR is the component that collects the hardware-side events of proactive monitoring and passes them on to Oracle ACS. There, the analysis takes place, and the Oracle Field Support is steered by Oracle ACS. According to Oracle, the greatest possible availability as well as a minimised risk are thereby ensured.


The Oracle ASR Manager should be operated on a dedicated server under Linux or Solaris. Besides servers (CPU, memory, system boards, power supplies as well as fans) and storage (disk controllers, disks, flash cards and flash modules), network components (InfiniBand modules, switches) are also monitored.

Further details on installation and configuration can be found here: Oracle Auto Service Request Exadata Database Machine Quick Installation Guide [8].


4.2.3 Further Oracle ACS Services

Oracle Advanced Customer Support Services offers customers a graded set of services that are not limited to the services during installation. In addition, they can be combined with other Oracle services. The manufacturer's offer also extends to the phase after handover to operations and includes optimisations, monitoring and support.

According to Oracle, the chargeable proactive services are rendered by a dedicated support team. Besides, Oracle makes sure that incidents are immediately detected, are taken on by the team and are solved as quickly as possible with the support of Oracle's extensive knowledge base. As already mentioned, the Premier/Platinum Support can be upgraded with service sets (for example "Oracle Exadata Start-up Pack", "Oracle Solution Support Center" and "Oracle Advanced Monitoring and Resolution"). In addition, Oracle Consulting also offers project-related services - such as, for example, Oracle Migration Factory and Implementation Services.


4.2.3.1 Oracle Exadata Start-up Pack

The Oracle Exadata Start-up Pack is a bundle of services offered by Oracle Advanced Customer Support Services together with Oracle Consulting. The objective is to make the deployment process, in addition to the installation, effective and fast, and to ensure a high quality of the installed systems.

Essential services:
• Advisory Service: assessments and consultation on the basis of best-practice solutions throughout the whole deployment process;
• Installation and Configuration: standardised system installation and system configuration of all components;


• Production Readiness: technical reviews and instruction by Advanced Support Engineers, in order to optimally prepare systems for production;
• Quarterly Patch Service: proactive, quarterly patch service for one year (already included in the Oracle Platinum Service);

4.2.3.2 Oracle Solution Support Center

A Technical Account Manager and a team of Advanced Support Engineers work closely together with the customer, either onsite or remotely, and hence offer 24/7 support. This dedicated team knows the on-site IT environment and the business requirements, important projects, central technologies, as well as operating requirements and processes.

Essential services:
• Critical questions are passed on immediately to the team via a 24/7 hotline.
• The team cooperates closely with Oracle Support and product development and in this manner can substantially accelerate problem solving and clearly reduce recovery time.
• The customer receives regular feedback and input on patch level, configuration and performance questions.

4.2.3.3 Oracle Advanced Monitoring and Resolution

Oracle Advanced Monitoring and Resolution offers extensive, integrated monitoring across the entire environment and the IT stack - from application and database up to server, storage and network components. Oracle offers proactive monitoring here and supplies solution approaches for performance problems to the customer.

Essential services:
• Oracle promises that incidents are automatically detected on the basis of a knowledge base grown over decades.
• According to Oracle, Advanced Support Engineers use special diagnosis tools in order to find faults faster and to analyse root causes within a short time.
• Regular reviews of the environment are carried out, including the patch level and the configuration.

4.2.3.4 Additional Services

These are individual services for installation and configuration for Oracle Exadata and for the Oracle Exadata Storage Expansion Rack.


4.3 Support by Oracle Partners

Which services are rendered by Oracle partners, and how can they provide support in the life cycle?

| Phase | Support by Oracle partners |
|---|---|
| Design | Consulting: PoC during the planning phase; design of high availability; integration into the backup infrastructure |
| Installation | The installation is carried out by Oracle ACS. As already described, essential standards and naming conventions of the customer should be considered for the installation. Here, supporting Oracle ACS with consultants who precisely know the environment and the standards of the customer is an advantage. |
| Migration | Exadata is always installed with the newest release version. Exadata is an x86-based system. Exadata offers many features that are not available in this form on other platforms. Even though many applications already run well on account of the highly competitive hardware performance, experience shows that the performance yield can be clearly higher owing to some adaptations and optimisation. For further information, see chapter "6 Optimisation in the Life Cycle". |
| Operations/Support | Exadata is an integrated system. For many customers, the question arises whether Exadata should be integrated into the standard support or whether to resort to offers by specialised partners (for further information, see chapter "4 Exadata Operations"). Patch management: for further information, see the following chapter. |

4.4 Exadata Patch Management: Mystery or Predictable Service?

The following section describes Exadata patch management and provides an insight into the main functions and dependencies.

An always current overview of the available patches for Exadata, including any critical bugs, can be found in MOS (My Oracle Support - formerly "Metalink") in Note ID 888828.1; reading it every 14 days is recommended to the Exadata DBA.


4.4.1 What is Patched When and How?

For a better understanding of patch management, one should get an overview of which components of Exadata are patched how and in which intervals. This is listed in the following table:

| Component | Patched through | Patch cycle | Depends on |
|---|---|---|---|
| Exadata PDU | Separate patch | If required (hitherto 1 patch in 3 years) | - |
| InfiniBand switches | Separate patch | If required (hitherto 2 patches in 3 years) | - |
| Ethernet switches | Separate patch | If required (hitherto no patch) | - |
| Exachk | Separate patch | If required (1-2 times a year) | - |
| Exadata Cell Node | Cell patch | Approx. every 3 months there is a new version of the Cell software | - |
| Exadata Compute Node | Cell patch | Approx. every 3 months there is a new version of the Cell software | - |
| Oracle Database | Bundle Patch | Every 3 months, together with the Cell software | Cell patch and Grid Infrastructure |

As can be seen, the power supply units and the Ethernet and/or InfiniBand switches are patched relatively rarely and only when required. This is done, if necessary, within the scope of a new Cell version.

A new Exachk version appears at regular intervals; it should always be run, and its findings addressed, before the installation of patches on Exadata in order to remove known problems beforehand.

The patch of the Exadata Cell and Compute Nodes is done by means of the Cell patches. These appear within the scope of the QDPE (Quarterly Database Patch for Exadata) about every three months, and update the following on the Cell and Compute Nodes:

• The operating system, including the Packages;• The firmware of all the components (e.g. BIOS, Firmware of HBAs, Firmware of the hard disks, etc.);• Settings in / on the operating system;


The installation of the Cell patch should be done regularly and always at the same time on the Cell and Compute Nodes. At least once a year, a current version of the Cell software should be installed, so that updates to new versions remain possible without interim steps.

Mixed operation of different Cell versions during a Rolling Upgrade is possible but should nevertheless be kept as short as possible. The background to this is that new features introduced by a Cell version may only become usable once the patch has been applied completely.

The next component in the software stack is the Grid Infrastructure, which is the basis for all other services. The Grid Infrastructure (GI) is updated by means of so-called Bundle patches. The Bundle patches are cumulative (i.e. installing the current Bundle patch is sufficient to install the bug fixes of all previous patches as well). They appear approx. every three months, together with a new version of the software for the Cell and Compute Nodes. The installation of a Bundle patch assumes a minimum version on the Cell and Compute Nodes, so a regular update makes sense. From experience, the minimum required version dates back at least one year, so Cell versions up to one year old are sufficient as a prerequisite for the installation of the Bundle patch. Should the patches be installed in "Rolling" mode, however, the installed versions may not be as old as in the Non-Rolling case.

The database builds directly on the Grid Infrastructure. It is patched analogously to the Grid Infrastructure with the quarterly Bundle patches.

For patching the individual components, different tools are used. These are listed in the following table:

Component                     | Software                                                         | Tool
Cell Node                     | Cell Patch                                                       | patchmgr
Compute Node                  | Cell Patch                                                       | dbnodeupdate.sh (see MOS Note 1473002.1) and/or YUM in older versions
Grid Infrastructure, Database | Bundle Patch                                                     | OPatch
Grid Infrastructure, Database | Minor or major release (e.g. 11.2.0.2 → 11.2.0.3 or 11.2 → 12.1) | Oracle Universal Installer (OUI)
Switch and PDU                | Separate Patches                                                 | ILOM
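The cell patch itself is driven by patchmgr. A minimal sketch of a rolling cell update, run as root from a compute node; the staging directory and the file cell_group (a plain text file listing the cell host names) are assumptions:

# Verify the prerequisites for a rolling patch first
./patchmgr -cells cell_group -patch_check_prereq -rolling

# Apply the patch cell by cell; the databases remain available
./patchmgr -cells cell_group -patch -rolling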


4.4.2 Bundle Patch, PSU, CPU, QDPE and QFSDP
Over time, Oracle has developed a variety of terms that describe the patches (among others "Bundle Patch", "PSU", "CPU", "QDPE" and "QFSDP"). The following table maps and systematises the concepts:

Patch Type                                  | For System                               | For Component                                            | Appearance interval    | Comment
CPU                                         | Non-Exadata                              | Grid Infrastructure, Database                            | Approx. every 3 months | -
PSU                                         | Non-Exadata                              | Grid Infrastructure, Database                            | Approx. every 3 months | Includes the CPU
Bundle Patch (BP)                           | Exadata                                  | Grid Infrastructure, Database                            | Approx. every 3 months | Includes the PSU (and thereby implicitly the CPU)
Quarterly Database Patch for Exadata (QDPE) | Exadata                                  | Cell Nodes, Compute Nodes, Grid Infrastructure, Database | Approx. every 3 months | Includes the current version of the Cell Software (for the Cell and Compute Nodes) and the current Bundle Patch (incl. PSU and CPU)
Quarterly Full Stack Download Patch (QFSDP) | Exadata (PDU and Switches, if necessary) | Cell Nodes, Compute Nodes, Grid Infrastructure, Database | Approx. every 3 months | Includes not only patches but the complete software (incl. installation binaries)

As one can see, the concepts PSU and CPU come from the non-Exadata world and describe patches for security-relevant problems (CPU) or for security-critical and operation-critical problems (PSU).

The PSUs discussed above - supplemented by bug fixes for Exadata-specific issues - become part of the Bundle patches with which the Grid Infrastructure and the database are patched. These, in turn, are a component of the quarterly "Quarterly Database Patches for Exadata" (QDPE).

For a complete new installation, Oracle provides, with the "Quarterly Full Stack Download Patch" (QFSDP), the possibility to install the system including all updates that have appeared so far. Thus it can be avoided that the basic installation is followed by a very complex patch cycle.

In the above-mentioned MOS Note 888828.1, there are notes on newly discovered critical bugs and their workarounds - and/or fixes that may need an additional patch installation.


4.4.3 Time Involved for the Patch Transactions
The time required for patching the system is determined by the following two core aspects:

1. Should the installation be done "Rolling" or "Non-Rolling"?
2. Is an upgrade of the Grid Infrastructure and/or the database desired (e.g. from version 11.2.0.2 → 11.2.0.3 or 11.2 → 12.1)?

Of the two criteria listed above, the question of upgrades arises only rarely: minor upgrades occur approx. once every 18 to 24 months (as for example from 11.2.0.2 → 11.2.0.3), major upgrades approx. once in three years (11 → 12). Also in this case, an approach nearly without downtime is possible. However, this requires very experienced DBAs and an intensive preparatory phase.

The case that is much more frequent in practice is the installation of scheduled updates, which take place about every three months - if one follows Oracle's patch cycle. These updates are almost always installable Rolling as well as Non-Rolling.

The following table shows approximate values for the installation duration to be expected (depending on the selected process), but includes no buffer to find and fix any problems:

Non-Rolling (Full Downtime)

Component                                            | Effort in hours                                                                                                        | Depending upon                                                                           | Comment
Cell Node                                            | 4 hours for all Cell Nodes                                                                                             | -                                                                                        | Cell Nodes are patched Non-Rolling in parallel
Compute Nodes                                        | 4 hours for all Compute Nodes                                                                                          | Patch of Cell Nodes                                                                      | Parallel patching possible; thereafter parallelisation by the DBA necessary
Grid Infrastructure on the respective Compute Node   | 4 hours for all Compute Nodes                                                                                          | Patch of Cell and Compute Nodes                                                          | Parallel patching possible; thereafter parallelisation by the DBA necessary
Database on the respective Compute Node IN-PLACE     | 2 hours for all Compute Nodes                                                                                          | Patch of Cell and Compute Nodes as well as the Grid Infrastructure                       | Parallel patching possible; thereafter parallelisation by the DBA necessary
Database on the respective Compute Node OUT-OF-PLACE | No time required during the patch, because the patch can be prepared completely                                        | -                                                                                        | After the patch, time for removing old binaries necessary
Database (Data Dictionary)                           | 30 minutes per database for importing changed objects, plus time for recompilation (depends on the number of objects)  | Patch of Cell and Compute Nodes as well as the Grid Infrastructure and Database binaries | -

Rolling (Minimal to no Downtime)

Component                                            | Effort in hours                                                                                                        | Depending upon                                                                           | Comment
Cell Node (ASM rebalance)                            | Depending on the redundancy of the Disk Group, between 0 and 24 hours                                                  | -                                                                                        | -
Cell Node (patch)                                    | 4 hours per Cell Node                                                                                                  | -                                                                                        | If space is available, more than one Cell Node may be removed simultaneously if necessary, thus reducing the number of patch cycles
Compute Nodes                                        | 4 hours per Compute Node                                                                                               | Can take place in parallel with the patch of the Cell Nodes                              | Parallel patching possible; thereafter parallelisation by the DBA necessary
Grid Infrastructure on the respective Compute Node   | 2 hours per Compute Node                                                                                               | Patch of Cell and Compute Nodes                                                          | Parallel patching possible; thereafter parallelisation by the DBA necessary
Database on the respective Compute Node IN-PLACE     | 1 hour for all Compute Nodes                                                                                           | Patch of Cell and Compute Nodes as well as the Grid Infrastructure                       | Parallel patching possible; thereafter parallelisation by the DBA necessary
Database on the respective Compute Node OUT-OF-PLACE | No time required during the patch, because the patch can be prepared completely                                        | -                                                                                        | After the patch, time for removing old binaries necessary
Database (Data Dictionary)                           | 30 minutes per database for importing changed objects, plus time for recompilation (depends on the number of objects)  | Patch of Cell and Compute Nodes as well as the Grid Infrastructure and Database binaries | -


4.4.3.1 Rolling Patch Installation
With the Rolling patch installation, only individual components are switched off for the update at any one time and are activated again after installation. In the normal case, the entire system remains available - albeit with decreased performance - and can be used by the users/applications.
So that an installation can be done Rolling, certain patches must have been installed beforehand. The details vary from version to version and should always be taken from the respective documentation.

The "Rolling" process can be used for nearly all patch activities of the components:

• Cell Nodes
• Compute Nodes
• Grid Infrastructure
• Database

The only exceptions are code adaptations made necessary by modified internal database packages, which require recompilation of certain database objects.

With a Rolling patch, the longest time is taken up by the (possibly) necessary rebalance in ASM before the patch and by removing a Cell Node from ASM. A patch process can last from two to four hours.

With NORMAL redundancy, the data are present in duplicate on different Cell Nodes. If a Cell Node is switched off without prior "clean" removal from ASM (whereby the data on the Cell Node's hard disks are distributed among all remaining Cell Nodes), certain data are present only once. If another hard disk in one of the remaining Cell Nodes fails during the patch process, data loss is the inevitable result. That is why it is recommended, for Disk Groups with NORMAL redundancy, to drop the Cell Node from ASM before beginning and to wait for the rebalance (the distribution of the data from the Cell Node to be patched onto the remaining Cell Nodes) to complete. Depending on the volume of stored data and the size of the individual hard disks, this can take between six and twelve hours (High Performance hard disks) or between six and 24 hours (High Capacity hard disks) per Cell Node. The installation of the patch on one or several Cell Nodes takes approx. four hours.
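When a cell is taken offline for patching without dropping it from ASM, its grid disks are first inactivated and ASM is asked beforehand whether it can tolerate this. A minimal sketch of the usual sequence on the cell to be patched (a sketch; always follow the steps in the respective patch README):

CellCLI> LIST GRIDDISK ATTRIBUTES name, asmmodestatus, asmdeactivationoutcome
CellCLI> ALTER GRIDDISK ALL INACTIVE

After the cell has been patched and restarted:

CellCLI> ALTER GRIDDISK ALL ACTIVE

One should only proceed when asmdeactivationoutcome reports "Yes" for all grid disks, i.e. ASM can tolerate the disks going offline.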


However, the situation is different for Disk Groups with HIGH redundancy. Here, after switching off a Cell Node, two copies of the data are still present, so that a failure of a hard disk on one of the remaining Cell Nodes does not lead to the loss of the Disk Group.

The patching of the individual Compute Nodes is independent of the model and the number of databases. It lasts between two and four hours. The requirement is that the components running on the node (such as instances or applications) are available on a node other than the one to be patched. The patch of the Compute Node can be done in parallel with the already running patch of the Cell Node(s) in order to save time.

Immediately after the update of a Compute Node, the Grid Infrastructure and Database binaries should be patched. Only then should the Compute Node be added to the cluster again, after which one can proceed with the next Compute Node.

The Database binaries can optionally be patched In-Place or Out-of-Place. In-Place means that the existing binaries are patched. Out-of-Place means that new binaries are first installed in parallel and then patched. Afterwards, the Database instance is started with the new binaries.

The advantage of Out-of-Place lies in the saving of time - in particular if patching takes place with complete downtime instead of "Rolling" - because the Database binaries can be prepared in advance and the process completes faster. However, the total effort is greater, because the new binaries are rolled out as preliminary work and the old binaries must be removed again after the patch.

4.4.3.2 Non-Rolling Patch Installation
This method is suitable in particular if the continued availability of the database is ensured via a standby or replicated database (e.g. by means of Data Guard or GoldenGate) and the time slot for the installation of the patches should be as brief as possible. If no standby database is available, the Non-Rolling method is still the method with the shortest installation time - but also the method with the longest downtime.


4.4.4 Patch Cycles in Practice
Experience shows that indiscriminately installing every available patch is as unfavourable as installing no patches at all.

The following table shows a patch cycle proven in practice, where the Cell and Compute Nodes are patched once a year and the Grid Infrastructure and database approx. every six to twelve months.

Patch Cycle | Planned Maintenance
6-12 months | Grid Infrastructure/Database
12 months   | Cell and Compute Nodes
2-4 years   | Grid Infrastructure/Database major release (11.2.0.2 → 11.2.0.3 or 11.2 → 12.1)

The patch frequency of the Database and Grid Infrastructure decisively depends on two factors:

1. Stability of the database and availability of any required One-Offs
2. Known security vulnerabilities

If the application running on the database is stable (i.e. no data corruption or losses, unplanned crashes or incorrect results occur), a patch cycle of twelve months may be sufficient. If serious problems limiting the application are solved only in current Bundle patches and backports in the form of One-Offs are not available, a shorter patch cycle should be selected.

For the use of the Platinum Service, shorter patch cycles apply. Here, attention has to be paid to conformity with the recommended versions according to MOS Note 888828.1, since otherwise the Platinum Support may lapse.

The same approach applies to known security vulnerabilities. These should be judged according to severity and criticality and should be closed, according to their importance, through the installation of current Bundle patches.
An upgrade of the database version is necessary in any case after two to four years - at the latest, however, when support has expired or the currently used version is more than one version behind the current one.

Example: At present, version 11.2.0.4 is current. Versions 11.2.0.3 and 11.2.0.4 receive updates within the scope of Oracle Support. Version 11.2.0.2 is more than one version behind and receives no more updates. Customers on 11.2.0.2 should hence upgrade either to 11.2.0.3 or (better) to 11.2.0.4.


4.4.5 Recommendations for Successful Patch Installation
The following section lists, in bullet-point form, tips for a successful patch installation - without claim to completeness:

• General recommendations
  * Use the Exacheck tool (MOS Note 1070954.1) - see the sketch after this list
    - use the current version
    - repair warnings and alerts of the tool before patching
  * Use OPlan (better understanding of the installation steps)
  * Work through documentation and MOS Notes intensively in advance
  * Preferably patch the standby or test system before the primary or production system

• Exadata Storage Server
  * Do not make any unsupported changes
  * Do not install any additional software

• Grid Infrastructure
  * The software release should be at least "equal to" (better: higher than) the Database, so that GI >= DB release

• Database Software
  * Recommendation: patch Out-of-Place
  * Install One-Offs as "Online Patch"
  * Manage the load by means of services
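In line with the first recommendation above, a pre-patch health check could look as follows (a sketch; the exact flags depend on the Exacheck version in use):

# Run all Exacheck checks before starting the patch installation
./exachk -a

The resulting HTML report lists failed checks and warnings, which should be resolved before patching begins.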


5 Management & Support Tools

Beside the known Oracle tools, Enterprise Manager Cloud Control is the most important tool for management and monitoring of the platform (regardless of individual systems/variety of systems/OLTP system/DWH/consolidation platform). In addition, there are some tools that are used exclusively in the Exadata environment. These include:

• Auto Service Request (ASR)
SRs are automatically generated for hardware defects and are transmitted to Oracle. Covered components include: CPU, disk controllers and disks, flash cards and modules, InfiniBand cards, memory, system boards, power supplies as well as fans;

• Command Line Interface (CLI)
For local control of the Exadata Storage Server;

• Distributed Command Line Interface (DCLI)
For concurrent execution on several Cell Nodes;

• Integrated Lights Out Manager (ILOM)
For remote monitoring and control of the hardware;

In the following table, there is an overview of tasks and standard tools:

Component         | Provisioning | Configuration | Monitoring | Patching | Backup
Database          | EM           | EM            | EM         | EM       | EM/RMAN
Grid              | -            | EM            | EM         | EM       | EM/RMAN
ASM               | -            | EM            | EM         | EM       | n/a
Compute Node      | -            | EM            | EM         | OC       | e.g. TSM
Storage Cell      | -            | EM            | EM         | CL       | n/a
InfiniBand Switch | -            | EM            | EM         | CL       | n/a


5.1 Provisioning/Rapid Provisioning

5.1.1 Provisioning
Oracle Enterprise Manager is recommended for the provisioning of new databases. The method does not differ from other RAC databases. The following models are distinguished and supported by Enterprise Manager: Database Machine, Database Instance and Database Schema.

Advantages and disadvantages of these models:

                          | Database Machine    | Database Instance    | Database Schema
                          | (Dedicated VM/IaaS) | (Dedicated DB/DBaaS) | (Dedicated Schema/SaaS)
Implementation/Onboarding | Easy                | Easy                 | Intricate (standardisation, versions)
Maintenance               | Complex             | Easy                 | Complex
Isolation                 | High                | Medium               | Low
Degree of Consolidation   | Low                 | Medium               | High
ROI                       | Low (Server level)  | High (Server, OS)    | Very high (Server, OS, Database)

In particular where Exadata is to be used as a consolidation platform, the Enterprise Manager features for Rapid Provisioning (up to Self Provisioning) are in demand.

5.1.2 Rapid Provisioning
Standard provisioning, as is still carried out today in many enterprises, has a number of disadvantages. Time-consuming processes that are established across different departments must be coordinated, owing to which transition management becomes complex. This makes the overall process, including Life Cycle Management, personnel-intensive and expensive. Rapid Provisioning on the Exadata platform offers numerous advantages:

• Required resources are available (within the scope of the quotas);
• Provisioning in a few hours;
• Automated process with few manual resources required;
• High degree of standardisation;


The technical implementation of the provisioning can be done with the RMAN Clone and Snap Clone methods.

DBaaS Snap Clone:
• Provisioning of big databases within a few minutes;
• Currently supported: NAS NetApp, ZFS (EMC, HDS planned);
• Features:
  * Storage technology;
  * "Use and throw away" databases (UAT, time travel), short-lived databases;
  * Fully integrated into 12c CC Life Cycle Management;
• Advantages:
  * Saves storage because only the space for the deltas is required (copy-on-write snapshot), a few megabytes for a 1 TB DB;
  * Because the snapshot only consists of pointers, merely the pointers must be replaced. Cloning a 1 TB DB takes only a few minutes;

DBaaS RMAN Clone:
• Provisioning via RMAN Clone;
• Currently supported: all platforms and storage models;
• Features:
  * Oracle technology;
  * Databases with a longer life cycle;
  * Fully integrated into 12c CC Life Cycle Management;
• Advantages:
  * Storage neutral;
  * Fits into the existing infrastructure;


Self-Service Provisioning with EM 12c:

• Out-of-the-box console, can be customised, API
• Provisioning: PaaS, DBaaS
• Clone methods: RMAN Clone, Snap Clone (copy-on-write), Export Schema
• Integrated: Monitoring, Backup, Patching
• The Self Service user is presented with a service catalogue
• Is integrated into the complete CC Life Cycle
• Self Service users can
  * start and stop databases;
  * perform backup and restore operations;
  * monitor important metrics;
  * trigger the retirement process;


5.2 Configuration
Details are in Chapter „3 Installation and Migration“ as well as „8 Appendix - Exadata Deployment Assistant Step by Step“.

5.3 Monitoring
The Exadata system, as an "Engineered System", makes special demands on monitoring. Availability can only be guaranteed if all hardware and software components are included in the monitoring. Only in Oracle Enterprise Manager can all components be defined as a target and be provided with metrics. Integration is done with the help of a defined discovery process. Further information can be found in: Oracle Enterprise Manager 12c: Oracle Exadata Discovery Cookbook [9].

Verification should be done before the discovery process begins. For this, Oracle makes the following script available: $ORACLE_HOME/perl/bin/perl exadataDiscoveryPreCheck.pl.

Once the agent is installed, the discovery process can be started via the Enterprise Manager console, via the menu items "Add Targets" and "Add Targets Manually".


Now the individual metrics can be defined for the components, and the threshold values for the monitoring can be set. At the end, the inclusion of the software components such as Cluster, Cluster Database, Listener and ASM is done. This process is identical to the normal target discovery for Oracle Real Application Clusters. Via open interfaces, SNMP traps can be sent to other monitoring systems.

After the hosts have been included as a target, the "Oracle Enterprise Manager Setup Automation Kit for Exadata" is installed. Afterwards, a guided discovery process is carried out via the console of the Enterprise Manager. At the end of the process, all targets are available in the Enterprise Manager.


5.4 Patching
Although patching of individual software components is possible via Enterprise Manager, practical experience shows that control via the command line provides more transparency and security. This is largely due to the interrelationships of the individual components and versions with one another, as well as with the provided utilities.

For further information, see Chapter „4.4 Exadata Patch Management: Mystery or Predictable Service?“.

6 Optimisation in the Life Cycle

Exadata has numerous features that account for its extreme performance. These technologies notably include Intelligent Storage, PCI Flash Cache and (E)HCC ((Exadata) Hybrid Columnar Compression). Many of these features are automatically used after the migration to Exadata. Others require an adaptation of the application to the platform.

The following features are directly usable:

• (Partial) Cell Offloading and hence features such as:
  * Column Projection
  * Predicate Filtering
  * Storage Indices
• Flash Cache

Optimisation and/or adaptation of the application to make full use of the possibilities of the platform requires the following features:

• Cell Offloading
  * Storage Indices
  * Predicate Filtering
• Parallelisation
• Hybrid Columnar Compression (HCC)
• Flash Cache


Some of the features are listed twice. This is because they are active by default without optimisation, but cannot fully exhaust their potential in that state.

An example of necessary optimisation is the Flash Cache: it is active by default and tries to cache frequently used data in the fast flash memory, virtually serving as an expansion of the Database Buffer Cache. By setting special attributes on tables, one can specifically control the caching of objects in the Flash Cache so as to optimise performance.

For Cell Offloading, and in particular for the efficient use of Storage Indices, adaptations of the application are necessary. Exadata is able to ensure very fast data access solely through Storage Indices and Cell Offloading. Therefore, it should be checked how the execution plans of the statements change if one removes the indices that were necessary for quick data access on non-Exadata systems. The situation is similar with parallelisation, which contributes significantly to the increase in performance.

The key features of Exadata are presented and briefly explained in the following sections.

6.1 Cell Offloading/Smart Scanning
A number of technologies are subsumed under the concept of Cell Offloading, also called Smart Scanning. Smart Scanning refers more to the optimisation of SQL statements by Exadata, while Cell Offloading is the more generic concept. All these technologies have the purpose of shifting work away from the database and towards the storage (the cell). Thus the number of data records to be considered is already minimised on the storage. The objective is to transmit as few data records and columns as possible to the database for further processing and/or to have the necessary processing steps done directly by the Cell Node.

Under the umbrella term Cell Offloading, the following technologies are summarised:

• Column Projection
• Predicate Filtering
• Storage Indices
• Bloom Filters
• Function Offloading
• Compression/Decompression
• Encryption/Decryption
• Virtual Columns


6.1.1 Storage Index

The following example shows this briefly with the help of a SELECT on a big table without indices:

Storage Index switched off:

SQL> ALTER SYSTEM SET "_KCFIS_STORAGEIDX_DISABLED"=TRUE;

System altered.

Elapsed: 00:00:00.20

SQL> SELECT COUNT(SPALTE1) FROM GROSSE_TABELLE WHERE SPALTE1=0;

COUNT(SPALTE1)
--------------
             2

Elapsed: 00:00:10.00

The Storage Index ensures the avoidance of superfluous I/Os. It resides in the memory of every cell, is automatically maintained, is transparent for the database and works with compressed and uncompressed tables. Because the Storage Indices are held only in RAM, they must be rebuilt after each restart of a cell.

Storage Indices can substantially reduce the I/Os required for the execution of a query, since only those blocks must be read that contain the queried values. Afterwards, in combination with Column Projection and Predicate Filtering, the data volume to be read and transmitted to the Compute Node is further reduced. Through this, the work on the Cell Nodes and on the Compute Node (where the database runs) is minimised considerably. With this strategy, FULL TABLE SCANs that run for a very long time on non-Exadata systems can deliver very fast results even without indices.

(Figure: Storage Index. Source: Oracle)


Exadata needs approx. 10 seconds to deliver the result - in spite of activated Predicate Filtering and Column Projection (see below). Without these two features, the required time would be much longer, since all data and columns would have to be transmitted from the cell to the database.

To show the effect of Storage Indices, these are now activated again:

SQL> ALTER SYSTEM SET "_KCFIS_STORAGEIDX_DISABLED"=FALSE;

System altered.

Elapsed: 00:00:00.15

SQL> SELECT COUNT(SPALTE1) FROM GROSSE_TABELLE WHERE SPALTE1=0;

COUNT(SPALTE1)
--------------
             2

Elapsed: 00:00:00.05

Through the use of Storage Indices, the processing time for the SELECT statement has decreased from ten seconds to five hundredths of a second - without creating a classic index.
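Whether the Storage Indices were actually used can be verified via the session statistics. A minimal sketch (the saved volume is reported in bytes):

SQL> SELECT n.name, ROUND(s.value/1024/1024) AS mb_saved
       FROM v$statname n, v$mystat s
      WHERE n.statistic# = s.statistic#
        AND n.name = 'cell physical IO bytes saved by storage index';

A value greater than zero shows how much physical I/O the cells were able to skip thanks to the Storage Index.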

6.1.2 Column Projection
With Column Projection, only those columns are returned to the database that are actually requested by the query or the join condition. The result: the execution time of the query is reduced by more than 50%. Example: only five columns of a table with 100 columns are queried. As a result, the transmitted data volume is reduced to approx. five percent.

To demonstrate this, we create a table with approx. 250 million rows and query it:

SQL> ALTER SESSION SET cell_offload_processing=FALSE;

Session altered.

Elapsed: 00:00:00.01

SQL> SELECT COUNT(SPALTE1) FROM GROSSE_TABELLE;

COUNT(SPALTE1)
--------------
     250000000

Elapsed: 00:00:38.03


SQL> ALTER SESSION SET cell_offload_processing=TRUE;

Session altered.

Elapsed: 00:00:00.01

SQL> SELECT COUNT(SPALTE1) FROM GROSSE_TABELLE;

COUNT(SPALTE1)
--------------
     250000000

Elapsed: 00:00:22.55

The example shows that the use of Column Projection already yields considerable speed advantages, since less data must be transmitted between Compute Nodes and Cell Nodes.

6.1.3 Predicate Filtering
With Predicate Filtering, only those rows are returned to the Compute Node, and hence to the database, that actually match the query. Through this, the amounts of data that must be transmitted to and processed by the database are clearly reduced. On non-Exadata systems, all rows are always transferred and afterwards filtered by the database. With Exadata, this step is largely already done on the cells.

SQL> ALTER SYSTEM SET "_KCFIS_STORAGEIDX_DISABLED"=TRUE;

System altered.

SQL> SELECT COUNT(SPALTE1) FROM GROSSE_TABELLE WHERE SPALTE1=0;

COUNT(SPALTE1)
--------------
     250000000

Elapsed: 00:00:04.48

This query shows how efficiently queries can be processed by Exadata with Predicate Filtering. To make sure that the Storage Indices do not have a positive effect, they were switched off first. The resulting run time is solely the result of Column Projection and Predicate Filtering.


6.2 HCC (Hybrid Columnar Compression) Architecture
Hybrid Columnar Compression (HCC) was originally reserved for the Exadata systems under the name EHCC and describes a column-wise compression of the data. By choosing column-wise rather than row-wise compression, Exadata achieves compression factors of 1:5 to approx. 1:15.

HCC, however, is only suitable for static data, because changes to HCC-compressed data decompress the affected rows again. Moreover, the data must be loaded via "Direct Path" for HCC to take effect (i.e. via "CREATE TABLE AS SELECT", "INSERT /*+ APPEND */", SQL*Loader or similar).

Because of the architecture of Exadata, the data is compressed on the Compute Nodes but can be decompressed on the Cell Nodes. Thus, the computational burden is shifted from the Compute Nodes to the Cell Nodes. For pure filtering, the data need not be decompressed at all; that is only necessary on return to the database in the form of a row set.

It should be noted that HCC is only supported on Exadata, ZFS Storage Appliances and Pillar Axiom systems from Oracle.

If several 'similar type' records are compressed, a better compression factor results than with a row-wise approach (a constant alternation of text and numbers cannot be compressed well). In addition, the compression in HCC occurs in units larger than a block (8-32 KB), which also increases the compression ratio.

(Figure. Source: Oracle)


In HCC, the data to be compressed are divided into so-called Compression Units of 1 MB each and then compressed column-wise.

(Figure. Source: Oracle)

As already described, there are various compression methods for HCC, which are listed in the following table. Depending on the selected compression level, different compression algorithms are used. In practice, the best compromise between compression ratio and speed is often found with ARCHIVE LOW and/or QUERY HIGH; however, this depends on the data. Frequently, queries on compressed data are even faster than those on uncompressed data. ARCHIVE HIGH should only be used for data that is kept as long as possible, because the compression takes a long time to complete.

Type         | Format      | Exp. ratio
QUERY LOW    | LZO         | 4x
QUERY HIGH   | ZLIB (gzip) | 6x
ARCHIVE LOW  | ZLIB (gzip) | 7x
ARCHIVE HIGH | BZIP2       | 15x


With the Compression Advisor, the expected compression factors can be estimated via DBMS_COMPRESSION.GET_COMPRESSION_RATIO:

Compression Advisor self-check validation successful.
select count(*) on both Uncompressed and EHCC Compressed format = 1000001 rows
Blocks, Compressed=4329
Blocks, Uncompressed=17980
Rows, Compressed=231
Rows, Uncompressed=55
Compression Ratio=4.1
Compression Type="Compress For Query Low"
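A minimal sketch of such an advisor run, assuming the 11.2 signature of DBMS_COMPRESSION; the schema SCOTT, the table GROSSE_TABELLE and the scratch tablespace USERS are placeholders:

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  -- Sample the table and estimate the ratio for QUERY LOW
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',
    ownname        => 'SCOTT',
    tabname        => 'GROSSE_TABELLE',
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_LOW,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Compression Ratio=' || ROUND(l_cmp_ratio, 1) ||
                       ', Type=' || l_comptype_str);
END;
/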

6.3 Encryption/Decryption
Encryption and decryption work similarly to HCC: the encryption is done on the Compute Nodes and the decryption mostly on the Cell Nodes. The CPUs used can handle encryption and decryption directly in hardware.

6.4 Virtual Columns
Virtual columns are a feature introduced with Oracle 11g. With virtual columns, the content is calculated from other columns of the same table. Thus, it is possible to create, for example, a virtual column "Gross" whose values arise from multiplying the values of the "Net" column by the percentages of the "Percentage" column. This saves redundant information and storage - however, the values must be recalculated with every query, which on Exadata is done by the Cell Nodes.
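A minimal sketch of such a definition; the table and column names are hypothetical:

-- GROSS is never stored; it is computed from NET and PERCENTAGE on access
CREATE TABLE invoices (
  net        NUMBER,
  percentage NUMBER,
  gross      NUMBER GENERATED ALWAYS AS (net * percentage / 100) VIRTUAL
);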

6.5 Parallelisation
Due to the high number of hard disks, Cell Nodes and Compute Nodes, an Exadata system pays off particularly if the tasks are distributed over as many nodes as possible and executed in parallel. This fact should be kept in mind during optimisation in the Life Cycle in order to use the platform optimally.

Because the parallelisation of an existing environment is very complex, Oracle offers help in the form of the "Auto DOP" feature. With Auto DOP, Oracle independently selects the optimum number of processes to be executed in parallel. The feature is activated by setting "PARALLEL_DEGREE_POLICY=AUTO" (see the sketch after this list) and henceforth controls

• the number of processes working in parallel;
• the resource utilisation (CPU and I/O) up to a defined maximum;
• the avoidance of overload;
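A minimal sketch of activating the feature, instance-wide or per session:

-- Instance-wide (persisted in the SPFILE)
ALTER SYSTEM SET parallel_degree_policy = AUTO SCOPE=BOTH;

-- Or only for the current session
ALTER SESSION SET parallel_degree_policy = AUTO;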


6.6 I/O Resource Manager
The I/O Resource Manager (IORM) offers the possibility to distribute the I/O between several databases running on an Exadata platform, based on different criteria. While such a distribution was already possible for CPU resources in the past, for I/O this represents a novelty and a unique selling point of Exadata. The available I/O bandwidth can be fairly distributed, guaranteed or limited (similar to what you would expect from the CPU Resource Manager), for instance to be able to meet SLAs agreed during a consolidation.
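A minimal sketch of an inter-database IORM plan, set per cell via CellCLI; the database names oltp and dwh are placeholders:

CellCLI> ALTER IORMPLAN                                    -
         dbplan=((name=oltp,  level=1, allocation=70),     -
                 (name=dwh,   level=1, allocation=30),     -
                 (name=other, level=2, allocation=100))
CellCLI> LIST IORMPLAN DETAIL

The directive for "other" covers all databases not named explicitly; with DCLI, the same command can be rolled out to all cells at once.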

6.7 Flash Cache
While hard disks hold large amounts of data at low I/O throughput, exactly the opposite is true for flash cards: they store comparatively low data volumes (currently up to 3.2 TB per cell) at very high I/O throughput that, moreover, is not limited by disk controllers. Exadata uses this to accelerate data access:

• Automatic caching of data in flash;
• Utilisation of PCI cards to bypass the "disk controller" bottleneck;
• The essential share of the data remains on the hard disks;
• Depending on the model and expansion stage, up to 44.8 TB Flash Cache (in a Full Rack) - sufficient for most databases;

The Flash Cache of Exadata can be operated in two modes: Write Through or Write Back. The default, Write Through, accelerates reading operations. Writing operations, however, still go directly to the hard disks and can possibly constitute a bottleneck.


Write Through: The hitherto known Write-Through mode is the default. The cache is not persistent. Random reads are accelerated.

Write Back: With Exadata SW 11.2.3.2, the Write-Back mode was introduced. Here, the cache is persistent and random writes are accelerated as well. This leads to the avoidance of "free buffer waits" in the AWR.

To what extent an application benefits from the use of the Write-Back Flash Cache has to be tested individually for every application; there are no general recommendations, since every application behaves individually. Applications with many small I/Os tend to benefit more than applications with big sequential I/Os. The AWR serves as the first starting point of the analysis.

The conversion between the two modes can be done online but requires dropping and re-creating the existing Flash Cache on the Cell Nodes, for example as sketched below.
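A minimal sketch of the non-rolling conversion to Write Back via CellCLI (a sketch; the exact procedure for the installed cell version should be taken from MOS):

CellCLI> DROP FLASHCACHE
CellCLI> ALTER CELL SHUTDOWN SERVICES CELLSRV
CellCLI> ALTER CELL flashCacheMode=WriteBack
CellCLI> ALTER CELL STARTUP SERVICES CELLSRV
CellCLI> CREATE FLASHCACHE ALL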


When using the default Write Through, one can define for certain tables that they are to be held in the Flash Cache. For this, the tables have the attribute "CELL_FLASH_CACHE" with the following options:

• NONE: never cache this object;
• DEFAULT: the automated cache mechanism (this is also the default);
• KEEP: the object should preferably be kept in the cache, so that very fast access is possible.
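A minimal sketch of setting the attribute; the table name is hypothetical:

-- Pin a frequently accessed table into the cells' Flash Cache
ALTER TABLE orders STORAGE (CELL_FLASH_CACHE KEEP);

-- Revert to the automatic behaviour
ALTER TABLE orders STORAGE (CELL_FLASH_CACHE DEFAULT);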

Alternatively, the flash memory can be used as an ASM Disk Group for the storage of data files of all kinds - however, this is rarely done in practice.

7 Conclusion

To come full circle, the introductory questions will now be considered once more. To what extent have answers and solutions been found?

What is the first conclusion from a project and operating perspective? The introduction of Exadata is - from a project perspective - absolutely comparable with other introduction projects for big systems. Oracle and Oracle partners provide adequate assistance in all areas, and the essential steps are standardised. An essential advantage, however, arises from the saving of time: thanks to the structured approach, the systems are installed and ready for use extremely quickly. Moreover, almost all applications already profit from the features of the system in the standard configuration. Custom configurations can still bring huge benefits but are not always mandatory, so that Exadata can be used productively very rapidly in most cases.

The customer needs no Exadata expert knowledge of their own during the project phase, provided that the essential concepts and specifications of Oracle are taken into consideration.

What are the requirements for applications to use the platform in the best way possible? Here, the situation is exactly as with numerous other projects in which databases are migrated to a new, more efficient infrastructure. Basically, almost every application profits from the new platform out of the box, without adaptation. However, in many cases it has been shown that the effort for further optimisation is very worthwhile. An optimisation that includes the Exadata-specific features and resources brings huge improvements in many cases, above all concerning performance.


What about the operating perspective? How is Exadata optimally integrated into existing operating environments, and in which areas is a rethinking of the IT organisation required? Again, it can be said that Oracle and Oracle partners offer an entire palette of services in Exadata-specific areas, ranging from hardware support up to the full operation of the platform. Customers who operate several systems and are considering doing their own support should additionally consider a vertically structured platform support.

8 Appendix - Exadata Deployment Assistant Step by Step

The Oracle Exadata Deployment Assistant Version 2 can be downloaded from MOS as a patch for Linux, Windows and Solaris. The Deployment Assistant is based on Java and is part of the OneCommand utility.

Start:

Instructions for the OEDA (Oracle Exadata Deployment Assistant), using the configuration of the X3-8 hardware as an example.


Customer Details

Required information:
• Name Prefix: every component name will be derived from this name prefix.
• NTP Server/DNS: the NTP server should be on the same hierarchy level.

NTP/DNS entries can be changed after the installation process.

Hardware Selection

The available types of hardware are offered in a list (e.g. the SuperCluster). There are different consequences depending on whether you choose HP or HC storage:
• HP (High Performance - less capacity);
• HC (High Capacity - less performance);

(Example: Exadata X3-8)


Define Customer Networks

Overview of networks:
• Admin Network;
• Client Network;
• InfiniBand interconnect (for RAC and access to the Storage Cells);
• Optional: network for backup;

Exadata separates the networks for client and interconnect - as does RAC. In addition, the Admin Network and an optional Backup Network are configured.

Details of the networks:
• Name;
• Subnet mask;
• Gateway;
• Physics of the network (copper/optical);

Network application servers (e.g. Exalogic) and/or NFS storage may be integrated into the InfiniBand network.


Admin Network

Details about the Exadata-specific Admin Network: The pool size and the highest IP address are derived from the first IP address and the defined subnet mask. Here one can define whether the default gateway of the database servers is in this network (not in this case) and whether the database server admin name defines the physical host name (as entered in this example). The ILOMs of the servers are also integrated into this part of the network.

Client Network

The Client Network is the network where the application servers are usually located. The pool size and the highest IP address are derived from the first IP address and the defined subnet mask. Here one can define whether the default gateway of the database servers is in this network (not in this case) and whether the database server admin name defines the physical host name (as entered in this example).


InfiniBand Network

The pool size and the highest IP address are derived from the first IP address and the defined subnet mask. The InfiniBand network does not have a default gateway. Network application servers (e.g. Exalogic) and/or NFS storage may be integrated into the InfiniBand network.

Backup Network

Definition of:
• the Backup Network (if defined before);
• an additional network for Data Guard network traffic (if a DR system is in use);


Identify Operating System

You can choose either Linux or Solaris (usually Linux is chosen).

Review and Edit

Oracle recommends using contiguous IP addresses for the configuration. The host names with increasing IP addresses are derived from these addresses. If no contiguous IP addresses are available, the hosts can have other IP addresses.


Define Clusters

Several RACs can be configured on an Exadata system:
• Up to eight database nodes can be configured in up to four 2-node clusters;
• Cell Nodes are assigned. This can be changed afterwards - but only with great effort.

On an X3-2 Full Rack (eight Compute Nodes and 14 Cell Nodes) there could be four RACs, each with two database servers. On an X3-2 Half Rack (four Compute Nodes and seven Cell Nodes) there could be two RACs, each with two database servers. This is defined now.


Overview: Cluster Definition

Cluster Definition

These aspects are also defined:
• ORACLE_HOME for Grid Infrastructure and database, and the software version;
• database name and default size of the database;
• "Client Network" shows the pool size "7":
  * two database servers
  * two VIP addresses
  * three SCAN addresses
• definition of separation of roles (as with RAC) between database and Grid Infrastructure;
• Disk Group details;
• distribution 80% to 20% (if no backup as copy);
• keep in mind company-specific installation standards for Oracle software;

DBFS Disk Group:
• Information about the database file system: The Oracle Database File System (DBFS) [10]


Review Cluster Definition

This is the overview and the last chance to make changes.

Definition Monitoring Cell Nodes

The Cell Nodes can be configured for e-mail and SNMP alarms. E-mail and/or SNMP server details are entered here.


Definition OCM

Like any other database, Exadata can be configured to use the OCM (Oracle Configuration Manager). This makes it possible to download updates via MOS and to register the configuration there, so that Oracle recognises the configuration in case support is needed (also possible via a proxy server).

Configuration ASR

ASR serves to send a service request to Oracle automatically in case of defective hardware. The Auto Service Request (ASR) process requires an ASR Manager. This is a piece of software that resides in the client network and serves as an agent between Exadata and Oracle Support. The ASR Manager receives SNMP traps from the Exadata system and forwards them via HTTPS to Oracle Support (also possible via a proxy server). Moreover, the name, e-mail address and MOS account name of the local customer contact are required.


Grid Control Agent

Cloud Control is available or can be installed afterwards. The ORACLE_BASE of the agent to be installed, the host name and the upload port of the OMS are indicated here.

Comments

These are comments addressing Oracle ACS.


XML File

The XML file is created.

9 References

1. Deploying Oracle Maximum Availability Architecture with Exadata Database Machine
   http://www.oracle.com/au/products/database/exadata-maa-131903.pdf
2. Oracle GoldenGate on Oracle Exadata Database Machine
   http://www.oracle.com/technetwork/database/features/availability/maa-wp-gg-oracledbm-128760.pdf
3. Backup and Recovery Performance and Best Practices for Exadata Cell and the Oracle Exadata Database Machine
   http://www.oracle.com/technetwork/database/features/availability/maa-tech-wp-sundbm-backup-11202-183503.pdf
4. Backup and Recovery Performance and Best Practices using Oracle Sun ZFS Storage Appliance and Oracle Exadata Database Machine
   http://www.oracle.com/technetwork/database/features/availability/maa-wp-dbm-zfs-backup-1593252.pdf
5. Certified Platinum Configurations
   http://www.oracle.com/us/support/library/certified-platinum-configs-1652888.pdf
6. Oracle Platinum Services Support Policies
   http://www.oracle.com/us/support/library/platinum-services-policies-1652886.pdf
7. Oracle Website - Overview Platinum Services
   http://www.oracle.com/us/support/premier/engineered-systems-solutions/platinum-services/overview/index.html
8. Oracle Auto Service Request Exadata Database Machine Quick Installation Guide
   http://docs.oracle.com/cd/E37710_01/doc.41/e23333.pdf
9. Oracle Enterprise Manager 12c: Oracle Exadata Discovery Cookbook
   http://www.oracle.com/technetwork/oem/exa-mgmt/em12c-exadata-discovery-cookbook-1662643.pdf
10. The Oracle Database File System (DBFS)
    http://ronnyegner.wordpress.com/2009/10/08/the-oracle-database-file-system-dbfs/