
Oracle on Tegile and Cisco UCS Reference Architecture

Reference Architecture

www.tegile.com  © 2014 Tegile Systems, Inc. All rights reserved. These products and technologies are protected by U.S. and international copyright and intellectual property laws. Tegile is a registered trademark of Tegile Systems, Inc. in the United States and/or other jurisdictions.


Contents

Audience and Scope
Executive Summary
Introducing the Cisco Unified Computing System
    Comprehensive Management
    Radical Simplification
    High Performance
Oracle RAC
Tegile Storage Systems Overview
    High Performance
    Storage Efficiency
    Availability
    Single Platform for Multiple Workloads
    Scalability
Differentiators of Tegile Storage for Oracle
    Full Data Reduction Suite (Inline)
    Remove Physical I/O Limitations
    Variable Block Size
    Instant Backups and Restores
    Zero Overhead Clones
    Scalable Write Cache
    Persistent Read Cache
    Dual Active Highly Available Controllers
    Hot Spot Mitigation with Dynamic Storage Allocation (Pools)
    Hybrid or All-Flash Based Solutions
Joint Solution Overview and Benefits
    Architecture Alignment
    Hybrid Architectures Leveraging the Latest Technology
    Multiprotocol
    Consolidation with Deterministic Behavior
    Performance and Capacity Optimizations
    Programmable Infrastructure
Architecture and Design of Oracle RAC on UCS and Tegile Storage
    Hardware and Software Used for this Solution
Cisco UCS Manager Configuration Overview
    Configuring Fabric Interconnects for Blade Discovery
    Configuring LAN and SAN on Cisco UCS Manager
    Configuring VSAN
    Configuring and Enabling Ethernet LAN Uplink Ports
    Configuring Port Channel
    Configuring and Enabling FC SAN Uplink Ports
    Configuring Pools
    Creating UUID Pools
    Creating IP and MAC Pools
    Creating WWNN and WWPN Pools
    Configuring vNIC and vHBA Templates
    Creating a vNIC Template
    Creating HBA Templates
    Service Profile Creation and Association to Cisco UCS Blades
    Creating Service Profile Templates
    Creating Service Profiles from Service Profile Templates
    Associating a Service Profile to Servers
Cisco Nexus 5548UP Configuration
    Configure Zoning for SAN on Nexus 5548UP Switches
Tegile HA2400 Storage Configuration
    Storage Configuration for Oracle RAC Quorum
    Storage Configuration for Oracle Database
    Storage Configuration for Oracle Redo Log
OS Installation
    Configuring Multipath on RHEL
    Quorum and Database Disk Group Configuration
    Installing Oracle 11gR2 Clusterware and Database
Validation
Conclusion
Acknowledgements
Appendix


Audience and Scope

This document highlights the value of Tegile Systems storage arrays combined with the Cisco Unified Computing System (UCS) architecture for an Oracle RAC (Real Application Clusters) workload. As this document outlines, the combined innovative technologies of the Tegile storage arrays and the Cisco UCS compute and networking platforms provide an ideal infrastructure platform for Oracle RAC.

This document provides the following:

• Oracle RAC solution architecture of Tegile storage and Cisco UCS
• Detailed configuration guidelines
• Validation test results
• Outline describing why Tegile combined with UCS is an ideal infrastructure for Oracle

The document contents are a summary of a joint project conducted with Cisco, which involved configuring an Oracle RAC OLTP Solution on Cisco UCS and Tegile Storage and running performance tests.

The audience should have a basic knowledge of servers, storage and database technologies and at least an introductory level knowledge of Tegile Systems, Cisco UCS and Oracle RAC.

Executive Summary

Customers who are deploying Oracle RAC are forced to make difficult and limiting decisions for their infrastructure. Most application workloads are not consistent in their performance and capacity requirements, further compounding the difficulty of the infrastructure decision. A series of misleading choices often leads to over-engineering and overspending, or to running systems that cannot meet application demands. This paper illustrates a compromise-free approach to designing and configuring the compute, storage and networking infrastructure.

An ideal compute, network, and storage infrastructure for Oracle would have great agility and more than meet all of the technical requirements. It would offer a variety of compute and network personalities that are easy to change, and a storage layer that supports several storage protocols, contains multiple inline space-saving technologies, and optimizes the use of flash technology from a performance, capacity, and cost perspective.

The combination of Tegile Systems Arrays and the Cisco UCS compute platform is an ideal combined infrastructure to meet these ever-changing requirements.

• Storage optimized along both the performance and capacity vectors of flash drive technologies, providing great performance as well as large capacities.
• Inline compression and deduplication with no performance overhead.
• Multiprotocol storage allowing customers to use a range of technologies such as NFS, for which Oracle has several reference architectures, as well as block interfaces (FC, iSCSI) for lower latency transactions.
• Fully programmable infrastructure to reach automation goals using APIs for both UCS and Tegile.
• Complete DR solution using Tegile asynchronous replication at the array level along with UCS service profile exports and imports between different domains.
• High bandwidth converged compute and networking layer from Cisco to optimize RAC workloads.


Introducing the Cisco Unified Computing System

The Cisco Unified Computing System addresses many of the challenges faced by database administrators and their IT departments, making it an ideal platform for Oracle RAC implementations.

Comprehensive Management

The system uses an embedded, end-to-end management system that uses a high-availability active-standby configuration. Cisco UCS Manager uses role and policy-based management that allows IT departments to continue to use subject matter experts to define server, network, and storage access policy. After a server and its identity, firmware, configuration, and connectivity are defined, the server, or a number of servers like it, can be deployed in minutes, rather than the hours or days that it typically takes to move a server from the loading dock to production use. This capability relieves database administrators from tedious, manual assembly of individual components and makes scaling an Oracle RAC configuration a straightforward process.

Radical Simplification

The Cisco Unified Computing System represents a radical simplification compared to the way that servers and networks are deployed today. It reduces network access-layer fragmentation by eliminating switching inside the blade server chassis. It integrates compute resources on a unified I/O fabric that supports standard IP protocols as well as Fibre Channel through FCoE encapsulation. The system eliminates the limitations of fixed I/O configurations with an I/O architecture that can be changed through software on a per-server basis to provide needed connectivity using a just-in-time deployment model. The result of this radical simplification is fewer switches, cables, adapters, and management points, helping reduce cost, complexity, power needs, and cooling overhead.

High Performance

The system's blade servers are based on the Intel Xeon 5670 and 7500 series processors. These processors adapt performance to application demands, increasing the clock rate on specific processor cores as workload and thermal conditions permit. These processors, combined with patented Cisco Extended Memory Technology, deliver database performance along with the memory footprint needed to support large in-server caches. The system is integrated within a 10 Gigabit Ethernet-based unified fabric that delivers the throughput and low-latency characteristics needed to support the demands of the cluster's public network, storage traffic, and high-volume cluster messaging traffic.

Oracle RAC

Data powers essentially every operation in a modern enterprise, from keeping the supply chain operating efficiently, to managing relationships with customers. Oracle RAC brings an innovative approach to the challenges of rapidly increasing amounts of data and demand for high performance. Oracle RAC uses a horizontal scaling (or scale-out) model that allows organizations to take advantage of the fact that the price of one-to-four-socket x86-architecture servers continues to drop while their processing power increases unabated. The clustered approach allows each server to contribute its processing power to the overall cluster's capacity, enabling a new approach to managing the cluster's performance and capacity.


Tegile Storage Systems Overview

Tegile is pioneering a new generation of affordable, feature-rich storage arrays that are dramatically faster than traditional arrays and deliver more effective capacity than their raw capacity would suggest.

Figure 1: Intelligent Flash Arrays

High Performance

Tegile hybrid arrays are architected from the ground up to use flash storage in an intelligent and optimal manner. The patented Tegile MASS/IntelliFlash™ technology accelerates performance to solid state speeds without sacrificing the capacity or cost advantage of hard disk storage. Tegile arrays are significantly faster than legacy arrays and considerably less expensive than all solid-state disk-based arrays.

Storage Efficiency

Inline deduplication and compression enhance usable capacity well beyond raw capacity, reducing storage capacity requirements by as much as 50 percent according to Tegile customer deployments in the field. Note that Oracle space savings are realized primarily from compression rather than deduplication, due to the block architecture of Oracle databases.

Availability

Tegile storage arrays are designed for redundancy, with no single point of failure. The rich set of availability features includes multiple RAID options, snapshots, cloning, and remote replication, along with proactive monitoring and alerts for timely notification when issues arise, all included at no extra cost.

Single Platform for Multiple Workloads

Tegile storage arrays support both SAN and NAS protocols for storage access. Users can configure Oracle to use the flexibility of NFS for shared data files as well as block LUNs for booting Oracle or, if desired, for all of the data files.


Scalability

The Tegile product line includes HA2100, HA2100EP, HA2130, HA2130EP, HA2300, HA2400, T3400 and all-flash HA2800 and T3800 storage arrays, which provide progressively increasing performance along the product line. The product line also includes storage expansion shelves to add capacity to the storage arrays. The expansion shelves include a mix of solid-state and hard disk drives (SSDs and HDDs).

Tegile offers a balance of performance, capacity, features, and price that can satisfy the most demanding Oracle requirements.

Differentiators of Tegile Storage for Oracle

There are numerous storage vendors in the industry today, many of which publish documents outlining how to design or optimize Oracle solutions with their arrays. Unfortunately, most of these documents appear very similar to each other and do little to answer the "why" question: how does their storage platform differentiate itself for Oracle workloads? The following is a summary of the fundamental differentiators that Tegile Systems offers for Oracle workloads when compared to many of the other all-flash or flash-hybrid array vendors.

Full Data Reduction Suite (Inline)

The systems from Tegile are unique in the industry in performing inline compression and deduplication for all drive technologies without any performance overhead. The patented MASS/IntelliFlash™ technology is the key intellectual property from Tegile that enables this. IntelliFlash is an innovative metadata management system that accelerates all reads and writes to and from all metadata, resulting in higher capacity and significantly higher performance. IntelliFlash allows the storage array to organize and store metadata, independent of the data, on high-speed devices with optimized retrieval paths. This accelerates every storage function within the system, raising the performance of near-line SAS hard disk drives to the level of extremely expensive high-RPM SAS or Fibre Channel drives, and allows operations such as inline deduplication to run with no performance overhead.

Oracle data blocks are typically 8K in size for OLTP-based workloads. The blocks contain unique headers and checksum bitmaps at the beginning and end of the block layout; thus, any deduplication engine sees every block as unique. Oracle space savings can instead be achieved by compression, which Tegile also performs inline. This document outlines the savings realized during the testing.

Backup copies and clones of the database instances on the same array can benefit greatly from the advanced deduplication engine, achieving significant space savings. This allows the business to safely and cost-effectively take multiple backups, snapshots, and clones of production instances while keeping storage costs in check.

Only Tegile performs a full Compress-Checksum-Deduplication-Coalesce (CCDC) inline cycle for data streams. In addition, all LUNs can be thin-provisioned, further adding to the overall capacity efficiency. Many DBAs give their storage administrators worst-case space projections to avoid the "out-of-space" scenario, which can cause databases to fail; thin provisioning satisfies those projections without consuming physical capacity up front.


Table 1 summarizes the data reduction technologies, their applicability to Oracle, and the resulting business benefit.

Table 1: Data Reduction Technologies and Benefits

Data Reduction Technology | Oracle Relevance | Business Benefit
Inline Compression | All data blocks can be compressed | Lower costs ($/usable GB), less power and cooling
Inline Checksum | Provides additional data integrity and is used for dedupe calculations | Data integrity provided by the MASS/IntelliFlash OS, complementary to Oracle consistency features
Inline Deduplication | Backup copies and clones can be deduped efficiently | More frequent backups and clones enable faster recoveries and streamlined development processes
Thin Provisioning | All LUNs, if desired | Lower costs while meeting the stated DB team capacity requirements

Remove Physical I/O Limitations

One of the most common issues for Oracle deployments is a set of processes slowing down due to contention on I/O resources. Flash drives provide an obvious solution to this issue by offering tremendous IOPS for random I/O with low latencies. Tegile offers customers the ability to place online redo logs, key indexes or data files, or even temporary tablespaces on all-flash media while the remaining data resides on HDDs, lowering the total cost of the solution while still removing the I/O bottlenecks.
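For illustration only (not part of the original test), once a flash-backed location is available to the database, standard SQL can relocate redo onto it; this sketch assumes the database uses ASM and that +REDO_FLASH is a placeholder name for a flash-backed disk group:

sqlplus -s / as sysdba <<'EOF'
-- Add new redo log groups on the flash-backed disk group (placeholder name)
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 11 ('+REDO_FLASH') SIZE 1G;
ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 12 ('+REDO_FLASH') SIZE 1G;
-- After log switches and archiving complete, the old HDD-backed groups can be dropped
EOF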

Variable Block Size

DB_BLOCK_SIZE specifies (in bytes) the size of Oracle database blocks. Typical values are 4096 and 8192 but vary amongst deployments depending on the use case, host resident file system (if applicable) and space and performance requirements. Tegile allows block sizes ranging from 4K to 128K on a per-LUN or per-project (collection of LUNs and shares) basis. This allows the same platform to be used for instances with varying block sizes and provides the critical ability to align the block size of the Oracle application layer with the underlying storage OS.
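As a quick illustration (assuming a local SYSDBA login on a database host), you can confirm the database block size before choosing a matching LUN or project block size on the array:

sqlplus -s / as sysdba <<'EOF'
-- Report the database block size so the LUN/project block size can be aligned to it
show parameter db_block_size
EOF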

Instant Backups and Restores

Storage array snapshots are common. Most, if not all, vendors have some sort of snapshot engine, and some have built their entire value proposition around them. The key to snapshots in the 21st century is the implementation: they must be zero-overhead constructs that allow multiple snapshots to exist simultaneously, so the user can take snapshots frequently, even on an hourly basis, for rapid recoveries. Many vendors' snapshot story breaks down with more than 3 or 4 simultaneous snapshots of a volume, LUN, or file system, due to the major performance hit incurred in maintaining them.

Recovering Oracle databases rapidly, at a granular level of time, is critical to most Oracle deployments. The implementation must allow several snapshots to be taken without any performance overhead.

The IntelliFlash OS uses a "redirect-on-write" approach to snapshots and clones (see below) that enables taking and maintaining multiple snapshots without any performance overhead. Data is never updated in place in the Tegile OS.


The following is an example of the intuitive user interface to take and schedule snapshots.

Figure 2: Scheduling Snapshots

Zero Overhead Clones

Copies of production Oracle instances are necessary for several uses. Development, Test and QA teams are frequent consumers. Another common use for clones is to perform off-host backups.

Similar to snapshots, the storage operating system must be capable of taking clones frequently and without performance overhead. This is fundamental: the array must be built on the proper foundation from the outset, because it is difficult to retrofit one later as technologies and businesses change. Simply having an array vendor confirm that their product takes snapshots and clones is not enough.

Scalable Write Cache

Most hybrid or all-flash arrays rely on NVRAM in the storage controller for caching writes. This approach of mirroring NVRAM caches between controllers has been adopted by the majority of array vendors going back well over a decade. While this method allows HA to function between controllers and is a well-known model, it has an inherent weakness during periods of heavy writes or unpredictable workloads.

The Tegile implementation caches writes directly on persistent SSDs before acknowledging the write to Oracle. The user can determine how many SSDs to use for caching while still providing low latency and persistence during a failover event. This caching agility and scalability are unique to Tegile systems.

Persistent Read Cache

Tegile arrays satisfy reads from controller DRAM or from a secondary, scalable SSD-based read cache. Tegile has introduced the ability to make this secondary read cache persistent, such that after a failover event the read cache on the surviving controller is already warm and can start satisfying read requests.

This has major performance implications after a failover. Many arrays do not possess such a capability and thus can only offer performance for Oracle during “steady state” or normal operations.


Dual Active Highly Available Controllers

There are array vendors in the hybrid category whose products only offer active/passive controller architectures. This greatly limits the overall throughput of the array, as an entire set of CPUs, cores, memory, and interface cards sits idle waiting for a failure. This sort of architecture is not cost-efficient, nor does it allow each controller to serve I/O for different instances of a database.

Tegile's HA arrays with dual active controllers let you make the most of your investment, because both controllers are fully used during normal operations. This also allows each controller to work independently, running different instances or test and development scenarios.

Hot Spot Mitigation with Dynamic Storage Allocation (Pools)

Many, if not most, workloads change over time. New users are added, indexes are changed, and tablespaces are expanded or need to be relocated. How the storage system manages the dynamic nature of the workload is critical.

Tegile arrays allow SSDs and/or HDDs to be added to storage pools online, and the new capacity is immediately consumable by the application. An advantage of this architecture is that an expansion shelf of either performance (SSDs) or capacity (HDDs or larger SSDs) can be added dynamically, rather than having to add fixed configurations. The additional capacity and performance of the drives are instantly usable by Oracle, effectively removing any hot spots that have developed over time.

Hybrid or All-Flash Based Solutions

Not sure which array type is the best fit for your environment? Why should choosing a storage platform force that decision? Tegile is the only vendor whose "stack" works just as effectively in hybrid configurations as in all-flash deployments. No other vendor offers this flexibility and investment protection.

As your workloads increase and change with time, you can deploy either architecture based on business needs, not based on what the storage vendor can support.

Joint Solution Overview and Benefits

Architecture Alignment

Figure 3 outlines how the two layers of the architecture are aligned along four key pillars.


The four pillars are: multiprotocol support (FC, iSCSI, NFS, and CIFS), since Oracle can leverage both block and file storage; hybrid architectures that leverage the latest technology (enterprise SSDs, NL-SAS HDDs, and next-generation SSDs, with different CPU and memory densities, all UCS managed); performance and capacity, satisfying performance requirements at the best $/GB/IOPS for Oracle's mix of performance-centric and capacity-centric data; and consolidation with deterministic behavior, pinning key Oracle files on flash media, using both controllers, and running different workloads with full quality of service.

Figure 3: Architecture Alignment

Hybrid Architectures Leveraging the Latest Technology

Technology is changing rapidly as nanoscale manufacturing processes continue to be refined. The result is a never-ending sequence of denser and often faster CPU, DRAM, and SSD components flowing out of the different manufacturers. A compute and storage platform should allow the rapid incorporation of these latest devices with no change to how the system is managed or fundamentally behaves.

Cisco UCS is very aggressive in adopting the latest Intel CPUs on the market in conjunction with recent innovations in system DRAM capacities and speeds. The elegance of the service-profile-driven, stateless nature of the Cisco architecture is that adding these latest technology components is seamless from a management perspective and, from an operations standpoint, as simple as migrating a service profile from an old blade to a new one.

Tegile follows a similar paradigm: IntelliFlash™ is correctly viewed as an innovation that allows the latest performance or capacity drives to be adopted quickly, with no architecture or management changes necessary in the IntelliFlash OS and all of the application benefits still realized. All applications have requirements along the vectors of performance and capacity, so a storage platform must be optimized for both. Contrast this with many systems in the market that can only support a single vendor/single drive type combination throughout the lifecycle of a release, which greatly limits an application such as Oracle from benefiting from the latest technology.

Multiprotocol

Oracle is one of the few database engines that has invested significant R&D into optimizing both file and block storage interfaces. Oracle’s DNFS is an excellent example of leveraging NFS for high speed, highly resilient connections between storage and servers. A high bandwidth multi-uplink 10G server infrastructure is needed to properly exploit this, which UCS provides with full QoS.

Many customers only consider “FC block” for their Oracle workloads, thus the storage infrastructure needs to have robust support for both FC and NFS (DNFS) workload types, possibly at the same time, depending on deployment scenarios.
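As an illustrative sketch only (the server name, IP address, and export paths below are placeholders and not part of the tested configuration in this document), the generic Oracle 11gR2 steps for using DNFS are to enable the Direct NFS client library and describe the NFS paths in /etc/oranfstab:

# Enable the Direct NFS client (run as the Oracle software owner with the database shut down)
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on

# Describe the NFS server and mount for the Direct NFS client (placeholder values)
cat >> /etc/oranfstab <<'EOF'
server: tegile-nfs
path: 192.168.10.50
export: /export/oradata  mount: /u02/oradata
EOF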

Consolidation with Deterministic Behavior

The classic argument against consolidating different, or multiple, workloads onto a single compute or storage infrastructure is the non-deterministic nature of the result: what will happen to the workload that was running fine? Because many customers use infrastructures not designed for this consolidation approach, they have to maintain the combinations of applications, servers, and storage in physically isolated silos. This results in higher costs across many dimensions and more points of management.

Figure 4 illustrates a simple means of ensuring that the right files land on server and storage resources appropriate to their needs, resulting in deterministic performance.

Figure 4: Consolidation

Performance and Capacity Optimizations

Very few Oracle-based applications demand very high performance without also having significant capacity requirements. Rather, a balanced and efficient storage system is required that can deliver on both of these vectors, using the right underlying disk drives as well as data reduction capabilities.

The compute tier must provide a variety of CPU models with varying core speeds, densities and memory capabilities. The I/O adapter on the compute tier must be able to provide multiple interfaces, of varying types, each of which can provide excellent performance. Only the Cisco Virtual Interface Card (VIC) can provide such a flexible, programmable and high performance platform.

Programmable Infrastructure

Private clouds, hybrid clouds, and modular pods are common phrases in the industry that share a fundamental common denominator: they must be fully programmable from an outside engine. Cisco and Tegile both invested heavily, as their platforms were being built, in exposing "northbound" APIs that allow them to be managed from portals, third-party ISVs, or even simple scripting tools.

You can easily obtain application-consistent snapshots and clones by leveraging these API constructs. The familiar SQL*Plus command cycle described below is possible with Tegile:


1. Alter database begin backup.
2. Take snapshot using API.
3. Alter database end backup.
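A minimal sketch of that cycle from a Linux host is shown below; take_tegile_snapshot.sh is a hypothetical wrapper around the array's snapshot API, since the exact endpoint and parameters depend on the IntelliFlash software version:

# Put the database in hot backup mode
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE BEGIN BACKUP;
EOF

# Hypothetical helper that calls the Tegile snapshot API for the database LUN project
./take_tegile_snapshot.sh --project OraAppsDB --name backup-$(date +%Y%m%d%H%M)

# Take the database out of hot backup mode
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE END BACKUP;
EOF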

The UCS API is rich enough to support any management operation on the entire system. You could leverage a single script to manage Oracle, UCS and Tegile in a coordinated manner.
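As a simple illustration of this programmability (not from the original document), the UCS Manager XML API can be exercised with nothing more than curl; the management address and credentials below are placeholders:

# Log in to the UCS Manager XML API and note the outCookie value in the response
curl -sk -d '<aaaLogin inName="admin" inPassword="password"/>' https://ucsm.example.com/nuova

# Use the cookie to read an object, for example the first blade in chassis 1
curl -sk -d '<configResolveDn cookie="OUTCOOKIE-VALUE" dn="sys/chassis-1/blade-1"/>' https://ucsm.example.com/nuova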

Architecture and Design of Oracle RAC on UCS and Tegile Storage

The configuration presented in this solution validation is based on Oracle Database 11g R2 with RAC technology on Cisco Unified Computing Servers and Tegile storage arrays. It is fully extendable to Oracle 12c deployments in terms of all of the configuration guidelines outlined in this document.

This section explains Cisco UCS networking and computing design considerations when deploying Oracle Database 11g R2 RAC in a Fibre Channel storage design. Figure 5 represents a detailed view of the physical topology, and some of the main components of Cisco UCS and Tegile storage arrays.

Figure 5: Physical topology

As illustrated above, a pair of Cisco UCS fabric interconnects carries both storage and network traffic from the blades with the help of the Cisco Nexus 5548UP switches. The 8Gb Fibre Channel traffic (orange lines) flows from the UCS fabric interconnects through the Nexus 5548UP switches to the Tegile array that stores the Oracle database. Booting from SAN enables the service profile use case of migration from one compute node to another. You can configure UCS service profile boot policies with up to four different boot LUN targets, which can span different Tegile arrays for maximum redundancy.

Two virtual Port Channels (vPCs) are configured to provide a public network and a private network for the blades to northbound switches. Each vPC has VLANs created for application network data and management data paths. For more information about vPC configuration on the Cisco Nexus 5548UP Switch, see: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html.


Hardware and Software Used for this Solution

Table 2 displays the software and hardware used for Oracle Database 11g R2 GRID Infrastructure with RAC Option Deployment.

Table 2: Software and Hardware for Oracle Database 11g R2 GRID Infrastructure with RAC Option - Deployment

Hardware | Quantity
Servers:
Cisco UCS 6120XP Fabric Interconnect with 6-port 8GB FC Expansion Module (N10-E0060) | 2 (configured as an active-active pair)
Cisco UCS 5108 B-Series blade chassis | 1
Cisco UCS 2208 B-Series Blade Fabric Extender modules | 2
Cisco UCS B200 M3 B-Series blade server with 2-socket E5-2690 2.9GHz CPUs, 256GB memory, 1 x Cisco UCS VIC 1240 virtual interface card, and 1 x 300GB 6G SAS 10K RPM SFF HDD | 2 nodes used for RAC in this solution
Network:
Cisco Nexus 5548UP switches | 2
Storage:
Tegile HA2400 flash-based hybrid storage array (11 x 200GB flash cache, 13 x 1TB NL-SAS drives, 2 x 8Gb FC ports per controller) | 1 subsystem with dual active-active storage controllers
Tegile ES2400 storage expansion shelf (6 x 200GB flash cache, 18 x 1TB NL-SAS drives) | 1 shelf connected to the HA2400 storage array

Software | Version
Redhat Enterprise Linux 6.3 64-bit | 2.6.32-279.el6.x86_64
Oracle Database 11gR2 GRID | 11.2.0.3
Oracle Database 11gR2 Database | 11.2.0.3
Oracle Enterprise Manager Cloud Control | 12c
Cisco UCS Manager | 2.1(1a)
Cisco NX-OS for Nexus 5548UP | 5.0(3)N2(1)
Tegile IntelliFlash Software on HA2400 array | 2.0.8.1

Note: For Oracle RAC configuration on UCS, we recommend that you keep all private interconnects local on a single fabric interconnect with NIC failover enabled. In such a case, the private traffic stays local on that fabric interconnect and does not get routed through the northbound network switch. In other words, all interblade (or RAC node private) communication is resolved locally at the fabric interconnect, and this significantly reduces latency for Oracle Cache Fusion traffic.


Cisco UCS Manager Configuration Overview

The following are the high-level steps involved in a Cisco UCS configuration:

1. Configure Fabric Interconnects for chassis and blade discovery:
   • Configure server ports
2. Configure LAN and SAN on UCS Manager:
   a. Configure VLANs
   b. Configure VSANs
   c. Configure and enable Ethernet LAN uplink ports
   d. Configure port channels
   e. Configure and enable FC SAN uplink ports
3. Create and configure UUID, MAC, WWNN, and WWPN pools:
   a. Create UUID pool
   b. Create IP and MAC pools
   c. Create WWNN and WWPN pools
4. Configure vNIC and vHBA templates:
   a. Create vNIC templates
   b. Create public vNIC template
   c. Create private vNIC template
   d. Create storage vNIC template
   e. Create HBA templates
5. Create the service profile template.

Details for each step are discussed in subsequent sections below.

Configuring Fabric Interconnects for Blade Discovery

Cisco UCS 6120XP Fabric Interconnects are configured for redundancy, providing resiliency in case of failures. The first step is to establish connectivity between the blades and the fabric interconnects.

To configure the server ports in Cisco UCS Manager:

1. Click Equipment.
2. Click Fabric Interconnects.
3. Click Fabric Interconnect A.
4. Click Fixed Module.
5. Click Ethernet Ports and select the desired number of ports.
6. Right-click Configure as Server Port. The ports are configured and displayed as Server ports:


Figure 6: Configuring Ports

In the above screenshot, ports 1~4 are configured as Server ports on Fabric Interconnect A.

7. Repeat the same steps to configure ports 1~4 as Server ports on Fabric Interconnect B.

Configuring LAN and SAN on Cisco UCS Manager

Perform the LAN and SAN configuration steps in Cisco UCS Manager:

1. In Cisco UCS Manager, click LAN.
2. Click LAN Cloud > VLAN.
3. Right-click Create VLANs.

In this solution, you need to create two VLANs: one for the private network (VLAN 200) and one for the public network (VLAN 135). These two VLANs are used in the vNIC templates discussed later.


Figure 7: Creating VLANs

In the screenshot above, VLAN 200 is selected for private network creation. It is very important that you create both VLANs as global across both fabric interconnects. This way, VLAN identity is maintained across the fabric interconnects in case of NIC failover.

4. Repeat the process for creating Public VLAN 135.

Below is the summary of VLANs your system contains once you complete VLAN creation:

• VLAN 200 for Oracle RAC private interconnect interfaces
• VLAN 135 for public interfaces

Note: Even though private VLAN traffic stays local within the UCS domain during normal operating conditions, it is necessary to configure entries for these private VLANs in the northbound network switch. This allows the switch to route interconnect traffic appropriately in case of partial link failures. These scenarios and traffic routing are discussed in detail in later sections.


Figure 8 summarizes all of the VLANs for Public and Private Networks.

Figure 8: VLAN Summary

Configuring VSAN

To configure the VSAN in Cisco UCS Manager:

1. Click SAN > SAN Cloud > Fabric A > VSANs.
2. Right-click Create VSAN to create VSAN 101 on Fabric A.

Figure 9: Configuring VSAN in Cisco UCS Manager

Note: Even if you are not currently using FCoE traffic for SAN storage, it is mandatory to specify a VLAN ID.


Figure 10: VSAN Summary

3. Repeat the steps to create VSAN 102 on Fabric B.

Configuring and Enabling Ethernet LAN Uplink Ports

To configure and enable Ethernet LAN uplink ports from the Equipment tab:

1. Click Fabric Interconnects > Fabric Interconnect A > Fixed Module > Ethernet Ports.
2. Select the desired number of ports.
3. Right-click Configure as Uplink Port.

In the test setup, ports 17 and 18 were configured as Network uplinks.

Figure 11: Network Uplink Details

4. Repeat the same step on Fabric interconnect B to configure ports 17 and 18 as Ethernet uplink ports.


Configuring Port Channel

To configure the port channel in Cisco UCS Manager:

1. Click LAN > LAN Cloud > Fabric A > Port Channels.
2. Right-click Create Port Channel.

In the test setup, ports 17 and 18 on Fabric A were configured as port channel 10. Similarly, ports 17 and 18 on Fabric B are configured as port channel 11.

Figure 12: Fault Summary

Figure 13: Port Channel 10 Details

Configuring and Enabling FC SAN Uplink Ports

To configure and enable FC SAN uplink ports from the Equipment tab:

1. Click Fabric Interconnects > Fabric Interconnect A > Expansion Module 2 > FC Ports.
2. Select the desired FC ports.
3. Click Configure as Uplink Port to configure them as SAN uplinks.

In the test setup, FC Port 1 and FC Port 2 are configured as Uplink ports.


Figure 14: Uplink Ports

4. Repeat the same steps to configure FC Port 1 and FC Port 2 on Fabric Interconnect B as uplink ports.

Configuring Pools

Once the VLANs and VSANs are created, you need to configure pools for UUIDs, MAC addresses, management IPs, and WWNs.

Creating UUID Pools

To create a UUID pool in the Cisco UCS Manager:

1. Click Servers > Pools > UUID Suffix Pools.
2. Right-click Create UUID Suffix Pool to create a new pool.

Figure 15: Creating UUID Pools

Creating IP and MAC Pools

To create an IP and MAC pool in the Cisco UCS Manager:

1. Click LAN > Pools > IP Pools.
2. Right-click Create IP Pool Ext-mgmt.


Figure 16: Creating IP Pool and MAC Pool

The IP pool is used for console management, while the MAC pool supplies the MAC addresses used by the vNICs later.

Creating WWNN and WWPN Pools

To create WWNN and WWPN pools in the Cisco UCS Manager:

1. Click SAN > Pools > WWNN Pools.
2. Right-click Create WWNN Pools.
3. Click WWPN Pools to create WWPN pools.

These WWNN and WWPN entries are used for the virtual FC HBAs to access the database on the Tegile storage.

Figure 17: Create WWNN and WWPN Pool

Configuring vNIC and vHBA Templates

The following sections discuss vNIC and vHBA creation.

Creating a vNIC Template

To create a vNIC template in the Cisco UCS Manager:

1. Click LAN > Policies > vNIC templates.
2. Right-click Create vNIC Template.

In the test setup, two vNIC templates were created: one for the public network, and the other for the private network.


The private network is for Oracle RAC internal heartbeat and Cache Fusion traffic, while the public network is for external clients such as middle tiers and SSH sessions to the Oracle hosts.

The public network vNIC template is pinned to Fabric A with failover enabled; the private network vNIC template is pinned to Fabric B with failover enabled. For the private network vNIC template, we strongly recommend setting the MTU to 9000.

For both templates, make sure that you use OraAppsMAC as the MAC Pool for MAC addresses.

Figure 18: Creating Private vNIC Template
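As an optional sanity check (not part of the original procedure), you can confirm from the OS that the 9000-byte MTU on the private network vNIC is effective end to end; eth1 and the peer address below are placeholders for your private interface and the other RAC node's private IP:

# Confirm the interface MTU picked up from the vNIC
ip link show eth1 | grep -o 'mtu [0-9]*'

# 8972 = 9000 minus 28 bytes of IP/ICMP headers; -M do forbids fragmentation
ping -c 3 -M do -s 8972 192.168.200.12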

Creating HBA Templates

To create HBA templates in the Cisco UCS Manager:

1. Click SAN > Policies > vHBA templates.
2. Right-click Create vHBA Template.

Figure 19: HBA Template Summary


Figure 20: vHBA_A Template Properties

Figure 21: vHBA-B Template Properties

Service Profile Creation and Association to Cisco UCS Blades

Service profile templates enable policy-based server management that helps ensure consistent server resource provisioning suitable to meet predefined workload needs.


Creating Service Profile Templates

To create a service profile template in the Cisco UCS Manager:

1. Click Servers > Service Profile Templates > root.
2. Right-click root and select Create Service Profile Template.

Figure 22: Creating Service Profile Template

3. Enter the template name and select the UUID pool that was created earlier.
4. In the Networking screen, click Expert to enter Expert mode.

Figure 23: Expert Mode


5. Click Add to add a vNIC for the public network.

Figure 24: Creating a vNIC for a Public Network

6. Add another vNIC for the private network.

Figure 25: Creating a vNIC for a Private Network

Two vNICs were created:


Figure 26: Successful vNIC creation

7. Click Next to go to the Storage configuration screen.
8. In the Storage screen, click Expert to enter Expert mode.
9. Click Add to add a vHBA for Fabric A.

Figure 27: Creating vHBA

10. Click Add to add a vHBA for Fabric B:


Figure 28: Storage vHBA summary

11. Click Next to proceed to the next screen.
12. Leave the remaining screens at their default settings, and click Finish to complete the service profile template creation.

Creating Service Profiles from Service Profile Templates

To create a service profile from a service profile template:

1. Click Servers > Service Profile Templates.
2. Right-click Create Service Profiles from Template.

Note: Make sure to set the WWNN assignment to the WWNN pool created previously.


Figure 29: Creating Service Profiles

In the test setup, two service profiles were created from the AppsDB service profile template using the prefix Tegile:

• Tegile1
• Tegile2

Associating a Service Profile to Servers

After you create service profiles, you can associate them with servers.

1. In UCS Manager, click the Servers tab > Service Profiles > root.
2. Select the desired service profile.
3. In the Associate Service Profile screen, set the Server Assignment to Select Existing Server.
4. Select the desired blade server and click OK to associate the blade server with the service profile.
5. Repeat the same steps to associate the other service profiles with the respective blade servers.

Note: Make sure the FSM (Finite State Machine) association progress status completes to 100 percent.

Cisco Nexus 5548UP Configuration

To set up the Cisco Nexus configuration:

1. Log in to switch A as admin through the command line.
2. Type the following:

# conf term
# feature vpc
# vpc domain 1
# peer-keepalive destination <IP Address of peer-N5K>
# exit
# interface port-channel 1
# switchport mode trunk
# vpc peer-link


# switchport trunk allowed vlan 1,20,135
# spanning-tree port type network
# exit
# interface port-channel 10
# switchport mode trunk
# vpc 10
# switchport trunk allowed vlan 1,20,135
# spanning-tree port type edge trunk
# exit
# interface port-channel 11
# switchport mode trunk
# vpc 11
# switchport trunk allowed vlan 1,20,135
# spanning-tree port type edge trunk
# exit
# interface eth 1/17
# switchport mode trunk
# switchport trunk allowed vlan 1,20,135
# channel-group 10 mode active
# no shut
# interface eth 1/18
# switchport mode trunk
# switchport trunk allowed vlan 1,20,135
# channel-group 11 mode active
# no shut
# copy running-config startup-config

3. Repeat the above steps on the peer Nexus 5548UP switch, adjusting the peer-keepalive destination address so that it points to the other switch.

Configure Zoning for SAN on Nexus 5548UP Switches

On the Nexus 5548UP switch connecting to Fabric A, configure the following zoning for SAN:

1. Log in to switch A as admin through the command line.
2. Type the following:

# conf term
# zoneset name TegileOraAppsZoneset vsan 101
# zone name OraAppsDB_1
# member pwwn 21:00:00:24:ff:44:bb:a8
# member pwwn 21:00:00:24:ff:44:bb:b6
# member pwwn 20:00:00:25:b5:bb:00:06
# exit


# zone name OraAppsDB_2
# member pwwn 21:00:00:24:ff:44:bb:a8
# member pwwn 21:00:00:24:ff:44:bb:b6
# member pwwn 20:00:00:25:b5:bb:00:08
# exit
# exit
# zoneset activate name TegileOraAppsZoneset vsan 101
# copy running-config startup-config

On the Nexus 5548UP switch connecting to Fabric B, configure the following zoning for SAN:

1. Log in to switch B as admin through the command line.
2. Type the following:

# conf term
# zoneset name TegileOraAppsZoneset vsan 102
# zone name OraAppsDB_3
# member pwwn 21:00:00:24:ff:44:bb:a9
# member pwwn 21:00:00:24:ff:44:bb:b7
# member pwwn 20:00:00:25:b5:bb:00:07
# exit
# zone name OraAppsDB_4
# member pwwn 21:00:00:24:ff:44:bb:a9
# member pwwn 21:00:00:24:ff:44:bb:b7
# member pwwn 20:00:00:25:b5:bb:00:09
# exit
# exit
# zoneset activate name TegileOraAppsZoneset vsan 102
# copy running-config startup-config

Tegile HA2400 Storage Configuration

The following section discusses the Tegile storage layout design considerations when deploying an Oracle database with Tegile arrays.

Storage Configuration for Oracle RAC Quorum

For the Oracle RAC quorum, one pool was created on controller 2, with 2 HDDs configured as RAID1. On top of the storage pool, five 20GB LUNs were created and mapped to the FC HBA initiators of both UCS blade servers.


Figure 30: Configuration for Oracle RAC Quorum LUNs

Figure 31: General Configuration of a Quorum LUN

Figure 32: Advanced Configuration of a Quorum LUN


Storage Configuration for Oracle Database

For the Oracle database, two pools were created, one pool per controller.

In each pool, there were 2 metadata SSDs, 6 r/w cache SSDs and 12 HDDs. On each pool, Tegile created four 500GB LUNs for the OLTP database. With two pools, there were a total of 8 OLTP LUNs. All of the 8 LUNs were mapped to the FC initiators on both blade servers.

Figure 33: Configuration of Oracle Database LUNs

Figure 34: General Configuration of a Database LUN


Figure 35: Advanced Configuration of a Database LUN

Storage Configuration for Oracle Redo Log

On the same storage pools as the Oracle database, eight 50GB LUNs were created for the Oracle redo logs, with four LUNs per pool. All eight redo log LUNs were mapped to the FC initiators of both blade servers.

On each LUN, the secondary cache was set to “cache none”.

Figure 36: Configuration of Oracle Database Redo Log LUNs


Figure 37: General Configuration of a Redo Log LUN

Figure 38: Advanced Configuration of a Redo Log LUN

Note that Secondary cache was set to None on all redo log LUNs.

OS Installation

For the test setup, Tegile configured a two-node Oracle Database 11g R2 RAC cluster using Cisco B200 M3 servers. The database and grid infrastructure components were configured to use the FC protocol on Tegile storage. The Redhat Enterprise Linux 6.3 64-bit operating system was installed on each server.


Table 3: Host Configuration

Component     Details                 Description
Server        2x B200 M3              2 sockets with 8 cores, HT enabled
Memory        256 GB                  Physical memory
NIC1          Public Access           Management and public access, MTU size 1500
NIC2          Private Interconnect    Private interconnect configured for HAIP, MTU size 9000
Swap Space    20 GB                   Swap space

Complete the following steps to install the OS and the other packages required to prepare the RAC environment:

1. Install 64-bit Red Hat Enterprise Linux 6.3 on both nodes (during the OS install, select the packages shown below).

2. Install the following RPMs:

• Device Mapper

Device Mapper Multipath provides the built-in multipathing support in RHEL.

• Oracle Validated Configuration RPM package

The Oracle Validated Configuration RPM sets and verifies system parameters based on recommendations from the Oracle Validated Configurations program, and installs any additional packages needed for Oracle Clusterware and database installation. It also updates sysctl.conf settings, system startup parameters, user limits, and driver parameters to Oracle-recommended values. (An illustrative install sequence follows this list.)
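The exact install commands are not shown in the original document. As a rough sketch for RHEL 6, the multipath package can be installed and enabled as below; the Oracle Validated Configuration RPM name and version are assumptions (on RHEL it must be downloaded from Oracle, and on Oracle Linux 6 the equivalent package is oracle-rdbms-server-11gR2-preinstall):

# yum install device-mapper-multipath
# mpathconf --enable --with_multipathd y
# chkconfig multipathd on
# rpm -ivh oracle-validated-1.1.0-*.el6.x86_64.rpm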

Configuring Multipath on RHEL

To configure multipathing on RHEL:

1. On each server, edit the /etc/multipath.conf file to include the following configuration information for Tegile FC storage.

2. Add the following content:

defaults {
    polling_interval 5
    path_grouping_policy multibus
    failback immediate
    user_friendly_names yes
    no_path_retry 6
    max_fds 8192
}
devices {
    device {
        vendor "TEGILE"
        product "ZEBI-FC"
        hardware_handler "1 alua"
        path_selector "round-robin 0"
        path_grouping_policy "group_by_prio"
        no_path_retry 10
        dev_loss_tmo 50
        path_checker tur
        prio alua
        failback 30
        rr_min_io 128
    }
}
multipaths {
    multipath {
        wwid 3600144f03c794400000052743b4b0001
        alias OLTP1
    }
    multipath {
        wwid 3600144f03c794400000052743bdd0002
        alias OLTP2
    }
    multipath {
        wwid 3600144f03c794400000052743c3e0003
        alias OLTP3
    }
    multipath {
        wwid 3600144f03c794400000052743c580004
        alias OLTP4
    }
    multipath {
        wwid 3600144f0b49e8900000052743ce00008
        alias OLTP5
    }
    multipath {
        wwid 3600144f0b49e8900000052743c9c0006
        alias OLTP6
    }
    multipath {
        wwid 3600144f0b49e8900000052743cb50007
        alias OLTP7
    }
    multipath {
        wwid 3600144f0b49e8900000052743d0a0009
        alias OLTP8
    }
    multipath {
        wwid 3600144f03c794400000052743d890006
        alias REDOCTL1
    }
    multipath {
        wwid 3600144f03c794400000052743d9f0007
        alias REDOCTL2
    }
    multipath {
        wwid 3600144f03c794400000052743dba0008
        alias REDOCTL3
    }
    multipath {
        wwid 3600144f03c794400000052743dd10009
        alias REDOCTL4
    }
    multipath {
        wwid 3600144f0b49e8900000052743dba000a
        alias REDOCTL5
    }
    multipath {
        wwid 3600144f0b49e8900000052743dd1000b
        alias REDOCTL6
    }
    multipath {
        wwid 3600144f0b49e8900000052743ded000c
        alias REDOCTL7
    }
    multipath {
        wwid 3600144f0b49e8900000052743e03000d
        alias REDOCTL8
    }
    multipath {
        wwid 3600144f0b49e8900000052742eae0001
        alias OCRVOTE1
    }
    multipath {
        wwid 3600144f0b49e8900000052742ec80002
        alias OCRVOTE2
    }
    multipath {
        wwid 3600144f0b49e8900000052742edd0003
        alias OCRVOTE3
    }
    multipath {
        wwid 3600144f0b49e8900000052742eff0004
        alias OCRVOTE4
    }
    multipath {
        wwid 3600144f0b49e8900000052742f6c0005
        alias OCRVOTE5
    }
}
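The original steps end with the configuration file. As an illustrative follow-on using standard RHEL 6 device-mapper-multipath tooling (not taken from the original document), the daemon can be restarted and the friendly names verified on each node:

# service multipathd restart
# multipath -r
# multipath -ll | grep -E "OLTP|REDOCTL|OCRVOTE"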

Quorum and Database Disk Group Configuration

Three ASM disk groups were created for the Oracle RAC quorum, the OLTP database, and the redo logs, as displayed in Oracle Cloud Control:


Figure 39: ASM Disk Groups
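The disk groups were configured through the GUI in this setup. Purely as a hedged sketch, an equivalent disk group could be created from the ASM instance with SQL along the following lines; the EXTERNAL REDUNDANCY choice is an assumption, and the disk path pattern follows the asm_diskstring and device aliases listed in the Appendix:

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP OLTPDG EXTERNAL REDUNDANCY
     DISK '/dev/oracleasm/disks/OLTP*';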

Installing Oracle 11gR2 Clusterware and Database

The following are the requirements for Oracle 11gR2 RAC:

• A new OS user called "grid" must be created for installing the Grid Infrastructure and must be a member of the oinstall, asmadmin, asmdba, and dba groups (see the example following this list).

• Two separate Oracle homes are required, one for grid and one for Oracle.

• Choice of time synchronization:

  • Oracle Cluster Time Synchronization Service (CTSS)

  • NTP with slewing (the -x option prevents time from being adjusted backward)

• The new SCAN (Single Client Access Name) listener for client connections uses three VIPs, which allows clients to access the cluster services through a single dedicated hostname. This is highly beneficial when the cluster changes, because clients are not affected and do not need to update that hostname. The SCAN can move between cluster nodes, registers all database instances and services, and uses the load balancing advisory to distribute load across the cluster nodes. Local listeners can be used in addition; both the SCAN and local listeners are managed by the grid OS user.
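The user-creation commands are not shown in the original document. A minimal sketch, run as root on both nodes, might look like the following; creating the "oracle" user here and its exact group memberships are assumptions for illustration:

# groupadd oinstall
# groupadd asmadmin
# groupadd asmdba
# groupadd dba
# useradd -g oinstall -G asmadmin,asmdba,dba grid
# useradd -g oinstall -G dba,asmdba oracle
# passwd grid
# passwd oracle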


Validation

The Swingbench utility was used to generate an OLTP workload with a total of 800 users (400 users per node). The testing focused on functional validation and configuration details, as well as on highlighting the space savings capability.
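The exact Swingbench invocation is not given in the original document. A hypothetical command line for the character-mode driver is shown below; the config file name, connect string, user count, and run time are placeholders, with the SCAN and service name taken from the init.ora parameters in the Appendix:

$ ./charbench -c ../configs/SOE_Server_Side_V2.xml \
    -cs //tegilecluster-scan.ucs.cisco.com:1521/tegiledb.ucs.cisco.com \
    -uc 400 -rt 24:00 -v users,tpm,tps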

It is also important to highlight that the performance numbers remained consistent over more than 24 hours of continuous running; arrays built with lower-quality SSDs often suffer performance degradation over time. Approximately 5,000 TPS (transactions per second) were obtained from the Tegile array used in this testing. Tegile offers a variety of array models, some of which have significantly higher performance capabilities.

The following graphics are taken from the Oracle RAC AWR report. They show that the solution sustained 4,796 transactions per second for over 24 hours.
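As a brief aside (not part of the original test description), an AWR report such as the one referenced here is typically generated with the standard script shipped with the database, for example:

$ sqlplus / as sysdba
SQL> @?/rdbms/admin/awrrpt.sql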

The graphic below showcases the savings from the Tegile inline compression.

Figure 40: Savings from Tegile Compression

The total original data set size was approximately 2.8 TB; after MASS/IntelliFlash inline compression, the actual storage usage was reduced to around 1.6 TB, a storage saving of approximately 43.8%.


Conclusion

Cisco UCS paired with Tegile storage arrays creates a high-performing, cost-effective platform for Oracle RAC deployments. The two highly differentiated architectures are well aligned to deliver value for Oracle deployments today and as the technology evolves.

Critical Business Driver: Performance and Capacity Optimization
  UCS Enabling Technology: Varying blade models with CPU and memory options; VIC virtual interfaces and 10G integrated fabric
  Tegile Enabling Technology: Hybrid and all-flash offerings; enterprise SSDs; high-capacity HDDs

Critical Business Driver: Data Protection
  UCS Enabling Technology: Service Profiles (stateless computing); fully redundant HA fabrics
  Tegile Enabling Technology: Snapshots with no overhead; replication; data block checksums

Critical Business Driver: Time to Market for Oracle-Based Applications
  UCS Enabling Technology: Service Profile templates; export and import of XML schema
  Tegile Enabling Technology: Read/write clones without performance or space overhead

Critical Business Driver: Consolidation
  UCS Enabling Technology: Varying workloads on the same fabric using instrumented QoS and virtual interfaces; multiprotocol support via different port options
  Tegile Enabling Technology: Critical files stored on all-flash pools; multiprotocol FC, iSCSI, and NFS (DNFS)

Critical Business Driver: Reduce Infrastructure Costs
  UCS Enabling Technology: Stateless computing allows fewer spare systems; converged 10G fabric with FCoE reduces infrastructure
  Tegile Enabling Technology: Thin provisioning; inline compression with 50% space savings

Acknowledgements

• Ramakrishna Nishtala, Cisco Systems
• Tushar Patel, Cisco Systems
• TJ Bamrah, Cisco Systems
• Wen Yang, Tegile Systems
• Rajiev Rajavasireddy, Tegile Systems
• Chris Naddeo, Tegile Systems

Appendix

A few init.asm parameters:

*.asm_diskgroups='OCRVOTEDG','OLTPDG','REDOCTLDG'
*.asm_diskstring='/dev/oracleasm/disks'
*.asm_power_limit=1
*.diagnostic_dest='/oracle/admin/GRID'
*.instance_type='asm'


*.large_pool_size=12M
*.memory_max_target=1025M
*.memory_target=1025M

Key init.ora Parameters

*.db_cache_size=110G
*.oracle_base='/oracle/admin/tegiledb' #ORACLE_BASE set from environment
*.pga_aggregate_target=8G
*.sga_target=125G
*.shared_pool_size=12G
*._ash_size=256000000
*.audit_file_dest='/oracle/admin/tegiledb/admin/tegiledb/adump'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+REDOCTLDG/tegiledb/control01.ctl','+REDOCTLDG/tegiledb/control02.ctl'
*.db_block_size=8192
*.db_domain='ucs.cisco.com'
*.db_name='tegiledb'
*.db_writer_processes=8
*.diagnostic_dest='/oracle/admin/tegiledb'
*.filesystemio_options='SETALL'
tegiledb2.instance_number=2
tegiledb1.instance_number=1
*.log_buffer=72089600
*.open_cursors=300
*.parallel_degree_limit='32'
*.parallel_min_servers=8
*.processes=3000
*.remote_listener='tegilecluster-scan.ucs.cisco.com:1521'
*.remote_login_passwordfile='exclusive'
*.sga_max_size=125G
tegiledb2.thread=2
tegiledb1.thread=1
tegiledb1.undo_tablespace='UNDOTBS1'
tegiledb2.undo_tablespace='UNDOTBS2'
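Not from the original document, but a quick way to confirm that these parameters took effect on a running instance is the standard SQL*Plus SHOW PARAMETER command, for example:

$ sqlplus / as sysdba
SQL> show parameter sga_target
SQL> show parameter db_cache_size
SQL> show parameter filesystemio_options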

Sample entries from the udev configuration file /etc/udev/rules.d/99-oracle_rac_asm.rules:

ENV{DM_NAME}=="OLTP1p1", OWNER="oracle", GROUP="dba",MODE="660", NAME+="oracleasm/disks/OLTP1" ENV{DM_NAME}=="OLTP2p1", OWNER="oracle", GROUP="dba",MODE="660", NAME+="oracleasm/disks/OLTP2"