5 Install Grid


Author – A.Kishore, http://www.appsdba.info

Install Oracle 11g R2 Grid Infrastructure

High-Level Steps

> Install Openfiler using VMware
> Install Grid Infrastructure on RACERP1 and RACERP2
> Install Oracle 11gR2 software on RACERP1 and RACERP2

Download the software from the site below:

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/112010-linuxsoft-085393.html


On Node 1

/etc/init.d/oracleasm listdisks
VOL1
VOL2

On Node 2

/etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]

/etc/init.d/oracleasm listdisks
VOL1
VOL2


1. Install Oracle 11gR2 Grid Infrastructure (both nodes should be ready)

-- clean the old installation (on both nodes)

rm -rf /etc/oracle
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
cd /d01/oracle/app
rm -rf oracle
rm -rf oraInventory
rm -rf /d01/oracle/app/*.*
rm -rf /d01/oracle/app/11.2.0/grid/*.*

-- On one of the nodes – using only one disk group

/etc/init.d/oracleasm deletedisk VOL1 /dev/sdc1
/etc/init.d/oracleasm deletedisk VOL2 /dev/sdd1

/etc/init.d/oracleasm createdisk VOL1 /dev/sdc1
/etc/init.d/oracleasm createdisk VOL2 /dev/sdd1

/etc/init.d/oracleasm listdisks


Run cluvfy to verify that all the prerequisites are met:

su - grid
cd $SOFTWARE_LOCATION/11gR2/grid

sh runcluvfy.sh stage -pre crsinst -n racerp1,racerp2 -verbose


cd $SOFTWARE_LOCATION/11gR2/grid
./runInstaller


Make the entry in /etc/hosts:

192.168.1.187   racnode-cluster-scan
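For context, a cluster /etc/hosts typically carries entries for each node's public, private, and virtual IPs alongside the SCAN. The sketch below is hedged: every address except the SCAN line is a hypothetical placeholder, so substitute your own network layout.

```
# Sketch of a cluster /etc/hosts (all addresses except the SCAN are hypothetical)
# Public
192.168.1.151   racerp1
192.168.1.152   racerp2
# Virtual IPs (VIPs)
192.168.1.161   racerp1-vip
192.168.1.162   racerp2-vip
# Private interconnect
192.168.2.151   racerp1-priv
192.168.2.152   racerp2-priv
# SCAN (from this document)
192.168.1.187   racnode-cluster-scan
```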


Click setup

Click TEST


Run root.sh on node 1 first, then on node 2:

[root@racerp1 tmp]# /d01/oracle/app/11.2.0/grid/root.sh

[root@racerp2 ~]# /d01/oracle/app/11.2.0/grid/root.sh


Click OK


INFO: Checking Single Client Access Name (SCAN)...
INFO: Checking name resolution setup for "racnode-cluster-scan"...
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 216.24.138.153) failed
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "racnode-cluster-scan" (IP address: 192.168.1.187) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "racnode-cluster-scan"
INFO: Verification of SCAN VIP and Listener setup failed

Provided this is the only error reported by the CVU, it would be safe to ignore this check and continue by clicking the [Next] button in OUI and move forward with the Oracle grid infrastructure installation. This is documented in Doc ID: 887471.1 on the My Oracle Support web site.


If, on the other hand, you want the CVU to complete successfully while still defining the SCAN only in the hosts file, simply modify the nslookup utility as root on both Oracle RAC nodes as follows.

First, rename the original nslookup binary to nslookup.original on both Oracle RAC nodes:

[root@racerp1 tmp]# mv /usr/bin/nslookup /usr/bin/nslookup.original

Next, create a new shell script named /usr/bin/nslookup as shown below while replacing 24.154.1.34 with your primary DNS, racnode-cluster-scan with your SCAN host name, and 192.168.1.187 with your SCAN IP address:

#!/bin/bash

HOSTNAME=${1}

if [[ $HOSTNAME = "racnode-cluster-scan" ]]; then
    echo "Server: 24.154.1.34"
    echo "Address: 24.154.1.34#53"
    echo "Non-authoritative answer:"
    echo "Name: racnode-cluster-scan"
    echo "Address: 192.168.1.187"
else
    /usr/bin/nslookup.original $HOSTNAME
fi

Finally, make the new nslookup shell script executable:

[root@racnode1 ~]# chmod 755 /usr/bin/nslookup
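Before replacing /usr/bin/nslookup on the nodes, the wrapper's branching can be sanity-checked as a plain shell function. This is only a sketch: fake_nslookup and its "delegating" message are illustrative stand-ins, and the real wrapper runs nslookup.original instead of echoing.

```shell
#!/bin/bash
# Simulate the wrapper's two branches locally. fake_nslookup is a
# hypothetical stand-in for the /usr/bin/nslookup wrapper script.
SCAN_NAME="racnode-cluster-scan"
SCAN_IP="192.168.1.187"

fake_nslookup() {
  local host="$1"
  if [[ "$host" == "$SCAN_NAME" ]]; then
    # SCAN branch: echo a canned answer, as the wrapper does
    echo "Name:   $SCAN_NAME"
    echo "Address: $SCAN_IP"
  else
    # non-SCAN branch: the real wrapper runs /usr/bin/nslookup.original here
    echo "delegating $host to nslookup.original"
  fi
}

fake_nslookup racnode-cluster-scan
fake_nslookup racerp1
```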

Remember to perform these actions on both Oracle RAC nodes.

The new nslookup shell script simply echoes back your SCAN IP address whenever the CVU calls nslookup with your SCAN host name; otherwise, it calls the original nslookup binary.

The CVU will now pass during the Oracle grid infrastructure installation when it attempts to verify your SCAN:

[grid@racerp1 ~]$ cluvfy comp scan -verbose


[grid@racerp2 ~]$ cluvfy comp scan -verbose

Now retry


cat grid.env
export ORACLE_HOME=/d01/oracle/app/11.2.0/grid
export PATH=$PATH:$ORACLE_HOME/bin

. ./grid.env
crs_stat -t

Post-Install Actions

By default, the Global Services Daemon (GSD) is not started on the cluster. To start GSD, change directory to the <CRS_HOME> and issue the following commands:

srvctl enable nodeapps -g
srvctl start nodeapps


ora.oc4j is offline; that is fine.

Successful Oracle Clusterware operation can also be verified using the following command:

crsctl check crs

olsnodes -n


srvctl status asm -a

ocrcheck

crsctl query css votedisk


crsctl check cluster -all


References:

Thanks to Jeff Hunter; without his help I couldn’t have done this assignment.

http://www.rachelp.nl/index_kb.php?menu=articles&actie=show&id=61

11gR2 Grid: root.sh fails to start the clusterware on the second node (Doc ID 981357.1)

1. Short-term: disable the firewall on all nodes. On Linux, this can be done by running the following commands as the root user on each node of the cluster:

service iptables stop
service ip6tables stop

To permanently disable the firewall, use:

chkconfig iptables off
chkconfig ip6tables off

2. Long-term: exclude all traffic on the private network from the firewall configuration.
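One way to implement the long-term option is to accept all interconnect traffic in the firewall rules rather than disabling the firewall entirely. Below is a sketch of lines that could be added to /etc/sysconfig/iptables; the eth1 interface name and the 192.168.2.0/24 subnet are assumptions, so substitute your private interconnect interface and subnet.

```
# Hypothetical additions to /etc/sysconfig/iptables (placed before the final REJECT rule);
# assumes eth1 carries the private interconnect on 192.168.2.0/24
-A INPUT -i eth1 -j ACCEPT
-A INPUT -s 192.168.2.0/24 -j ACCEPT
```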

http://coskan.wordpress.com/

http://coskan.wordpress.com/2009/12/07/root-sh-failed-after-asm-disk-creation-for-11gr2-grid-infrastructure/

http://oracle-base.com/forums/viewtopic.php?f=1&t=11307&start=0

How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation [ID 942166.1]

Issues and solutions

cd /home/oracle/app/11.2.0/grid/log/linux2

cat alertlinux2.log

[ctssd(27935)]CRS-2409:The clock on host linux2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.
2010-07-24 05:26:20.471


[ctssd(27935)]CRS-2412:The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /home/oracle/app/11.2.0/grid/log/linux2/ctssd/octssd.log.
2010-07-24 05:26:20.471
[ctssd(27935)]CRS-2409:The clock on host linux2 is not synchronous with the mean cluster time. No action has been taken as the Cluster Time Synchronization Service is running in observer mode.

http://www.oracle.com/technology/pub/articles/hunter-rac11gr2-iscsi.html
http://arjudba.blogspot.com/2010/03/in-11gr2-grid-rootsh-fails-with-crs.html

How to Proceed from Failed 11gR2 Grid Infrastructure (CRS) Installation [ID 942166.1]

-- If root.sh has failed, run the below commands to clean up

-- first node
cd /home/oracle/app/11.2.0/grid/crs/install
./rootcrs.pl -verbose -deconfig -force

-- second node
cd /home/oracle/app/11.2.0/grid/crs/install
./rootcrs.pl -verbose -deconfig -force -lastnode

To add the VIP manually:

[root@linux2 bin]# ./srvctl add vip -n linux2 -A 192.168.0.201/255.255.255.0/eth0 -k 1
[root@linux2 bin]# ./srvctl config vip -n linux2

http://forums.oracle.com/forums/thread.jspa?messageID=4039293

rpm -q binutils-2.15.92.0.2 \
compat-libstdc++-33.2.3 \
elfutils-libelf-0.97 \
elfutils-libelf-devel-0.97 \
gcc-3.4.6 \
gcc-c++-3.4.6 \
glibc-2.3.4-2.41 \
glibc-common-2.3.4 \
glibc-devel-2.3.4 \
glibc-headers-2.3.4 \
libaio-devel-0.3.105 \
libaio-0.3.105 \


libgcc-3.4.6 \
libstdc++-3.4.6 \
libstdc++-devel-3.4.6 \
make-3.80 \
pdksh-5.2.14 \
sysstat-5.0.5 \
unixODBC-2.2.11 \
unixODBC-devel-2.2.11

http://forums.oracle.com/forums/thread.jspa?messageID=4345930

http://www.oracle.com/technology/pub/articles/wartak-rac-vm_3.html
http://wiki.oracle.com/page/11gR2+RAC+on+a+Mac+-+Part+6 – a nice one

Check Network Requirements: In this release there are two new network-related components – SCAN and GNS.

Single Client Access Name (SCAN) for the Cluster:

During a Typical installation, you are prompted to confirm the default Single Client Access Name (SCAN), which is used to connect to databases within the cluster irrespective of which nodes they are running on. By default, the name used as the SCAN is also the name of the cluster, and the default value is based on the local node name. If you change the SCAN from the default, the name you use must be globally unique throughout your enterprise.

The SCAN and cluster name must be at least one character long and no more than 15 characters in length, must be alphanumeric, and may contain hyphens (-). If you require a SCAN longer than 15 characters, select an Advanced installation.
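The naming rules above (at least one and at most 15 characters, alphanumeric, hyphens allowed) are easy to check mechanically. A minimal sketch, assuming a hypothetical helper function valid_scan_name and made-up example names; this is not an Oracle tool, just the stated rules expressed as a regular expression.

```shell
#!/bin/bash
# valid_scan_name encodes the documented rules: 1-15 characters,
# letters, digits, and hyphens only. The example names are hypothetical.
valid_scan_name() {
  [[ "$1" =~ ^[A-Za-z0-9-]{1,15}$ ]]
}

valid_scan_name "racerp-scan" && echo "racerp-scan: ok"
valid_scan_name "this-scan-name-is-too-long" || echo "too long: rejected"
valid_scan_name "bad_name" || echo "underscore: rejected"
```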

Configure the following addresses:

• A public IP address for each node
• A virtual IP address for each node
• A single client access name (SCAN) configured on the domain name server (DNS) or Grid Naming Service (GNS) for round-robin resolution to three addresses (recommended) or at least one address (check documentation)
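When the SCAN is served by DNS instead of the hosts file, round-robin resolution means three A records for the one name. A hedged BIND zone-file sketch follows; the .188 and .189 addresses are illustrative, since this document itself uses only the single 192.168.1.187 address.

```
; Hypothetical BIND zone fragment: three A records give round-robin SCAN resolution
racnode-cluster-scan    IN A    192.168.1.187
racnode-cluster-scan    IN A    192.168.1.188
racnode-cluster-scan    IN A    192.168.1.189
```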

Some more information on Single Client Access Name

Single Client Access Name (SCAN) for the Cluster


If you have ever been tasked with extending an Oracle RAC cluster by adding a new node (or shrinking one by removing a node), then you know the pain of going through the list of all clients and updating their SQL*Net or JDBC configuration to reflect the new or deleted node. To address this problem, Oracle 11g Release 2 introduced a new feature known as Single Client Access Name, or SCAN for short. SCAN provides a single host name for clients to access an Oracle Database running in a cluster; clients using SCAN do not need to change their TNS configuration if you add or remove nodes in the cluster. The SCAN resource and its associated IP address(es) provide a stable name for clients to use for connections, independent of the nodes that make up the cluster.

You will be asked to provide the host name and up to three IP addresses to be used for the SCAN resource during the interview phase of the Oracle grid infrastructure installation. For high availability and scalability, Oracle recommends that you configure the SCAN name so that it resolves to three IP addresses; at a minimum, the SCAN must resolve to at least one address.
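The client-side payoff is a TNS entry that names only the SCAN host, so it never changes as nodes are added or removed. A sketch of a tnsnames.ora entry follows; the ERPDB alias and erpdb service name are hypothetical.

```
ERPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode-cluster-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = erpdb)
    )
  )
```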

The SCAN virtual IP name is similar to the names used for a node's virtual IP addresses, such as racnode1-vip. However, unlike a virtual IP, the SCAN is associated with the entire cluster, rather than an individual node, and can be associated with multiple IP addresses, not just one address. Note that SCAN addresses, virtual IP addresses, and public IP addresses must all be on the same subnet.

The SCAN should be configured so that it is resolvable either by using Grid Naming Service (GNS) within the cluster, or by using Domain Name Service (DNS) resolution.

In this article, I will configure the SCAN to resolve to only one, manually configured static IP address using the DNS method (without actually defining it in DNS).