Saturday 29 August 2015

Add Node to an Existing Cluster - Steps

1> Compare "/etc/sysctl.conf" between the new and existing environments.
2> Compare "/etc/security/limits.conf" between the new and existing environments.
3> Compare "/etc/pam.d/login" between the new and existing environments (see the comparison sketch after this list).
4> Create the necessary Oracle groups and users, with the same group and user IDs as on the existing nodes (see the sketch after this list).
5> Verify that the NTP service is enabled and running: service ntpd status ; chkconfig --list ntpd
6> Create the profile scripts that set the environment for the grid and oracle users. Compare the following files between the new and existing environments (a sample is sketched after this list):
"/home/oracle/.bash_profile" and "/home/grid/.bash_profile"
"/home/oracle/grid_env" and "/home/grid/grid_env"
"/home/oracle/db_env" and "/home/grid/db_env"
7> Check that the kernel release and architecture match the existing nodes: uname -rm
8> Install all necessary ASM libraries and packages on the new node (see the sketch after this list).
9> Ping all the IPs of the New_Node
ping -c 3 112-rac1
ping -c 3 112-rac1-priv
ping -c 3 112-rac1-vip
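
A minimal comparison sketch for steps 1-3 (Existing_Node and New_Node are the placeholder hostnames used throughout this post); run it from the New_Node:

for f in /etc/sysctl.conf /etc/security/limits.conf /etc/pam.d/login; do
  echo "== $f =="
  ssh Existing_Node cat $f | diff $f -
done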
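
A sketch for step 4; the numeric IDs below are illustrative and must match the output of "id oracle" and "id grid" on the existing nodes:

groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 asmadmin
groupadd -g 504 asmdba
useradd -u 501 -g oinstall -G asmadmin,asmdba grid
useradd -u 502 -g oinstall -G dba,asmdba oracle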
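
A sample of the step 6 environment files (the database home path and SIDs are assumptions; copy the real values from an existing node and use the next free instance number for the new node):

/home/oracle/db_env:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=orcl3
export PATH=$ORACLE_HOME/bin:$PATH

/home/grid/grid_env:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM3
export PATH=$ORACLE_HOME/bin:$PATH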
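
A package sketch for step 8, assuming a RHEL/OEL system using ASMLib; exact package names depend on the kernel version, so install the same set as on the existing nodes and answer the configure prompts with the same owner and group used there:

rpm -Uvh oracleasm-support-*.rpm oracleasmlib-*.rpm oracleasm-$(uname -r)-*.rpm
/usr/sbin/oracleasm configure -i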

10> ASM must be configured, and running "# oracleasm listdisks" on the New_Node should show all the disks (see the example below).
A few commands for reference: oracleasm init, oracleasm scandisks, oracleasm listdisks
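
For example, as root on the New_Node:

/usr/sbin/oracleasm init
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks

The listdisks output should return the same disk names as on the existing nodes.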

11> Configure secure shell for oracle user on all nodes

From $ORACLE_HOME/oui/bin on the existing node:
./runSSHSetup.sh -user oracle -hosts "Existing_Node New_Node" -advanced -exverify

12> Verify New_Node (HWOS)

From the grid home on the existing node:
$GRID_HOME/bin/cluvfy stage -post hwos -n New_Node > /u02/hwos.log

13> Verify Peer (REFNODE)

From the grid home on the existing node:
$GRID_HOME/bin/cluvfy comp peer -refnode Existing_Node -n New_Node -orainv oinstall -osdba dba -verbose > /u02/comppeer.log

14> Verify New_Node (New_Node PRE)

From the grid home on the existing node:
$GRID_HOME/bin/cluvfy stage -pre nodeadd -n New_Node -fixup -verbose > /u02/fixup.log

15> Extend Clusterware

Run “addNode.sh” from $GRID_HOME/oui/bin as the oracle user:

a) [oracle@Existing_Node bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={New_Node}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={New_Node-vip}"
As the root user on New_Node:

b) [root@New_Node oraInventory]# ./orainstRoot.sh
As the root user on New_Node:

c) [root@New_Node oraInventory]# cd /u01/app/11.2.0/grid/
[root@New_Node grid]# ./root.sh

If successful, the clusterware daemons, the listener, the ASM instance, etc. should now be running on the new node.

d) [oracle@New_Node ~]$ crsctl check crs
e) [oracle@New_Node ~]$ crs_stat -t -v
f) [oracle@New_Node ~]$ olsnodes -n
Existing_Node1  1
Existing_Node2  2
New_Node        3

g)[oracle@New_Node ~]$ srvctl status asm -a
ASM is running on Existing_Node1,Existing_Node2,New_Node
ASM is enabled.

h) [oracle@New_Node ~]$ ocrcheck
i) [oracle@New_Node ~]$ crsctl query css votedisk

16) Verify New_Node (New_Node POST)

[oracle@Existing_Node u02]$ $GRID_HOME/bin/cluvfy stage -post nodeadd -n New_Node -verbose > /u02/clusterpost.log




17) Create the Oracle Instance


# Run DBCA to add the instance, from RAC1
cd $ORACLE_HOME/bin
./dbca


Select “Oracle Real Application Clusters database”, then “Instance Management”, then “Add an instance”.
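
Alternatively, DBCA can add the instance in silent mode; a sketch with placeholder values (orcl as the global database name and orcl3 as the new instance name are assumptions, adjust them to your database):

$ORACLE_HOME/bin/dbca -silent -addInstance -nodeList New_Node -gdbName orcl -instanceName orcl3 -sysDBAUserName sys -sysDBAPassword <sys_password>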
