1> Capture Resource Definitions
Before doing anything, we should capture resource definitions from the current CRS resources. This is an optional step, but it will simplify configuration later.
A single resource definition can be captured with the command $ORA_CRS_HOME/bin/crs_stat -p {resource}. Here is a small shell script that captures the definition of every resource and saves each one into a .cap file. As you will see later, these files can be used to easily recreate resources:
# save each resource definition into its own .cap file
mkdir -p /opt/oracle/resources
for res in `$ORA_CRS_HOME/bin/crs_stat -p | grep "^NAME=" | cut -d = -f 2` ; do
    $ORA_CRS_HOME/bin/crs_stat -p $res > /opt/oracle/resources/$res.cap
done
2> Stop Clusterware
Now you can stop Oracle Clusterware on all nodes using $ORA_CRS_HOME/bin/crsctl stop crs, and then change the hostnames. Note that this will stop all databases, listeners, and other resources registered within CRS, so this is when the outage starts.
3> Rename Hosts
Ask your SA to change the hostnames.
Please note the following important points with respect to changing hostname.
1> Make sure that the aliases in /etc/hosts are amended.
2> Don't forget to change the aliases for the VIPs and private IPs as well. This is not strictly required, but you are better off following the standard naming convention (-priv and -vip for the interconnect and virtual IP respectively) unless you have a really good reason not to. Note that at this stage you should also be able to change IP addresses. I didn't try it, but it should work.
3> Make sure the DNS configuration is also changed by your SA, if your applications use DNS to resolve hostnames.
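For illustration, /etc/hosts entries following that convention might look like this (the IP addresses and the mch10/mch11 names are made-up examples):

```
# public hostnames
192.168.1.10    mch10
192.168.1.11    mch11
# private interconnect
10.0.0.10       mch10-priv
10.0.0.11       mch11-priv
# virtual IPs (must be on the public subnet)
192.168.1.110   mch10-vip
192.168.1.111   mch11-vip
```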
4> Modify $ORA_CRS_HOME/install/rootconfig
$ORA_CRS_HOME/install/rootconfig is called as part of the root.sh script run after Oracle Clusterware installation. We have to modify it so that it uses different host names.
Generally, you would simply change every occurrence of the old hostnames to the new hostnames. If you want to do that in vi, use :%s/old_node/new_node/g. Be careful not to change unrelated parts of the script that happen to match your old hostname. The variables that should be changed are:
CRS_HOST_NAME_LIST
CRS_NODE_NAME_LIST
CRS_PRIVATE_NAME_LIST
CRS_NODELIST
CRS_NODEVIPS
The last one, CRS_NODEVIPS, also needs modification if you change IPs.
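The same substitution can be done non-interactively with sed. Here is a minimal sketch on a stand-in file; the variable contents and the vs10a/mch10 names are purely illustrative, not the exact rootconfig format, and you should always keep a backup of the real file before editing it:

```shell
# demo of the hostname substitution on a miniature rootconfig copy
# (in real life you would back up and edit $ORA_CRS_HOME/install/rootconfig)
workdir=/tmp/rootconfig-demo
mkdir -p "$workdir"
cat > "$workdir/rootconfig" <<'EOF'
CRS_HOST_NAME_LIST=vs10a,1,vs11a,2
CRS_NODE_NAME_LIST=vs10a,1,vs11a,2
CRS_PRIVATE_NAME_LIST=vs10a-priv,1,vs11a-priv,2
CRS_NODEVIPS='vs10a/vs10a-vip/255.255.255.0/eth0,vs11a/vs11a-vip/255.255.255.0/eth0'
EOF
cp "$workdir/rootconfig" "$workdir/rootconfig.orig"   # always keep a backup
sed -i 's/vs10a/mch10/g; s/vs11a/mch11/g' "$workdir/rootconfig"
# review the result before running anything
grep 'CRS_HOST_NAME_LIST' "$workdir/rootconfig"
```

After the substitution, eyeball every changed line; an accidental match against an unrelated string is easy to miss.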
At this stage, you can also change your OCR and voting disks locations. The following lines should be changed:
CRS_OCR_LOCATIONS={OCR path},{OCR mirror path}
CRS_VOTING_DISKS={voting disk1 path},{voting disk2 path},{voting disk3 path}
You can also change your cluster name via the variable CRS_CLUSTER_NAME.
5> Cleanup OCR and Voting Disks
You should clear the OCR and voting disks, otherwise the script will refuse to format them. This can be done with dd. In the example below I have a mirrored OCR and three voting disks:
dd if=/dev/zero of={OCR1 path} bs=1024k
dd if=/dev/zero of={OCR2 path} bs=1024k
dd if=/dev/zero of={voting1 path} bs=1024k
dd if=/dev/zero of={voting2 path} bs=1024k
dd if=/dev/zero of={voting3 path} bs=1024k
6> “Break” Clusterware Configuration
rootconfig has some protection: it checks whether Clusterware has already been configured and, if it has, exits without doing any harm. One way to "break" the configuration and make this script run a second time is to delete the file /etc/oracle/ocr.loc. (Note that this is a Linux-specific location; other Unix variants might have a different path. On HP-UX, for example, it's /var/opt/oracle/ocr.loc if I recall correctly.)
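Rather than deleting the file outright, I would move it aside so the change is easy to back out. A minimal sketch (the demo below operates on a stand-in file in /tmp; on a real Linux node you would do this to /etc/oracle/ocr.loc as root):

```shell
# demo on a stand-in file; on a real node the file is /etc/oracle/ocr.loc
mkdir -p /tmp/ocrloc-demo
touch /tmp/ocrloc-demo/ocr.loc
# renaming is safer than deleting: easy to put back if something goes wrong
mv /tmp/ocrloc-demo/ocr.loc /tmp/ocrloc-demo/ocr.loc.renamed
```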
Run $ORA_CRS_HOME/install/rootconfig
If everything has gone alright, you should be able to run $ORA_CRS_HOME/install/rootconfig as the root user without any issues. If there are problems, follow the standard CRS troubleshooting procedure — checking /var/log/messages and $ORA_CRS_HOME/log/{nodename} et cetera.
Note that this should be done on every node, one by one, sequentially. On the last node of the cluster, the script will try to configure the VIPs, and there is a known bug here if you use a private-range IP for the VIP. This can easily be fixed by running $ORA_CRS_HOME/bin/vipca manually in graphical mode (i.e. you will need $DISPLAY configured correctly).
Verify Clusterware Configuration and Status
This is a simple check to make sure that all nodes are up and have VIP components configured correctly:
[root@mch10 bin]# $ORA_CRS_HOME/bin/crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.mch10.gsd application ONLINE ONLINE mch10
ora.mch10.ons application ONLINE ONLINE mch10
ora.mch10.vip application ONLINE ONLINE mch10
ora.mch11.gsd application ONLINE ONLINE mch11
ora.mch11.ons application ONLINE ONLINE mch11
ora.mch11.vip application ONLINE ONLINE mch11
7> Adding Listener Resources to CRS
There are two ways to do this: you can either use netca to configure the listener from scratch (you might need to clean it up from listener.ora first), or you can change the configuration manually and register it with CRS from the command line. I'll show how to do it manually, which is obviously the preferred way in real environments. ;-)
First of all, we will need to change the $ORACLE_HOME/network/admin/listener.ora file, and you will probably want to change tnsnames.ora at the same time. You need to replace the old node aliases with the new ones, and change the IPs if IPs are used instead of aliases and you changed them during the Clusterware reconfiguration above.
Note that depending on how your LOCAL_LISTENER and REMOTE_LISTENER init.ora parameters are set, you might need to change them: if they reference connect descriptors from tnsnames.ora, then only the latter needs to be changed, but if they contain full connection descriptors, they should also be modified.
You should also change listener names to reflect new hostnames. Usually, listeners are named as LISTENER_{hostname}, and you should keep this convention again unless you have a very good reason not to. Do that on both nodes if you don’t have a shared ORACLE_HOME.
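The edit can be sketched with sed as well. The demo below works on a miniature stand-in copy; the vs10a/mch10 names are this article's examples, and in real life you would back up and edit $ORACLE_HOME/network/admin/listener.ora and tnsnames.ora on each node:

```shell
# demo of the alias substitution on a miniature listener.ora copy
netdir=/tmp/listener-demo
mkdir -p "$netdir"
cat > "$netdir/listener.ora" <<'EOF'
LISTENER_VS10A =
  (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = vs10a-vip)(PORT = 1521)))
EOF
cp "$netdir/listener.ora" "$netdir/listener.ora.orig"   # keep a backup
# replace the old hostname in both lower and upper case,
# which also renames the listener to LISTENER_MCH10 per the usual convention
sed -i 's/vs10a/mch10/g; s/VS10A/MCH10/g' "$netdir/listener.ora"
```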
Now it's time to get back to the .cap files with the CRS resource definitions we captured at the beginning. The files we are interested in are named in the format ora.{hostname}.LISTENER_{HOSTNAME}.lsnr.cap. In my case, one of them is ora.vs10a.LISTENER_VS10A.lsnr.cap (my old hostname was vs10a). If you changed the listener names above, you need to amend the NAME line accordingly (NAME=ora.mch10.LISTENER_VS10.lsnr) and rename the file to match the new resource name, following the same naming convention.
Your VIP name has probably changed, so this line should be modified as well: REQUIRED_RESOURCES=ora.mch10.vip. Finally, the hosting member changes: HOSTING_MEMBERS=mch10. Check the whole file carefully: you should replace the old hostname with the new one in both lower and upper case.
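Putting those edits together as a sketch (the three attributes shown are illustrative; real .cap files contain many more lines, so review the whole file by hand):

```shell
# demo of fixing up a captured listener .cap file for the new hostname
capdir=/tmp/capfile-demo
mkdir -p "$capdir"
cat > "$capdir/ora.vs10a.LISTENER_VS10A.lsnr.cap" <<'EOF'
NAME=ora.vs10a.LISTENER_VS10A.lsnr
REQUIRED_RESOURCES=ora.vs10a.vip
HOSTING_MEMBERS=vs10a
EOF
# replace the old hostname in both lower and upper case
# (the new listener name LISTENER_VS10 follows the example used in this article)
sed -i 's/vs10a/mch10/g; s/VS10A/VS10/g' "$capdir/ora.vs10a.LISTENER_VS10A.lsnr.cap"
# the file name must match the new resource name plus a ".cap" extension
mv "$capdir/ora.vs10a.LISTENER_VS10A.lsnr.cap" \
   "$capdir/ora.mch10.LISTENER_VS10.lsnr.cap"
```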
Now it's time to register the resources; the crs_register command does just that. It takes the resource name to register and the directory where the .cap file is located, and the file must be named exactly like the resource plus a ".cap" extension. Both nodes' listeners can be registered from a single node, as long as the contents of each .cap file have been modified appropriately. Assuming I have the files ora.mch10.LISTENER_VS10.lsnr.cap and ora.mch11.LISTENER_VS11.lsnr.cap in the directory /opt/oracle/A/resources, I run:
$ORA_CRS_HOME/bin/crs_register ora.mch10.LISTENER_VS10.lsnr -dir /opt/oracle/A/resources
$ORA_CRS_HOME/bin/crs_register ora.mch11.LISTENER_VS11.lsnr -dir /opt/oracle/A/resources
Now the output from crs_stat -t should be:
Name Type Target State Host
------------------------------------------------------------
ora....10.lsnr application OFFLINE OFFLINE
ora.mch10.gsd application ONLINE ONLINE mch10
ora.mch10.ons application ONLINE ONLINE mch10
ora.mch10.vip application ONLINE ONLINE mch10
ora....11.lsnr application OFFLINE OFFLINE
ora.mch11.gsd application ONLINE ONLINE mch11
ora.mch11.ons application ONLINE ONLINE mch11
ora.mch11.vip application ONLINE ONLINE mch11
It’s now time to start the listeners:
$ORA_CRS_HOME/bin/srvctl start nodeapps -n mch10
$ORA_CRS_HOME/bin/srvctl start nodeapps -n mch11
crs_stat -t should show the listeners online:
Name Type Target State Host
------------------------------------------------------------
ora....10.lsnr application ONLINE ONLINE mch10
ora.mch10.gsd application ONLINE ONLINE mch10
ora.mch10.ons application ONLINE ONLINE mch10
ora.mch10.vip application ONLINE ONLINE mch10
ora....11.lsnr application ONLINE ONLINE mch11
ora.mch11.gsd application ONLINE ONLINE mch11
ora.mch11.ons application ONLINE ONLINE mch11
ora.mch11.vip application ONLINE ONLINE mch11
8> Adding ASM Instances to CRS
This step is optional; if you don't use ASM, skip it.
Unfortunately, we can't simply use the .cap files to register ASM resources. There are more pieces required, and the only way I could find to register ASM instances is to use srvctl, which is actually the better-supported option anyway. This is simple:
$ORACLE_HOME/bin/srvctl add asm -n mch10 -i ASM1 -o $ORACLE_HOME
$ORACLE_HOME/bin/srvctl add asm -n mch11 -i ASM1 -o $ORACLE_HOME
$ORACLE_HOME/bin/srvctl start asm -n mch10
$ORACLE_HOME/bin/srvctl start asm -n mch11
There is a catch: sometimes I had to prefix the ASM instance name with a "+" (i.e. making it -i +ASM1), and sometimes no plus sign was required.
crs_stat -t should now show:
Name Type Target State Host
------------------------------------------------------------
ora....SM1.asm application ONLINE ONLINE mch10
ora....10.lsnr application ONLINE ONLINE mch10
ora.mch10.gsd application ONLINE ONLINE mch10
ora.mch10.ons application ONLINE ONLINE mch10
ora.mch10.vip application ONLINE ONLINE mch10
ora....SM2.asm application ONLINE ONLINE mch11
ora....11.lsnr application ONLINE ONLINE mch11
ora.mch11.gsd application ONLINE ONLINE mch11
ora.mch11.ons application ONLINE ONLINE mch11
ora.mch11.vip application ONLINE ONLINE mch11
9> Register Databases
For each database, you need to register a database resource; then, for every instance, an instance resource. So for database A on my two-node cluster, I use:
$ORACLE_HOME/bin/srvctl add database -d A -o $ORACLE_HOME
$ORACLE_HOME/bin/srvctl add instance -d A -i A1 -n mch10
$ORACLE_HOME/bin/srvctl add instance -d A -i A2 -n mch11
$ORACLE_HOME/bin/srvctl start database -d A
10> Finally, crs_stat -t should show all resources online:
Name Type Target State Host
------------------------------------------------------------
ora.A.A1.inst application ONLINE ONLINE mch10
ora.A.A2.inst application ONLINE ONLINE mch11
ora.A.db application ONLINE ONLINE mch10
ora....SM1.asm application ONLINE ONLINE mch10
ora....10.lsnr application ONLINE ONLINE mch10
ora.mch10.gsd application ONLINE ONLINE mch10
ora.mch10.ons application ONLINE ONLINE mch10
ora.mch10.vip application ONLINE ONLINE mch10
ora....SM2.asm application ONLINE ONLINE mch11
ora....11.lsnr application ONLINE ONLINE mch11
ora.mch11.gsd application ONLINE ONLINE mch11
ora.mch11.ons application ONLINE ONLINE mch11
ora.mch11.vip application ONLINE ONLINE mch11
11> Other Resources
If you had other resources like services, user VIPs, or user-defined resources, you will probably be fine using the crs_register command to get them back into CRS. I didn’t try it, but it should work.
Final Check
To make sure that everything is working, you should at least reboot every node and check that everything comes back up.
I don't know whether this operation is considered supported. The only slippery bit is modifying the $ORA_CRS_HOME/install/rootconfig file, because it's normally created by the Universal Installer. Another tricky spot is the "unusual" registration of the listeners. Otherwise, all the commands are pretty much standard stuff, I think. Good luck!