Oracle 10.2.0.4 RAC Setup Step By Step
1. We need to configure the following items on both nodes (the node1 and node2 servers).
B. System Services
If the 'telnet' service does not exist, you need to set it up. We will install the 'telnet' RPM package when we install the other Linux RPM packages required by the Oracle installation.
2. When both servers are configured, we need to synchronize the date and time between the two servers and the time server. Because they are new servers, we may also need to adjust their hardware clocks and system time.
A. Run 'date -s hh:mm:ss' on both servers to set the system time to the current time.
B. Run 'hwclock -r ; date' to check the difference between the hardware clock and the OS system time, then sync them by running 'hwclock --systohc'. Finally, run 'hwclock -r ; date' to check again, and reboot both servers.
After rebooting both servers, double-check the hardware clock and OS system time.
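The time-adjustment steps above can be sketched as the following commands, run as root on both servers (the example time is a placeholder):

```shell
# Set the OS system time manually (replace with the actual current time)
date -s 14:30:00

# Compare the hardware clock with the OS system time
hwclock -r ; date

# Copy the OS system time into the hardware clock
hwclock --systohc

# Verify again, then reboot
hwclock -r ; date
```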
3. We need to check the public and private network cards that Oracle RAC requires on both servers.
In this setup, 10.13.13.205/206 are the public IPs and 192.168.13.205/206 are the private IPs.
4. Next, we will configure the '/etc/hosts' file on both servers.
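A typical '/etc/hosts' layout for this two-node cluster is sketched below. The hostnames match the shell prompts seen later in this document; the VIP addresses are assumptions for illustration and must match your own environment:

```shell
# /etc/hosts on both servers (sketch)
127.0.0.1        localhost.localdomain localhost
# Public
10.13.13.205     mxb2bcoredb01
10.13.13.206     mxb2bcoredb02
# Private (interconnect)
192.168.13.205   mxb2bcoredb01-priv
192.168.13.206   mxb2bcoredb02-priv
# Virtual IPs (example addresses - adjust to your environment)
10.13.13.207     mxb2bcoredb01-vip
10.13.13.208     mxb2bcoredb02-vip
```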
5. Configure the Linux kernel parameters in the '/etc/sysctl.conf' file on both servers.
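The values below are the kernel parameters commonly recommended for Oracle 10gR2 on Linux; treat them as an illustrative sketch (shmmax in particular should be sized to your RAM):

```shell
# Append to /etc/sysctl.conf on both servers, then apply with: sysctl -p
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
```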
6. Create the Linux groups and user on both servers for the Oracle and Clusterware setup, and configure the oracle user's resource limits and profile on both servers.
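The group and user creation is typically done as follows. The UID/GID values are illustrative assumptions; what matters is that they are identical on both nodes:

```shell
# Run as root on both servers; keep the same UID/GID on both nodes
groupadd -g 500 oinstall
groupadd -g 501 dba
useradd -u 500 -g oinstall -G dba -d /home/oracle oracle
passwd oracle
```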
On both servers, edit '/etc/security/limits.conf' and add the four rows below to limit the oracle user's resources.
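The four rows themselves were shown as a screenshot in the original post; the values commonly used for Oracle 10gR2 are:

```shell
# /etc/security/limits.conf additions (typical 10gR2 values)
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
```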
On both servers, edit '/etc/pam.d/login' and add the line below.
# vi /etc/pam.d/login
session required pam_limits.so
On both servers, we need to edit the '.bash_profile' file in the home directory of the 'oracle' user.
The .bash_profile script looks like:
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs
BASH_ENV=$HOME/.bashrc
PATH=$PATH:$HOME/bin
export BASH_ENV PATH

# Set Oracle Environment
ORACLE_HOME=/u01/product/oracle; export ORACLE_HOME
ORACLE_SID=wmb2bprd1; export ORACLE_SID
#NLS_LANG='traditional chinese_taiwan'.ZHT16BIG5; export NLS_LANG
ORA_CRS_HOME=/u01/product/crs; export ORA_CRS_HOME
EDITOR=/bin/vi; export EDITOR
alias ll='ls -l'
alias ls='ls --color'
7. We must check the following Linux packages, which Oracle RAC requires.
OS: Red Hat Enterprise Linux Server release 5.3 (Tikanga) x86_64,
Kernel: 2.6.18-128.el5 #1 SMP
glibc-2.5-24.x86_64.rpm (rpm -q glibc)
glibc-common-2.5-24.x86_64.rpm (rpm -q glibc-common)
glibc-devel-2.5-24.i386.rpm (rpm -q glibc-devel) (Need 32bit & 64bit rpm)
glibc-devel-2.5-24.x86_64.rpm (rpm -q glibc-devel)
libXp-1.0.0-8.1.el5.i386.rpm (rpm -q libXp) (Need 32bit & 64bit rpm)
libXp-1.0.0-8.1.el5.x86_64.rpm (rpm -q libXp)
binutils-2.17.50.0.6-6.el5.x86_64.rpm (rpm -q binutils)
compat-db-4.2.52-5.1.x86_64.rpm (rpm -q compat-db)
control-center-2.16.0-16.el5.x86_64.rpm (rpm -q control-center)
gcc-4.1.2-42.el5.x86_64.rpm (rpm -q gcc)
gcc-c++-4.1.2-42.el5.x86_64.rpm (rpm -q gcc-c++)
libstdc++-4.1.2-42.el5.x86_64.rpm (rpm -q libstdc++)
libstdc++-devel-4.1.2-42.el5.x86_64.rpm (rpm -q libstdc++-devel)
make-3.81-3.el5.x86_64.rpm (rpm -q make)
ksh-20080202-2.el5.x86_64.rpm (rpm -q ksh)
sysstat-7.0.2-1.el5.x86_64.rpm (rpm -q sysstat)
gnome-screensaver-2.16.1-8.el5.x86_64.rpm (rpm -q gnome-screensaver)
libaio-devel-0.3.106-3.2.x86_64.rpm (rpm -q libaio-devel)
libaio-0.3.106-3.2.x86_64.rpm (rpm -q libaio)
Reviewing the checks above, we need to install some Linux packages. We can get these packages from the Linux 5.3 CD, and we will create the '/u01/packages' directory to store them:
On both servers, install these Linux packages (including the vsftpd and telnet RPM packages):
On both servers, run 'setup' to enable the vsftpd and telnet services. If FTP or telnet does not work, you may need to reboot the server or restart these two services from the command line.
[root@mxb2bcoredb01 ~]# setup
8. Copy the Clusterware, Oracle, and OCFS2 software to both servers (/u01/packages).
9. Disable SELinux on both servers and reboot them so the change takes effect.
10. Create the '.ssh' directory in the oracle user's home directory on both servers.
11. Create an SSH connection environment (user equivalence) between the nodes.
12. On the first server (node1), generate the 'authorized_keys' file covering both node1 and node2.
Then, on node1, copy the 'authorized_keys' file to the same directory on node2, and run
'chmod 600 authorized_keys' on the node2 server.
Finally, we need to test 'ssh' between the node1 and node2 servers. If no password is required, the test is successful.
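Steps 10-12 can be sketched as the commands below, run as the oracle user. This is a common way to build SSH user equivalence for 10g RAC; the hostnames follow the naming used elsewhere in this document:

```shell
# On each node: generate RSA and DSA key pairs (accept defaults, empty passphrase)
ssh-keygen -t rsa
ssh-keygen -t dsa

# On node1: collect the public keys from both nodes into authorized_keys
cd ~/.ssh
cat id_rsa.pub id_dsa.pub > authorized_keys
ssh mxb2bcoredb02 'cat ~/.ssh/id_rsa.pub ~/.ssh/id_dsa.pub' >> authorized_keys

# Copy the combined file to node2 and fix its permissions on both nodes
scp authorized_keys mxb2bcoredb02:~/.ssh/
ssh mxb2bcoredb02 'chmod 600 ~/.ssh/authorized_keys'
chmod 600 authorized_keys

# Test in both directions - no password prompt means success
ssh mxb2bcoredb02 date
```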
13. Set up the OCFS2 software on both servers; please note the installation sequence of the packages.
14. Check the storage and PowerPath information on both servers.
[root@mxb2bcoredb01 ~]# powermt display dev=all
[root@mxb2bcoredb02 ~]# powermt display dev=all
You should see the same device information on both servers:
15. Partition the shared disks on the storage with fdisk. Perform this step on the node1 server only.
16. We need to configure VNC or another remote-control tool to log in to the Linux X Window System on both servers; we will omit the VNC configuration process here.
We then start to configure the OCFS2 file system (Oracle Cluster File System) in X Windows and format the shared disks on the storage. Perform this step on the node1 server only.
Click 'OK' to format each new device (the dialog appears once per shared disk).
17. Then we need to configure the cluster nodes. Perform this step on the node1 server only.
Add each node's public hostname and private IP address to the form.
Finally, click 'Apply' to activate them; you should then see entries in the 'Active' column.
Notes: if you encounter the error message "Could not start cluster stack. This must be resolved before any OCFS2 filesystem can be mounted.", there may be a problem with the OCFS2 software version; and if you encounter an error like "o2cb_ctl: Unable to access cluster service while creating node", you need to check the file '/etc/ocfs2/cluster.conf' and delete all of its content before retrying.
We can check the configuration file '/etc/ocfs2/cluster.conf' on the node1 server.
Then we need to propagate the configuration to the other node (node2).
Finally, click the menu 'File > Quit'.
18. Next, we need to configure O2CB (the OCFS2 cluster stack) and test it on both servers.
Then configure O2CB on both servers.
Test and check the O2CB configuration on both servers.
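The O2CB configuration and checks can be sketched as the commands below, run as root on both servers; 'ocfs2' is assumed to be the cluster name created in the console step above:

```shell
# Configure O2CB to load and start on boot (interactive prompts)
/etc/init.d/o2cb configure

# Check the cluster stack status
/etc/init.d/o2cb status

# If needed, load the modules and bring the cluster online manually
/etc/init.d/o2cb load
/etc/init.d/o2cb online ocfs2
```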
19. Create the directories for the Oracle database and mount the shared disks on both servers.
Create a directory for the Oracle datafiles on both servers.
Create another directory for the Clusterware and Oracle software on both servers.
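A minimal sketch of the directory creation, assuming '/u02/oradata' as the datafile mount point and '/u01/product' for the software (the exact paths in the original screenshots may differ):

```shell
# Run as root on both servers
mkdir -p /u02/oradata            # mount point for the OCFS2 shared disk (datafiles)
mkdir -p /u01/product/oracle     # ORACLE_HOME, matching .bash_profile above
mkdir -p /u01/product/crs        # ORA_CRS_HOME, matching .bash_profile above
chown -R oracle:oinstall /u01 /u02
chmod -R 775 /u01 /u02
```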
On both servers:
# vi /etc/fstab and configure the OCFS2 shared disks.
We can then try to mount the OCFS2 shared disks on the first server.
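An '/etc/fstab' entry for an OCFS2 shared disk typically looks like the sketch below. The device name /dev/emcpowera1 is an assumption based on the PowerPath setup; '_netdev' delays mounting until networking is up, and 'datavolume' is needed for volumes holding Oracle datafiles:

```shell
# /etc/fstab entry on both servers
/dev/emcpowera1  /u02/oradata  ocfs2  _netdev,datavolume,nointr  0 0

# Then mount and verify (first on node1, later on node2)
mount -a -t ocfs2
mounted.ocfs2 -f          # shows which nodes have the volume mounted
```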
Oracle Clusterware setup .
1. First, we must check the system date and time on both servers and make sure they are the same. We will create a small script (the 'ntp' file) to synchronize time with the time server (here the time server IPs are 10.13.67.50 and 10.13.7.30). Please configure the ntpdate script on both servers.
$ mkdir -p /u01/run
$ mkdir -p /u01/run/log
The script in the ntp file looks like:
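The script body was shown as a screenshot in the original post; a minimal sketch consistent with the crontab entry below, using the time-server IPs given above, is:

```shell
#!/bin/sh
# /u01/run/ntp - sync OS time from the time server, then save it to the hardware clock
/usr/sbin/ntpdate 10.13.67.50 || /usr/sbin/ntpdate 10.13.7.30
/sbin/hwclock --systohc
```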
Then configure a crontab job (note: as the root user):
# crontab -e
Add the line below to the crontab:
0,30 * * * * sh /u01/run/ntp 1>>/u01/run/log/ntp.log 2>>/u01/run/log/ntp.bad
Finally, you can check the script with '# sh /u01/run/ntp'. Make sure the time on both servers is correct and synchronized; otherwise, you will receive a timestamp error when you install Oracle Clusterware.
2. Create the Clusterware (Oracle CRS) installation directory on both servers.
OK, let us set up Oracle Clusterware (CRS). First, we need to unzip the CRS software file in the /u01/packages directory (as the oracle user), because when the CRS software completes installation, it will remote-copy the CRS files to the node2 server over SSH. Perform this step on the node1 server only.
We key in 'n' to quit the installation window, and run 'rootpre.sh' as root in another window.
Notes: the Oracle software installation logs are under /u01/product/oraInventory/logs.
Notes: '/u01/product/crs' is the CRS software installation directory; please remember to select the 'Product Languages'.
Then edit it; you can refer to the '/etc/hosts' file.
From the ITPUB blog. Link: http://blog.itpub.net/25198367/viewspace-706211/. Please credit the source when reposting; otherwise legal liability may be pursued.