============================================================
Oracle® Database 10g Release 2 (10.2.0) RAC on RedHat 5.4: Implementation Log
[ RHEL5.4+Oracle10gR2 RAC+OCFS2 ]
by : 王磊/菜小小~@2011/1/21 10:57 QQ:262477752
Last revised: 王磊/菜小小~ @2011/1/25 19:05
This article records the implementation of Oracle 10.2 RAC with OCFS2 on Red Hat Linux 5.4.
============================================================
Continued from the previous installment, 《工程实施实录-RHEL5.4+Oracle10gR2 RAC+OCFS2【一】》 (Part 1 of this implementation log).
6. Install and configure OCFS2 on both nodes (run as root)
Install the OCFS2 packages and start the service on both nodes
[root@NEWORACLE1&2 mapper]# cd /home/oracle_in/ocfs/
[root@NEWORACLE1&2 ocfs]# ls
ocfs2-2.6.18-164.el5-1.4.7-1.el5.x86_64.rpm ocfs2-tools-1.4.4-1.el5.x86_64.rpm
ocfs2-2.6.18-164.el5-debuginfo-1.4.7-1.el5.x86_64.rpm ocfs2-tools-debuginfo-1.4.4-1.el5.x86_64.rpm
ocfs2console-1.4.4-1.el5.x86_64.rpm ocfs2-tools-devel-1.4.4-1.el5.x86_64.rpm
[root@NEWORACLE1&2 ocfs]# rpm -ivh *
warning: ocfs2-2.6.18-164.el5-1.4.7-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing... ########################################### [100%]
1:ocfs2-tools ########################################### [ 17%]
2:ocfs2-2.6.18-164.el5 ########################################### [ 33%]
3:ocfs2-2.6.18-164.el5-de########################################### [ 50%]
4:ocfs2console ########################################### [ 67%]
5:ocfs2-tools-debuginfo ########################################### [ 83%]
6:ocfs2-tools-devel ########################################### [100%]
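The ocfs2 kernel-module package must match the running kernel, so a quick sanity check on both nodes does no harm. A minimal sketch using only standard commands; the expected kernel version is inferred from the package names above:
[root@NEWORACLE1&2 ocfs]# uname -r            # should print 2.6.18-164.el5, matching the ocfs2-2.6.18-164.el5 package
[root@NEWORACLE1&2 ocfs]# rpm -qa | grep -i ocfs2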
[root@NEWORACLE1&2 ocfs]# /etc/init.d/o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Checking O2CB cluster configuration : Failed
[root@NEWORACLE1&2 ocfs]# /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Offline
[root@NEWORACLE1&2 ocfs]# export DISPLAY=130.0.68.159:0.0
[root@NEWORACLE1&2 ocfs]# /etc/init.d/o2cb configure
[root@NEWORACLE1&2 ocfs]# ocfs2console
On node 1, configure the cluster nodes with ocfs2console:
ocfs2console --> Task --> Format (a command-line equivalent is sketched after the propagation output below)
Check whether /etc/ocfs2/cluster.conf already exists; if it does, empty its contents.
ocfs2console --> Cluster --> Node Configuration --> add both nodes; the Name is the hostname and the IP Address is the private-interconnect IP.
[Note: if anything goes wrong here, try restarting the O2CB service, or reconfigure it with service o2cb configure.]
ocfs2console --> Cluster --> Propagate Cluster Configuration, which pushes the configuration from NEWORACLE1 to NEWORACLE2. The output is:
Propagating cluster configuration to NEWORACLE2...
Finished!
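For reference, the GUI Format step corresponds to mkfs.ocfs2 on the command line. A minimal sketch for a single volume only: each shared volume is formatted once, from one node, with the label that /etc/fstab expects later on; the device used here is the multipath device that shows up mounted on /u02/ocfs_redo later in this log, and -N 2 matches this two-node cluster.
[root@NEWORACLE1 ~]# mkfs.ocfs2 -L redo -N 2 /dev/mapper/mpath3p1    # repeat with the matching label/device for ocr, vote, temp, archive, data and control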
For reference, the finished /etc/ocfs2/cluster.conf looks like this:
[root@NEWORACLE1 oracle_in]# cat /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = NEWORACLE1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = NEWORACLE2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2
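A quick way to confirm that the configuration really reached the second node (a simple sanity check; it assumes root can ssh between the nodes, otherwise just cat the file directly on NEWORACLE2):
[root@NEWORACLE1 oracle_in]# ssh NEWORACLE2 cat /etc/ocfs2/cluster.conf
[root@NEWORACLE1 oracle_in]# md5sum /etc/ocfs2/cluster.conf ; ssh NEWORACLE2 md5sum /etc/ocfs2/cluster.conf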
Configure the filesystem mounts on node 1
[root@NEWORACLE1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6 19G 1.3G 17G 8% /
/dev/sda5 184G 204M 174G 1% /home
/dev/sda3 29G 3.6G 24G 14% /usr
/dev/sda1 1.9G 44M 1.8G 3% /boot
tmpfs 7.8G 8.0K 7.8G 1% /dev/shm
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_redo
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_ocr
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_vote
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_temp
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_archive
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_data
[root@NEWORACLE1 ~]# mkdir -p /u02/ocfs_control
[root@NEWORACLE1 ~]# cat /etc/fstab
LABEL=/ / ext3 defaults 1 1
LABEL=/home /home ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda2 swap swap defaults 0 0
LABEL=redo /u02/ocfs_redo ocfs2 _netdev,datavolume,nointr 0 0
LABEL=ocr /u02/ocfs_ocr ocfs2 _netdev,datavolume,nointr 0 0
LABEL=vote /u02/ocfs_vote ocfs2 _netdev,datavolume,nointr 0 0
LABEL=temp /u02/ocfs_temp ocfs2 _netdev,datavolume,nointr 0 0
LABEL=archive /u02/ocfs_archive ocfs2 _netdev,datavolume,nointr 0 0
LABEL=data /u02/ocfs_data ocfs2 _netdev,datavolume,nointr 0 0
LABEL=control /u02/ocfs_control ocfs2 _netdev,datavolume,nointr 0 0
[root@NEWORACLE1 ~]# mount -a
mount.ocfs2: Device or resource busy while mounting /dev/sdb1 on /u02/ocfs_redo. Check 'dmesg' for more information on this error.
mount.ocfs2: Device or resource busy while mounting /dev/sdm1 on /u02/ocfs_vote. Check 'dmesg' for more information on this error.
mount.ocfs2: Device or resource busy while mounting /dev/sdn1 on /u02/ocfs_temp. Check 'dmesg' for more information on this error.
mount.ocfs2: Device or resource busy while mounting /dev/sdc1 on /u02/ocfs_archive. Check 'dmesg' for more information on this error.
mount.ocfs2: Device or resource busy while mounting /dev/sdk1 on /u02/ocfs_control. Check 'dmesg' for more information on this error.
Note: mount -a failed on node 1 with the errors above and df showed nothing mounted; mounting the volumes through the ocfs2console GUI instead worked. The mounted filesystems then look like this:
[root@NEWORACLE1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6 19G 1.3G 17G 8% /
/dev/sda5 184G 204M 174G 1% /home
/dev/sda3 29G 3.6G 24G 14% /usr
/dev/sda1 1.9G 44M 1.8G 3% /boot
tmpfs 7.8G 8.0K 7.8G 1% /dev/shm
/dev/dm-8 2.0G 278M 1.8G 14% /u02/ocfs_ocr
/dev/dm-12 601G 1.7G 599G 1% /u02/ocfs_data
/dev/dm-7 4.0G 278M 3.8G 7% /u02/ocfs_redo
/dev/dm-9 2.0G 278M 1.8G 14% /u02/ocfs_vote
/dev/dm-10 32G 874M 32G 3% /u02/ocfs_temp
/dev/dm-11 361G 1.4G 359G 1% /u02/ocfs_archive
/dev/dm-13 2.0G 278M 1.8G 14% /u02/ocfs_control
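As an aside, once O2CB is online the same mounts can normally also be done from the shell (here mount -a had failed with 'Device or resource busy', so the GUI was used instead). A minimal sketch for one volume, relying on the LABEL= entries already in /etc/fstab:
[root@NEWORACLE1 ~]# mount /u02/ocfs_redo       # picks up device, type and options from the fstab entry
# or spelled out explicitly:
[root@NEWORACLE1 ~]# mount -t ocfs2 -o _netdev,datavolume,nointr LABEL=redo /u02/ocfs_redo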
[root@NEWORACLE1 ~]# chown -R oracle:oinstall /u02
[root@NEWORACLE1 ~]# chmod -R 755 /u02
[root@NEWORACLE1 ~]# ls -l /u02
total 0
drwxr-xr-x 3 755 oinstall 3896 01-21 09:53 ocfs_archive
drwxr-xr-x 3 755 oinstall 3896 01-21 09:54 ocfs_control
drwxr-xr-x 3 755 oinstall 3896 01-21 09:54 ocfs_data
drwxr-xr-x 3 755 oinstall 3896 01-21 09:52 ocfs_ocr
drwxr-xr-x 3 755 oinstall 3896 01-21 09:51 ocfs_redo
drwxr-xr-x 3 755 oinstall 3896 01-21 09:53 ocfs_temp
drwxr-xr-x 3 755 oinstall 3896 01-21 09:53 ocfs_vote
Configure the filesystem mounts on node 2
[root@NEWORACLE2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5 19G 1.6G 17G 9% /
/dev/sda6 184G 3.9G 170G 3% /home
/dev/sda3 29G 3.6G 24G 14% /usr
/dev/sda1 1.9G 47M 1.8G 3% /boot
tmpfs 7.8G 8.0K 7.8G 1% /dev/shm
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_ocr
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_vote
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_temp
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_archive
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_data
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_control
[root@NEWORACLE2 ~]# mkdir -p /u02/ocfs_redo
[root@NEWORACLE2 ~]# cat /etc/fstab
LABEL=/ / ext3 defaults 1 1
LABEL=/home /home ext3 defaults 1 2
LABEL=/usr /usr ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
LABEL=SWAP-sda2 swap swap defaults 0 0
LABEL=redo /u02/ocfs_redo ocfs2 _netdev,datavolume,nointr 0 0
LABEL=ocr /u02/ocfs_ocr ocfs2 _netdev,datavolume,nointr 0 0
LABEL=vote /u02/ocfs_vote ocfs2 _netdev,datavolume,nointr 0 0
LABEL=temp /u02/ocfs_temp ocfs2 _netdev,datavolume,nointr 0 0
LABEL=archive /u02/ocfs_archive ocfs2 _netdev,datavolume,nointr 0 0
LABEL=data /u02/ocfs_data ocfs2 _netdev,datavolume,nointr 0 0
LABEL=control /u02/ocfs_control ocfs2 _netdev,datavolume,nointr 0 0
[root@NEWORACLE2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6 19G 1.3G 17G 8% /
/dev/sda5 184G 204M 174G 1% /home
/dev/sda3 29G 3.6G 24G 14% /usr
/dev/sda1 1.9G 44M 1.8G 3% /boot
tmpfs 7.8G 8.0K 7.8G 1% /dev/shm
/dev/dm-8 2.0G 278M 1.8G 14% /u02/ocfs_ocr
/dev/dm-12 601G 1.7G 599G 1% /u02/ocfs_data
/dev/dm-7 4.0G 278M 3.8G 7% /u02/ocfs_redo
/dev/dm-9 2.0G 278M 1.8G 14% /u02/ocfs_vote
/dev/dm-10 32G 874M 32G 3% /u02/ocfs_temp
/dev/dm-11 361G 1.4G 359G 1% /u02/ocfs_archive
/dev/dm-13 2.0G 278M 1.8G 14% /u02/ocfs_control
[root@NEWORACLE2 ~]# chown -R oracle:oinstall /u02
[root@NEWORACLE2 ~]# chmod -R 755 /u02
[root@NEWORACLE2 ~]# ls -l /u02
total 56
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_archive
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_control
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_data
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_ocr
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_redo
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_temp
drwxr-xr-x 2 755 oinstall 4096 01-21 10:00 ocfs_vote
[root@NEWORACLE2 ~]# mount -a
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
mount.ocfs2: Unable to access cluster service while trying to join the group
Note: mount -a failed on node 2 with the errors above and df showed nothing mounted; after restarting the O2CB service on this node (shown below), mount -a succeeded.
[root@NEWORACLE2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5 19G 1.6G 17G 9% /
/dev/sda6 184G 3.9G 170G 3% /home
/dev/sda3 29G 3.6G 24G 14% /usr
/dev/sda1 1.9G 47M 1.8G 3% /boot
tmpfs 7.8G 8.0K 7.8G 1% /dev/shm
[root@NEWORACLE2 ~]# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Offline
[root@NEWORACLE2 ~]# service o2cb restart
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@NEWORACLE2 ~]# service o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
Network idle timeout: 30000
Network keepalive delay: 2000
Network reconnect delay: 2000
Checking O2CB heartbeat: Not active
[root@NEWORACLE2 ~]# mount -a
[root@NEWORACLE2 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda5 19G 1.6G 17G 9% /
/dev/sda6 184G 3.9G 170G 3% /home
/dev/sda3 29G 3.6G 24G 14% /usr
/dev/sda1 1.9G 47M 1.8G 3% /boot
tmpfs 7.8G 8.0K 7.8G 1% /dev/shm
/dev/mapper/mpath3p1 4.0G 278M 3.8G 7% /u02/ocfs_redo
/dev/mapper/mpath9p1 2.0G 278M 1.8G 14% /u02/ocfs_ocr
/dev/mapper/mpath7p1 2.0G 278M 1.8G 14% /u02/ocfs_vote
/dev/mapper/mpath8p1 32G 874M 32G 3% /u02/ocfs_temp
/dev/mapper/mpath4p1 361G 1.4G 359G 1% /u02/ocfs_archive
/dev/mapper/mpath6p1 601G 1.7G 599G 1% /u02/ocfs_data
/dev/mapper/mpath5p1 2.0G 278M 1.8G 14% /u02/ocfs_control
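Because the fstab entries use _netdev, these filesystems are mounted late in the boot sequence, after the network and the cluster stack are up, so it is worth confirming on both nodes that the o2cb and ocfs2 init scripts are enabled and the volumes come back automatically after a reboot. A minimal check using the init scripts shipped with the ocfs2-tools package:
[root@NEWORACLE1&2 ~]# chkconfig --list o2cb
[root@NEWORACLE1&2 ~]# chkconfig o2cb on
[root@NEWORACLE1&2 ~]# chkconfig ocfs2 on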
8. Install CRS (Oracle Clusterware)
[root@NEWORACLE2 oracle_in]# ls
asmlib client clusterware companion database doc gateways ocfs welcome.html
[root@NEWORACLE2 oracle_in]# tar -zcf clusterware.tar.gz clusterware
[root@NEWORACLE2 oracle_in]# scp clusterware.tar.gz root@NEWORACLE1:`pwd`
clusterware.tar.gz 100% 302MB 37.7MB/s 00:08
[root@NEWORACLE1 oracle_in]# ls
asmlib clusterware.tar.gz ocfs
[root@NEWORACLE1 oracle_in]# tar -zxf clusterware.tar.gz
[root@NEWORACLE1 oracle_in]# ls
asmlib clusterware clusterware.tar.gz ocfs
[root@NEWORACLE1 home]# chown -R oracle:oinstall oracle_in/
[root@NEWORACLE1 home]# chmod -R 755 oracle_in/
[root@NEWORACLE1 cluvfy]# su - oracle
[oracle@NEWORACLE1 ~]$ cd /home/oracle_in/clusterware/cluvfy
Note: run the runcluvfy cluster pre-check as the oracle user.
[oracle@NEWORACLE1 cluvfy]$ ./runcluvfy.sh stage -pre crsinst -n NEWORACLE1,NEWORACLE2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "NEWORACLE1"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  NEWORACLE2                            yes
  NEWORACLE1                            yes
Result: Node reachability check passed from node "NEWORACLE1".
Checking user equivalence...
Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  NEWORACLE2                            passed
  NEWORACLE1                            passed
Result: User equivalence check passed for user "oracle".
Checking administrative privileges...
Check: Existence of user "oracle"
  Node Name     User Exists               Comment
  ------------  ------------------------  ------------------------
  NEWORACLE2    yes                       passed
  NEWORACLE1    yes                       passed
Result: User existence check passed for "oracle".
Check: Existence of group "oinstall"
  Node Name     Status                    Group ID
  ------------  ------------------------  ------------------------
  NEWORACLE2    exists                    1000
  NEWORACLE1    exists                    1000
Result: Group existence check passed for "oinstall".
Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name        User Exists   Group Exists  User in Group  Primary      Comment
  ---------------- ------------  ------------  -------------  -----------  ------------
  NEWORACLE2       yes           yes           yes            yes          passed
  NEWORACLE1       yes           yes           yes            yes          passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Administrative privileges check passed.
Checking node connectivity...
Interface information for node "NEWORACLE2"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  eth2                            130.0.100.116                   130.0.0.0
  eth3                            10.0.0.2                        10.0.0.0
  usb0                            169.254.95.120                  169.254.95.0
Interface information for node "NEWORACLE1"
  Interface Name                  IP Address                      Subnet
  ------------------------------  ------------------------------  ----------------
  eth2                            130.0.100.115                   130.0.0.0
  eth3                            10.0.0.1                        10.0.0.0
  usb0                            169.254.95.120                  169.254.95.0
Check: Node connectivity of subnet "130.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  NEWORACLE2:eth2                 NEWORACLE1:eth2                 yes
Result: Node connectivity check passed for subnet "130.0.0.0" with node(s) NEWORACLE2,NEWORACLE1.
Check: Node connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  NEWORACLE2:eth3                 NEWORACLE1:eth3                 yes
Result: Node connectivity check passed for subnet "10.0.0.0" with node(s) NEWORACLE2,NEWORACLE1.
Check: Node connectivity of subnet "169.254.95.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  NEWORACLE2:usb0                 NEWORACLE1:usb0                 yes
Result: Node connectivity check passed for subnet "169.254.95.0" with node(s) NEWORACLE2,NEWORACLE1.
Suitable interfaces for VIP on subnet "130.0.0.0":
NEWORACLE2 eth2:130.0.100.116
NEWORACLE1 eth2:130.0.100.115
Suitable interfaces for VIP on subnet "169.254.95.0":
NEWORACLE2 usb0:169.254.95.120
NEWORACLE1 usb0:169.254.95.120
Suitable interfaces for the private interconnect on subnet "10.0.0.0":
NEWORACLE2 eth3:10.0.0.2
NEWORACLE1 eth3:10.0.0.1
Result: Node connectivity check passed.
Checking system requirements for 'crs'...
No checks registered for this product.
Pre-check for cluster services setup was unsuccessful on all the nodes.
(Note: the only failure above appears to be the 'crs' system-requirements step, for which this cluvfy release registers no checks on RHEL 5; the installation proceeds in spite of it.)
[oracle@NEWORACLE1 ~]$ exit
[root@NEWORACLE1 rootpre]# pwd
/home/oracle_in/clusterware/rootpre
[root@NEWORACLE1 rootpre]# ./rootpre.sh
No OraCM running
[root@NEWORACLE1 rootpre]# su - oracle
[oracle@NEWORACLE1 clusterware]$ unset LANG
Note: LANG is unset here to avoid garbled Chinese characters in the installer GUI.
[oracle@NEWORACLE1 clusterware]$ ./runInstaller
********************************************************************************
Please run the script rootpre.sh as root on all machines/nodes. The script can be found at the toplevel of the CD or stage-area. Once you have run the script, please type Y to proceed
Answer 'y' if root has run 'rootpre.sh' so you can proceed with Oracle Clusterware installation.
Answer 'n' to abort installation and then ask root to run 'rootpre.sh'.
********************************************************************************
Has 'rootpre.sh' been run by root? [y/n] (n)
y
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Passed
All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-01-21_10-47-33AM. Please wait ...[oracle@NEWORACLE1 clusterware]$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
When the installer prompts at the end of the installation, run the following scripts as root, in this order:
[root@NEWORACLE1 home]# /home/oracle/oraInventory/orainstRoot.sh
[root@NEWORACLE2 home]# /home/oracle/oraInventory/orainstRoot.sh
[root@NEWORACLE1 home]# /home/oracle/oracle/product/10.2.0/crs/root.sh
[root@NEWORACLE2 home]# /home/oracle/oracle/product/10.2.0/crs/root.sh
The output was as follows:
[root@NEWORACLE1 home]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory to 770.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@NEWORACLE2 oracle_in]# /home/oracle/oraInventory/orainstRoot.sh
Changing permissions of /home/oracle/oraInventory to 770.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@NEWORACLE1 home]# /home/oracle/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/home/oracle/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/oracle/oracle/product' is not owned by root
WARNING: directory '/home/oracle/oracle' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/home/oracle/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/oracle/oracle/product' is not owned by root
WARNING: directory '/home/oracle/oracle' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: neworacle1 neworacle1-priv neworacle1
node 2: neworacle2 neworacle2-priv neworacle2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u02/ocfs_vote/vote
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
neworacle1
CSS is inactive on these nodes.
neworacle2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@NEWORACLE2 home]# /home/oracle/oracle/product/10.2.0/crs/root.sh
WARNING: directory '/home/oracle/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/oracle/oracle/product' is not owned by root
WARNING: directory '/home/oracle/oracle' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/home/oracle/oracle/product/10.2.0' is not owned by root
WARNING: directory '/home/oracle/oracle/product' is not owned by root
WARNING: directory '/home/oracle/oracle' is not owned by root
WARNING: directory '/home/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: neworacle1 neworacle1-priv neworacle1
node 2: neworacle2 neworacle2-priv neworacle2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
neworacle1
neworacle2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/home/oracle/oracle/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
Note:
When root.sh runs on NEWORACLE2 it ends with the error above: vipca cannot run because the shared library libpthread.so.0 cannot be loaded.
This is a known bug caused by an incompatibility between the newer glibc and the JDK bundled with 10g. According to the official documentation, the vipca script should be edited before root.sh is run. The fix is as follows:
[oracle@NEWORACLE1 product]$ vi /home/oracle/oracle/product/10.2.0/crs/bin/vipca
JREDIR=/oracle/product/10.2.0/crs/jdk/jre/     # remove the trailing slash, so that it reads:
JREDIR=/oracle/product/10.2.0/crs/jdk/jre
       LD_ASSUME_KERNEL=2.4.19                 # then locate this block ...
       export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL                         # ... and add this line right after the "fi", i.e. drop the LD_ASSUME_KERNEL setting
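For what it is worth, the same edit can be scripted. This is only a rough sketch, assuming GNU sed and a single LD_ASSUME_KERNEL block in the file (back the script up first and verify the result with grep); the identical problem in srvctl is covered in section 10 below:
[root@NEWORACLE1 ~]# cd /home/oracle/oracle/product/10.2.0/crs/bin
[root@NEWORACLE1 bin]# cp vipca vipca.bak
[root@NEWORACLE1 bin]# sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' vipca
[root@NEWORACLE1 bin]# grep -n 'LD_ASSUME_KERNEL' vipca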
Running vipca again now produces another error similar to the following:
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
Workaround:
[oracle@NEWORACLE1 product]$ oifcfg iflist
eth2 130.0.0.0
eth3 10.0.0.0
usb0 169.254.95.0
[oracle@NEWORACLE1 product]$ oifcfg setif -global eth2/130.0.0.0:public
[oracle@NEWORACLE1 product]$ oifcfg setif -global eth3/10.0.0.0:cluster_interconnect
[oracle@NEWORACLE1 product]$ oifcfg getif
eth2 130.0.0.0 global public
eth3 10.0.0.0 global cluster_interconnect
[root@NEWORACLE1 ~]# vipca
Note: vipca must be run as root.
After vipca has been configured, go back to the CRS installer screen on node 1 and click OK; the installer runs its checks once more, and when they pass the Clusterware installation is complete.
Note: if the configuration assistants reported errors earlier, run $ORA_CRS_HOME/cfgtoollogs/configToolFailedCommands.sh as root,
and then run:
[root@NEWORACLE1 ~]# crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....le1.gsd application ONLINE ONLINE neworacle1
ora....le1.ons application ONLINE ONLINE neworacle1
ora....le1.vip application ONLINE ONLINE neworacle1
ora....le2.gsd application ONLINE ONLINE neworacle2
ora....le2.ons application ONLINE ONLINE neworacle2
ora....le2.vip application ONLINE ONLINE neworacle2
If the output looks like the above, everything is fine. [If no errors were reported you do not need to run that script; in fact, without errors the configToolFailedCommands.sh file will not even exist.]
9. Install the Oracle database software
On one of the nodes, as the oracle user, unpack the database software and run runInstaller to install it [install the software only; do not create a database yet].
Then run root.sh as root on both nodes: /home/oracle/oracle/product/10.2.0/db_1/root.sh
[root@NEWORACLE2 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /home/oracle/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
On one node, configure the listeners with netca [ideally do this early, i.e. right after the software install and before dbca].
On one node, use DBCA to configure Automatic Storage Management.
On one node, use DBCA to create the RAC database.
Two things need attention here. First, pick the correct database character set (ZHS16GBK is recommended for Chinese data) and national character set (AL16UTF16 is recommended). Second, click All Initialization Parameters and blank out the value of the remote_listener parameter; otherwise the first step of database creation in dbca may raise errors such as:
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENERS_RACDB'
This happens because the parameter is preset in the template (seed) database. There are several ways to work around it (the last one is sketched below):
. Manually edit the template file and comment the parameter out.
. Choose a custom database and clear the parameter before the creation step runs.
. When creating the database, choose to generate the creation scripts instead of creating the database, edit the scripts by hand, and then create the database from the scripts.
. Configure the listeners and the Net Service Names in advance.
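To illustrate the last option: the unresolved name from the ORA-00132 error can simply be defined up front in tnsnames.ora on both nodes. This is only a sketch; the database name RACDB and the VIP hostnames neworacle1-vip/neworacle2-vip are assumptions, not values taken from this installation:
# hypothetical $ORACLE_HOME/network/admin/tnsnames.ora entry, resolved by remote_listener
LISTENERS_RACDB =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = neworacle1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = neworacle2-vip)(PORT = 1521))
  )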
Configure the LOCAL_LISTENER parameter on each of the two nodes:
# Node 1
LISTENERS_local =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.wang.com)(PORT = 1521))
  )
# Node 2
LISTENERS_local =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip.wang.com)(PORT = 1521))
  )
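With a LISTENERS_local entry in each node's tnsnames.ora, local_listener can then point at it. A minimal sketch; the instance names RACDB1/RACDB2 are hypothetical and should be replaced with the real SIDs of this database:
[oracle@NEWORACLE1 ~]$ sqlplus / as sysdba
SQL> alter system set local_listener='LISTENERS_local' scope=both sid='RACDB1';
SQL> alter system set local_listener='LISTENERS_local' scope=both sid='RACDB2';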
Check the cluster status, for example:
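A few typical checks (the exact output depends on the environment; the database name RACDB here is hypothetical):
[oracle@NEWORACLE1 ~]$ crs_stat -t
[oracle@NEWORACLE1 ~]$ srvctl status nodeapps -n neworacle1
[oracle@NEWORACLE1 ~]$ srvctl status database -d RACDB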
10. Miscellaneous
[oracle@orarac1 ~]$ srvctl status database -d sun
/ora/app/oracle/product/10.2/db_1/jdk/jre/bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
This is the same bug: the newer glibc is incompatible with the JDK bundled with 10g. The fix is to
comment out the four lines following arch=`uname -m` in the srvctl script:
arch=`uname -m`
#if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
#then
# LD_ASSUME_KERNEL=2.4.19
# export LD_ASSUME_KERNEL
#fi
#End workaround
Alternatively, simply add a line 'unset LD_ASSUME_KERNEL' just above the fi.
Also note that after installing Oracle there are several copies of srvctl, for example:
/u01/oracle/product/10.2.0/crs_1/bin/srvctl
/u01/oracle/product/10.2.0/crs_1/inventory/Templates/bin/srvctl
/u01/oracle/product/10.2.0/db_1/bin/srvctl
/u01/oracle/product/10.2.0/db_1/inventory/Templates/bin/srvctl
Make sure that the srvctl you edit and use is the one under $ORA_CRS_HOME/bin.
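A quick way to confirm which copy ends up on the PATH (a trivial check; adjust the CRS home path to your own environment):
[oracle@NEWORACLE1 ~]$ which srvctl
[oracle@NEWORACLE1 ~]$ ls -l /home/oracle/oracle/product/10.2.0/crs/bin/srvctl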
Regarding the problem where, after a reboot, one node fails to mount its OCFS2 volumes with "mount.ocfs2: Device or resource busy while mounting /dev/sdb1 on /u02/ocfs_redo. Check 'dmesg' for more information on this error." while the other node is fine, and the volumes can still be mounted manually from ocfs2console: see my other note for the workaround.
Reference: http://www.itpub.net/viewthread.php?tid=1009235&extra=page%3D1%26amp%3Bfilter%3Ddigest
Originally posted on the ITPUB blog: http://blog.itpub.net/21162451/viewspace-696413/ (please cite this source when reposting).