
Installing a RAC 10.2.0.2 Environment on Solaris 8 (Part 2)

Original post by yangtingkun, 2007-03-16

I have recently been testing the installation of an Oracle 10gR2 RAC environment on Solaris. I ran into quite a few problems, but the installation eventually succeeded. This series briefly summarizes the installation steps, the problems encountered, and how they were solved.

This part covers the installation of Oracle Clusterware.

For the operating-system preparation, see Installing a RAC 10.2.0.2 Environment on Solaris 8 (Part 1): http://yangtingkun.itpub.net/post/468/271797


The previous article covered preparing the operating system. The next step is to install the cluster software. For an Oracle 10g RAC environment on Solaris, the available cluster software options are:

SunCluster 3.1

Oracle Clusterware

Fujitsu-Siemens PrimeCluster 4.2

Veritas Storage Foundation for Oracle RAC 5.0

This article uses Oracle Clusterware to build the RAC environment. Note that Oracle Clusterware is only available for releases 10g and later.

Extract the Oracle Clusterware installation file with cpio -idmv < 10gr2_cluster_sol.cpio. Then change into the extracted directory, enter the cluvfy directory, and run the following pre-installation check:

$ cd cluster_disk
$ cd cluvfy

$ ./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "racnode1".


Checking user equivalence...
User equivalence check passed for user "oracle".

Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.

Administrative privileges check passed.

Checking node connectivity...

Node connectivity check passed for subnet "172.25.0.0" with node(s) racnode2,racnode1.
Node connectivity check passed for subnet "10.0.0.0" with node(s) racnode2,racnode1.

Suitable interfaces for the private interconnect on subnet "172.25.0.0":
racnode2 ce0:172.25.198.223
racnode1 ce0:172.25.198.222

Suitable interfaces for the private interconnect on subnet "10.0.0.0":
racnode2 ce1:10.0.0.2
racnode1 ce1:10.0.0.1

ERROR:
Could not find a suitable set of interfaces for VIPs.

Node connectivity check failed.


Checking system requirements for 'crs'...
Total memory check passed.
Free disk space check passed.
Swap space check failed.
Check failed on nodes:
racnode2,racnode1
System architecture check passed.
Operating system version check passed.
Package existence check passed for "SUNWarc".
Package existence check passed for "SUNWbtool".
Package existence check passed for "SUNWhea".
Package existence check passed for "SUNWlibm".
Package existence check passed for "SUNWlibms".
Package existence check passed for "SUNWsprot".
Package existence check passed for "SUNWsprox".
Package existence check passed for "SUNWtoo".
Package existence check passed for "SUNWi1of".
Package existence check passed for "SUNWi1cs".
Package existence check passed for "SUNWi15cs".
Package existence check passed for "SUNWxwfnt".
Package existence check passed for "SUNWlibC".
Package existence check failed for "SUNWscucm:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWudlmr:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWudlm:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWscr:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWscu:3.1".
Check failed on nodes:
racnode2,racnode1
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "oracle".
User existence check passed for "nobody".

System requirement failed for 'crs'

Pre-check for cluster services setup was unsuccessful on all the nodes.

The previous article already explained the cause of the VIP error. The swap-space failure can be ignored: swap was checked in the previous article and the system has enough of it. The package-existence checks that fail all concern Sun Cluster related packages; since this RAC installation uses Oracle Clusterware, those errors can be ignored as well.

Before installing, first configure the shared storage that Clusterware requires. This article does not cover the storage setup itself; it is assumed that the storage is attached to both servers, is visible to both, and that raw devices have already been carved out of it for both servers to use.

Because the two test servers have different fibre-channel HBAs, the raw devices are presented under different names. On racnode1 the devices are named:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd99114,0
1. c1t1d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w2100000c50acf424,0
2. c2t0d0
/pci@8,700000/QLGC,qla@3/sd@0,0
3. c2t0d1
/pci@8,700000/QLGC,qla@3/sd@0,1
4. c2t0d2
/pci@8,700000/QLGC,qla@3/sd@0,2
5. c2t0d3
/pci@8,700000/QLGC,qla@3/sd@0,3
6. c2t0d4
/pci@8,700000/QLGC,qla@3/sd@0,4
7. c2t0d5
/pci@8,700000/QLGC,qla@3/sd@0,5
8. c2t0d6
/pci@8,700000/QLGC,qla@3/sd@0,6
9. c2t0d7
/pci@8,700000/QLGC,qla@3/sd@0,7

On racnode2, the same devices appear as:

# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd9e4b8,0
1. c1t1d0
/pci@9,600000/SUNW,qlc@2/fp@0,0/ssd@w21000004cfd9ead5,0
2. c2t500601603022E66Ad0
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,0
3. c2t500601603022E66Ad1
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,1
4. c2t500601603022E66Ad2
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,2
5. c2t500601603022E66Ad3
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,3
6. c2t500601603022E66Ad4
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,4
7. c2t500601603022E66Ad5
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,5
8. c2t500601603022E66Ad6
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,6
9. c2t500601603022E66Ad7
/pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w500601603022e66a,7

During installation, the OCR and voting shared disks must have identical paths on both servers, so create the following symbolic links. On racnode1:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c2t0d2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c2t0d2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c2t0d2s1
# chown oracle:oinstall /dev/rdsk/c2t0d2s3

On racnode2:

# mkdir /dev/rac
# ln -s -f /dev/rdsk/c2t500601603022E66Ad2s1 /dev/rac/ocr
# ln -s -f /dev/rdsk/c2t500601603022E66Ad2s3 /dev/rac/vot
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad2s1
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad2s3

Note: do not use slice s0 for the shared raw devices; otherwise, when root.sh runs after the installation, it fails with the error "Failed to upgrade Oracle Cluster Registry configuration".
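Before launching the installer, it is worth sanity-checking on each node that both links resolve. A minimal sketch, assuming only the /dev/rac/ocr and /dev/rac/vot names used above (check_rac_links is a hypothetical helper, not part of any Oracle tool):

```shell
# check_rac_links DIR — report whether DIR/ocr and DIR/vot are symbolic links.
check_rac_links() {
    dir=$1
    rc=0
    for name in ocr vot; do
        if [ -L "$dir/$name" ]; then
            # Print the link and its target so a wrong slice is easy to spot.
            printf 'OK: %s -> %s\n' "$dir/$name" "$(readlink "$dir/$name")"
        else
            printf 'MISSING: %s\n' "$dir/$name"
            rc=1
        fi
    done
    return $rc
}

# Check the directory created above; warn if anything is missing.
check_rac_links /dev/rac || echo 'shared raw-device links are incomplete'
```

Running this as root on both nodes before starting the installer catches a mistyped device name early, which is much cheaper than failing in root.sh.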

Now the installation can begin. Start Xmanager, log in to racnode1, and run:

# xhost +
access control disabled, clients can connect from any host
# su - oracle
Sun Microsystems Inc. SunOS 5.8 Generic Patch October 2001
$ cd /data/cluster_disk
$ ./runInstaller
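If runInstaller fails to open a window, the usual culprit is the DISPLAY variable: su - oracle resets the environment, so DISPLAY has to be exported again in the oracle session, pointing at the machine running Xmanager. A minimal sketch (the address 192.168.0.100 is a placeholder, not from the original setup):

```shell
# Placeholder address: replace with the PC on which Xmanager is running.
DISPLAY=192.168.0.100:0.0
export DISPLAY
echo "DISPLAY is set to $DISPLAY"
```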

After the GUI starts, click Next. Oracle prompts for the inventory path and the operating-system group; the defaults are the /data/oracle/oraInventory directory and the oinstall group created earlier. Click Next.

Oracle then prompts for the OraCrs10g_home1 path, which defaults to the ORACLE_HOME path /data/oracle/product/10.2/database. Change it to /data/oracle/product/10.2/crs, select Simplified Chinese as an additional language, and click Next.

Oracle then automatically checks whether the system meets the installation requirements. If the settings from the previous article are in place, these checks pass; continue to the next step.

Next comes the cluster configuration. The default Cluster Name is crs; it can be changed or left as-is. Oracle automatically lists the network configuration of the installing node, but the racnode2 entries (racnode2, racnode2-priv, racnode2-vip) must be added manually. Click Next.

The next screen lists the available network interfaces. Verify that the PUBLIC/PRIVATE assignments match the hosts file. Because the system is configured with IP addresses beginning with 172.25, an Oracle bug causes the installer to treat these as private IPs and mark both interfaces as PRIVATE. Manually change the interface on subnet 172.25.0.0 to PUBLIC, then click Next.

Next is the OCR configuration. Because the OCR shared disk comes from the storage array, which is already configured with RAID 0, select External Redundancy and enter the /dev/rac/ocr path set up earlier as the OCR Location, then click Next.

For the Voting Disk configuration, select External Redundancy for the same reason, enter the configured /dev/rac/vot as the Voting Disk Location, and click Next.

The summary page appears; click Install to start the installation.

After the installation completes, two scripts must be run as root on each node in turn. Run the first script on racnode1 and then on racnode2; the output is identical on both:

# . /data/oracle/oraInventory/orainstRoot.sh
Changing permissions of /data/oracle/oraInventory to 770.
Changing groupname of /data/oracle/oraInventory to oinstall.
The execution of the script is complete

The output of the second script differs between racnode1 and racnode2:

# . /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
ln: cannot create /data/oracle/product/10.2/crs/lib/libskgxn2.so: File exists
ln: cannot create /data/oracle/product/10.2/crs/lib32/libskgxn2.so: File exists
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
Creating OCR keys for user 'root', privgrp 'other'..
Operation successful.
Now formatting voting device: /dev/rac/vot
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
racnode1
CSS is inactive on these nodes.
racnode2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

The above is the output on racnode1. On racnode2, the script produces:

# . /data/oracle/product/10.2/crs/root.sh
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
Checking to see if Oracle CRS stack is already configured

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/data/oracle/product/10.2' is not owned by root
WARNING: directory '/data/oracle/product' is not owned by root
WARNING: directory '/data/oracle' is not owned by root
WARNING: directory '/data' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
racnode1
racnode2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "ce0" is not public. Public interfaces should be used to configure virtual IPs.

The script ends with an error. This is the Oracle bug mentioned several times above: the PUBLIC interface is treated as private, so the VIPs cannot be configured. The problem is described in detail in the troubleshooting summary in the final article of this series; here only the fix is given.
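One plausible explanation for the misclassification (my assumption, not confirmed by the original article) is that the tooling treats every RFC 1918 address as private, and 172.25.x.x falls inside the reserved 172.16.0.0/12 range. That classification rule can be sketched as:

```shell
# is_rfc1918 IP — hypothetical helper: succeed if IP is an RFC 1918 private address.
# Not part of any Oracle tool; it only illustrates the assumed rule.
is_rfc1918() {
    a=${1%%.*}          # first octet
    rest=${1#*.}
    b=${rest%%.*}       # second octet
    [ "$a" -eq 10 ] && return 0                                        # 10.0.0.0/8
    [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ] && return 0 # 172.16.0.0/12
    [ "$a" -eq 192 ] && [ "$b" -eq 168 ] && return 0                   # 192.168.0.0/16
    return 1
}

is_rfc1918 172.25.198.222 && echo "172.25.198.222 falls in a private range"
is_rfc1918 8.8.8.8 || echo "8.8.8.8 would be treated as public"
```

Under that rule the 172.25.198.x public addresses used here are indistinguishable from private ones, which is consistent with vipca refusing ce0 as a public interface.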

The simplest fix is to start the vipca GUI and configure the VIPs manually:

# cd /data/oracle/product/10.2/crs/bin/
# ./vipca

Start a terminal in Xmanager and enter the commands above to launch the vipca GUI. Click Next; all available network interfaces are listed. Since ce0 is configured as the PUBLIC interface, select ce0 and click Next. On the configuration screen, enter racnode1-vip and racnode2-vip as the IP Alias Names, and 172.25.198.224 and 172.25.198.225 as the IP addresses. If the configuration is correct, once one IP is filled in, Oracle completes the remaining three fields automatically. Click Next; the summary page appears. After verifying it, click Finish.

Oracle then performs six steps: Create VIP application resource, Create GSD application resource, Create ONS application resource, Start VIP application resource, Start GSD application resource, and Start ONS application resource.

Once all six steps succeed, click OK to finish the VIPCA configuration.

Now return to the Clusterware installation screen and click OK.

Oracle then attempts to start two tools and finally runs a verification program. When all checks complete, the finish screen appears; click Exit to end the Clusterware installation.

Source: ITPUB blog, http://blog.itpub.net/4227/viewspace-69204/. Please credit the source when reposting.
