
Hitting a multicasting problem while installing 11.2.0.2 RAC for Solaris x86_64

Original post | Linux operating system | Author: yangtingkun | Date: 2011-05-21 22:23:56

While installing 11.2.0.2 RAC for Solaris x86_64, running root.sh on node 2 failed with an error.


The detailed error output was:

OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
updating /platform/i86pc/boot_archive
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node unknown, number unknown, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Failed to start Oracle Clusterware stack
Failed to start Cluster Synchorinisation Service in clustered mode at /reuters/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1017.
/reuters/app/11.2.0/grid/perl/bin/perl -I/reuters/app/11.2.0/grid/perl/lib -I/reuters/app/11.2.0/grid/crs/install /reuters/app/11.2.0/grid

A search on MetaLink showed that the problem matches the description of bug 9974223.

First, the following message appears in the cssd log:

2011-04-25 11:05:00.917: [GIPCHGEN][1] gipchaNodeCreate: adding new node 1032710 { host '', haName 'CSS_racnode-cluster', srcLuid 09df7985-00000000, dstLuid 00000000-00000000 numInf 0, contigSeq 0, lastAck 0, lastValidAck 0, sendSeq [0 : 0], createTime 231687457, flags 0x1 }

It then sets up the multicast bootstrap interface:

2011-04-25 11:05:01.280: [GIPCHTHR][17] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'racnode2', haName 'CSS_racnode-cluster', inf 'mcast://230.0.1.0:42424/192.168.18.129'

Then messages appear indicating that communication is failing:

2011-04-25 11:05:04.287: [ CSSD][1]clssnmvDHBValidateNCopy: node 1, , has a disk HB, but no network HB, DHB has rcfg 199191505, wrtcnt, 279, LATS 231690827, lastSeqNo 0, uniqueness 1303729178, timestamp 1303729498/231692428

This shows that Oracle could not establish communication between the private interconnect addresses: the multicast address 230.0.1.0 was blocked.
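For illustration, the multicast bootstrap that GIPC performs on the private interconnect boils down to a standard group join on a UDP socket. A minimal sketch in Python (not Oracle's internal code; the group and port mirror the `mcast://230.0.1.0:42424` endpoint from the log above) also highlights why the blockage is hard to spot locally: the join is a host-local kernel operation, so it succeeds even when a switch silently drops the group's traffic.

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Bind a UDP socket to `port` and join the multicast group `group`.

    Note: the join itself succeeds locally even if the network blocks the
    group -- the symptom is only that expected datagrams never arrive,
    exactly as in the "disk HB, but no network HB" log line above.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP takes the group address plus the local interface;
    # 0.0.0.0 lets the kernel choose, while a real interconnect would pass
    # the private address (192.168.18.129 in the log).
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Group and port from the gipchaWorkerUpdateInterface log line:
receiver = join_multicast_group("230.0.1.0", 42424)
print("joined group, bound to", receiver.getsockname())
receiver.close()
```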

Checking with the mcasttest.pl tool provided on MetaLink confirmed that address 230.0.1.0 failed while 224.0.0.251 succeeded:

# perl mcasttest.pl -n racnode1,racnode2 -i eth1,eth2
########### Setup for node racnode1 ##########
Checking node access 'racnode1'
Checking node login 'racnode1'
Checking/Creating Directory /tmp/mcasttest for binary on node 'racnode1'
Distributing mcast2 binary to node 'racnode1'
########### Setup for node racnode2 ##########
Checking node access 'racnode2'
Checking node login 'racnode2'
Checking/Creating Directory /tmp/mcasttest for binary on node 'racnode2'
Distributing mcast2 binary to node 'racnode2'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 8 09:05:33 | Multicast Failed for eth1 using address 230.0.1.0:42000
Nov 8 09:05:34 | Multicast Failed for eth2 using address 230.0.1.0:42001

Test for Multicast address 224.0.0.251

Nov 8 09:05:35 | Multicast Succeeded for eth1 using address 224.0.0.251:42002
Nov 8 09:05:36 | Multicast Succeeded for eth2 using address 224.0.0.251:42003
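What mcasttest.pl does for each candidate group is essentially a send/receive round trip between the nodes. A rough, hypothetical single-host approximation in Python (sender and receiver on the same machine with multicast loopback enabled, so it exercises the socket path only; the real tool distributes an `mcast2` binary over ssh and probes across nodes):

```python
import socket
import struct

def multicast_self_test(group: str, port: int, timeout: float = 2.0) -> bool:
    """Send a probe datagram to group:port and check that it is received.

    If the probe never arrives, traffic for that group is being dropped
    somewhere between sender and receiver.
    """
    recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    recv.bind(("", port))
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    recv.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    recv.settimeout(timeout)

    send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Loop the datagram back to local group members so a one-host test works.
    send.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)
    try:
        send.sendto(b"mcast-probe", (group, port))
        data, _ = recv.recvfrom(64)
        return data == b"mcast-probe"
    except (socket.timeout, OSError):
        return False
    finally:
        send.close()
        recv.close()

# The two groups the real tool probes; on a single host both normally
# succeed, since the blockage in this case was on the network, not the host.
for group in ("230.0.1.0", "224.0.0.251"):
    status = "Succeeded" if multicast_self_test(group, 42002) else "Failed"
    print(f"Multicast {status} for group {group}")
```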

According to MOS document ID 1212703.1, this situation can be resolved by applying Patch 9974223.

After applying the patch on both nodes, root.sh on node 2 ran to completion and the cluster installation finished successfully.


Source: ITPUB blog, link: http://blog.itpub.net/4227/viewspace-695891/. Please credit the source when reprinting.
