
Oracle 10g Clusterware bug on Red Hat AS 5

Original post · Linux · Author: paulyibinyi · 2009-05-13 23:21:06

Today, during a training session at Oracle, while installing Oracle 10g Clusterware for Red Hat AS 5, the run on the second node got as far as the following error:


/home/oracle/10gR2/crs/jdk/jre/bin/java: error while loading shared
libraries: cannot open shared object file: No such file or directory

This problem does not occur on Red Hat AS 4, where root.sh also runs normally.


Below is the explanation from Metalink note 414163.1:

10gR2 RAC Install issues on Oracle EL5 or RHEL5 or SLES10 (VIPCA / SRVCTL / OUI Failures)
  Doc ID: 414163.1  Type: PROBLEM
  Last Revision Date: 16-OCT-2008  Status: PUBLISHED

Applies to:

Oracle Server - Enterprise Edition - Version: to
Linux x86-64
Generic Linux
Intel Based Server LINUX


When installing 10gR2 RAC on Oracle Enterprise Linux 5 or RHEL5 or SLES10 there are three issues that users must be aware of.

Issue#1: To install 10gR2, you must first install the base release. As these OS versions are newer, you should use the following command to invoke the installer:

$ runInstaller -ignoreSysPrereqs        # bypasses the OS version check

Issue#2: At the end of root.sh on the last node, vipca will fail to run with the following error:

Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/home/oracle/crs/oracle/product/10/crs/jdk/jre//bin/java: error while loading
shared libraries: cannot open shared object file:
No such file or directory 

Also, srvctl will show similar output if the workaround below is not implemented.

Issue#3: After working around Issue#2 above, vipca will fail to run with the following error if the VIP IPs are in a non-routable range [10.x.x.x, 172.(16-31).x.x, or 192.168.x.x]:

# vipca
Error 0(Native: listNetInterfaces:[3]) 
[Error 0(Native: listNetInterfaces:[3])]
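Whether Issue#3 applies can be checked mechanically. The helper below is an illustrative sketch (not part of any Oracle tool) that classifies an address against the non-routable ranges listed above:

```shell
#!/bin/sh
# Illustrative check: does a VIP address fall in a non-routable
# (RFC 1918) range, i.e. does Issue#3 apply?
is_private() {
    IFS=. read -r a b c d <<EOF
$1
EOF
    [ "$a" = 10 ] && return 0
    [ "$a" = 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ] && return 0
    [ "$a" = 192 ] && [ "$b" = 168 ] && return 0
    return 1
}

for vip in 10.1.2.3 172.20.0.5 192.168.1.10 137.35.4.9; do
    if is_private "$vip"; then
        echo "$vip: non-routable (workaround for Issue#3 needed)"
    else
        echo "$vip: routable"
    fi
done
```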


These releases of the Linux kernel fix an old bug in Linux threading that Oracle worked around by setting LD_ASSUME_KERNEL in both vipca and srvctl. That workaround is no longer valid on OEL5, RHEL5, or SLES10, hence the failures.


If you have not yet run root.sh on the last node, implement the workaround for issue#2 below and then run root.sh (you may skip the vipca portion at the bottom of this note).
If you have a non-routable IP range for the VIPs, you will also need the workaround for issue#3, and then run vipca manually.

To work around Issue#2 above, edit vipca (in the CRS bin directory on all nodes) to undo the setting of LD_ASSUME_KERNEL. After the IF statement around line 120, add an unset command to ensure LD_ASSUME_KERNEL is not set, as follows:

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    ...                            # existing lines that set LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL             <<== Line to be added (after the closing fi)
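For reference, the same edit can be demonstrated with sed on a throwaway file shaped like the vipca fragment. This is only a sketch; on a real cluster you would edit $CRS_HOME/bin/vipca (and srvctl) directly and verify the result by hand, since line numbers vary between releases:

```shell
#!/bin/sh
# Sketch: append "unset LD_ASSUME_KERNEL" after the closing "fi" of the
# IF block, using a temporary stand-in file rather than the real vipca.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    export LD_ASSUME_KERNEL
fi
EOF

# GNU sed: append the unset line right after the line containing only "fi".
sed '/^fi$/a unset LD_ASSUME_KERNEL' "$tmp" > "$tmp.patched"

grep -n 'unset LD_ASSUME_KERNEL' "$tmp.patched"   # shows the line was added
rm -f "$tmp" "$tmp.patched"
```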


Similarly for srvctl (in both the CRS and, when installed, the RDBMS and ASM bin directories on all nodes), unset LD_ASSUME_KERNEL by adding one line; around line 168, the file should look like this:


unset LD_ASSUME_KERNEL          <<== Line to be added

Remember to re-edit these files on all nodes after applying patchsets, as those patchsets will still include these settings, which are unnecessary on OEL5, RHEL5, and SLES10. This issue was raised with development and is fixed in later patchsets.

Note that we are explicitly unsetting LD_ASSUME_KERNEL and not merely commenting out its setting to handle a case where the user has it set in their environment (login shell).

To work around issue#3 (vipca failing on non-routable VIP IP ranges), either manually or during the install: if you still have the OUI window open, click OK and it will create the "oifcfg" information; cluvfy will then fail because vipca did not complete successfully, so skip ahead in this note, run vipca manually, then return to the installer, and cluvfy will succeed. Otherwise, you may configure the interfaces for RAC manually using the oifcfg command as root, as in the following example (from any node):

/bin # ./oifcfg setif -global eth0/<public subnet>:public
/bin # ./oifcfg setif -global eth1/<interconnect subnet>:cluster_interconnect
/bin # ./oifcfg getif 
 eth0 global public 
 eth1 global cluster_interconnect


The goal is to get the output of "oifcfg getif" to include both public and cluster_interconnect interfaces; of course, substitute the IP addresses and interface names from your own environment. To get the proper IPs in your environment, run this command:

/bin # ./oifcfg iflist
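To turn an interface address and its netmask into the subnet value that setif expects, a small arithmetic helper can be used. This is an illustrative sketch, not an Oracle tool; the example addresses are hypothetical:

```shell
#!/bin/sh
# Illustrative helper: given an interface IP and its netmask, compute
# the network (subnet) address by ANDing the octets.
subnet_for() {
    ip=$1; mask=$2
    IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
    IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
    echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}

subnet_for 192.168.1.55 255.255.255.0    # -> 192.168.1.0
subnet_for 10.10.10.21  255.255.255.0    # -> 10.10.10.0
```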



Running VIPCA:

After implementing the above workaround(s), you should be able to invoke vipca manually (as root, from the last node) and configure the VIP IPs via the GUI.

/bin # export DISPLAY=
/bin # ./vipca

Make sure the DISPLAY environment variable is set correctly and that you can open xclock or other X applications from that shell.

Once vipca completes, all the Clusterware resources (VIP, GSD, ONS) will be started; there is no need to re-run root.sh, since vipca is the last step in root.sh.

To verify the Clusterware resources are running correctly:

/bin # ./crs_stat -t
Name           Type        Target State  Host
ora....ux1.gsd application ONLINE ONLINE raclinux1
ora....ux1.ons application ONLINE ONLINE raclinux1
ora....ux1.vip application ONLINE ONLINE raclinux1
ora....ux2.gsd application ONLINE ONLINE raclinux2
ora....ux2.ons application ONLINE ONLINE raclinux2
ora....ux2.vip application ONLINE ONLINE raclinux2
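The check above can be scripted. The sketch below (assuming the column layout shown) scans crs_stat-style output and flags any resource whose Target/State pair is not ONLINE/ONLINE; here the output is simulated with a here-document, but on a real cluster you would pipe the actual crs_stat -t output into it:

```shell
#!/bin/sh
# Sketch: exit non-zero and print the offending lines if any resource is
# not Target=ONLINE State=ONLINE (columns 3 and 4 in crs_stat -t output).
check_online() {
    awk 'NR > 1 && ($3 != "ONLINE" || $4 != "ONLINE") { bad = 1; print "NOT ONLINE:", $0 }
         END { exit bad }'
}

check_online <<'EOF'
Name           Type        Target State  Host
ora....ux1.gsd application ONLINE ONLINE raclinux1
ora....ux1.ons application ONLINE ONLINE raclinux1
ora....ux2.gsd application ONLINE ONLINE raclinux2
EOF
echo "exit=$?"    # exit=0 means all listed resources are ONLINE
```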

You may now proceed with the rest of the RAC installation.


From "ITPUB Blog". When reposting, please credit the source.


