
Installing a RAC 10.2.0.2 Environment on Solaris 8 (Part 3)

Original post by yangtingkun, 2007-03-17

I have recently been testing the installation of an Oracle 10gR2 RAC environment on Solaris. I ran into quite a few problems, but the installation eventually succeeded. This series briefly summarizes the installation steps, the problems encountered, and how they were solved.

This article covers the installation of the Oracle database software.

For the operating system preparation, see Part 1: http://yangtingkun.itpub.net/post/468/271797

For the Oracle Clusterware installation, see Part 2: http://yangtingkun.itpub.net/post/468/271812


With the Clusterware software installed in the previous article, we can now move on to the database.

First, verify that the system meets the requirements for a database installation:

# su - oracle
Sun Microsystems Inc. SunOS 5.8 Generic Patch October 2001
$ cd /data/cluster_disk/cluvfy
$ ./runcluvfy.sh stage -pre dbinst -n racnode1,racnode2

Performing pre-checks for database installation

Checking node reachability...
Node reachability check passed from node "racnode1".


Checking user equivalence...
User equivalence check passed for user "oracle".

Checking administrative privileges...
User existence check passed for "oracle".
Group existence check passed for "oinstall".
Membership check for user "oracle" in group "oinstall" [as Primary] passed.
Group existence check passed for "dba".
Membership check for user "oracle" in group "dba" passed.

Administrative privileges check passed.

Checking node connectivity...

Node connectivity check passed for subnet "172.25.0.0" with node(s) racnode2,racnode1.
Node connectivity check passed for subnet "172.25.198.0" with node(s) racnode2,racnode1.
Node connectivity check passed for subnet "10.0.0.0" with node(s) racnode2,racnode1.

Suitable interfaces for the private interconnect on subnet "172.25.0.0":
racnode2 ce0:172.25.198.223
racnode1 ce0:172.25.198.222

Suitable interfaces for the private interconnect on subnet "172.25.198.0":
racnode2 ce0:172.25.198.225
racnode1 ce0:172.25.198.224

Suitable interfaces for the private interconnect on subnet "10.0.0.0":
racnode2 ce1:10.0.0.2
racnode1 ce1:10.0.0.1

ERROR:
Could not find a suitable set of interfaces for VIPs.

Node connectivity check failed.


Checking system requirements for 'database'...
Total memory check passed.
Free disk space check passed.
Swap space check failed.
Check failed on nodes:
racnode2,racnode1
System architecture check passed.
Operating system version check passed.
Operating system patch check failed for "112760-05".
Check failed on nodes:
racnode2,racnode1
Operating system patch check passed for "108993-45".
Operating system patch check failed for "113800-06".
Check failed on nodes:
racnode2,racnode1
Operating system patch check failed for "112763-13".
Check failed on nodes:
racnode2,racnode1
Package existence check passed for "SUNWarc".
Package existence check passed for "SUNWbtool".
Package existence check passed for "SUNWhea".
Package existence check passed for "SUNWlibm".
Package existence check passed for "SUNWlibms".
Package existence check passed for "SUNWsprot".
Package existence check passed for "SUNWsprox".
Package existence check passed for "SUNWtoo".
Package existence check passed for "SUNWi1of".
Package existence check passed for "SUNWi1cs".
Package existence check passed for "SUNWi15cs".
Package existence check passed for "SUNWxwfnt".
Package existence check passed for "SUNWlibC".
Package existence check failed for "SUNWscucm:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWudlmr:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWudlm:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "ORCLudlm:Dev_Release_06/11/04,_64bit_3.3.4.8_reentrant".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWscr:3.1".
Check failed on nodes:
racnode2,racnode1
Package existence check failed for "SUNWscu:3.1".
Check failed on nodes:
racnode2,racnode1
Kernel parameter check failed for "SEMMNI".
Check failed on nodes:
racnode2
Kernel parameter check failed for "SEMMNS".
Check failed on nodes:
racnode2
Kernel parameter check failed for "SEMMSL".
Check failed on nodes:
racnode2
Kernel parameter check failed for "SEMVMX".
Check failed on nodes:
racnode2
Kernel parameter check passed for "SHMMAX".
Kernel parameter check passed for "SHMMIN".
Kernel parameter check passed for "SHMMNI".
Kernel parameter check passed for "SHMSEG".
Group existence check passed for "dba".
Group existence check passed for "oinstall".
User existence check passed for "oracle".
User existence check passed for "nobody".

System requirement failed for 'database'

Checking CRS integrity...

Checking daemon liveness...
Liveness check passed for "CRS daemon".

Checking daemon liveness...
Liveness check passed for "CSS daemon".

Checking daemon liveness...
Liveness check passed for "EVM daemon".

Checking CRS health...
CRS health check passed.

CRS integrity check passed.

Checking node application existence...


Checking existence of VIP node application (required)
Check passed.

Checking existence of ONS node application (optional)
Check passed.

Checking existence of GSD node application (optional)
Check passed.


Pre-check for database installation was unsuccessful on all the nodes.

The previous article already explained the cause of the VIP error: the VIPs are in fact bound, but Oracle still fails to recognize them. The swap-space failure that follows can be ignored; as verified in the first article, the system has sufficient swap space. Next, four operating system patches are reported as missing, but all four are in fact installed, at revisions higher than Oracle requires; the checker apparently does not recognize the newer revisions. The package checks that fail are all Sun Cluster related packages; since this RAC installation uses Oracle Clusterware rather than Sun Cluster, those errors can also be ignored. The final failure concerns kernel parameter settings. These settings are identical on racnode1 and racnode2, yet only racnode2 is flagged, which suggests a bug in Oracle's check program.
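The patch warnings can be confirmed harmless by comparing revisions by hand. A minimal sketch of the comparison logic (an assumption, not part of cluvfy; on Solaris the installed revisions would come from `showrev -p`):

```shell
# patch_ok INSTALLED REQUIRED
# Succeeds when the installed Solaris patch revision is at least the
# revision cluvfy expects (e.g. 112760-07 satisfies 112760-05), which is
# why the "missing patch" failures above can be ignored.
patch_ok() {
    installed_rev=${1#*-}   # 112760-07 -> 07
    required_rev=${2#*-}    # 112760-05 -> 05
    [ "$installed_rev" -ge "$required_rev" ]
}

patch_ok 112760-07 112760-05 && echo "112760 ok"
```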

Since every error above can safely be ignored, the database installation can begin. Unpack the Oracle installation media with cpio -idmv < 10gr2_db_sol.cpio.

Before installing, grant ownership of all the configured raw devices to the oracle user, so that the oracle user has enough space to create the database. On racnode1:

# chown oracle:oinstall /dev/rdsk/c2t0d0s1
# chown oracle:oinstall /dev/rdsk/c2t0d0s3
# chown oracle:oinstall /dev/rdsk/c2t0d0s4
# chown oracle:oinstall /dev/rdsk/c2t0d0s5
# chown oracle:oinstall /dev/rdsk/c2t0d0s6
# chown oracle:oinstall /dev/rdsk/c2t0d1s1
.
.
.
# chown oracle:oinstall /dev/rdsk/c2t0d5s5
# chown oracle:oinstall /dev/rdsk/c2t0d5s6
# chown oracle:oinstall /dev/rdsk/c2t0d5s7

On racnode2:

# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad0s1
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad0s3
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad0s4
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad0s5
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad0s6
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad1s1
.
.
.
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad5s6
# chown oracle:oinstall /dev/rdsk/c2t500601603022E66Ad5s7
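The long runs of per-slice chown commands above can be generated in a loop. A sketch with a short illustrative subset of device names (the echo makes this a dry run; drop it and run as root on each node, adjusting the list to that node's device names, to actually change ownership):

```shell
# Dry run: print one chown per raw-device slice.  Device names are a
# sample only; racnode1 and racnode2 use different controller names.
for dev in /dev/rdsk/c2t0d0s1 /dev/rdsk/c2t0d0s3 /dev/rdsk/c2t0d0s4; do
    echo chown oracle:oinstall "$dev"
done
```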

Now the installation itself can begin. Start Xmanager, log in to racnode1, and run:

# xhost +
access control disabled, clients can connect from any host
# su - oracle
Sun Microsystems Inc. SunOS 5.8 Generic Patch October 2001
$ cd /data/disk1
$ ./runInstaller

After the graphical installer starts, click Next. Select Enterprise Edition and the Simplified Chinese language, then click Next.

Set the home name to OraDb10g_home1. Because the ORACLE_HOME environment variable is already set, Oracle automatically fills in /data/oracle/product/10.2/database as the path; simply click Next.
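The pre-filled path comes from the oracle user's environment. A sketch of the assumed profile settings (paths from this article; the exact profile contents are an assumption) that must be exported before runInstaller starts:

```shell
# Assumed oracle-user environment: with ORACLE_HOME exported before the
# OUI launches, the installer pre-fills the installation path.
ORACLE_BASE=/data/oracle
ORACLE_HOME=$ORACLE_BASE/product/10.2/database
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_BASE ORACLE_HOME PATH
```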

The cluster installation screen appears next. Choose cluster installation, select racnode2 as an additional node, and click Next.

The installer then runs its prerequisite checks for a RAC database installation. Once the checks succeed, click Next.

Three options follow: create a database, configure Automatic Storage Management (ASM), or install the software only. To simplify the installation, ASM is used here, so choose the second option, configure ASM, and enter the SYS password for the ASM instance twice. Click Next.

Next comes the ASM disk-group configuration. The default Disk Group Name is DATA; here it is changed to DISK. Since the shared disks are already configured with RAID 0, External is chosen for the redundancy option.

Select all available disks and click Next.

When the summary page appears, click Install to start the installation.

When the installation finishes, Oracle automatically launches the Oracle Net Configuration Assistant and the Oracle Database Configuration Assistant.

After the assistants complete, run the root script as root on both nodes:

# . /data/oracle/product/10.2/database/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /data/oracle/product/10.2/database

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Creating /usr/local/bin directory...
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /var/opt/oracle/oratab file...
Entries will be added to the /var/opt/oracle/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

Click OK, then Exit. The software installation and the ASM configuration are now complete.
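A quick post-install sanity check can be scripted around `crs_stat -t` from the clusterware home. The sample output below is illustrative only (resource names follow the 10gR2 ora.&lt;node&gt;.ASM&lt;n&gt;.asm convention, and the clusterware path is an assumption); the awk filter flags any resource whose target or state is not ONLINE:

```shell
# On a live system, replace the sample variable with real output:
#   sample=$($CRS_HOME/bin/crs_stat -t)        # path assumed
# Columns: name, type, target, state, host.
sample='ora.racnode1.ASM1.asm application ONLINE ONLINE racnode1
ora.racnode2.ASM2.asm application ONLINE ONLINE racnode2'

# Print any resource whose target ($3) or state ($4) is not ONLINE.
offline=$(echo "$sample" | awk '$3 != "ONLINE" || $4 != "ONLINE"')
[ -z "$offline" ] && echo "all listed resources ONLINE"
```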

Source: ITPUB Blog, http://blog.itpub.net/4227/viewspace-69205/. Please credit the source when reposting.
