Installing Oracle 10g RAC on a Virtual Machine

Original Oracle article by yushangfu, published 2011-08-10 16:43:16

Preface

The goal of this article is to let readers who have never installed 10g RAC follow along and complete a full RAC installation.

The virtual machine software is VMware Server 2.0.2; the Server edition is mandatory.

The Red Hat release is rhel-server-5.5-i386-dvd.

The Server edition is required because the Workstation edition does not support shared disks. The disk-sharing scheme used here is: add a disk on the first node, then, when adding a disk on the second node, point it at the disk already added on the first node, so that both nodes share the same disk.

I. Installing the Operating System

(The original post walked through the VMware wizard and OS installer with a series of numbered screenshots, which are not reproduced here; only the annotated steps survive.)

3. This laptop has 2 GB of RAM and runs both VMs at once, so each VM is given 700 MB of memory to keep things from getting too sluggish.

13. Add a second network card to the VM. Then start the operating system installation.

Stepping through the installer like this, the OS installation completes.

II. System Parameter Configuration and Package Installation

1. The parameters can be configured with the following script.

(Create a configure.sh file under /opt, paste the script below into it with the vi editor, then run chmod a+x configure.sh and execute it; once the script finishes, the parameter configuration is done.)

Note: RAC relies on the nodes regularly exchanging packets to confirm that the other nodes are alive, and a virtual machine is not a real machine, so there will be periods of instability. You can see this for yourself by pinging the other host while CRS is installing (sometimes the round-trip time is very long, sometimes very short). To avoid install failures, the portion of the script that was marked in red in the original post (the highlighting is lost in this copy) can be removed and added back after CRS and the database are installed, if you wish. In my own install I never configured those parameters even after CRS and the database were done; for practice purposes it makes no difference.

#!/bin/bash
cp /etc/sysctl.conf /etc/sysctl.conf_bak
echo "kernel.shmall = 2097152" >> /etc/sysctl.conf
echo "kernel.shmmax = 2147483648" >> /etc/sysctl.conf
echo "kernel.shmmni = 4096" >> /etc/sysctl.conf
echo "kernel.sem = 250 32000 100 128" >> /etc/sysctl.conf
echo "fs.file-max = 65536" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 1024 65000" >> /etc/sysctl.conf
echo "net.core.rmem_default = 1048576" >> /etc/sysctl.conf
echo "net.core.rmem_max = 1048576" >> /etc/sysctl.conf
echo "net.core.wmem_default = 262144" >> /etc/sysctl.conf
echo "net.core.wmem_max = 262144" >> /etc/sysctl.conf
/sbin/sysctl -p
echo ">>>>>>>>>>sysctl.conf updated successfully<<<<<<<<<<"

cp /etc/security/limits.conf /etc/security/limits.conf_bak
echo "oracle soft nproc 2047" >> /etc/security/limits.conf
echo "oracle hard nproc 16384" >> /etc/security/limits.conf
echo "oracle soft nofile 1024" >> /etc/security/limits.conf
echo "oracle hard nofile 65536" >> /etc/security/limits.conf

tail -n 4 /etc/security/limits.conf
echo ">>>>>>>>>>limits.conf updated successfully<<<<<<<<<<"

cp /etc/pam.d/login /etc/pam.d/login_bak
echo "session required /lib/security/pam_limits.so" >> /etc/pam.d/login

tail -n 1 /etc/pam.d/login
echo ">>>>>>>>>>login updated successfully<<<<<<<<<<"

cp /etc/profile /etc/profile_bak
# use single quotes so $USER and $SHELL stay literal in /etc/profile
# (the original's nested double quotes would break this echo)
echo 'if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi' >> /etc/profile

tail -n8 /etc/profile
echo ">>>>>>>>>>profile updated successfully<<<<<<<<<<"

insmod /lib/modules/$(uname -r)/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180
echo "insmod /lib/modules/$(uname -r)/kernel/drivers/char/hangcheck-timer.ko hangcheck_tick=30 hangcheck_margin=180" >> /etc/rc.d/rc.local

echo ">>>>>>>>>>rc.local updated successfully<<<<<<<<<<"

groupadd oinstall -g 201
groupadd dba -g 202
groupadd oper -g 203
useradd oracle -u 200 -g oinstall -G dba,oper

echo ">>>>>>>>>>user and group added successfully<<<<<<<<<<"

mkdir -p /u01/crs/oracle/product/10.2.0/crs
mkdir -p /u01/app/oracle/product/10.2.0/db_1
chown -R oracle:oinstall /u01

echo ">>>>>>>>>>directory added successfully<<<<<<<<<<"

# single quotes keep the $VARs literal; the redirection must be part of the
# same command (the original put ">> .bash_profile" on a line of its own)
echo 'export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/u01/crs/oracle/product/10.2.0/crs
export ORACLE_SID=orcl1
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/sbin' >> /home/oracle/.bash_profile

tail -n 5 /home/oracle/.bash_profile

echo ">>>>>>>>>>.bash_profile updated successfully<<<<<<<<<<"


Since this script is fairly crude, a few manual steps remain after running it: edit /etc/sysctl.conf and comment out the kernel.shmall and kernel.shmmax entries that the file originally contained (keeping the values appended by the script), set a password for the oracle user, and trim /home/oracle/.bash_profile so that its only non-comment lines are the export statements shown in the script above (marked in blue in the original post).
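The sysctl cleanup can be scripted with GNU sed's first-occurrence addressing. A sketch, demonstrated on a scratch file rather than the real /etc/sysctl.conf (the "stock" values in the scratch file are illustrative):

```shell
FILE=$(mktemp)
cat > "$FILE" <<'EOF'
kernel.shmall = 268435456
kernel.shmmax = 4294967295
kernel.shmall = 2097152
kernel.shmmax = 2147483648
EOF
# 0,/re/ limits the substitution to the FIRST matching line only (GNU sed),
# i.e. the distribution's original entry, leaving the appended value active
sed -i '0,/^kernel\.shmall/s//#&/' "$FILE"
sed -i '0,/^kernel\.shmmax/s//#&/' "$FILE"
grep '^kernel\.shm' "$FILE"   # only the appended, still-active lines remain
```

Point FILE at /etc/sysctl.conf to do it for real, then rerun /sbin/sysctl -p.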

2. Install the packages

rpm -ivh elfutils-libelf-devel-static-0.137-3.el5.i386.rpm --nodeps
rpm -ivh elfutils-libelf-devel-0.137-3.el5.i386.rpm --nodeps
rpm -ivh kernel-headers-2.6.18-194.el5.i386.rpm
rpm -ivh glibc-headers-2.5-49.i386.rpm
rpm -ivh glibc-devel-2.5-49.i386.rpm
rpm -ivh libgomp-4.4.0-6.el5.i386.rpm
rpm -ivh gcc-4.1.2-48.el5.i386.rpm
rpm -ivh libstdc++-devel-4.1.2-48.el5.i386.rpm
rpm -ivh gcc-c++-4.1.2-48.el5.i386.rpm
rpm -ivh libaio-devel-0.3.106-5.i386.rpm
rpm -ivh numactl-devel-0.9.8-11.el5.i386.rpm
rpm -ivh sysstat-7.0.2-3.el5.i386.rpm
rpm -ivh libXp-1.0.0-8.1.el5.i386.rpm

Set the IP addresses now (don't rush ahead; get them set before cloning out RAC2). NIC one gets 192.168.1.123 and NIC two gets 10.0.0.154 (adjust to your own environment).
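On RHEL 5 this is typically done through the network-scripts files; a sketch for rac1 (the device names, netmask, and file layout are standard RHEL 5 conventions; adjust the addresses to your environment):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0   (public)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.123
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1   (private interconnect)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.154
NETMASK=255.255.255.0
ONBOOT=yes
```

Follow with `service network restart` to apply the settings.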

III. Physical Configuration Before Installing CRS

Most of the following steps are performed on both nodes; it is noted where they are not. Shut down the system you just configured and copy it into a rac2 folder; that becomes the second node. Now edit the virtual machine's configuration file: open the rac1.vmx file inside the rac2 folder with a text editor and change the display name to displayName = "rac2". You will of course also need to boot the system and change its hostname and IP, and don't forget to edit the oracle user's .bash_profile, changing ORACLE_SID to orcl2.

Then import rac2 into VMware.

Power on the rac2 VM first and change its hostname and IP: NIC one on rac2 gets 192.168.1.127, NIC two gets 10.0.0.155.

Make the hosts file on both VMs read as follows:

127.0.0.1 localhost.localdomain localhost
192.168.1.123 rac1
192.168.1.127 rac2
10.0.0.154 rac1-priv
10.0.0.155 rac2-priv
192.168.1.201 rac1-vip
192.168.1.202 rac2-vip
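A quick sanity check that every name RAC will need is present (demonstrated here against a scratch copy of the file above; point it at /etc/hosts on each node for real):

```shell
H=$(mktemp)
cat > "$H" <<'EOF'
127.0.0.1 localhost.localdomain localhost
192.168.1.123 rac1
192.168.1.127 rac2
10.0.0.154 rac1-priv
10.0.0.155 rac2-priv
192.168.1.201 rac1-vip
192.168.1.202 rac2-vip
EOF
# public, private, and VIP names must all resolve via the hosts file
for n in rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip; do
  grep -qw "$n" "$H" || echo "MISSING: $n"
done
echo "hosts check done"
```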

Then shut down rac2 as well, so that both rac1 and rac2 are powered off, and add the shared disk.

First add a disk on RAC1:

1. (screenshot)

2. You can create a new folder at this point and change the path in the dialog so that the newly added disk lives in its own folder. To improve disk performance, check "Allocate all disk space now", and for Disk Mode check "Independent" (the original screenshot explained this).

3. Under "Virtual Device Node", select the mode shown in the original screenshot. The reason is explained in VMware's own documentation, quoted below:

(SCSI reservation must be enabled in a virtual machine before you can share its disks. To enable it, make sure the virtual machine is powered off. Open the configuration file (.vmx file on Windows hosts, .cfg file on Linux hosts) in a text editor and add the line scsi[n].sharedBus = "virtual" anywhere in the file, where [n] is the SCSI bus being shared.

VMware recommends you set up the shared disks on their own SCSI bus, which is a different bus than the one the guest operating system uses. For example, if your guest operating system is on scsi0:0, you should set up disks to share on scsi1 bus.

For example, to enable SCSI reservation for devices on the scsi1 bus, add the following line to the virtual machine's configuration file:

scsi1.sharedBus = "virtual"

This gives the whole bus the ability to be shared. However, if you would rather not share the whole bus, you can selectively enable SCSI reservation for a specific SCSI disk on the shared bus. This prevents the locking of this specific disk. Add the following line to the configuration file:

scsi1:1.shared = "true"

If SCSI reservation is enabled (that is, scsi1.sharedBus is set to "virtual"), then this setting is ignored.

In addition to enabling SCSI reservation on the bus, you need to allow virtual machines to access the shared disk concurrently. Add the following line to the virtual machine's configuration file:

disk.locking = "false"

This setting permits multiple virtual machines to access a disk concurrently. Be careful though; if any virtual machine not configured for SCSI reservation tries to access this disk concurrently, then the shared disk is vulnerable to corruption or data loss.

When SCSI reservation is enabled, a reservation lock file that contains the shared state of the reservation for the given disk is created. The name of this file consists of the filename of the SCSI disk appended with .RESLCK.

For example, if the disk scsi1:0.filename is defined in the configuration file as

scsi1:0.fileName = "//vmSCSI.pln"

then the reservation lock file for this disk is given the default name

"//vmSCSI.pln.RESLCK"

However, you can provide your own lock file name. Add a definition for scsi1:0.reslckname to the configuration file. For example, if

scsi1:0.reslckname = "/tmp/scsi1-0.reslock"

is added to the configuration file, it overrides the default lock file name.


Selecting the Disk

Once SCSI reservation is enabled for a disk — that is, the scsi[n].sharedBus = "virtual" and disk.locking = "false" settings are added to the configuration file for each virtual machine wanting to share this disk, you need to point to this disk for each virtual machine that wants to access it.


Sharing a Disk on the scsi0 Bus

VMware does not recommend sharing a disk on SCSI bus 0. )

4. Click Next and the disk is added.

With RAC1's disk in place, add RAC2's:

1. Note that this time you select "use an existing virtual disk".

2. Browse to the disk just added on RAC1; leave the other options the same as before.

3. Click Next and the disk is added.

Finally, edit rac1.vmx and rac2.vmx: in each file add the line disk.locking = "false" and change the bus-sharing parameter to scsi1.sharedBus = "virtual".
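Putting the quoted VMware note together, the disk-sharing stanza in each node's .vmx ends up looking roughly like this (the device node scsi1:0 and the vmdk path are assumptions based on the walkthrough, not exact values from the original post):

```
disk.locking = "false"
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "sharedisk.vmdk"
```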

At this point, the physical configuration is complete.

IV. Parameter Configuration Before Installing CRS

1. Configure SSH

Run the following on both nodes as the oracle user:
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ /usr/bin/ssh-keygen -t rsa
(in this example, press Enter at every prompt to accept the defaults)
$ /usr/bin/ssh-keygen -t dsa
(in this example, press Enter at every prompt to accept the defaults)

Run the following on one node only:
$ cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
$ ssh oracle@rac2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
(the redirection is evaluated locally, so this appends rac2's public keys to rac1's authorized_keys)
$ chmod 644 ~/.ssh/authorized_keys
$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

Once configured, run the following on both rac1 and rac2:
ssh rac2
ssh rac2-priv
ssh rac1
ssh rac1-priv
Answer yes wherever prompted. Only after these commands have been run is user equivalence fully established; otherwise the CRS installer will fail later at its equivalence check.

2. NTP configuration

Node 1 acts as the NTP server; node 2 is an NTP client and needs no changes for now.

In /etc/ntp.conf on node 1, add:

restrict 192.168.1.123          (use your own IP here)
restrict 192.168.1.0 mask 255.255.255.0 nomodify
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
driftfile /var/lib/ntp/drift
broadcastdelay 0.008

#authenticate no
#keys /etc/ntp/keys             (this line must be commented out)

Then run:
/etc/init.d/ntpd restart
chkconfig ntpd on

Wait a few minutes, then on node 2 run ntpdate against node 1 (192.168.1.123):
[root@rac2 /]# ntpdate rac1
10 Jun 14:58:05 ntpdate[14667]: step time server 192.168.1.123 offset 6.194315 sec

Remember that after restarting the NTP server on rac1, it takes roughly 3-5 minutes before clients can synchronize.

Set up a recurring sync job on node 2:
# crontab -e
10 * * * * ntpdate rac1

3. Install the ASM software

First, partition the disk added earlier into four partitions: two for the OCR and voting disk, and two for the ASM disk groups. Partitioning on one node is enough; once done, run partprobe on the second node and it will see the partitioned disk as well.
[root@rac1 ~]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 274 2096482+ 82 Linux swap / Solaris
/dev/sda3 275 1958 13526730 83 Linux

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 250 2008093+ 83 Linux
/dev/sdb2 251 500 2008125 83 Linux
/dev/sdb3 501 750 2008125 83 Linux
/dev/sdb4 751 1044 2361555 83 Linux

Install the ASM packages (on both nodes):
[root@rac1 opt]# ls
oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm
oracleasmlib-2.0.4-1.el5.i386.rpm
oracleasm-support-2.1.4-1.el5.i386.rpm
They can be downloaded from
http://www.oracle.com/technetwork/topics/linux/downloads/rhel5-084877.html

[root@rac1 opt]# rpm -ivh *.rpm
warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
warning: oracleasm-support-2.1.4-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID b38a8516
Preparing... ########################################### [100%]
1:oracleasm-support ########################################### [ 33%]
2:oracleasm-2.6.18-194.el########################################### [ 67%]
3:oracleasmlib ########################################### [100%]
[root@rac1 opt]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]

On the first node:
[root@rac1 opt]# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb3
Marking disk "VOL1" as an ASM disk: [ OK ]
[root@rac1 opt]# /etc/init.d/oracleasm createdisk VOL2 /dev/sdb4
Marking disk "VOL2" as an ASM disk: [ OK ]
On the second node:
[root@rac2 opt]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac2 opt]# /etc/init.d/oracleasm listdisks
VOL1
VOL2

4. Configure raw devices (on both nodes)

Bind the OCR and voting-disk partitions to raw devices:
raw /dev/raw/raw1 /dev/sdb1
raw /dev/raw/raw2 /dev/sdb2
chown root:oinstall /dev/raw/raw1
chmod 640 /dev/raw/raw1
chown oracle:dba /dev/raw/raw2
chmod 660 /dev/raw/raw2

To make the bindings persist across reboots:
vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdb2", RUN+="/bin/raw /dev/raw/raw2 %N"
KERNEL=="raw1",OWNER="root",GROUP="oinstall",MODE="0640"
KERNEL=="raw2",OWNER="oracle",GROUP="dba",MODE="0660"

That covers essentially all of the parameter configuration.
Next comes the installation itself. Before installing, remember that the 10g installer does not recognize RHEL 5, so either change the reported system version to 4 (by editing /etc/redhat-release) or skip the installer's OS check.
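A commonly used workaround (the exact string is illustrative; any 4.x release line works, and you should back up the original /etc/redhat-release first so it can be restored after the install) is to make /etc/redhat-release read:

```
Red Hat Enterprise Linux AS release 4 (Nahant Update 5)
```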
V. The Installation Proper

(The original screenshots for these steps are not reproduced; only the annotated steps survive.)

2. At this step, manually change the installation path to the CRS home.

3. Add the rac2 node.

5. Change eth0 to public.

9. "remote operations in progress" means the installer is copying data to the second node.

11. The installer prompts you to run scripts as root.

Run those scripts on both nodes in turn:

rac1:
[root@rac1 crs]# ./root.sh
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
CSS is inactive on these nodes.
rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.


rac2:
[root@rac2 crs]# ./root.sh
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/crs/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/crs/oracle/product' is not owned by root
WARNING: directory '/u01/crs/oracle' is not owned by root
WARNING: directory '/u01/crs' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
rac1
rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/u01/crs/oracle/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

At this point, the script errors out at the very end. There are two ways to deal with it.

First solution: ignore the error. Click OK on node 1 and keep going; the install will then fail at the Oracle Cluster Verification Utility step (shown in a screenshot in the original post). Just click Cancel there.

Then install the 10.2.0.4 patch set; once it is applied, run ./vipca on the second node and the problem is gone — crs_stat -t shows all the CRS services up.

Second solution:
1. The public NIC must have a gateway configured, and it must be a real, existing IP.
2. On both nodes, edit the vipca file under $ORA_CRS_HOME/bin and $ORACLE_HOME/bin: immediately after the fi that closes the export LD_ASSUME_KERNEL block, add the line unset LD_ASSUME_KERNEL.
3. On the second node run:
/bin# ./oifcfg setif -global eth0/192.168.1.0:public
/bin# ./oifcfg setif -global eth1/10.0.0.0:cluster_interconnect
/bin# ./oifcfg getif      -- if this prints nothing, run the two commands above first
eth0 192.168.1.0 global public
eth1 10.0.0.0 global cluster_interconnect

Then run vipca as root on the second node and the VIPs can be configured normally. If the Oracle Cluster Verification Utility step still fails at the end, clicking Retry should make it succeed.
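The vipca edit in step 2 can be scripted. The sketch below works on a scratch reproduction of the LD_ASSUME_KERNEL block (the block's exact shape here is an assumption based on the standard shipped 10gR2 scripts); apply the same sed to $ORA_CRS_HOME/bin/vipca for real:

```shell
F=$(mktemp)
cat > "$F" <<'EOF'
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
EOF
# insert "unset LD_ASSUME_KERNEL" right after the fi that closes the block
sed -i '/export LD_ASSUME_KERNEL/,/^fi$/ s/^fi$/fi\nunset LD_ASSUME_KERNEL/' "$F"
cat "$F"
```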

Following the second solution, the CRS installation completes:

(screenshots omitted)

5. Click Retry in the dialog shown.

That completes the CRS installation. What remains is the database installation itself, which most readers will have done many times already, so it is not repeated here.

Thank you for reading.

From the ITPUB blog: http://blog.itpub.net/23915995/viewspace-704611/. Please credit the source when reposting.
