
Building Oracle 10g RAC on Linux AS 4.0 (VMware + raw devices)

Original post by andyxu, 2011-11-10 08:36:21
A colleague wrote this document; I am recording it here for future reference:



Keywords: Linux AS 4.0, Oracle 10.2.0.4.0 RAC, VMware, raw devices


1. For the preliminary setup, refer to other documents.

Install the required Linux packages. Only the few packages missing from this install were added here; the list may differ with other installation types. Refer to other documents for the full procedure.

[root@localhost packages]# rpm -ivh  glibc-kernheaders-2.4-9.1.100.EL.i386.rpm 
warning: glibc-kernheaders-2.4-9.1.100.EL.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:glibc-kernheaders      ########################################### [100%]
[root@localhost packages]# rpm -ivh glibc-headers-2.3.4-2.36.i386.rpm 
warning: glibc-headers-2.3.4-2.36.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:glibc-headers          ########################################### [100%]
[root@localhost packages]# rpm -ivh glibc-devel-2.3.4-2.36.i386.rpm 
warning: glibc-devel-2.3.4-2.36.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:glibc-devel            ########################################### [100%]
[root@localhost packages]#  rpm -ivh compat-gcc-32-3.2.3-47.3.i386.rpm  
warning: compat-gcc-32-3.2.3-47.3.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:compat-gcc-32          ########################################### [100%]
[root@localhost packages]# 

[root@localhost packages]# rpm -ivh compat-libstdc++-33-3.2.3-47.3.i386.rpm 
warning: compat-libstdc++-33-3.2.3-47.3.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:compat-libstdc++-33    ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-gcc-32-c++-3.2.3-47.3.i386.rpm 
warning: compat-gcc-32-c++-3.2.3-47.3.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:compat-gcc-32-c++      ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-libgcc-296-2.96-132.7.2.i386.rpm 
warning: compat-libgcc-296-2.96-132.7.2.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:compat-libgcc-296      ########################################### [100%]
[root@localhost packages]# rpm -ivh compat-libstdc++-296-2.96-132.7.2.i386.rpm 
warning: compat-libstdc++-296-2.96-132.7.2.i386.rpm: V3 DSA signature: NOKEY, key ID db42a60e
Preparing...                ########################################### [100%]
   1:compat-libstdc++-296   ########################################### [100%]






2. Change the hostname (on every node).

[root@localhost sysconfig]# vi network

NETWORKING=yes
HOSTNAME=rac01
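
The network file only takes effect at the next boot; the running system's name can be switched immediately with the hostname command (a small convenience, not required by the installer):

```shell
# Apply the new hostname to the running system; /etc/sysconfig/network
# covers subsequent reboots. Run as root on each node with its own name.
hostname rac01
hostname          # confirm the running system now reports rac01
```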



3. Create the Oracle group and user (on every node):

[root@localhost etc]# groupadd dba       
[root@localhost etc]# groupadd oper 
[root@localhost etc]# useradd -g  dba -G oper oracle  
[root@localhost etc]# passwd oracle
Changing password for user oracle.
New UNIX password: 
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password: 
passwd: all authentication tokens updated successfully.


Create the directories (on every node):

[root@localhost /]# mkdir -p  /u01/product 
[root@localhost /]# chown oracle.dba  /u01



4. Configure the kernel parameters (on every node):

[root@localhost etc]# vi sysctl.conf 

# Added by DBA for Oracle DB
kernel.shmall = 2097152
kernel.shmmax = 545259520 
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
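
The values above are only read from /etc/sysctl.conf at boot; to apply them to the running kernel and spot-check one of them (run as root on each node):

```shell
# Load the new values from /etc/sysctl.conf into the running kernel.
sysctl -p
# Spot-check: should print the shmmax value configured above (545259520).
sysctl -n kernel.shmmax
```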






5. Set the Oracle environment variables (on every node):

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:/bin:/sbin:/usr/bin:/usr/sbin
BASH_ENV=$HOME/.bashrc

export BASH_ENV PATH
unset USERNAME

# Set Oracle Environment
ORACLE_HOME=/u01/product/oracle;export ORACLE_HOME
ORACLE_SID=orcl2; export ORACLE_SID
ORACLE_OWNER=oracle;export ORACLE_OWNER
ORACLE_BASE=/u01/product;export ORACLE_BASE
ORACLE_TERM=vt100;export ORACLE_TERM
#NLS_LANG='traditional chinese_taiwan'.ZHT16BIG5;export NLS_LANG
LD_LIBRARY_PATH=$ORACLE_HOME/lib;export LD_LIBRARY_PATH

ORA_CRS_HOME=/u01/product/crs; export ORA_CRS_HOME

set -u
PS1=`hostname`'<$PWD>$';export PS1
EDITOR=/bin/vi; export EDITOR
JAVA_HOME=/usr/local/java;export JAVA_HOME
ORA_NLS33=/u01/product/oracle/ocommon/nls/admin/data;export ORA_NLS33
CLASSPATH=/u01/product/oracle/jdbc/lib/classesl11.zip:/usr/local/java; export CLASSPATH
export DISPLAY=127.0.0.1:0.0
export LD_ASSUME_KERNEL=2.6.9
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$ORA_CRS_HOME/bin:$JAVA_HOME/bin:$PATH:.;export PATH
alias ll='ls -l';
alias ls='ls --color';
alias his='history';
# alias sqlplus='rlwrap sqlplus'
# alias rman='rlwrap rman'
stty erase ^H

umask 022




6. Set up /etc/hosts (on every node):


[root@rac01 etc]# vi hosts

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1               localhost.localdomain localhost

10.161.34.111   rac01
10.1.1.1        pri01
10.161.32.151   vip01

10.161.34.112   rac02
10.1.1.2        pri02
10.161.32.152   vip02



[root@rac02 ~]# cd /etc/security/
[root@rac02 security]# vi  limits.conf 

oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
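
The limits apply at the oracle user's next login (once the pam_limits change below is in place); a quick sanity check, run as root:

```shell
# Show oracle's soft/hard file-descriptor limits and process limits;
# they should match the nofile/nproc values configured above.
su - oracle -c 'ulimit -Sn; ulimit -Hn; ulimit -Su; ulimit -Hu'
```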



[root@rac01 security]# cd /etc/pam.d/
[root@rac01 pam.d]# vi login  

Add:

session    required   pam_limits.so 


[root@rac01 etc]# vi grub.conf 
Append to the kernel boot line:
selinux=0




7. On both nodes, disable services that slow down boot.

[root@rac02 ~]# chkconfig  cups off  
[root@rac02 ~]# chkconfig  sendmail off 
[root@rac02 ~]# chkconfig  isdn  off  
[root@rac02 ~]# chkconfig  smartd off 
[root@rac02 ~]# chkconfig  iptables off  




8. Establish SSH user equivalence between the nodes (as the oracle user)


rac01$mkdir .ssh 
rac01$chmod 700 .ssh/

rac01$ssh-keygen -t rsa 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
ea:9b:ed:1e:3d:9e:c9:3c:92:6f:b2:1c:ce:d1:5e:b5 oracle@rac01


rac01$ssh-keygen -t dsa 
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
78:12:ec:f6:60:24:1a:3a:2a:63:05:67:a1:2a:10:f4 oracle@rac01




rac02$ssh-keygen -t rsa  
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
c3:30:11:b3:13:8e:c7:b7:62:87:0b:1f:e6:ef:4b:1f oracle@rac02


rac02$ssh-keygen -t dsa 
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
23:e9:02:32:f5:18:2e:9b:72:50:cf:9e:54:16:26:b9 oracle@rac02



rac01$cd .ssh/
rac01$
rac01$ssh rac01 cat /home/oracle/.ssh/id_rsa.pub >>authorized_keys 
The authenticity of host 'rac01 (10.161.34.111)' can't be established.
RSA key fingerprint is 25:a2:67:c5:a6:58:e3:78:34:0e:36:6d:a5:be:6b:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac01,10.161.34.111' (RSA) to the list of known hosts.
oracle@rac01's password: 
rac01$


rac01$ssh rac01 cat /home/oracle/.ssh/id_dsa.pub >>authorized_keys 


rac01$ssh rac02 cat /home/oracle/.ssh/id_rsa.pub >>authorized_keys 
The authenticity of host 'rac02 (10.161.34.112)' can't be established.
RSA key fingerprint is 25:a2:67:c5:a6:58:e3:78:34:0e:36:6d:a5:be:6b:a7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rac02,10.161.34.112' (RSA) to the list of known hosts.
oracle@rac02's password: 

rac01$ssh rac02 cat /home/oracle/.ssh/id_dsa.pub >>authorized_keys 
oracle@rac02's password: 

rac01$scp authorized_keys rac02:/home/oracle/.ssh/
oracle@rac02's password: 
authorized_keys                                                      100% 1648     1.6KB/s   00:00    
rac01$

rac01$chmod 600 authorized_keys 

rac02$chmod 600 authorized_keys  
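
Before moving on, confirm that equivalence works in every direction, including from each node to itself, with no password prompt (the Oracle installer relies on this). A sketch, run as oracle on rac01 and then again on rac02:

```shell
# Each command must return the remote date without prompting for a password.
# Run the same loop on both nodes; answer "yes" once per new host key.
for host in rac01 rac02; do
  ssh "$host" date
done
```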





9. Add shared disks for the raw devices:

Add Hardware wizard:
a. Virtual device node: choose SCSI 1:0. Always pick a bus different from the system
disk's; the system disk is usually SCSI(0:1), so the shared disks must go on SCSI(1:x),
SCSI(2:x), or SCSI(3:x).
b. Mode: choose Independent, and Persistent for all shared disks.
c. Choose "Allocate all disk space now".
d. Click Finish.


To share the disks between the two virtual RAC nodes, the virtual machine configuration files must also be edited:
http://xjnobadyit.blog.sohu.com/162291611.html



Shut down both virtual machines. Go to D:\VM\Linux4_TestRawDev and open Linux4_TestRawDev.vmx,
and append the following sections at the end. (Note: no line may appear twice in a .vmx file
or VMware will report an error, so do not re-add any of the lines below that already exist.)

scsi1.present = "TRUE" 
scsi1.virtualDev = "lsilogic" 
scsi1.sharedBus = "virtual" 
-- This enables the scsi1 bus, sets it to shared (virtual) mode, and sets the controller to lsilogic.


-- Then add, in order:
scsi1:0.present = "TRUE"
scsi1:0.mode = "independent-persistent" 
scsi1:0.fileName = "E:\SharedDisk\shareddisk01.vmdk"
scsi1:0.deviceType = "disk" 


-- Finally, add this section; it defines how VMware handles the shared disks and is required.
disk.locking = "false" 
diskLib.dataCacheMaxSize = "0" 
diskLib.dataCacheMaxReadAheadSize = "0" 
diskLib.DataCacheMinReadAheadSize = "0" 
diskLib.dataCachePageSize = "4096" 
diskLib.maxUnsyncedWrites = "0"


Edit Linux4_TestRawDev_rac02.vmx on node 2 and add the equivalent entries.
After saving, start the virtual machines and the newly added disk is visible.


On node 1:
[root@rac01 ~]# fdisk -l 

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           6       48163+  83  Linux
/dev/sda2               7         515     4088542+  83  Linux
/dev/sda3             516        1771    10088820   83  Linux
/dev/sda4            1772        1958     1502077+   5  Extended
/dev/sda5            1772        1958     1502046   82  Linux swap

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table


On node 2:
[root@rac02 ~]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           6       48163+  83  Linux
/dev/sda2               7         515     4088542+  83  Linux
/dev/sda3             516        1771    10088820   83  Linux
/dev/sda4            1772        1958     1502077+   5  Extended
/dev/sda5            1772        1958     1502046   82  Linux swap

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table







10. Configure the hangcheck-timer module

Locate the module:
[root@rac01 etc]# find /lib/modules -name "hangcheck-timer.ko" 
/lib/modules/2.6.9-55.EL/kernel/drivers/char/hangcheck-timer.ko
/lib/modules/2.6.9-55.ELsmp/kernel/drivers/char/hangcheck-timer.ko

Configure the module to load automatically at boot by adding the following to /etc/rc.d/rc.local:
[root@rac01 etc]# modprobe hangcheck-timer
[root@rac01 etc]# vi /etc/rc.d/rc.local 
modprobe hangcheck-timer

Configure the hangcheck-timer parameters by adding the following to /etc/modprobe.conf:
[root@rac01 etc]# vi /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180


Confirm the module loaded successfully:
[root@rac02 u01]# grep Hangcheck /var/log/messages | tail -2 

Oct 20 15:23:38 rac02 kernel: Hangcheck: starting hangcheck timer 0.9.0 (tick is 180 seconds, margin is 60 seconds).
Oct 20 15:23:38 rac02 kernel: Hangcheck: Using monotonic_clock().








11. Begin partitioning the shared disk (run on node 1). The layout before partitioning:

[root@rac02 u01]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           6       48163+  83  Linux
/dev/sda2               7         515     4088542+  83  Linux
/dev/sda3             516        1771    10088820   83  Linux
/dev/sda4            1772        1958     1502077+   5  Extended
/dev/sda5            1772        1958     1502046   82  Linux swap

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table






12. Partition /dev/sdb. Note that when a VG is built, every physical volume (PV) in the
same volume group must use the same physical extent (PE) size, much as Oracle blocks
share a single block size.


[root@rac01 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 2610.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2610, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2610, default 2610): +10240M

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (1247-2610, default 1247): 
Using default value 1247
Last cylinder or +size or +sizeM or +sizeK (1247-2610, default 2610): +10240M

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks. 



[root@rac01 ~]# fdisk -l

Disk /dev/sda: 16.1 GB, 16106127360 bytes
255 heads, 63 sectors/track, 1958 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1           6       48163+  83  Linux
/dev/sda2               7         515     4088542+  83  Linux
/dev/sda3             516        1771    10088820   83  Linux
/dev/sda4            1772        1958     1502077+   5  Extended
/dev/sda5            1772        1958     1502046   82  Linux swap

Disk /dev/sdb: 21.4 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        1246    10008463+  83  Linux
/dev/sdb2            1247        2492    10008495   83  Linux
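
Node 2 does not automatically notice the partition table written on node 1; its kernel must re-read /dev/sdb (or the node can be rebooted). A sketch using partprobe, which ships with parted on RHEL-family systems:

```shell
# On rac02: re-read the shared disk's partition table, then verify
# that sdb1 and sdb2 are now visible.
partprobe /dev/sdb
fdisk -l /dev/sdb
```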






13. Start configuring the raw devices:

Initialize the partitions as physical volumes (run on node 1):

[root@rac01 ~]# pvcreate    /dev/sdb1  /dev/sdb2  
  Physical volume "/dev/sdb1" successfully created
  Physical volume "/dev/sdb2" successfully created

[root@rac01 ~]# pvscan
  PV /dev/sdb1         lvm2 [9.54 GB]
  PV /dev/sdb2         lvm2 [9.54 GB]
  Total: 2 [19.09 GB] / in use: 0 [0   ] / in no VG: 2 [19.09 GB]

[root@rac01 ~]# pvdisplay 
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name               
  PV Size               9.54 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               8v1kbs-dPsT-X7jG-qh7p-3AzM-i12P-OhNV4U
   
  --- NEW Physical volume ---
  PV Name               /dev/sdb2
  VG Name               
  PV Size               9.54 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               KooBSy-RpVJ-JxeY-vp2B-dOLt-UALf-CUl2nN





14. Create the VG (run on node 1):

Create a volume group on top of the PVs; syntax: vgcreate vgname pvname.
[root@rac01 ~]# vgcreate  datavg01   /dev/sdb1  /dev/sdb2 
  Volume group "datavg01" successfully created

[root@rac01 etc]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "datavg01" using metadata type lvm2

[root@rac01 etc]# vgdisplay
  --- Volume group ---
  VG Name               datavg01
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  15
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                14
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.09 GB
  PE Size               4.00 MB
  Total PE              4886
  Alloc PE / Size       330 / 1.29 GB
  Free  PE / Size       4556 / 17.80 GB
  VG UUID               OeuV86-HCSQ-yFFp-IrgT-e3Dt-xRrs-P1kUWi
   





15. Create the LVs (run on node 1):

Create logical volumes inside the VG; syntax: lvcreate -n lvname -L size vgname.

The commands can be collected in a file, createlv.txt, and run as a batch:

lvcreate -n ocr2.dbf -L 120m  datavg01  
lvcreate -n votingdisk2 -L 120m  datavg01 
lvcreate -n system01.dbf -L 400m  datavg01 
lvcreate -n sysaux01.dbf -L 300m  datavg01
lvcreate -n users01.dbf -L 10m  datavg01
lvcreate -n undotbs01.dbf -L 200m  datavg01
lvcreate -n temp01.dbf -L 200m  datavg01
lvcreate -n spfileorcl.ora -L 10m  datavg01
lvcreate -n control01.ctl -L 30m  datavg01
lvcreate -n control02.ctl -L 30m  datavg01
lvcreate -n control03.ctl -L 30m  datavg01 
lvcreate -n redo01.log -L 20m  datavg01
lvcreate -n redo02.log -L 20m  datavg01
lvcreate -n redo03.log -L 20m  datavg01


[root@rac01 ~]# sh  createlv.txt  
  Logical volume "ocr.dbf" created 
  Logical volume "votingdisk" created
  Logical volume "system01.dbf" created
  Logical volume "sysaux01.dbf" created
  Rounding up size to full physical extent 12.00 MB
  Logical volume "users01.dbf" created
  Logical volume "undotbs01.dbf" created
  Logical volume "temp01.dbf" created
  Rounding up size to full physical extent 12.00 MB
  Logical volume "spfileorcl.ora" created
  Rounding up size to full physical extent 32.00 MB
  Logical volume "control01.ctl" created
  Rounding up size to full physical extent 32.00 MB
  Logical volume "control02.ctl" created
  Rounding up size to full physical extent 32.00 MB
  Logical volume "control03.ctl" created
  Logical volume "redo01.log" created
  Logical volume "redo02.log" created
  Logical volume "redo03.log" created

Note: vgdisplay shows a PE size of 4.00 MB, so the 10 MB and 30 MB requests (not multiples
of 4 MB) were rounded up to the next multiple of 4 MB (12 MB and 32 MB). To delete an LV,
use # lvremove -f /dev/datavg01/xxx.dbf

lvextend, lvreduce, and lvremove resize or remove LVs:
[root@rac01 etc]# lvremove  /dev/datavg01/data01.dbf  
Do you really want to remove active logical volume "data01.dbf"? [y/n]: y
  Logical volume "data01.dbf" successfully removed


View the LVs (run on node 1):
[root@rac01 ~]# lvscan
  ACTIVE            '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/votingdisk' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/system01.dbf' [400.00 MB] inherit
  ACTIVE            '/dev/datavg01/sysaux01.dbf' [300.00 MB] inherit
  ACTIVE            '/dev/datavg01/users01.dbf' [12.00 MB] inherit
  ACTIVE            '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
  ACTIVE            '/dev/datavg01/control01.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/control02.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/control03.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo01.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo02.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo03.log' [20.00 MB] inherit
[root@rac01 ~]# lvdisplay  


After creation, the new LV information appears under /dev/datavg01 and /dev/mapper.
Note: an LV is deleted with a command like lvremove /dev/datavg01/sysaux01.dbf

Run on node 1; at this point the other nodes cannot see the LVs:

[root@rac01 ~]# cd   /dev/mapper/ 
[root@rac01 mapper]# ls -al
total 0
drwxr-xr-x   2 root root     340 Oct 22 15:05 .
drwxr-xr-x  10 root root    7100 Oct 22 15:05 ..
crw-------   1 root root  10, 63 Oct 20 11:43 control
brw-rw----   1 root disk 253,  8 Oct 22 15:05 datavg01-control01.ctl
brw-rw----   1 root disk 253,  9 Oct 22 15:05 datavg01-control02.ctl
brw-rw----   1 root disk 253, 10 Oct 22 15:05 datavg01-control03.ctl
brw-rw----   1 root disk 253,  0 Oct 22 15:03 datavg01-ocr.dbf
brw-rw----   1 root disk 253, 11 Oct 22 15:05 datavg01-redo01.log
brw-rw----   1 root disk 253, 12 Oct 22 15:05 datavg01-redo02.log
brw-rw----   1 root disk 253, 13 Oct 22 15:05 datavg01-redo03.log
brw-rw----   1 root disk 253,  7 Oct 22 15:05 datavg01-spfileorcl.ora
brw-rw----   1 root disk 253,  3 Oct 22 15:05 datavg01-sysaux01.dbf
brw-rw----   1 root disk 253,  2 Oct 22 15:05 datavg01-system01.dbf
brw-rw----   1 root disk 253,  6 Oct 22 15:05 datavg01-temp01.dbf
brw-rw----   1 root disk 253,  5 Oct 22 15:05 datavg01-undotbs01.dbf
brw-rw----   1 root disk 253,  4 Oct 22 15:05 datavg01-users01.dbf
brw-rw----   1 root disk 253,  1 Oct 22 15:05 datavg01-votingdisk






16. Activate (or reboot) the other nodes (here, only node 2):

A reboot of the other nodes is normally unnecessary; activating the VG and LVs on them is
enough:

# vgchange -a  y  datavgxxx
# lvchange -a  y  /dev/datavg01/xxxx.dbf

The following merely demonstrates the state before activation and after a reboot:

Without a reboot (or activation), the other node cannot see the PVs, VG, or LVs.
[root@rac02 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  No volume groups found
[root@rac02 ~]# pvscan
  No matching physical volumes found 

After the reboot (the LVs on the other node are inactive):

[root@rac02 mapper]# pvscan
  PV /dev/sdb1   VG datavg01   lvm2 [9.54 GB / 8.90 GB free]
  PV /dev/sdb2   VG datavg01   lvm2 [9.54 GB / 8.90 GB free]
  Total: 2 [19.09 GB] / in use: 2 [19.09 GB] / in no VG: 0 [0   ]
[root@rac02 mapper]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "datavg01" using metadata type lvm2
[root@rac02 mapper]# lvscan
  inactive          '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
  inactive          '/dev/datavg01/votingdisk' [20.00 MB] inherit
  inactive          '/dev/datavg01/system01.dbf' [400.00 MB] inherit
  inactive          '/dev/datavg01/sysaux01.dbf' [300.00 MB] inherit
  inactive          '/dev/datavg01/users01.dbf' [12.00 MB] inherit
  inactive          '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
  inactive          '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
  inactive          '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
  inactive          '/dev/datavg01/control01.ctl' [32.00 MB] inherit
  inactive          '/dev/datavg01/control02.ctl' [32.00 MB] inherit
  inactive          '/dev/datavg01/control03.ctl' [32.00 MB] inherit
  inactive          '/dev/datavg01/redo01.log' [20.00 MB] inherit
  inactive          '/dev/datavg01/redo02.log' [20.00 MB] inherit
  inactive          '/dev/datavg01/redo03.log' [20.00 MB] inherit



Check whether the VG and LVs are active (they are active by default right after creation,
but must be re-activated after an OS reboot).

Until they are activated, the VG and LVs cannot be accessed. Activate the volume group with:
# vgchange -a y   datavg01   

The LV status on every node other than node 1 is inactive, so activate them here:
[root@rac02 dev]# vgchange -a y 
  14 logical volume(s) in volume group "datavg01" now active
[root@rac02 dev]# lvscan
  ACTIVE            '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/votingdisk' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/system01.dbf' [400.00 MB] inherit
  ACTIVE            '/dev/datavg01/sysaux01.dbf' [300.00 MB] inherit
  ACTIVE            '/dev/datavg01/users01.dbf' [12.00 MB] inherit
  ACTIVE            '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
  ACTIVE            '/dev/datavg01/control01.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/control02.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/control03.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo01.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo02.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo03.log' [20.00 MB] inherit

Note: 
vgchange -a y/n        # y activates the volume group, n deactivates it
lvchange -a y/n        # y activates a logical volume, n deactivates it




17. Create the /dev/raw directory on all nodes (a dedicated raw directory just makes management easier):

[root@rac01 ~]# cd /dev 
[root@rac01 dev]# mkdir -p  raw
[root@rac01 dev]# chmod 755 raw 




18. Bind the raw devices (on all nodes). Put the following lines into a script, boundraw.txt, and run it:

vgscan                     # scan for and display the volume groups on the system
vgchange -a y              # activate the VG
/usr/bin/raw    /dev/raw/raw1     /dev/datavg01/ocr.dbf
/usr/bin/raw    /dev/raw/raw2     /dev/datavg01/votingdisk
/usr/bin/raw    /dev/raw/raw3     /dev/datavg01/system01.dbf
/usr/bin/raw    /dev/raw/raw4     /dev/datavg01/sysaux01.dbf
/usr/bin/raw    /dev/raw/raw5     /dev/datavg01/users01.dbf
/usr/bin/raw    /dev/raw/raw6     /dev/datavg01/undotbs01.dbf
/usr/bin/raw    /dev/raw/raw7     /dev/datavg01/temp01.dbf
/usr/bin/raw    /dev/raw/raw8     /dev/datavg01/spfileorcl.ora
/usr/bin/raw    /dev/raw/raw9     /dev/datavg01/control01.ctl
/usr/bin/raw    /dev/raw/raw10    /dev/datavg01/control02.ctl
/usr/bin/raw    /dev/raw/raw11    /dev/datavg01/control03.ctl
/usr/bin/raw    /dev/raw/raw12    /dev/datavg01/redo01.log
/usr/bin/raw    /dev/raw/raw13    /dev/datavg01/redo02.log
/usr/bin/raw    /dev/raw/raw14    /dev/datavg01/redo03.log 
/usr/bin/raw    /dev/raw/raw15    /dev/datavg01/ocr2.dbf 
/usr/bin/raw    /dev/raw/raw16    /dev/datavg01/votingdisk2 



[root@rac01 ~]# sh   boundraw.txt 
  Reading all physical volumes.  This may take a while...
  Found volume group "datavg01" using metadata type lvm2
  14 logical volume(s) in volume group "datavg01" now active
/dev/raw/raw1:  bound to major 253, minor 0 
/dev/raw/raw2:  bound to major 253, minor 1
/dev/raw/raw3:  bound to major 253, minor 2
/dev/raw/raw4:  bound to major 253, minor 3
/dev/raw/raw5:  bound to major 253, minor 4
/dev/raw/raw6:  bound to major 253, minor 5
/dev/raw/raw7:  bound to major 253, minor 6
/dev/raw/raw8:  bound to major 253, minor 7
/dev/raw/raw9:  bound to major 253, minor 8
/dev/raw/raw10: bound to major 253, minor 9
/dev/raw/raw11: bound to major 253, minor 10
/dev/raw/raw12: bound to major 253, minor 11
/dev/raw/raw13: bound to major 253, minor 12
/dev/raw/raw14: bound to major 253, minor 13
..... 


Note: elsewhere you may see the binding target /dev/datavg01/control03.ctl written as
/dev/mapper/datavg01-control03.ctl; the two are connected by symlinks, so either form
works.

Note:
# raw -qa                  list the raw bindings
# raw /dev/raw/raw1  0  0  remove a binding

[root@rac01 ~]# raw -qa
/dev/raw/raw1:  bound to major 253, minor 0
/dev/raw/raw2:  bound to major 253, minor 1
/dev/raw/raw3:  bound to major 253, minor 2
/dev/raw/raw4:  bound to major 253, minor 3
/dev/raw/raw5:  bound to major 253, minor 4
/dev/raw/raw6:  bound to major 253, minor 5
/dev/raw/raw7:  bound to major 253, minor 6
/dev/raw/raw8:  bound to major 253, minor 7
/dev/raw/raw9:  bound to major 253, minor 8
/dev/raw/raw10: bound to major 253, minor 9
/dev/raw/raw11: bound to major 253, minor 10
/dev/raw/raw12: bound to major 253, minor 11
/dev/raw/raw13: bound to major 253, minor 12
/dev/raw/raw14: bound to major 253, minor 13
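
When LVs are added or removed, boundraw.txt has to be kept in sync by hand. The binding lines can also be generated from the contents of /dev/datavg01; gen_raw_bindings below is not a standard tool, just an illustrative helper sketch:

```shell
#!/bin/sh
# gen_raw_bindings DIR: print one raw-binding command per entry in DIR,
# numbering /dev/raw/rawN in the directory's sorted order.
# Caveat: regenerating after adding/removing LVs can renumber the raw
# devices, so the symlinks created in step 20 must be rebuilt to match.
gen_raw_bindings() {
  n=1
  for lv in "$1"/*; do
    printf '/usr/bin/raw    /dev/raw/raw%d    %s\n' "$n" "$lv"
    n=$((n + 1))
  done
}

# Emit the binding commands for the volume group used in this document.
gen_raw_bindings /dev/datavg01
```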







19. Change the permissions on the raw devices (run on all nodes; the commands can be batched into a script):

/bin/chmod  600  /dev/raw/raw1
/bin/chmod  600  /dev/raw/raw2 
/bin/chmod  600  /dev/raw/raw3
/bin/chmod  600  /dev/raw/raw4 
/bin/chmod  600  /dev/raw/raw5
/bin/chmod  600  /dev/raw/raw6 
/bin/chmod  600  /dev/raw/raw7
/bin/chmod  600  /dev/raw/raw8 
/bin/chmod  600  /dev/raw/raw9
/bin/chmod  600  /dev/raw/raw10 
/bin/chmod  600  /dev/raw/raw11
/bin/chmod  600  /dev/raw/raw12 
/bin/chmod  600  /dev/raw/raw13
/bin/chmod  600  /dev/raw/raw14 
.... 

/bin/chown  oracle.dba  /dev/raw/raw1
/bin/chown  oracle.dba  /dev/raw/raw2
/bin/chown  oracle.dba  /dev/raw/raw3
/bin/chown  oracle.dba  /dev/raw/raw4
/bin/chown  oracle.dba  /dev/raw/raw5
/bin/chown  oracle.dba  /dev/raw/raw6
/bin/chown  oracle.dba  /dev/raw/raw7
/bin/chown  oracle.dba  /dev/raw/raw8
/bin/chown  oracle.dba  /dev/raw/raw9
/bin/chown  oracle.dba  /dev/raw/raw10
/bin/chown  oracle.dba  /dev/raw/raw11
/bin/chown  oracle.dba  /dev/raw/raw12
/bin/chown  oracle.dba  /dev/raw/raw13
/bin/chown  oracle.dba  /dev/raw/raw14 
... 
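
The per-device commands above (including the elided entries) can be collapsed into one loop; a sketch, with the chown guarded so the function can be exercised without root or an oracle account:

```shell
#!/bin/sh
# set_raw_perms DIR N: chmod 600 DIR/raw1..DIR/rawN and, when running as
# root on a host with an oracle user, chown them to oracle.dba. Defaults
# mirror this document: /dev/raw, 16 bound devices (adjust to your count).
set_raw_perms() {
  dir="${1:-/dev/raw}"
  count="${2:-16}"
  n=1
  while [ "$n" -le "$count" ]; do
    /bin/chmod 600 "$dir/raw$n"
    if [ "$(id -u)" -eq 0 ] && id oracle >/dev/null 2>&1; then
      /bin/chown oracle.dba "$dir/raw$n"
    fi
    n=$((n + 1))
  done
}
```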


Check the permissions (on all nodes; both the mode and the owner have changed):

[root@rac01 raw]# pwd
/dev/raw
[root@rac01 raw]# ls -al
total 0
drwxr-xr-x   2 root   root     320 Oct 22 17:06 .
drwxr-xr-x  11 root   root    7120 Oct 22 16:52 ..
crw-------   1 oracle dba  162,  1 Oct 22 17:04 raw1
crw-------   1 oracle dba  162, 10 Oct 22 17:06 raw10
crw-------   1 oracle dba  162, 11 Oct 22 17:06 raw11
crw-------   1 oracle dba  162, 12 Oct 22 17:06 raw12
crw-------   1 oracle dba  162, 13 Oct 22 17:06 raw13
crw-------   1 oracle dba  162, 14 Oct 22 17:06 raw14
crw-------   1 oracle dba  162,  2 Oct 22 17:06 raw2
crw-------   1 oracle dba  162,  3 Oct 22 17:06 raw3
crw-------   1 oracle dba  162,  4 Oct 22 17:06 raw4
crw-------   1 oracle dba  162,  5 Oct 22 17:06 raw5
crw-------   1 oracle dba  162,  6 Oct 22 17:06 raw6
crw-------   1 oracle dba  162,  7 Oct 22 17:06 raw7
crw-------   1 oracle dba  162,  8 Oct 22 17:06 raw8
crw-------   1 oracle dba  162,  9 Oct 22 17:06 raw9



Note:  
---------------------------------------------------------  
On Linux, three directories under /dev are involved:
a. /dev/raw          the raw devices
b. /dev/mapper/      the block devices the raw devices are bound to
c. /dev/datavg01/    symlinks between the raw and block devices
Oracle can use the devices only after the permissions in all three directories are changed.

On Linux 5 and later, the bindings and ownership can be configured in /etc/udev/rules.d/60-raw.rules:
ACTION=="add", KERNEL=="pv/lvol70", RUN+="/bin/raw /dev/raw/raw70 %N"
ACTION=="add", KERNEL=="raw*", OWNER="oracle", GROUP="dba", MODE="0600" 

Before Linux 5, instead:
Edit /etc/sysconfig/rawdevices so the raw devices are bound at boot, e.g.:
/dev/raw/raw70   /dev/vg01/lvol70 
This binds the raw devices through the rawdevices service at startup.
Then change line 113 of /etc/udev/permissions.d/50-udev.permissions
from: raw/*:root:disk:0660 
to:   raw/*:oracle:dba:0600 
which makes oracle:dba the default owner of the raw devices, with mode 0600.

---------------------------------------------------------  






20. Create the Oracle data files (run on all nodes)

Create symlinks for Oracle's data files and parameter file, one per raw device. Put the
mapping of Oracle logical file names to raw devices in a file so it can be run as a batch:

ln -s    /dev/raw/raw1     /u01/product/oradata/orcl/ocr.dbf
ln -s    /dev/raw/raw2     /u01/product/oradata/orcl/votingdisk 
ln -s    /dev/raw/raw3     /u01/product/oradata/orcl/system01.dbf
ln -s    /dev/raw/raw4     /u01/product/oradata/orcl/sysaux01.dbf
ln -s    /dev/raw/raw5     /u01/product/oradata/orcl/users01.dbf
ln -s    /dev/raw/raw6     /u01/product/oradata/orcl/undotbs01.dbf
ln -s    /dev/raw/raw7     /u01/product/oradata/orcl/temp01.dbf
ln -s    /dev/raw/raw8     /u01/product/oradata/orcl/spfileorcl.ora
ln -s    /dev/raw/raw9     /u01/product/oradata/orcl/control01.ctl
ln -s    /dev/raw/raw10    /u01/product/oradata/orcl/control02.ctl
ln -s    /dev/raw/raw11    /u01/product/oradata/orcl/control03.ctl
ln -s    /dev/raw/raw12    /u01/product/oradata/orcl/redo01.log
ln -s    /dev/raw/raw13    /u01/product/oradata/orcl/redo02.log
ln -s    /dev/raw/raw14    /u01/product/oradata/orcl/redo03.log
ln -s    /dev/raw/raw15    /u01/product/oradata/orcl/ocr2.dbf 
ln -s    /dev/raw/raw16    /u01/product/oradata/orcl/votingdisk2  


Check the symlinks (on all nodes):

[root@rac01 orcl]# pwd
/u01/product/oradata/orcl
[root@rac01 orcl]# 
[root@rac01 orcl]# ls -al
total 72
drwxr-xr-x  2 oracle dba  4096 Oct 24 09:18 .
drwxr-xr-x  3 oracle dba  4096 Oct 24 09:15 ..
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 control01.ctl -> /dev/raw/raw9
lrwxrwxrwx  1 root   root   14 Oct 24 09:18 control02.ctl -> /dev/raw/raw10
lrwxrwxrwx  1 root   root   14 Oct 24 09:18 control03.ctl -> /dev/raw/raw11
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 ocr.dbf -> /dev/raw/raw1
lrwxrwxrwx  1 root   root   14 Oct 24 09:18 redo01.log -> /dev/raw/raw12
lrwxrwxrwx  1 root   root   14 Oct 24 09:18 redo02.log -> /dev/raw/raw13
lrwxrwxrwx  1 root   root   14 Oct 24 09:18 redo03.log -> /dev/raw/raw14
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 spfileorcl.ora -> /dev/raw/raw8
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 sysaux01.dbf -> /dev/raw/raw4
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 system01.dbf -> /dev/raw/raw3
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 temp01.dbf -> /dev/raw/raw7
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 undotbs01.dbf -> /dev/raw/raw6
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 users01.dbf -> /dev/raw/raw5
lrwxrwxrwx  1 root   root   13 Oct 24 09:18 votingdisk -> /dev/raw/raw2








21.   Bind the raw devices automatically at reboot (run on all nodes):

Add the commands that were run manually above to /etc/rc.local, so the raw
devices are bound automatically after a reboot.

If /etc/rc.local is not set up (and no binding is configured anywhere else), the result looks like this:

[root@rac01 dev]# cd raw
-bash: cd: raw: No such file or directory 
[root@rac02 dev]# cd raw
-bash: cd: raw: No such file or directory

The raw-binding script follows. (The ln -s links persist once created, so they are not repeated in the boot script.)

vgscan  
vgchange -a y   
/usr/bin/raw    /dev/raw/raw1     /dev/datavg01/ocr.dbf
/usr/bin/raw    /dev/raw/raw2     /dev/datavg01/votingdisk
/usr/bin/raw    /dev/raw/raw3     /dev/datavg01/system01.dbf
/usr/bin/raw    /dev/raw/raw4     /dev/datavg01/sysaux01.dbf
/usr/bin/raw    /dev/raw/raw5     /dev/datavg01/users01.dbf
/usr/bin/raw    /dev/raw/raw6     /dev/datavg01/undotbs01.dbf
/usr/bin/raw    /dev/raw/raw7     /dev/datavg01/temp01.dbf
/usr/bin/raw    /dev/raw/raw8     /dev/datavg01/spfileorcl.ora
/usr/bin/raw    /dev/raw/raw9     /dev/datavg01/control01.ctl
/usr/bin/raw    /dev/raw/raw10    /dev/datavg01/control02.ctl
/usr/bin/raw    /dev/raw/raw11    /dev/datavg01/control03.ctl
/usr/bin/raw    /dev/raw/raw12    /dev/datavg01/redo01.log
/usr/bin/raw    /dev/raw/raw13    /dev/datavg01/redo02.log
/usr/bin/raw    /dev/raw/raw14    /dev/datavg01/redo03.log

/bin/chmod  600  /dev/raw/raw1
/bin/chmod  600  /dev/raw/raw2 
/bin/chmod  600  /dev/raw/raw3
/bin/chmod  600  /dev/raw/raw4 
/bin/chmod  600  /dev/raw/raw5
/bin/chmod  600  /dev/raw/raw6 
/bin/chmod  600  /dev/raw/raw7
/bin/chmod  600  /dev/raw/raw8 
/bin/chmod  600  /dev/raw/raw9
/bin/chmod  600  /dev/raw/raw10 
/bin/chmod  600  /dev/raw/raw11
/bin/chmod  600  /dev/raw/raw12 
/bin/chmod  600  /dev/raw/raw13
/bin/chmod  600  /dev/raw/raw14 

/bin/chown  oracle.dba  /dev/raw/raw1
/bin/chown  oracle.dba  /dev/raw/raw2
/bin/chown  oracle.dba  /dev/raw/raw3
/bin/chown  oracle.dba  /dev/raw/raw4
/bin/chown  oracle.dba  /dev/raw/raw5
/bin/chown  oracle.dba  /dev/raw/raw6
/bin/chown  oracle.dba  /dev/raw/raw7
/bin/chown  oracle.dba  /dev/raw/raw8
/bin/chown  oracle.dba  /dev/raw/raw9
/bin/chown  oracle.dba  /dev/raw/raw10
/bin/chown  oracle.dba  /dev/raw/raw11
/bin/chown  oracle.dba  /dev/raw/raw12
/bin/chown  oracle.dba  /dev/raw/raw13
/bin/chown  oracle.dba  /dev/raw/raw14 
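The fourteen raw/chmod/chown triplets above can be collapsed into one loop driven by the same name list. A sketch (the run wrapper and DRYRUN flag are illustrative additions: with DRYRUN=1, the default here, each command is only printed so the sketch can be tried anywhere; a real rc.local would execute the commands directly):

```shell
# Illustrative dry-run wrapper: print the command instead of executing it.
run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run vgscan
run vgchange -a y

# Bind raw1..raw14 to the LVs in order, then fix mode and ownership.
n=1
for lv in ocr.dbf votingdisk system01.dbf sysaux01.dbf users01.dbf \
          undotbs01.dbf temp01.dbf spfileorcl.ora control01.ctl \
          control02.ctl control03.ctl redo01.log redo02.log redo03.log; do
  run /usr/bin/raw /dev/raw/raw$n /dev/datavg01/$lv
  run /bin/chmod 600 /dev/raw/raw$n
  run /bin/chown oracle.dba /dev/raw/raw$n
  n=$((n+1))
done
```

Adding a new datafile then means appending one name to the list instead of three new lines.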


A problem was hit here: when these permission commands run automatically from
rc.local, the owner and mode of /dev/raw/rawX come up differently after each
reboot (shutting down a single node shows the same symptom). Still to be
resolved. On RHEL 4 this is most likely udev recreating the raw device nodes
at boot with default ownership.
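A common remedy on RHEL 4 is to let udev itself create the raw device nodes with the right ownership, rather than patching them afterwards from rc.local. A hedged sketch (the file name 49-oracle.permissions is an assumption; the entry format follows RHEL 4's /etc/udev/permissions.d convention):

```
# /etc/udev/permissions.d/49-oracle.permissions  (assumed file name; any file
# sorting before 50-udev.permissions is consulted first)
# format:  device-pattern:owner:group:mode
raw/*:oracle:dba:0600
```

With this in place the chmod/chown lines in rc.local become a belt-and-braces measure rather than the only mechanism.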






22.  Install Clusterware.

The main point is to enter the symlink paths for the OCR file (ocr.dbf) and the voting disk,

e.g.  /u01/app/product/oradata/orcl/ocr.dbf .


[root@rac01 ~]# sh  /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete
[root@rac01 ~]# sh  /u01/app/oracle/root.sh 
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac01 pri01 rac01
node 2: rac02 pri02 rac02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw16
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac01
CSS is inactive on these nodes.
        rac02
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac01 ~]# 





[root@rac02 raw]# sh  /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete

[root@rac02 raw]#  sh  /u01/app/oracle/root.sh 
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac01 for node 1.
assigning default hostname rac02 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac01 pri01 rac01
node 2: rac02 pri02 rac02
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac01
        rac02
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
[root@rac02 raw]# 







rac01$crs_stat -t  
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.rac01.gsd  application    ONLINE    ONLINE    rac01       
ora.rac01.ons  application    ONLINE    ONLINE    rac01       
ora.rac01.vip  application    ONLINE    ONLINE    rac01       
ora.rac02.gsd  application    ONLINE    ONLINE    rac02       
ora.rac02.ons  application    ONLINE    ONLINE    rac02       
ora.rac02.vip  application    ONLINE    ONLINE    rac02       
rac01$
rac01$vncserver 

New 'rac01:2 (oracle)' desktop is rac01:2

Starting applications specified in /home/oracle/.vnc/xstartup
Log file is /home/oracle/.vnc/rac01:2.log

rac01$crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy





23.   Install the Oracle RDBMS and run DBCA.

If space runs short, it can be extended:

[root@rac01 ~]# lvresize -L +200M  /dev/datavg01/system01.dbf
  Extending logical volume system01.dbf to 600.00 MB
  Logical volume system01.dbf successfully resized 

In DBCA, be sure to select raw devices; later, when editing the data file
paths, again enter the symlink paths and names. If a size is too small, it can be changed with lvresize.





24.  Raw device operations



A.  Add a user tablespace to the database (while the VG still has free space).

(1)  Create the LV.

[root@rac01 ~]# lvcreate -n tony_test01.dbf -L 300m  datavg01    
  Logical volume "tony_test01.dbf" created

[root@rac01 ~]# lvscan
...  
  ACTIVE            '/dev/datavg01/ocr2.dbf' [120.00 MB] inherit
  ACTIVE            '/dev/datavg01/votingdisk2' [120.00 MB] inherit
  ACTIVE            '/dev/datavg01/log_date01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/tony_test01.dbf' [300.00 MB] inherit
...  

The LV is not activated on node 2:

[root@rac02 ~]# lvscan
... 
  ACTIVE            '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/log_date01.dbf' [200.00 MB] inherit
  inactive          '/dev/datavg01/tony_test01.dbf' [300.00 MB] inherit
... 

Although inactive on node 2, the LV can still be resized there, and the change
is visible on node 1. So if the LV turns out to be too small, it can be fixed:

[root@rac02 ~]# lvresize  -L +100M  /dev/datavg01/tony_test01.dbf 
  Extending logical volume tony_test01.dbf to 400.00 MB
  Logical volume tony_test01.dbf successfully resized


(2)  Activate the LV on the other nodes.

[root@rac02 etc]# vgchange -a y
  18 logical volume(s) in volume group "datavg01" now active 

lvchange -a y /dev/datavg01/tony_test01.dbf would work just as well here.
Check the activation state:

[root@rac02 etc]# lvscan
  ACTIVE            '/dev/datavg01/ocr.dbf' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/votingdisk' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/system01.dbf' [800.00 MB] inherit
  ACTIVE            '/dev/datavg01/sysaux01.dbf' [800.00 MB] inherit
  ACTIVE            '/dev/datavg01/users01.dbf' [12.00 MB] inherit
  ACTIVE            '/dev/datavg01/undotbs01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/temp01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/spfileorcl.ora' [12.00 MB] inherit
  ACTIVE            '/dev/datavg01/control01.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/control02.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/control03.ctl' [32.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo01.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo02.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/redo03.log' [20.00 MB] inherit
  ACTIVE            '/dev/datavg01/ocr2.dbf' [120.00 MB] inherit
  ACTIVE            '/dev/datavg01/votingdisk2' [120.00 MB] inherit
  ACTIVE            '/dev/datavg01/log_date01.dbf' [200.00 MB] inherit
  ACTIVE            '/dev/datavg01/tony_test01.dbf' [400.00 MB] inherit


(3)  Bind the raw device (run on every node).

# /usr/bin/raw    /dev/raw/raw18     /dev/datavg01/tony_test01.dbf  

[root@rac01 ~]# cd /dev/mapper/
[root@rac01 mapper]# ls
control                  datavg01-ocr2.dbf    datavg01-spfileorcl.ora   datavg01-undotbs01.dbf
datavg01-control01.ctl   datavg01-ocr.dbf     datavg01-sysaux01.dbf     datavg01-users01.dbf
datavg01-control02.ctl   datavg01-redo01.log  datavg01-system01.dbf     datavg01-votingdisk
datavg01-control03.ctl   datavg01-redo02.log  datavg01-temp01.dbf       datavg01-votingdisk2
datavg01-log_date01.dbf  datavg01-redo03.log  datavg01-tony_test01.dbf


(4)  Set permissions (run on every node).

# /bin/chmod  600    /dev/raw/raw18 
# /bin/chown  oracle.dba   /dev/raw/raw18 

[root@rac01 raw]# ls -al  raw18 
crw-------  1 oracle dba 162, 18 Oct 28 15:03 raw18


(5)  Create the symbolic link (run on every node).

# ln -s  /dev/raw/raw18   /u01/product/oradata/orcl/tony_test01.dbf 


(6)  Create the tablespace (run on node 1).

rac01$sqlplus / as sysdba 
SQL*Plus: Release 10.2.0.1.0 - Production on Fri Oct 28 17:01:37 2011
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options

SQL> create tablespace tony_test datafile '/u01/product/oradata/orcl/tony_test01.dbf' size 200M ; 
Tablespace created.


(7)  As with the other LVs, add the following to /etc/rc.local:

/usr/bin/raw       /dev/raw/raw18   /dev/datavg01/tony_test01.dbf  
/bin/chmod   600     /dev/raw/raw18 
/bin/chown   oracle.dba     /dev/raw/raw18 
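Steps (1) through (7) can be collected into one checklist script per new datafile. A sketch (NAME, RAWNO, and the run/DRYRUN wrapper are illustrative conveniences; with DRYRUN=1, the default here, each step is only printed, and the real steps must be run as root on the nodes indicated):

```shell
# Inputs for one new raw-device datafile (values from the walkthrough above).
NAME=${NAME:-tony_test01.dbf}
RAWNO=${RAWNO:-18}

# Illustrative dry-run wrapper: print the command instead of executing it.
run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

run lvcreate -n "$NAME" -L 300m datavg01                         # (1) node 1
run lvchange -a y "/dev/datavg01/$NAME"                          # (2) other nodes
run /usr/bin/raw "/dev/raw/raw$RAWNO" "/dev/datavg01/$NAME"      # (3) every node
run /bin/chmod 600 "/dev/raw/raw$RAWNO"                          # (4) every node
run /bin/chown oracle.dba "/dev/raw/raw$RAWNO"                   # (4) every node
run ln -s "/dev/raw/raw$RAWNO" "/u01/product/oradata/orcl/$NAME" # (5) every node
# (6) then, as oracle on node 1:
#   SQL> create tablespace tony_test
#        datafile '/u01/product/oradata/orcl/tony_test01.dbf' size 200M;
# (7) finally append the raw/chmod/chown lines for raw$RAWNO
#     to /etc/rc.local on every node.
```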






B.  Add a PV to an existing VG (when the VG runs out of space)

[root@rac01 ~]# vgdisplay
  --- Volume group ---
  VG Name               datavg01
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  34
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                18
  Open LV               15
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.09 GB
  PE Size               4.00 MB
  Total PE              4886
  Alloc PE / Size       765 / 2.99 GB
  Free  PE / Size       4121 / 16.10 GB
  VG UUID               OeuV86-HCSQ-yFFp-IrgT-e3Dt-xRrs-P1kUWi

vgdisplay shows the VG's total size and how much of it is already in use.

There are two ways to gain space: create a new VG, or extend the existing one.
Here we assume the VG is full and add a new PV to the existing VG.


(1).   Extend the existing VG

Partition a new disk (how to add a disk to the VM is not repeated here), for example:
# fdisk    /dev/sdc,    assume this produces /dev/sdc1
[root@rac01 orcl]# pvcreate  /dev/sdc1     (run on node 1)
  Physical volume "/dev/sdc1" successfully created
[root@rac01 orcl]# 
[root@rac01 orcl]# pvscan
  PV /dev/sdb1   VG datavg01   lvm2 [9.54 GB / 8.20 GB free]
  PV /dev/sdb2   VG datavg01   lvm2 [9.54 GB / 7.90 GB free]
  PV /dev/sdc1                 lvm2 [9.54 GB]
  Total: 3 [28.63 GB] / in use: 2 [19.09 GB] / in no VG: 1 [9.54 GB] 
[root@rac01 orcl]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               datavg01
  PV Size               9.54 GB / not usable 1.89 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              2443
  Free PE               2098
  Allocated PE          345
  PV UUID               8v1kbs-dPsT-X7jG-qh7p-3AzM-i12P-OhNV4U
   
  --- Physical volume ---
  PV Name               /dev/sdb2
  VG Name               datavg01
  PV Size               9.54 GB / not usable 1.92 MB
  Allocatable           yes 
  PE Size (KByte)       4096
  Total PE              2443
  Free PE               2023
  Allocated PE          420
  PV UUID               KooBSy-RpVJ-JxeY-vp2B-dOLt-UALf-CUl2nN
   
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name               
  PV Size               9.54 GB
  Allocatable           NO
  PE Size (KByte)       0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS

At this point the new PV has not yet joined any VG, so its attributes are
empty, and pvscan on the other nodes cannot see it.



Add physical volume /dev/sdc1 to the datavg01 volume group (run on node 1); /dev/sdc1 must be in a usable state:

[root@rac01 orcl]# vgextend     datavg01      /dev/sdc1 
  Volume group "datavg01" successfully extended  
pvdisplay now shows the new PV's attributes, and vgdisplay shows the increased
size (roughly 20 GB before, roughly 10 GB added):

[root@rac01 orcl]# vgdisplay  
  --- Volume group ---
  VG Name               datavg01
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  35
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                18
  Open LV               15
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               28.63 GB
  PE Size               4.00 MB
  Total PE              7329
  Alloc PE / Size       765 / 2.99 GB
  Free  PE / Size       6564 / 25.64 GB
  VG UUID               OeuV86-HCSQ-yFFp-IrgT-e3Dt-xRrs-P1kUWi

Note: 
------------------------------------------------------------------------------
To remove a physical volume from a VG (the last PV in a volume group cannot be removed):
# vgreduce datavg01   /dev/sdc1 
------------------------------------------------------------------------------  


Now look from the other node:

[root@rac02 etc]# pvscan         (the UUID below corresponds to /dev/sdc1)   
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  PV /dev/sdb1        VG datavg01   lvm2 [9.54 GB / 8.20 GB free]
  PV /dev/sdb2        VG datavg01   lvm2 [9.54 GB / 7.90 GB free]
  PV unknown device   VG datavg01   lvm2 [9.54 GB / 9.54 GB free]
  Total: 3 [28.63 GB] / in use: 3 [28.63 GB] / in no VG: 0 [0   ]

pvscan reports "PV unknown device" here, though the size is displayed.

Oracle RAC is still healthy on all nodes at this point:
rac02$crs_stat -t 
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac02       
ora....l1.inst application    ONLINE    ONLINE    rac01       
ora....l2.inst application    ONLINE    ONLINE    rac02       
ora....01.lsnr application    ONLINE    ONLINE    rac01       
ora.rac01.gsd  application    ONLINE    ONLINE    rac01       
ora.rac01.ons  application    ONLINE    ONLINE    rac01       
ora.rac01.vip  application    ONLINE    ONLINE    rac01       
ora....02.lsnr application    ONLINE    ONLINE    rac02       
ora.rac02.gsd  application    ONLINE    ONLINE    rac02       
ora.rac02.ons  application    ONLINE    ONLINE    rac02       
ora.rac02.vip  application    ONLINE    ONLINE    rac02       
rac02$





To propagate the new PV metadata to node 2, try exporting and re-importing the VG (stop RAC first):

rac01$crs_stop -all  
[root@rac01 init.d]# ./init.crs  stop  all   

[root@rac01 init.d]# vgchange -a  n   datavg01  
  0 logical volume(s) in volume group "datavg01" now active

[root@rac01 init.d]# vgexport  datavg01  
  Volume group "datavg01" successfully exported

[root@rac02 lvm]# vgimport    datavg01  
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find all physical volumes for volume group datavg01.
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find all physical volumes for volume group datavg01.
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find all physical volumes for volume group datavg01.
  Couldn't find device with uuid 'wDarOK-QDmg-MXAy-tLRo-xDlq-Pt9w-0URfWS'.
  Couldn't find all physical volumes for volume group datavg01.
  Volume group "datavg01" not found

Still no luck. After rebooting node 2, the newly added PV becomes visible:

[root@rac01 init.d]# pvscan
  PV /dev/sdb1    is in exported VG datavg01 [9.54 GB / 8.20 GB free]
  PV /dev/sdb2    is in exported VG datavg01 [9.54 GB / 7.90 GB free]
  PV /dev/sdc1    is in exported VG datavg01 [9.54 GB / 9.54 GB free]
  Total: 3 [28.63 GB] / in use: 3 [28.63 GB] / in no VG: 0 [0   ]

Check the VG state on node 1 (note the extra "exported"):
[root@rac01 init.d]# vgscan
  Reading all physical volumes.  This may take a while...
  Found exported volume group "datavg01" using metadata type lvm2


Run vgimport on node 2 and check with vgscan:

[root@rac02 ~]# vgimport    datavg01  
  Volume group "datavg01" successfully imported  
[root@rac02 ~]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "datavg01" using metadata type lvm2
Meanwhile, checking the VG state on node 1, the "exported" keyword is gone:
[root@rac01 init.d]# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "datavg01" using metadata type lvm2


After the vgimport on the other nodes, the raw device bindings, permissions,
and so on must be redone by hand: before the vgimport the VG obviously could
not be recognized on those nodes, so the binding commands in rc.local had no
effect (the /dev/raw path is missing, as seen below):

[root@rac02 dev]# cd    raw
-bash: cd: raw: No such file or directory   



Check on all nodes with lvscan whether the LVs are active, and activate the VG on every node:
[root@rac01 init.d]# vgchange  -a   y    datavg01  
  18 logical volume(s) in volume group "datavg01" now active
[root@rac02 etc]# vgchange  -a   y    datavg01   
  18 logical volume(s) in volume group "datavg01" now active

Run the rc.local commands once by hand, and confirm the devices under /dev/raw exist with the correct permissions.

Start the RAC services:
rac01$crs_start -all  
or
[root@rac01 init.d]# ./init.crs start 
Startup will be queued to init within 90 seconds.


Check the status:

rac01$crs_stat -t 
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.orcl.db    application    ONLINE    ONLINE    rac01       
ora....l1.inst application    ONLINE    ONLINE    rac01       
ora....l2.inst application    ONLINE    ONLINE    rac02       
ora....01.lsnr application    ONLINE    ONLINE    rac01       
ora.rac01.gsd  application    ONLINE    ONLINE    rac01       
ora.rac01.ons  application    ONLINE    ONLINE    rac01       
ora.rac01.vip  application    ONLINE    ONLINE    rac01       
ora....02.lsnr application    ONLINE    ONLINE    rac02       
ora.rac02.gsd  application    ONLINE    ONLINE    rac02       
ora.rac02.ons  application    ONLINE    ONLINE    rac02       
ora.rac02.vip  application    ONLINE    ONLINE    rac02     





Appendix:  
------------------------------------------------------------
Common LVM / raw device commands:

pvcreate (create a physical volume) 
pvdisplay (show physical volume information) 
pvscan (scan for physical volumes) 
pvmove (move physical volume data)  
pvmove /dev/hda1 /dev/hda2 (move the data on /dev/hda1 to /dev/hda2) 
pvmove /dev/hda1 (move the data on /dev/hda1 to other physical volumes) 
pvremove (remove a physical volume) 
  
vgcreate (create a volume group) 
vgdisplay (show volume group information) 
vgscan (scan for volume groups) 
vgextend (extend a volume group)   vgextend vg0 /dev/hda2  (add physical volume /dev/hda2 to volume group vg0) 
vgreduce (remove a PV from a volume group) vgreduce vg0 /dev/hda2 (remove physical volume /dev/hda2 from volume group vg0) 
vgchange (change VG activation) vgchange -a y vg0 (activate volume group vg0) vgchange -a n vg0 (deactivate it) 
vgremove (remove a volume group)   vgremove vg0 (remove volume group vg0) 
  
lvcreate (create a logical volume) 
lvdisplay (show logical volume information) 
lvscan (scan for logical volumes) 
lvextend (extend a logical volume) lvextend -L +5G /dev/vg0/data  (grow logical volume /dev/vg0/data by 5 GB)

------------------------------------------------------------





From "ITPUB Blog", link: http://blog.itpub.net/110321/viewspace-710729/ — please credit the source when reposting.
