
Linux 5.3 64bit Oracle 11G RAC install-2

Original post by golden_zhou, 2011-03-11 08:30:32

9. Establish User Equivalence

Use SSH to establish user equivalence. During the Cluster Ready Services (CRS) and RAC installations, Oracle Universal Installer (OUI) must be able to copy software to all RAC nodes as the installing user without being prompted for a password.
To establish user equivalence, generate public and private keys for both the grid and oracle users on each of the two nodes.

9.1 User equivalence for the grid user
Run on node1:
wmrac01<*+ASM1*/home/grid>$mkdir ~/.ssh
wmrac01<*+ASM1*/home/grid>$chmod 700 ~/.ssh
wmrac01<*+ASM1*/home/grid>$ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
7a:7c:80:df:d5:26:15:e1:41:91:3e:cd:d4:f2:05:62 grid@wmrac01
wmrac01<*+ASM1*/home/grid>$ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
dc:0a:68:9f:24:e5:4e:38:03:9e:bd:a5:6c:86:9e:7a grid@wmrac01

Run on node2:
wmrac02<*+ASM2*/home/grid>$mkdir ~/.ssh
wmrac02<*+ASM2*/home/grid>$chmod 700 ~/.ssh
wmrac02<*+ASM2*/home/grid>$ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
0b:a4:1b:3e:7c:15:23:d4:d1:ff:a3:7f:7d:33:73:eb grid@wmrac02
wmrac02<*+ASM2*/home/grid>$ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/grid/.ssh/id_dsa.
Your public key has been saved in /home/grid/.ssh/id_dsa.pub.
The key fingerprint is:
2e:cd:ed:d9:de:5d:a5:6b:dd:67:07:14:d9:3f:0b:2c grid@wmrac02

Run on node1:
wmrac01<*+ASM1*/home/grid>$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
wmrac01<*+ASM1*/home/grid>$cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
wmrac01<*+ASM1*/home/grid>$ssh wmrac02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'wmrac02 (10.118.253.42)' can't be established.
RSA key fingerprint is 66:d8:2f:b4:58:8b:10:d8:ac:9d:7e:e4:43:a4:18:1c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'wmrac02,10.118.253.42' (RSA) to the list of known hosts.
grid@wmrac02's password:
wmrac01<*+ASM1*/home/grid>$ssh wmrac02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
grid@wmrac02's password:
wmrac01<*+ASM1*/home/grid>$scp ~/.ssh/authorized_keys wmrac02:~/.ssh/authorized_keys
grid@wmrac02's password:
authorized_keys                               100% 1992     2.0KB/s   00:00

Test the connections from each node. Verify that you are no longer prompted for a password when you run the following commands a second time.
wmrac01<*+ASM1*/home/grid>$ssh 10.118.253.41 date
Mon Jan 24 17:25:59 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh 10.118.253.42 date
Mon Jan 24 17:26:02 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh 192.168.1.11 date
Mon Jan 24 17:26:12 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh 192.168.1.12 date
Mon Jan 24 17:26:15 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh wmrac01 date
Mon Jan 24 17:26:26 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh wmrac02 date
Mon Jan 24 17:26:29 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh wmpri01 date
Mon Jan 24 17:26:41 CST 2011
wmrac01<*+ASM1*/home/grid>$ssh wmpri02 date
Mon Jan 24 17:26:44 CST 2011

Make sure every one of these commands completes without a password prompt; otherwise the remote installation of the Grid software from node1 onto node2 will fail later. A scripted version of the check is sketched below.
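The following is a minimal sketch of that check, assuming bash and the host names and addresses used in this environment. BatchMode=yes makes ssh fail instead of prompting, so any hop that still wants a password shows up as a FAILED line. Also confirm that the remote login prints nothing extra (banners, .bashrc output), since unexpected output can confuse OUI's equivalence check. The same loop can be reused for the oracle user in section 9.2.

for h in wmrac01 wmrac02 wmpri01 wmpri02 \
         10.118.253.41 10.118.253.42 192.168.1.11 192.168.1.12; do
    # date succeeds silently only if passwordless SSH to $h already works
    ssh -o BatchMode=yes "$h" date >/dev/null 2>&1 \
        && echo "passwordless SSH to $h OK" \
        || echo "passwordless SSH to $h FAILED"
done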

9.2 User equivalence for the oracle user
Run on node1:
wmrac01<*ccptdb1*/home/oracle>$mkdir ~/.ssh
wmrac01<*ccptdb1*/home/oracle>$chmod 700 ~/.ssh
wmrac01<*ccptdb1*/home/oracle>$ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
fa:8b:a8:79:e2:14:6e:e5:c0:c3:37:aa:da:62:8d:46 oracle@wmrac01
wmrac01<*ccptdb1*/home/oracle>$ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
73:4e:9d:cc:7d:d4:85:ee:ce:84:59:04:47:51:38:89 oracle@wmrac01
Run on node2:
wmrac02<*ccptdb2*/home/oracle>$mkdir ~/.ssh
wmrac02<*ccptdb2*/home/oracle>$chmod 700 ~/.ssh
wmrac02<*ccptdb2*/home/oracle>$ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
df:dd:50:e5:65:a5:41:1a:6e:8d:84:84:04:5d:4c:82 oracle@wmrac02
wmrac02<*ccptdb2*/home/oracle>$ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
50:75:fe:b5:a9:10:ea:96:df:07:ec:ac:77:3f:48:bc oracle@wmrac02

Run on node1:
wmrac01<*ccptdb1*/home/oracle>$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
wmrac01<*ccptdb1*/home/oracle>$cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
wmrac01<*ccptdb1*/home/oracle>$ssh wmrac02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
The authenticity of host 'wmrac02 (10.118.253.42)' can't be established.
RSA key fingerprint is 66:d8:2f:b4:58:8b:10:d8:ac:9d:7e:e4:43:a4:18:1c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'wmrac02,10.118.253.42' (RSA) to the list of known hosts.
oracle@wmrac02's password:
wmrac01<*ccptdb1*/home/oracle>$ssh wmrac02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@wmrac02's password:
wmrac01<*ccptdb1*/home/oracle>$scp ~/.ssh/authorized_keys wmrac02:~/.ssh/authorized_keys
oracle@wmrac02's password:
authorized_keys                               100% 2000     2.0KB/s   00:00  
Test the connections from each node. Verify that you are no longer prompted for a password when you run the following commands a second time.
wmrac01<*ccptdb1*/home/oracle>$ssh 10.118.253.41 date
Mon Jan 24 17:31:26 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh 10.118.253.42 date
Mon Jan 24 17:31:31 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh 192.168.1.11 date
Mon Jan 24 17:31:36 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh 192.168.1.12 date
Mon Jan 24 17:31:40 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh wmrac01 date
Mon Jan 24 17:31:51 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh wmrac02 date
Mon Jan 24 17:31:44 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh wmpri01 date
Mon Jan 24 17:31:58 CST 2011
wmrac01<*ccptdb1*/home/oracle>$ssh wmpri02 date
Mon Jan 24 17:32:03 CST 2011

Again, make sure every command completes without a password prompt; otherwise the remote installation of the Oracle database software from node1 onto node2 will fail later.
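Besides the manual date tests, cluvfy can also confirm user equivalence before the installation starts. A hedged sketch, assuming the staged Grid software sits under /u01/packages/grid as in the verification step later in this post; run it once as grid and once as oracle:

# Ask cluvfy to verify administrative privileges / SSH user equivalence
# for the current user across both nodes (sketch, adjust the path to your media).
cd /u01/packages/grid
./runcluvfy.sh comp admprv -n wmrac01,wmrac02 -o user_equiv -verbose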

 


10. Install Oracle Grid Infrastructure

10.1 Install the cvuqdisk package
The package is used to discover and query shared storage. Run as root on both nodes:
[root@wmrac01 rpm]# CVUQDISK_GRP=oinstall;export CVUQDISK_GRP
[root@wmrac01 rpm]# rpm -Uvh cvuqdisk-1.0.7-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
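The transcript above shows node1 only; the same rpm must also be installed on node2. A minimal sketch, assuming the rpm is still in the current directory on node1 and /tmp is used as a staging area on node2:

# Copy cvuqdisk to node2 and install it there as root (sketch, not from the original post).
scp cvuqdisk-1.0.7-1.rpm wmrac02:/tmp/
ssh wmrac02 "export CVUQDISK_GRP=oinstall; rpm -Uvh /tmp/cvuqdisk-1.0.7-1.rpm"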
10.2 Verify the Grid installation environment
wmrac01<*+ASM1*/u01/packages/grid>$./runcluvfy.sh stage -pre crsinst -n wmrac01,wmrac02 -fixup -verbose
Apart from the failed checks discussed below, every check should come back as passed.
Check: Membership of user "grid" in group "dba"
  Node Name         User Exists   Group Exists  User in Group  Comment        
  ----------------  ------------  ------------  ------------  ----------------
  poland-rac02      yes           yes           no            failed         
  poland-rac01      yes           yes           no            failed         
Result: Membership check for user "grid" in group "dba" failed
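Whether this failure matters depends on the group design you chose. If your plan does call for grid to belong to the dba group, a one-line fix on both nodes (as root) is enough before re-running cluvfy; this is a hedged suggestion, not part of the original post:

usermod -a -G dba grid    # append dba to grid's supplementary groups
id grid                   # confirm the new group membership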
Note: if the swap partition is not sized to the recommended standard, this check also fails; the failure can be ignored (or the swap can be extended as sketched after the check output below).
Swap sizing guideline: with 4-8 GB of physical memory, swap = 2x physical memory;
    with 8-32 GB of physical memory, swap = 16 GB;
    with more than 32 GB of physical memory, swap = 32 GB
Check: Swap space
  Node Name     Available                 Required                  Comment  
  ------------  ------------------------  ------------------------  ----------
  wmrac02       7.81GB (8193108.0KB)      16GB (1.6777216E7KB)      failed   
  wmrac01       7.81GB (8193108.0KB)      16GB (1.6777216E7KB)      failed   
Result: Swap space check failed
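If you would rather make the swap check pass than ignore it, swap can be extended with a swap file. A minimal sketch, run as root on each node that is short; the /swapfile01 path and 8 GB size are illustrative and not taken from the original post:

dd if=/dev/zero of=/swapfile01 bs=1M count=8192            # allocate an 8 GB file
chmod 600 /swapfile01
mkswap /swapfile01                                         # format it as swap
swapon /swapfile01                                         # enable it immediately
echo "/swapfile01 swap swap defaults 0 0" >> /etc/fstab    # make it persistent across reboots
free -m                                                    # confirm the new swap total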
This check also usually reports an NTP failure. First synchronize the time between the nodes with the time-synchronization script shown earlier, then rename /etc/ntp.conf as follows; on the next run the check passes (the full sequence is sketched after the check output below).
[root@wmrac01 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak    (do this on all nodes)
Check: Liveness for "ntpd"
  Node Name                             Running?               
  ------------------------------------  ------------------------
  wmrac02                               no                     
  wmrac01                               no                     
Result: Liveness check failed for "ntpd"
PRVF-5415 : Check to see if NTP daemon is running failed
Result: Clock synchronization check using Network Time Protocol(NTP) failed
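Renaming /etc/ntp.conf is what allows Oracle's Cluster Time Synchronization Service (CTSS) to run in active mode in place of NTP. A minimal sketch of the full sequence, run as root on every node; stopping and disabling the daemon keeps cluvfy's liveness check consistent, and clearing the pid file is an extra step some guides recommend:

service ntpd stop                      # stop the NTP daemon
chkconfig ntpd off                     # keep it from starting at boot
mv /etc/ntp.conf /etc/ntp.conf.bak     # hide the config so CTSS treats NTP as absent
rm -f /var/run/ntpd.pid                # optional: remove a stale pid file if one remains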
10.3 Install Oracle Grid Infrastructure
The installation itself is straightforward; follow the installer screens (screenshots omitted here).

When the installer prompts for it, run the following scripts as root on both nodes:
[root@wmrac01 product]# /u01/product/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/product/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/product/grid/oraInventory to oinstall.
The execution of the script is complete.

[root@wmrac02 product]# /u01/product/grid/oraInventory/orainstRoot.sh
Changing permissions of /u01/product/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/product/grid/oraInventory to oinstall.
The execution of the script is complete.
[root@wmrac01 product]# /u01/product/grid/11.2.0/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/product/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-28 15:56:11: Parsing the host name
2011-02-28 15:56:11: Checking for super user privileges
2011-02-28 15:56:11: User has super user privileges
Using configuration parameter file: /u01/product/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'wmrac01'
CRS-2672: Attempting to start 'ora.mdnsd' on 'wmrac01'
CRS-2676: Start of 'ora.gipcd' on 'wmrac01' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wmrac01'
CRS-2676: Start of 'ora.gpnpd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wmrac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wmrac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'wmrac01'
CRS-2676: Start of 'ora.diskmon' on 'wmrac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wmrac01'
CRS-2676: Start of 'ora.ctssd' on 'wmrac01' succeeded

ASM created and started successfully.

DiskGroup DATA created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'wmrac01'
CRS-2676: Start of 'ora.crsd' on 'wmrac01' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 8eef0cb64e674f33bf8d998f5875c052.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   8eef0cb64e674f33bf8d998f5875c052 (ORCL:DATAV1) [DATA]
Located 1 voting disk(s).
CRS-2673: Attempting to stop 'ora.crsd' on 'wmrac01'
CRS-2677: Stop of 'ora.crsd' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'wmrac01'
CRS-2677: Stop of 'ora.asm' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'wmrac01'
CRS-2677: Stop of 'ora.ctssd' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'wmrac01'
CRS-2677: Stop of 'ora.cssdmonitor' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'wmrac01'
CRS-2677: Stop of 'ora.cssd' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'wmrac01'
CRS-2677: Stop of 'ora.gpnpd' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'wmrac01'
CRS-2677: Stop of 'ora.gipcd' on 'wmrac01' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'wmrac01'
CRS-2677: Stop of 'ora.mdnsd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'wmrac01'
CRS-2676: Start of 'ora.mdnsd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'wmrac01'
CRS-2676: Start of 'ora.gipcd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wmrac01'
CRS-2676: Start of 'ora.gpnpd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wmrac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wmrac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'wmrac01'
CRS-2676: Start of 'ora.diskmon' on 'wmrac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wmrac01'
CRS-2676: Start of 'ora.ctssd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'wmrac01'
CRS-2676: Start of 'ora.asm' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'wmrac01'
CRS-2676: Start of 'ora.crsd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'wmrac01'
CRS-2676: Start of 'ora.evmd' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'wmrac01'
CRS-2676: Start of 'ora.asm' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'wmrac01'
CRS-2676: Start of 'ora.DATA.dg' on 'wmrac01' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'wmrac01'
CRS-2676: Start of 'ora.registry.acfs' on 'wmrac01' succeeded

wmrac01     2011/02/28 16:04:00     /u01/product/grid/11.2.0/cdata/wmrac01/backup_20110228_160400.olr
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 8001 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/product/grid/oraInventory
'UpdateNodeList' was successful.

[root@wmrac02 product]# /u01/product/grid/11.2.0/root.sh
Running Oracle 11g root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/product/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:    
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-28 16:05:19: Parsing the host name
2011-02-28 16:05:19: Checking for super user privileges
2011-02-28 16:05:19: User has super user privileges
Using configuration parameter file: /u01/product/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'wmrac02'
CRS-2672: Attempting to start 'ora.mdnsd' on 'wmrac02'
CRS-2676: Start of 'ora.gipcd' on 'wmrac02' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'wmrac02'
CRS-2676: Start of 'ora.gpnpd' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'wmrac02'
CRS-2676: Start of 'ora.cssdmonitor' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'wmrac02'
CRS-2672: Attempting to start 'ora.diskmon' on 'wmrac02'
CRS-2676: Start of 'ora.diskmon' on 'wmrac02' succeeded
CRS-2676: Start of 'ora.cssd' on 'wmrac02' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'wmrac02'
CRS-2676: Start of 'ora.ctssd' on 'wmrac02' succeeded

DiskGroup DATA creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0


Configuration of ASM failed, see logs for details
Did not succssfully configure and start ASM
CRS-2500: Cannot stop resource 'ora.crsd' as it is not running
CRS-4000: Command Stop failed, or completed with errors.
Command return code of 1 (256) from command: /u01/product/grid/11.2.0/bin/crsctl stop resource ora.crsd -init
Stop of resource "ora.crsd -init" failed
Failed to stop CRSD
CRS-2673: Attempting to stop 'ora.asm' on 'wmrac02'
CRS-2677: Stop of 'ora.asm' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'wmrac02'
CRS-2677: Stop of 'ora.ctssd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'wmrac02'
CRS-2677: Stop of 'ora.cssdmonitor' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'wmrac02'
CRS-2677: Stop of 'ora.cssd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'wmrac02'
CRS-2677: Stop of 'ora.gpnpd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'wmrac02'
CRS-2677: Stop of 'ora.gipcd' on 'wmrac02' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'wmrac02'
CRS-2677: Stop of 'ora.mdnsd' on 'wmrac02' succeeded
Initial cluster configuration failed.  See /u01/product/grid/11.2.0/cfgtoollogs/crsconfig/rootcrs_wmrac02.log for details

Symptoms:
DiskGroup DATA creation failed with the following message:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 1 regular failure groups, discovered only 0

Cause:
After configuring multipathed disks on Linux x86-64, the proper parameters have not been configured in /etc/sysconfig/oracleasm.

Solution:
On all nodes,
1. Modify the /etc/sysconfig/oracleasm with:
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
2. restart the asmlib by :
# /etc/init.d/oracleasm restart
3. Run root.sh on the 2nd node. (Note: because root.sh has already been run once on this node, re-running it directly reports an error; deconfigure first with the command below, then re-run root.sh. A verification sketch follows.)
[root@wmrac02 sysconfig]# /u01/product/grid/11.2.0/crs/install/roothas.pl -delete -force -verbose
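Before re-running root.sh on node2, it is worth confirming that ASMLib on that node now sees the disks through the multipath devices. A minimal sketch, run as root on wmrac02; the DATAV1 label matches the voting-disk output above, and any other labels depend on your setup:

/etc/init.d/oracleasm restart      # pick up the new SCANORDER/SCANEXCLUDE settings
/etc/init.d/oracleasm scandisks    # rescan for labeled ASM disks
/etc/init.d/oracleasm listdisks    # should list DATAV1 (and any other labels you created)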

After the problem is fixed, the installer moves on to the next screen (screenshot omitted).
Click OK to skip that prompt (screenshot omitted).
Click Close to complete the Grid installation, then verify the result as shown in the following screenshots (omitted).
