
11g RAC: Deleting a Node

Original post. Category: Oracle. Author: yingyifeng306. Date: 2014-02-13 09:15:18
Following the official documentation, this post walks through a node-deletion operation.
1.Using DBCA in Interactive Mode to Delete Instances from Nodes
Start DBCA
Note: if the graphical interface cannot be started, consider using DBCA in silent mode to delete the instance instead:
dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name
-instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password
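The silent-mode syntax above is just a fixed argument order. As a sketch, a hypothetical helper that assembles the command line from its parts (all names passed in are placeholders for your own node, database, instance, and SYSDBA user; the password argument is left off here deliberately):

```shell
# Hypothetical helper: assemble the silent-mode dbca deleteInstance command
# line from its parts, matching the documented argument order. Append
# -sysDBAPassword yourself or let dbca prompt for it.
build_dbca_delete_cmd() {
  node=$1; gdb=$2; inst=$3; sysdba=$4
  printf 'dbca -silent -deleteInstance -nodeList %s -gdbName %s -instanceName %s -sysDBAUserName %s' \
    "$node" "$gdb" "$inst" "$sysdba"
}

build_dbca_delete_cmd s1-11g ora11g ora11g1 sys
```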

2.Verify that the dropped instance's redo thread has been removed by using SQL*Plus on an existing node to query the GV$LOG view. If the redo thread is not disabled, then disable the thread. For example:
SQL> ALTER DATABASE DISABLE THREAD 2;

3.Verify that the instance has been removed from OCR by running the following command, where db_unique_name is the database unique name for your Oracle RAC database:
srvctl config database -d db_unique_name

If you are deleting more than one node, then repeat these steps to delete the instances from all the nodes that you are going to delete.

These are the instance-deletion steps; repeat them if you need to remove more than one node.
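The verification in step 3 can also be scripted. As a sketch, a hypothetical filter over the `srvctl config database` output that succeeds only when a deleted instance no longer appears in the "Database instances:" line:

```shell
# Hypothetical check: reads 'srvctl config database -d <name>' output on
# stdin and exits 0 only if the given instance name is absent from the
# "Database instances:" line.
instance_removed() {
  ! grep '^Database instances:' | grep -qw "$1"
}

# Usage sketch against a live cluster (not run here):
#   srvctl config database -d ora11g | instance_removed ora11g1 && echo removed
```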

2.Removing Oracle RAC:
2.1
If there is a listener in the Oracle RAC home on the node you are deleting, then you must disable and stop it before deleting the Oracle RAC software. Run the following commands on any node in the cluster, specifying the name of the listener and the name of the node you are deleting:

$ srvctl disable listener -l listener_name -n name_of_node_to_delete
$ srvctl stop listener -l listener_name -n name_of_node_to_delete

2.2:
Run the following command from $ORACLE_HOME/oui/bin on the node that you are deleting to update the inventory on that node:

$ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
"CLUSTER_NODES={name_of_node_to_delete}" -local

2.3:
Remove the Oracle RAC software.
For a nonshared home, deinstall the Oracle home from the node that you are deleting by running the following command:

$ORACLE_HOME/deinstall/deinstall -local

2.4:
Update the inventories.
Run the following command from the $ORACLE_HOME/oui/bin directory on any one of the remaining nodes in the cluster to update the inventories of those nodes, specifying a comma-delimited list of remaining node names:
$ ./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location
"CLUSTER_NODES={remaining_node_list}"
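The remaining_node_list is simply the remaining node names joined by commas. A tiny hypothetical helper (node names below are invented) can build the `CLUSTER_NODES` argument:

```shell
# Hypothetical helper: join remaining node names with commas, the format
# runInstaller expects inside the CLUSTER_NODES argument.
join_nodes() {
  local IFS=,
  echo "$*"
}

echo "CLUSTER_NODES={$(join_nodes s2-11g s3-11g)}"
```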

(The above is excerpted from the Real Application Clusters Administration and Deployment Guide.)

The operation log follows.
Confirm the redo thread of the node being deleted:
This operation deletes the first node (thread 1):
SQL> show parameter thread
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_threads_per_cpu             integer     2
thread                               integer     1


Start DBCA to delete the node-1 instance:
(GUI screenshots omitted)
On the cluster database list screen,
DBCA kept failing with the error: service name or instance name is not specified.
Since this could not be resolved, silent-mode deletion was used instead:


[oracle@s2-11g ~]$ dbca -silent -deleteInstance -nodeList s1-10g -gdbName ora11g -instanceName ora11g1 -sysDBAUserName sys -sysDBAPassword oracle
Look at the log file "/oracle/app/cfgtoollogs/dbca/silent.log_2013-07-09_02-33-16-PM" for further details.


From the silent-mode deletion's error messages:
[main] [ 2013-07-09 14:35:10.443 CST ] [HADatabaseUtils.getDefaultListenerConnectString:2309]  PRCR-1001 : Resource ora.LISTENER.lsnr does not exist
PRCR-1001 : Resource ora.LISTENER.lsnr does not exist
        at oracle.cluster.impl.common.SoftwareModuleImpl.crsResource(SoftwareModuleImpl.java:775)
        at oracle.cluster.impl.nodeapps.ListenerImpl.crsResource(ListenerImpl.java:1107)
        at oracle.cluster.impl.nodeapps.NodeAppsFactoryImpl.getListener(NodeAppsFactoryImpl.java:1129)
        at oracle.cluster.nodeapps.NodeAppsFactory.getListener(NodeAppsFactory.java:1435)
        at oracle.sysman.assistants.util.hasi.HADatabaseUtils.getDefaultListenerConnectString(HADatabaseUtils.java:2283)
        at oracle.sysman.assistants.dbca.backend.SilentHost.performOperation(SilentHost.java:303)
        at oracle.sysman.assistants.dbca.backend.Host.startOperation(Host.java:3613)
        at oracle.sysman.assistants.dbca.Dbca.execute(Dbca.java:119)
        at oracle.sysman.assistants.dbca.Dbca.main(Dbca.java:180)
The listener is the problem.

On inspection, the listener configuration did have problems (this is a test database).
After fixing the listener, the operation was rerun.
The instance deletion completed.

Confirm the database configuration:
[grid@s2-11g ~]$ srvctl config database -d ora11g
Database unique name: ora11g
Database name: 
Oracle home: /oracle/app/product/11.2.0/db_1
Oracle user: oracle
Spfile: 
Domain: 
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ora11g
Database instances: ora11g2
Disk Groups: DATA,DATA2,DATA3,TEMPOCR
Mount point paths: 
Services: 
Type: RAC
Database is administrator managed
[grid@s2-11g ~]$ 

Disable thread 1:
[oracle@s2-11g ~]$ sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 9 15:14:14 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select inst_id,instance_name from gv$instance;

   INST_ID INSTANCE_NAME
---------- ----------------
         2 ora11g2

SQL> ALTER DATABASE DISABLE THREAD 1;

Database altered.

Disable and stop the listener resource on node 1:
[grid@s2-11g ~]$ srvctl disable listener -l listener -n s1-11g
[grid@s2-11g ~]$ srvctl stop listener -l listener -n s1-11g

Update the inventory on the node being deleted (RDBMS home):

[oracle@s1-11g ~]$ /oracle/app/product/11.2.0/db_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/app/product/11.2.0/db_1 "CLUSTER_NODES=s1-11g" -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 9840 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.

[oracle@s1-11g deinstall]$ /oracle/app/product/11.2.0/db_1/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /oracle/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /oracle/app/product/11.2.0/db_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /oracle/app
Checking for existence of central inventory location /oracle/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /oracle/grid
The following nodes are part of this cluster: s1-11g
Checking for sufficient temp space availability on node(s) : 's1-11g'

Update the inventory on the remaining node:
[oracle@s2-11g bin]$ /oracle/app/product/11.2.0/db_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/app/product/11.2.0/db_1 "CLUSTER_NODES=s2-11g"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10001 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.

With these steps complete, removal of the RDBMS home is finished.
Next, remove the Grid Infrastructure:
3. Removing the Clusterware
3.1
Check whether the nodes are active and whether any are pinned:
[grid@s2-11g ~]$  olsnodes -s -t
s1-11g  Active  Unpinned
s2-11g  Active  Unpinned
Note: If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
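The pin check above can be scripted. A sketch that filters `olsnodes -s -t` output for pinned nodes, assuming the three-column `name status pin-state` format shown in the log:

```shell
# Hypothetical filter: print only the nodes that 'olsnodes -s -t' reports
# as Pinned; each of those needs 'crsctl unpin css -n <node>' (run as root)
# before the node can be deleted.
pinned_nodes() {
  awk '$3 == "Pinned" { print $1 }'
}

# Usage sketch against a live cluster (not run here):
#   olsnodes -s -t | pinned_nodes
```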

3.2 Stop dbconsole:
emctl stop dbconsole

3.3 Deconfigure the Grid Infrastructure (run on the node being deleted, from $Grid_home/crs/install):
Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:
# ./rootcrs.pl -deconfig -deinstall -force

Note:
If you are using Oracle Clusterware 11g release 2 (11.2.0.1) or Oracle Clusterware 11g release 2 (11.2.0.2), then do not include the -deinstall flag when running the rootcrs.pl script.
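The version caveat can be captured in a sketch (a hypothetical flag selector; the version strings are exactly the ones named in the note):

```shell
# Hypothetical helper: pick the rootcrs.pl flags for a given Oracle
# Clusterware version. Per the note above, 11.2.0.1 and 11.2.0.2 must
# omit the -deinstall flag.
rootcrs_flags() {
  case "$1" in
    11.2.0.1|11.2.0.2) printf '%s\n' "-deconfig -force" ;;
    *)                 printf '%s\n' "-deconfig -deinstall -force" ;;
  esac
}

rootcrs_flags 11.2.0.3
```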

[root@s1-11g install]# /oracle/grid/crs/install/rootcrs.pl -deconfig -deinstall -force
Using configuration parameter file: /oracle/grid/crs/install/crsconfig_params
Network exists: 1/172.16.0.0/255.255.0.0/eth0, type static
VIP exists: /172.16.10.46/172.16.10.46/172.16.0.0/255.255.0.0/eth0, hosting node s1-11g
VIP exists: /172.16.10.56/172.16.10.56/172.16.0.0/255.255.0.0/eth0, hosting node s2-11g
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 's1-11g'
CRS-2673: Attempting to stop 'ora.crsd' on 's1-11g'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 's1-11g'
CRS-2673: Attempting to stop 'ora.oc4j' on 's1-11g'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 's1-11g'
CRS-2673: Attempting to stop 'ora.DATA2.dg' on 's1-11g'
CRS-2673: Attempting to stop 'ora.DATA3.dg' on 's1-11g'
CRS-2673: Attempting to stop 'ora.OCRVOTE.dg' on 's1-11g'
CRS-2673: Attempting to stop 'ora.TEMPOCR.dg' on 's1-11g'
CRS-2677: Stop of 'ora.DATA3.dg' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.DATA2.dg' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.TEMPOCR.dg' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.oc4j' on 's1-11g' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 's2-11g'
CRS-2676: Start of 'ora.oc4j' on 's2-11g' succeeded
CRS-2677: Stop of 'ora.OCRVOTE.dg' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 's1-11g'
CRS-2677: Stop of 'ora.asm' on 's1-11g' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 's1-11g' has completed
CRS-2677: Stop of 'ora.crsd' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 's1-11g'
CRS-2673: Attempting to stop 'ora.evmd' on 's1-11g'
CRS-2673: Attempting to stop 'ora.asm' on 's1-11g'
CRS-2673: Attempting to stop 'ora.mdnsd' on 's1-11g'
CRS-2677: Stop of 'ora.evmd' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.asm' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 's1-11g'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 's1-11g' succeeded
CRS-2677: Stop of 'ora.ctssd' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 's1-11g'
CRS-2677: Stop of 'ora.cssd' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 's1-11g'
CRS-2677: Stop of 'ora.crf' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 's1-11g'
CRS-2677: Stop of 'ora.gipcd' on 's1-11g' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 's1-11g'
CRS-2677: Stop of 'ora.gpnpd' on 's1-11g' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 's1-11g' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node

3.4 Confirm the node status and delete the node:
[grid@s2-11g ~]$ olsnodes -s -t
s1-11g  Inactive        Unpinned
s2-11g  Active  Unpinned

[root@s2-11g grid]# crsctl delete node -n s1-11g
CRS-4661: Node s1-11g successfully deleted.

Check the node status again:
[grid@s2-11g ~]$ olsnodes -s -t
s2-11g  Active  Unpinned

3.5
On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:
Update the inventory:
[grid@s1-11g ~]$ /oracle/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/grid "CLUSTER_NODES=s1-11g" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10001 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.

3.6 Deinstall the software:
For a local home, deinstall the Oracle Clusterware home from the node that you want to delete, as follows, by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:
[grid@s1-11g deinstall]$ /oracle/grid/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...

Note: if the -local option is omitted, deinstall will by default remove the entire cluster configuration from all nodes, which is an extremely dangerous operation.

3.7 Update the inventory on the remaining nodes:
On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:
[grid@s2-11g ~]$ /oracle/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/grid "CLUSTER_NODES=s2-11g" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 10001 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'UpdateNodeList' was successful.

3.8 Verify that the whole deletion completed without problems:
Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:
cluvfy stage -post nodedel -n s1-11g -verbose
[grid@s2-11g ~]$ cluvfy stage -post nodedel -n s1-11g -verbose

Performing post-checks for node removal 

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "s2-11g"


CRS integrity check passed
Result: 
Node removal check passed

Post-check for node removal was successful. 


Verification passed: no information about node s1-11g remains.

 ------------------------------------------------------------------------------------
<Copyright reserved. This article may be reposted, but the source address must be credited with a link; otherwise legal action will be pursued.>
Original blog: http://blog.itpub.net/23732248/
Original author: 应以峰 (frank-ying)
Source: http://blog.itpub.net/23732248/viewspace-1080549/
-------------------------------------------------------------------------------------
