
Repost: Reinstalling a crashed Oracle RAC node

Original post | Linux操作系统 | Author: andyxu | Posted: 2009-12-10 14:47:26

On Linux, one node of an Oracle RAC cluster has crashed while the other node is still healthy. How do I reinstall the crashed node? Advice from the experts would be appreciated!


node1: failed
node2: healthy



2. If ASM is present, remove the ASM instance: srvctl remove asm -n node1
runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES=node2"
$CRS_HOME/install/ node1
runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES=node2"
cluvfy comp crs -n all


1 Install CRS
2 Add ONS
3 Install ASM
4 Configure the listener
5 Install the DB software
6 Add the DB instance on this node

Run netca on node1, select the cluster configuration, and follow the steps.
Then select ...Cluster database..., then Instance Management, then Add an instance, then ...


This document is intended to provide the steps to remove a node from an Oracle cluster when the node itself is unavailable due to an OS or hardware issue that prevents it from starting up. It provides the steps to remove such a node so that it can be added back once the node is fixed.

The steps to remove a node from a cluster are already documented in the Oracle documentation at

Version Documentation Link
10gR2 ... elunix.htm#BEIFDCAF
11g ... erware.htm#BEIFDCAF

This note is different because the documentation covers the scenario where the node is accessible and the removal is a planned procedure. This note covers the scenario where the node is unable to boot up, so it is not possible to run the clusterware commands from that node.

Basically, all the steps documented in the Oracle® Clusterware Administration and Deployment Guide must be followed. The difference here is that we skip the steps that would be executed on the unavailable node, and we run some extra commands on the surviving node to remove the resources of the node that is being removed.

Example Configuration

Node Names           Halinux1            Halinux2
OS                   RHAS 4.0 Update 4   RHAS 4.0 Update 4
Oracle Clusterware   Oracle 11g          Oracle 11g

Assume that Halinux2 is down due to a hardware failure and cannot even boot up. The plan is to remove it from the clusterware, fix the issue, and then add it back to the clusterware. This document covers the steps to remove the node from the clusterware.

Initial Stage
At this stage, the Oracle Clusterware on Halinux1 (the good node) is up and running. The node Halinux2 is down and cannot be accessed. Note that the virtual IP of halinux2 has failed over to Halinux1; the rest of halinux2's resources are OFFLINE.

Step 1 - Remove oifcfg information for the failed node
Most installations register the network interfaces with the global flag of the oifcfg command and can therefore skip this step. Confirm this using:

$ oifcfg getif

If the output of the above command returns global, you can skip this step (running the delete command below against a global definition only returns an error).

If the output of oifcfg getif does not return global, use the following command:

$ oifcfg delif -node <nodename>
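Where scripting helps, the global-vs-node-specific check above can be sketched in shell. This is a hypothetical sketch: the sample text below stands in for real `oifcfg getif` output, and the interface names and subnets are invented.

```shell
# Stand-in for `oifcfg getif` output (hypothetical interfaces and subnets).
getif_output="eth0  192.168.1.0  global  public
eth1  10.0.0.0  global  cluster_interconnect"

# The third column carries the definition scope: global or node-specific.
if echo "$getif_output" | awk '{print $3}' | grep -qx global; then
  echo "interfaces are global - skip oifcfg delif"
else
  echo "node-specific interfaces found - run oifcfg delif for the failed node"
fi
```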

Step 2 Remove ONS information
Execute the following command as root to find out the remote port number to be used:

$ cat $CRS_HOME/opmn/conf/ons.config

and remove the information pertaining to the node being deleted using:

#$CRS_HOME/bin/racgons remove_config halinux2:6200
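The remote port can also be pulled out of ons.config programmatically rather than read by eye. A minimal sketch, assuming a typical ons.config layout (the quoted text stands in for the real file; the port values are samples):

```shell
# Stand-in for $CRS_HOME/opmn/conf/ons.config (typical layout, sample values).
ons_config='localport=6113
remoteport=6200
loglevel=3
useocr=on'

# Extract the value of the remoteport key.
remote_port=$(echo "$ons_config" | awk -F= '/^remoteport/{print $2}')
echo "racgons remove_config halinux2:$remote_port"
```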

Step 3 Remove resources
In this step, the resources that were defined on this node have to be removed. These resources include (a) the database, (b) the instance, and (c) ASM. A list of these can be obtained by running the crs_stat -t command from any node.
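To see which registered resources belong to the failed node, the crs_stat -t output can be filtered by node name. A sketch with made-up sample output (the resource names follow the usual ora.<node>.* and ora.<db>.* patterns, but the specific entries are hypothetical):

```shell
# Stand-in for `crs_stat -t` output (hypothetical resources and states).
crs_stat_output="ora.orcl.db            application  ONLINE   ONLINE   halinux1
ora.orcl.orcl1.inst    application  ONLINE   ONLINE   halinux1
ora.orcl.orcl2.inst    application  OFFLINE  OFFLINE
ora.halinux2.vip       application  ONLINE   ONLINE   halinux1
ora.halinux2.ASM2.asm  application  OFFLINE  OFFLINE"

# Resources tied to the failed node carry its name in the resource name.
node_resources=$(echo "$crs_stat_output" | awk '$1 ~ /halinux2/ {print $1}')
echo "$node_resources"
```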

Step 4 Execute
From a node that you are not deleting, execute the following command as root to find out the node number of the node that you want to delete:

#$CRS_HOME/bin/olsnodes -n

This number is then passed to the node-deletion command, which is executed as root from any node that is going to remain in the cluster:

#$CRS_HOME/install/ halinux2,2
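Looking up the node number from olsnodes -n can likewise be scripted. A sketch where the sample text stands in for a real run:

```shell
# Stand-in for `olsnodes -n` output: node name and node number per line.
olsnodes_output="halinux1 1
halinux2 2"

# Pick the number assigned to the node being removed.
node_number=$(echo "$olsnodes_output" | awk '$1 == "halinux2" {print $2}')
echo "argument for the delete-node step: halinux2,$node_number"
```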

Step 5 Update the Inventory
From a node which is going to remain in the cluster, run the following command as the owner of the CRS_HOME. The argument passed to CLUSTER_NODES is a comma-separated list of the node names that will remain in the cluster. This step also needs to be performed from the ASM home and the RAC home.

$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME "CLUSTER_NODES=halinux1" CRS=TRUE                       ## Optionally enclose the host names with {}
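For clusters with more than one surviving node, the CLUSTER_NODES value is just the remaining node names joined by commas. A hypothetical sketch (halinux3 is an invented extra node for illustration):

```shell
# Remaining nodes, space-separated (halinux3 is hypothetical).
remaining_nodes="halinux1 halinux3"

# Join them into the comma-separated form runInstaller expects.
cluster_nodes=$(echo "$remaining_nodes" | tr ' ' ',')
echo "CLUSTER_NODES=$cluster_nodes"
```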

Source: ITPUB blog. Please credit the source when reproducing.
