ITPUB Blog


11gR2删除节点

Original post, category: Linux, author: linfeng_oracle, date: 2013-08-26 23:53:12

Note:

How to Add Node/Instance or Remove Node/Instance in 10gR2, 11gR1 and 11gR2 Oracle Clusterware and RAC [ID 1332451.1]

 

Deleting a Cluster Node on Linux and UNIX Systems

This section describes the procedure for deleting a node from a cluster.

Notes:

  • You can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs. Deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide for more information about deleting an Oracle RAC database instance

  • If you run a dynamic Grid Plug and Play cluster using DHCP and GNS, then you need only perform step 3 (remove VIP resource), step 4 (delete node), and step 7 (update inventory on remaining nodes).

    Also, in a Grid Plug and Play cluster, if you have nodes that are unpinned, Oracle Clusterware forgets about those nodes after a time and there is no need for you to remove them.

  • If you created node-specific configuration for a node (such as disabling a service on a specific node, or adding the node to the candidate list for a server pool), that node-specific configuration is not removed when the node is deleted from the cluster. Such node-specific configuration must be removed manually.

  • Voting disks are automatically backed up in OCR after any changes you make to the cluster.

To delete a node from a cluster:

  1. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.

  2. Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:

    $ olsnodes -s -t
    

    If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
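As a sketch of what step 2 checks for, the third column of `olsnodes -s -t` reports each node's pin state; the node names and sample output below are hypothetical stand-ins for a real cluster, where you would pipe the command itself. A pinned node is then unpinned with `crsctl unpin css -n node_name` as root.

```shell
# Hypothetical sample of `olsnodes -s -t` output; in a real cluster,
# pipe the command itself instead of this here-string.
olsnodes_output="node1 Active Pinned
node2 Active Pinned
node3 Active Unpinned"

# Print only the nodes that are still pinned and need `crsctl unpin css`.
printf '%s\n' "$olsnodes_output" | awk '$3 == "Pinned" {print $1}'
```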

  3.  

    Note:

    This step is required only if you are using Oracle Clusterware 11g release 2 (11.2.0.1) or 11g release 2 (11.2.0.2).

    Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:

    Note:

    Before you run this command, you must stop the EMAGENT, as follows:
    $ emctl stop dbconsole
    
    # ./rootcrs.pl -deconfig -force
    

    If you are deleting multiple nodes, then run the rootcrs.pl script on each node that you are deleting.

    If you are deleting all nodes from a cluster, then append the -lastnode option to the preceding command to clear OCR and the voting disks, as follows:

    # ./rootcrs.pl -deconfig -force -lastnode
    

    Caution:

    Only use the -lastnode option if you are deleting all cluster nodes, because that option causes the rootcrs.pl script to clear OCR and the voting disks of data.

    Note:

    If you do not use the -force option in the preceding command or the node you are deleting is not accessible for you to execute the preceding command, then the VIP resource remains running on the node. You must manually stop and remove the VIP resource using the following commands as root from any node that you are not deleting:
    # srvctl stop vip -i vip_name -f
    # srvctl remove vip -i vip_name -f
    

    Where vip_name is the VIP for the node to be deleted. If you specify multiple VIP names, then separate the names with commas and surround the list in double quotation marks ("").
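When several nodes are being deleted, the quoted comma-separated VIP list can be assembled from the node names. The sketch below assumes a `node-vip` naming convention, which is only an illustration; actual VIP resource names vary per installation and should be taken from `srvctl config vip`.

```shell
# Hypothetical node names; real VIP names come from `srvctl config vip`.
nodes="node3 node4"

# Join "<node>-vip" entries with commas, as srvctl expects for -i.
vip_list=$(printf '%s\n' $nodes | sed 's/$/-vip/' | paste -sd, -)
echo "srvctl stop vip -i \"$vip_list\" -f"
```

The emitted command, `srvctl stop vip -i "node3-vip,node4-vip" -f`, matches the quoted comma-separated form described above.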

  4. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

    # crsctl delete node -n node_to_be_deleted
    
  5. On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local
    
  6. Depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:

    • If you have a shared home, then run the following commands in the following order on the node you want to delete.

      Run the following command to deconfigure Oracle Clusterware:

      $ Grid_home/perl/bin/perl Grid_home/crs/install/rootcrs.pl -deconfig
      

      Run the following command from the Grid_home/oui/bin directory to detach the Grid home:

      $ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
      

      Manually delete the following files:

      /etc/oraInst.loc
      /etc/oratab
      /etc/oracle/
      /opt/ORCLfmap/
      $OraInventory/
      
    • For a local home, deinstall the Oracle Clusterware home from the node that you want to delete, as follows, by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

      $ Grid_home/deinstall/deinstall -local
      

      Caution:

      If you do not specify the -local flag, then the command removes the Grid Infrastructure home from every node in the cluster.
  7. On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
    

    Notes:

    • You must run this command a second time, where ORACLE_HOME=ORACLE_HOME and CRS=TRUE -silent is omitted from the syntax, as follows:

      $ ./runInstaller -updateNodeList ORACLE_HOME=ORACLE_HOME "CLUSTER_NODES={remaining_nodes_list}"
      
    • If you have a shared Oracle Grid Infrastructure home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system.

  8. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

    $ cluvfy stage -post nodedel -n node_list [-verbose]
    

    See Also:

    "cluvfy stage -post nodedel" for more information about this CVU command
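Step 8 can be folded into a script that checks cluvfy's result. In the sketch below, the node name, the grep pattern, and the simulated summary line are all assumptions (check the exact wording your cluvfy version prints); in a real cluster, the echo inside the function would be replaced by the actual cluvfy invocation.

```shell
# Hypothetical wrapper: in a real cluster, replace the echo with
#   cluvfy stage -post nodedel -n "$1"
# The summary line below is an assumed stand-in for cluvfy's output.
check_nodedel() {
  echo "Post-check for node removal was successful."
}

if check_nodedel node3 | grep -q "successful"; then
  echo "node3 verified as removed"
fi
```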

Source: ITPUB Blog, http://blog.itpub.net/24996904/viewspace-769306/ — cite the source when reprinting; otherwise legal liability may be pursued.
