Oracle 11g RAC Startup, Shutdown, and Some Maintenance Commands

Author: yhj20041128001    Date: 2014-12-28 21:29:44

Starting and shutting down the cluster are routine operations and must be mastered. The notes below document them:

Cluster name: rac-cluster
Cluster database: RACDB
I. Shutting down the RAC
1. Confirm the database status with srvctl and with ps -ef|grep smon
[grid@rac1 ~]$ srvctl status database -d RACDB
Instance RACDB1 is running on node rac1
Instance RACDB2 is running on node rac2
[grid@rac1 ~]$ ps -ef|grep smon
oracle    3676     1  0 06:05 ?        00:00:02 ora_smon_RACDB1
grid     12840     1  0 01:54 ?        00:00:00 asm_smon_+ASM1
grid     27890 27621  0 07:52 pts/3    00:00:00 grep smon

2. Stop the database and confirm again:
[grid@rac1 ~]$ srvctl stop database -d RACDB
[grid@rac1 ~]$ ps -ef|grep smon
After the stop, ora_smon_RACDB1 should no longer appear; only asm_smon_+ASM1 (and the grep itself) should remain, since ASM is still up at this point.

3. Switch to the root account and load the grid user's environment (ASM is shut down together with the cluster in step 5):
[grid@rac1 ~]$ su -
Password:
[root@rac1 ~]# cd /home/grid
[root@rac1 grid]# source .bash_profile
(Use source rather than sh so the variables take effect in the current shell.)
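Alternatively (a minimal sketch, assuming the Grid home shown throughout this post), root can simply put the Grid binaries on the PATH instead of sourcing the profile:
[root@rac1 ~]# export PATH=/u01/app/11.2.0/grid/bin:$PATH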
4. Use crs_stat to confirm the running status of all cluster resources and services:
[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crs_stat -t -v
5. Use crsctl to stop the cluster:
[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
6. Use crs_stat again to confirm that the cluster resources and services are down:
[root@rac1 bin]# /u01/app/11.2.0/grid/bin/crs_stat -t -v
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v
CRS-0184: Cannot communicate with the CRS daemon.
This indicates the cluster has been shut down cleanly.
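A hedged aside, based on standard 11.2 behavior: crsctl stop cluster stops the CRS stack and its managed resources but leaves the OHASD stack running on each node. To stop the entire Clusterware stack on a node, including OHASD, use:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs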


II. Starting the RAC
1. As root, load the grid user's environment variables (this can be skipped by running the commands with their full path under /u01/app/11.2.0/grid/bin/).
2. Start the cluster:
  [root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
  Check the status:
  [root@rac2 ~]# /u01/app/11.2.0/grid/bin/crs_stat -t -v
3. Use srvctl to confirm the database status:

 [root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl status database -d RACDB
Instance RACDB1 is not running on node rac1
Instance RACDB2 is not running on node rac2
4. Start the RAC database:
[root@rac1 ~]# /u01/app/11.2.0/grid/bin/srvctl start database -d RACDB
 Confirm the status:
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/srvctl status database -d RACDB
Instance RACDB1 is running on node rac1
Instance RACDB2 is running on node rac2
 
5. Start OEM (Database Control). Note that emctl lives in the database home rather than the Grid home, and the subcommand is dbconsole, so run it as the oracle user:

[oracle@rac1 ~]$ emctl start dbconsole
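If needed, the console status can be verified afterwards (again as the oracle user, using the standard emctl subcommand):

[oracle@rac1 ~]$ emctl status dbconsole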


Commands:


1) Check the cluster status:
  [grid@rac02 ~]$ crsctl check cluster
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
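  A hedged addition (standard 11.2 syntax): by default crsctl check cluster reports on the local node only; the -all flag checks every node at once:
  [grid@rac02 ~]$ crsctl check cluster -all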

2) All Oracle instances (database status):
  [grid@rac02 ~]$ srvctl status database -d racdb
  Instance racdb1 is running on node rac01
  Instance racdb2 is running on node rac02

3) Check the status of a single instance:
  [grid@rac02 ~]$ srvctl status instance -d racdb -i racdb1
  Instance racdb1 is running on node rac01

4) Node application status:
  [grid@rac02 ~]$ srvctl status nodeapps
  VIP rac01-vip is enabled
  VIP rac01-vip is running on node: rac01
  VIP rac02-vip is enabled
  VIP rac02-vip is running on node: rac02
  Network is enabled
  Network is running on node: rac01
  Network is running on node: rac02
  GSD is disabled
  GSD is not running on node: rac01
  GSD is not running on node: rac02
  ONS is enabled
  ONS daemon is running on node: rac01
  ONS daemon is running on node: rac02
  eONS is enabled
  eONS daemon is running on node: rac01
  eONS daemon is running on node: rac02

5) List all configured databases:
  [grid@rac02 ~]$ srvctl config database
  racdb

6) Database configuration:
  [grid@rac02 ~]$ srvctl config database -d racdb -a
  Database unique name: racdb
  Database name: racdb
  Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
  Oracle user: oracle
  Spfile: +RACDB_DATA/racdb/spfileracdb.ora
  Domain: xzxj.edu.cn
  Start options: open
  Stop options: immediate
  Database role: PRIMARY
  Management policy: AUTOMATIC
  Server pools: racdb
  Database instances: racdb1,racdb2
  Disk Groups: RACDB_DATA,FRA
  Services:
  Database is enabled
  Database is administrator managed

7) ASM status and ASM configuration:
  [grid@rac02 ~]$ srvctl status asm
  ASM is running on rac01,rac02
  [grid@rac02 ~]$ srvctl config asm -a
  ASM home: /u01/app/11.2.0/grid
  ASM listener: LISTENER
  ASM is enabled.

8) TNS listener status and configuration:
  [grid@rac02 ~]$ srvctl status listener
  Listener LISTENER is enabled
  Listener LISTENER is running on node(s): rac01,rac02
  [grid@rac02 ~]$ srvctl config listener -a
  Name: LISTENER
  Network: 1, Owner: grid
  Home:
  /u01/app/11.2.0/grid on node(s) rac02,rac01
  End points: TCP:1521

9) SCAN status and configuration:
  [grid@rac02 ~]$ srvctl status scan
  SCAN VIP scan1 is enabled
  SCAN VIP scan1 is running on node rac02
  [grid@rac02 ~]$ srvctl config scan
  SCAN name: rac-scan.xzxj.edu.cn, Network: 1/192.168.1.0/255.255.255.0/eth0
  SCAN VIP name: scan1, IP: /rac-scan.xzxj.edu.cn/192.168.1.55
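  The SCAN listener is a separate resource from the SCAN VIP; its status and configuration can be checked the same way (a hedged aside, standard 11.2 srvctl subcommands):
  [grid@rac02 ~]$ srvctl status scan_listener
  [grid@rac02 ~]$ srvctl config scan_listener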

10) VIP status and configuration for each node:
  [grid@rac02 ~]$ srvctl status vip -n rac01
  VIP rac01-vip is enabled
  VIP rac01-vip is running on node: rac01
  [grid@rac02 ~]$ srvctl status vip -n rac02
  VIP rac02-vip is enabled
  VIP rac02-vip is running on node: rac02
  [grid@rac02 ~]$ srvctl config vip -n rac01
  VIP exists.:rac01
  VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
  [grid@rac02 ~]$ srvctl config vip -n rac02
  VIP exists.:rac02
  VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0

11) Node application configuration (VIP, GSD, ONS, listener):
  [grid@rac02 ~]$ srvctl config nodeapps -a -g -s -l
  -l option has been deprecated and will be ignored.
  VIP exists.:rac01
  VIP exists.: /rac01-vip/192.168.1.53/255.255.255.0/eth0
  VIP exists.:rac02
  VIP exists.: /rac02-vip/192.168.1.54/255.255.255.0/eth0
  GSD exists.
  ONS daemon exists. Local port 6100, remote port 6200
  Name: LISTENER
  Network: 1, Owner: grid
  Home:
  /u01/app/11.2.0/grid on node(s) rac02,rac01
  End points: TCP:1521

12) Verify clock synchronization across all cluster nodes:
  [grid@rac02 ~]$ cluvfy comp clocksync -verbose
  Verifying Clock Synchronization across the cluster nodes
  Checking if Clusterware is installed on all nodes...
  Check of Clusterware install passed
  Checking if CTSS Resource is running on all nodes...
  Check: CTSS Resource running on all nodes
  Node Name Status
  ------------------------------------ ------------------------
  rac02 passed
  Result: CTSS resource check passed
  Querying CTSS for time offset on all nodes...
  Result: Query of CTSS for time offset passed
  Check CTSS state started...
  Check: CTSS state
  Node Name State
  ------------------------------------ ------------------------
  rac02 Active
  CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
  Reference Time Offset Limit: 1000.0 msecs
  Check: Reference Time Offset
  Node Name Time Offset Status
  ------------ ------------------------ ------------------------
  rac02 0.0 passed
  Time offset is within the specified limits on the following set of nodes:
  "[rac02]"
  Result: Check of clock time offsets passed
  Oracle Cluster Time Synchronization Services check passed
  Verification of Clock Synchronization across the cluster nodes was successful.

13) All running instances in the cluster (SQL):
  SELECT inst_id, instance_number inst_no, instance_name inst_name, parallel, status,
         database_status db_status, active_state state, host_name host
    FROM gv$instance
   ORDER BY inst_id;
14) All database files and the ASM disk groups they reside in (SQL):
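  The query itself was missing from the original post; a minimal sketch (the +DISKGROUP prefix of each file name shows the ASM disk group it resides in) might look like this:
  SELECT name FROM v$datafile
  UNION
  SELECT member FROM v$logfile
  UNION
  SELECT name FROM v$controlfile
  UNION
  SELECT name FROM v$tempfile;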
15) ASM disk volumes:
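  This query was also missing; a minimal sketch against the standard ASM views (run on an ASM instance) could be:
  SELECT dg.name AS diskgroup, d.path, d.total_mb, d.free_mb
    FROM v$asm_diskgroup dg
    JOIN v$asm_disk d ON d.group_number = dg.group_number
   ORDER BY dg.name, d.path;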
16) Start and stop the cluster:
  The following operations must be run as the root user.
  (1) Stop the Oracle Clusterware stack on the local server:
  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster

  Note: after running "crsctl stop cluster", the command fails as a whole if any resource managed by Oracle Clusterware is still running. Use the -f option to unconditionally stop all resources and then stop the Oracle Clusterware stack.
  Also note that the -all option stops the Oracle Clusterware stack on all servers in the cluster. For example, to stop Clusterware on rac01 and rac02:
  [root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl stop cluster -all
  Start the Oracle Clusterware stack on the local server:
  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster
  Note: the -all option starts the Oracle Clusterware stack on all servers in the cluster.
  [root@rac02 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -all
  The Oracle Clusterware stack can also be started on one or more named servers in the cluster by listing them, separated by spaces:
  [root@rac01 ~]# /u01/app/11.2.0/grid/bin/crsctl start cluster -n rac01 rac02
  Use SRVCTL to start/stop all instances:
  [oracle@rac01 ~]$ srvctl stop database -d racdb
  [oracle@rac01 ~]$ srvctl start database -d racdb
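  A hedged addition (standard srvctl syntax, mirroring the status commands in 3) above): individual instances can also be stopped and started by name:
  [oracle@rac01 ~]$ srvctl stop instance -d racdb -i racdb1
  [oracle@rac01 ~]$ srvctl start instance -d racdb -i racdb1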

