ITPub Blog


Upgrading Oracle RAC from 11.2.0.1 to 11.2.0.4 (on Red Hat Linux)

Category: Oracle | Author: germany006 | Published: 2015-09-01 16:37:36


 

I. Preparation

1. Required patches

The 11.2.0.4 patchset for Red Hat Linux can be downloaded from My Oracle Support (MetaLink). The patch number is 13390677, and the relevant files are p13390677_112040_Linux-x86-64_1of7.zip, p13390677_112040_Linux-x86-64_2of7.zip, and p13390677_112040_Linux-x86-64_3of7.zip (together they contain the database software and the clusterware).
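Once downloaded, the three archives can be unzipped into a staging area; the /stage path below is an example (this walkthrough later runs runcluvfy.sh from /app/product/grid/11.2.0.4/grid, i.e. from the unpacked grid software):

```shell
# Unzip the three 11.2.0.4 patchset archives into a staging directory
# (/stage is an example path; choose any location with enough free space)
cd /stage
unzip -q p13390677_112040_Linux-x86-64_1of7.zip
unzip -q p13390677_112040_Linux-x86-64_2of7.zip
unzip -q p13390677_112040_Linux-x86-64_3of7.zip
# 1of7 and 2of7 unpack the database software; 3of7 unpacks the grid infrastructure
ls
```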

2. RAC environment

 

Item                      Node 1                           Node 2
------------------------  -------------------------------  -----------------------
Hostname                  racdb01                          racdb02
SID                       racdb1                           racdb2
DB_NAME                   racdb (both nodes)
Current Oracle version    11.2.0.1
Operating system          Red Hat Enterprise Linux Server release 5.4 (Tikanga)
ORACLE_HOME               /app/product/oracle/11.2.0/db_1
GRID_HOME                 /app/product/grid/11.2.0

II. Back Up the Database

Upgrading a database is a risky operation. On a production system, always take a full database backup before the upgrade so that the database can be restored if the upgrade fails.

Stop every application system that depends on this database, such as middleware, and confirm that no business workload is running. After the database has been shut down cleanly, also back up the Oracle home directories, again so that the software can be restored to its pre-upgrade version if the upgrade fails. For example: tar -zcvf oracle11201_bak.tar.gz $ORACLE_BASE
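A minimal cold-backup sequence might look like the following sketch. The /backup path is an assumption; the database name and home paths come from the environment table above:

```shell
# Stop the database on both nodes (as the oracle user)
srvctl stop database -d racdb

# Stop the clusterware stack (as root, on each node)
crsctl stop crs

# Cold backup of the software homes (as root; /backup is an example destination)
tar -zcvf /backup/oracle11201_bak.tar.gz /app/product/oracle
tar -zcvf /backup/grid11201_bak.tar.gz  /app/product/grid
```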

 

III. Grid Infrastructure Pre-Upgrade Checks

su - grid

[grid@racdb01 ~]$ cd /app/product/grid/11.2.0.4/grid/

[grid@racdb01 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -n racdb01,racdb02 -rolling -src_crshome $ORACLE_HOME -dest_crshome /app/product/grid/11.2.0.4 -dest_version 11.2.0.4.0 -fixup -fixupdir /tmp -verbose | tee /tmp/runcluvfy.log

[grid@racdb01 grid]$ cat /tmp/runcluvfy.log | more
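With the output captured to a log file, the failed checks can be pulled out quickly instead of paging through the whole report. A small filter such as the following (the log path matches the cat command above) surfaces them:

```shell
# List only the failed checks and PRVF error codes from the cluvfy log;
# print a fallback message when nothing matched (or the log is absent)
grep -E 'failed|PRVF-' /tmp/runcluvfy.log 2>/dev/null || echo "no failures found"
```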

 

Performing pre-checks for cluster services setup

 

Checking node reachability...

 

Check: Node reachability from node "racdb01"

  Destination Node                      Reachable?             

  ------------------------------------  ------------------------

  racdb01                               yes                    

  racdb02                               yes                    

Result: Node reachability check passed from node "racdb01"

 

 

Checking user equivalence...

 

Check: User equivalence for user "grid"

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racdb02                               passed                 

  racdb01                               passed                 

Result: User equivalence check passed for user "grid"

 

Checking CRS user consistency

Result: CRS user consistency check successful

 

Checking node connectivity...

 

Checking hosts config file...

  Node Name                             Status           

  ------------------------------------  ------------------------

  racdb02                               passed                 

  racdb01                               passed                 

 

Verification of the hosts config file successful

 

 

Interface information for node "racdb02"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ----------------- ------

 eth1   10.1.1.12       10.1.1.0        0.0.0.0         192.168.1.1     00:0C:29:98:3B:08 1500 

 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         192.168.1.1     00:00:00:00:00:00 1500 

 eth0   192.168.1.113   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:98:3B:FE 1500 

 eth0   192.168.1.117   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:98:3B:FE 1500 

 eth0   192.168.1.114   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:98:3B:FE 1500 

 

 

Interface information for node "racdb01"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ----------------- ------

 eth1   10.1.1.11       10.1.1.0        0.0.0.0         192.168.1.1     00:0C:29:52:4E:4D 1500 

 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         192.168.1.1     00:00:00:00:00:00 1500 

 eth0   192.168.1.111   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:52:4E:43 1500 

 eth0   192.168.1.118   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:52:4E:43 1500 

 eth0   192.168.1.119   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:52:4E:43 1500 

 eth0   192.168.1.112   192.168.1.0     0.0.0.0         192.168.1.1     00:0C:29:52:4E:43 1500 

 

 

Check: Node connectivity for interface "eth1"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racdb02[10.1.1.12]              racdb01[10.1.1.11]              yes            

Result: Node connectivity passed for interface "eth1"

 

 

Check: TCP connectivity of subnet "10.1.1.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racdb01:10.1.1.11               racdb02:10.1.1.12               passed         

Result: TCP connectivity check passed for subnet "10.1.1.0"

 

 

Check: Node connectivity for interface "eth0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racdb02[192.168.1.113]          racdb02[192.168.1.117]          yes            

  racdb02[192.168.1.113]          racdb02[192.168.1.114]          yes            

  racdb02[192.168.1.113]          racdb01[192.168.1.111]          yes            

  racdb02[192.168.1.113]          racdb01[192.168.1.118]          yes            

  racdb02[192.168.1.113]          racdb01[192.168.1.119]          yes            

  racdb02[192.168.1.113]          racdb01[192.168.1.112]          yes            

  racdb02[192.168.1.117]          racdb02[192.168.1.114]          yes            

  racdb02[192.168.1.117]          racdb01[192.168.1.111]          yes            

  racdb02[192.168.1.117]          racdb01[192.168.1.118]          yes            

  racdb02[192.168.1.117]          racdb01[192.168.1.119]          yes            

  racdb02[192.168.1.117]          racdb01[192.168.1.112]          yes            

  racdb02[192.168.1.114]          racdb01[192.168.1.111]          yes            

  racdb02[192.168.1.114]          racdb01[192.168.1.118]          yes            

  racdb02[192.168.1.114]          racdb01[192.168.1.119]          yes            

  racdb02[192.168.1.114]          racdb01[192.168.1.112]          yes            

  racdb01[192.168.1.111]          racdb01[192.168.1.118]          yes            

  racdb01[192.168.1.111]          racdb01[192.168.1.119]          yes            

  racdb01[192.168.1.111]          racdb01[192.168.1.112]          yes            

  racdb01[192.168.1.118]          racdb01[192.168.1.119]          yes            

  racdb01[192.168.1.118]          racdb01[192.168.1.112]          yes            

  racdb01[192.168.1.119]          racdb01[192.168.1.112]          yes             

Result: Node connectivity passed for interface "eth0"

 

 

Check: TCP connectivity of subnet "192.168.1.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racdb01:192.168.1.111           racdb02:192.168.1.113           passed         

  racdb01:192.168.1.111           racdb02:192.168.1.117           passed         

  racdb01:192.168.1.111           racdb02:192.168.1.114           passed         

  racdb01:192.168.1.111           racdb01:192.168.1.118           passed         

  racdb01:192.168.1.111           racdb01:192.168.1.119           passed         

  racdb01:192.168.1.111           racdb01:192.168.1.112           passed         

Result: TCP connectivity check passed for subnet "192.168.1.0"

 

Checking subnet mask consistency...

Subnet mask consistency check passed for subnet "10.1.1.0".

Subnet mask consistency check passed for subnet "192.168.1.0".

Subnet mask consistency check passed.

 

Result: Node connectivity check passed

Checking multicast communication...

 

Checking subnet "10.1.1.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "10.1.1.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Checking subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0"...

Check of subnet "192.168.1.0" for multicast communication with multicast group "230.0.1.0" passed.

 

Check of multicast communication passed.

 

Checking OCR integrity...

 

OCR integrity check passed

 

Checking ASMLib configuration.

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racdb02                               passed                 

  racdb01                               passed                 

Result: Check for ASMLib configuration passed.

 

Check: Total memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       1.8994GB (1991680.0KB)    1.5GB (1572864.0KB)       passed   

  racdb01       1.8994GB (1991680.0KB)    1.5GB (1572864.0KB)       passed   

Result: Total memory check passed

 

Check: Available memory

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       883.9297MB (905144.0KB)   50MB (51200.0KB)          passed   

  racdb01       628.4648MB (643548.0KB)   50MB (51200.0KB)          passed   

Result: Available memory check passed

 

Check: Swap space

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       1.172GB (1228964.0KB)     2.8491GB (2987520.0KB)    failed   

  racdb01       1.172GB (1228964.0KB)     2.8491GB (2987520.0KB)    failed   

Result: Swap space check failed
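The swap shortfall reported above can be remedied by adding a swap file on each node before the upgrade. A sketch follows; the file path and 2 GB size are examples only, and the commands must be run as root:

```shell
# Create and enable a 2 GB swap file (as root, on each node)
dd if=/dev/zero of=/swapfile01 bs=1M count=2048
chmod 600 /swapfile01
mkswap /swapfile01
swapon /swapfile01

# Make the swap file persistent across reboots
echo '/swapfile01 swap swap defaults 0 0' >> /etc/fstab

# Verify the new swap is visible
free -m
```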

 

Check: Free disk space for "racdb02:/app/product/grid/11.2.0.4"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /app/product/grid/11.2.0.4  racdb02       /app/product/grid/11.2.0.4  19.4121GB     5.5GB         passed     

Result: Free disk space check passed for "racdb02:/app/product/grid/11.2.0.4"

 

Check: Free disk space for "racdb01:/app/product/grid/11.2.0.4"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /app/product/grid/11.2.0.4  racdb01       /app/product/grid/11.2.0.4  15.1885GB     5.5GB         passed     

Result: Free disk space check passed for "racdb01:/app/product/grid/11.2.0.4"

 

Check: Free disk space for "racdb02:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              racdb02       /             2.3613GB      1GB           passed     

Result: Free disk space check passed for "racdb02:/tmp"

 

Check: Free disk space for "racdb01:/tmp"

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------------

  /tmp              racdb01       /             1.8843GB      1GB           passed     

Result: Free disk space check passed for "racdb01:/tmp"

 

Check: User existence for "grid"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racdb02       passed                    exists(500)            

  racdb01       passed                    exists(500)            

 

Checking for multiple users with UID value 500

Result: Check for multiple users with UID value 500 passed

Result: User existence check passed for "grid"

 

Check: Group existence for "oinstall"

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racdb02       passed                    exists                 

  racdb01       passed                    exists                 

Result: Group existence check passed for "oinstall"

 

Check: Membership of user "grid" in group "oinstall" [as Primary]

  Node Name         User Exists   Group Exists  User in Group  Primary       Status      

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           yes           yes           yes           yes           passed     

  racdb01           yes           yes           yes           yes           passed     

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

 

Check: Run level

  Node Name     run level                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       5                         3,5                       passed   

  racdb01       5                         3,5                       passed   

Result: Run level check passed

 

Check: Hard limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racdb02           hard          250000        65536         passed         

  racdb01           hard          250000        65536         passed         

Result: Hard limits check passed for "maximum open file descriptors"

 

Check: Soft limits for "maximum open file descriptors"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racdb02           soft          1024          1024          passed         

  racdb01           soft          1024          1024          passed         

Result: Soft limits check passed for "maximum open file descriptors"

 

Check: Hard limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racdb02           hard          32768         16384         passed         

  racdb01           hard          32768         16384         passed         

Result: Hard limits check passed for "maximum user processes"

 

Check: Soft limits for "maximum user processes"

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racdb02           soft          2047          2047          passed         

  racdb01           soft          2047          2047          passed         

Result: Soft limits check passed for "maximum user processes"

 

Checking for Oracle patch "9413827 or 9706490" in home "/app/product/grid/11.2.0".

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  racdb02       missing                   9413827 or 9706490        failed   

  racdb01       missing                   9413827 or 9706490        failed   

Result: Check for Oracle patch "9413827 or 9706490" in home "/app/product/grid/11.2.0" failed
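This failure is expected when upgrading straight from 11.2.0.1: cluvfy wants one of the listed patches applied to the existing grid home before a rolling upgrade. Whether either patch is installed can be confirmed with opatch, run as the grid user against the old 11.2.0.1 home:

```shell
# Check the old grid home's inventory for the prerequisite patches
/app/product/grid/11.2.0/OPatch/opatch lsinventory | grep -E '9413827|9706490' \
  || echo "neither prerequisite patch is installed"
```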

 

There are no oracle patches required for home "/app/product/grid/11.2.0.4".

 

Check: System architecture

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       i686                      x86                       passed   

  racdb01       i686                      x86                       passed   

Result: System architecture check passed

 

Check: Kernel version

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       2.6.18-164.el5xen         2.6.18                    passed   

  racdb01       2.6.18-164.el5xen         2.6.18                    passed   

Result: Kernel version check passed

 

Check: Kernel parameter for "semmsl"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           250           250           250           passed         

  racdb01           250           250           250           passed         

Result: Kernel parameter check passed for "semmsl"

 

Check: Kernel parameter for "semmns"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           32000         32000         32000         passed         

  racdb01           32000         32000         32000         passed         

Result: Kernel parameter check passed for "semmns"

 

Check: Kernel parameter for "semopm"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           100           100           100           passed         

  racdb01           100           100           100           passed         

Result: Kernel parameter check passed for "semopm"

 

Check: Kernel parameter for "semmni"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           128           128           128           passed         

  racdb01           128           128           128           passed         

Result: Kernel parameter check passed for "semmni"

 

Check: Kernel parameter for "shmmax"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           1073741824    1073741824    1019740160    passed         

  racdb01           1073741824    1073741824    1019740160    passed         

Result: Kernel parameter check passed for "shmmax"

 

Check: Kernel parameter for "shmmni"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           4096          4096          4096          passed         

  racdb01           4096          4096          4096          passed         

Result: Kernel parameter check passed for "shmmni"

 

Check: Kernel parameter for "shmall"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           2097152       2097152       2097152       passed         

  racdb01           2097152       2097152       2097152       passed         

Result: Kernel parameter check passed for "shmall"

 

Check: Kernel parameter for "file-max"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           6815744       6815744       6815744       passed         

  racdb01           6815744       6815744       6815744       passed         

Result: Kernel parameter check passed for "file-max"

 

Check: Kernel parameter for "ip_local_port_range"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed         

  racdb01           between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed         

Result: Kernel parameter check passed for "ip_local_port_range"

 

Check: Kernel parameter for "rmem_default"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           262144        262144        262144        passed         

  racdb01           262144        262144        262144        passed         

Result: Kernel parameter check passed for "rmem_default"

 

Check: Kernel parameter for "rmem_max"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           4194304       4194304       4194304       passed         

  racdb01           4194304       4194304       4194304       passed         

Result: Kernel parameter check passed for "rmem_max"

 

Check: Kernel parameter for "wmem_default"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           262144        262144        262144        passed         

  racdb01           262144        262144        262144        passed         

Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           1048576       1048576       1048576       passed         

  racdb01           1048576       1048576       1048576       passed         

Result: Kernel parameter check passed for "wmem_max"

 

Check: Kernel parameter for "aio-max-nr"

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------------

  racdb02           1048576       1048576       1048576       passed         

  racdb01           1048576       1048576       1048576       passed         

Result: Kernel parameter check passed for "aio-max-nr"

 

Check: Package existence for "make"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       make-3.81-3.el5           make-3.81                 passed   

  racdb01       make-3.81-3.el5           make-3.81                 passed   

Result: Package existence check passed for "make"

 

Check: Package existence for "binutils"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       binutils-2.17.50.0.6-12.el5  binutils-2.17.50.0.6      passed   

  racdb01       binutils-2.17.50.0.6-12.el5  binutils-2.17.50.0.6      passed   

Result: Package existence check passed for "binutils"

 

Check: Package existence for "gcc"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       gcc-4.1.2-46.el5          gcc-4.1.2                 passed   

  racdb01       gcc-4.1.2-46.el5          gcc-4.1.2                 passed   

Result: Package existence check passed for "gcc"

 

Check: Package existence for "gcc-c++"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       gcc-c++-4.1.2-46.el5      gcc-c++-4.1.2             passed   

  racdb01       gcc-c++-4.1.2-46.el5      gcc-c++-4.1.2             passed   

Result: Package existence check passed for "gcc-c++"

 

Check: Package existence for "libgomp"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       libgomp-4.4.0-6.el5       libgomp-4.1.2             passed   

  racdb01       libgomp-4.4.0-6.el5       libgomp-4.1.2             passed   

Result: Package existence check passed for "libgomp"

 

Check: Package existence for "libaio"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       libaio-0.3.106-3.2        libaio-0.3.106            passed   

  racdb01       libaio-0.3.106-3.2        libaio-0.3.106            passed   

Result: Package existence check passed for "libaio"

 

Check: Package existence for "glibc"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       glibc-2.5-42              glibc-2.5-24              passed   

  racdb01       glibc-2.5-42              glibc-2.5-24              passed   

Result: Package existence check passed for "glibc"

 

Check: Package existence for "compat-libstdc++-33"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       compat-libstdc++-33-3.2.3-61  compat-libstdc++-33-3.2.3  passed   

  racdb01       compat-libstdc++-33-3.2.3-61  compat-libstdc++-33-3.2.3  passed   

Result: Package existence check passed for "compat-libstdc++-33"

 

Check: Package existence for "elfutils-libelf"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       elfutils-libelf-0.137-3.el5  elfutils-libelf-0.125     passed   

  racdb01       elfutils-libelf-0.137-3.el5  elfutils-libelf-0.125     passed   

Result: Package existence check passed for "elfutils-libelf"

 

Check: Package existence for "elfutils-libelf-devel"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed   

  racdb01       elfutils-libelf-devel-0.137-3.el5  elfutils-libelf-devel-0.125  passed   

Result: Package existence check passed for "elfutils-libelf-devel"

 

Check: Package existence for "glibc-common"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       glibc-common-2.5-42       glibc-common-2.5          passed   

  racdb01       glibc-common-2.5-42       glibc-common-2.5          passed   

Result: Package existence check passed for "glibc-common"

 

Check: Package existence for "glibc-devel"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       glibc-devel-2.5-42        glibc-devel-2.5           passed   

  racdb01       glibc-devel-2.5-42        glibc-devel-2.5           passed   

Result: Package existence check passed for "glibc-devel"

 

Check: Package existence for "glibc-headers"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       glibc-headers-2.5-42      glibc-headers-2.5         passed   

  racdb01       glibc-headers-2.5-42      glibc-headers-2.5         passed   

Result: Package existence check passed for "glibc-headers"

 

Check: Package existence for "libaio-devel"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       libaio-devel-0.3.106-3.2  libaio-devel-0.3.106      passed   

  racdb01       libaio-devel-0.3.106-3.2  libaio-devel-0.3.106      passed   

Result: Package existence check passed for "libaio-devel"

 

Check: Package existence for "libgcc"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       libgcc-4.1.2-46.el5       libgcc-4.1.2              passed   

  racdb01       libgcc-4.1.2-46.el5       libgcc-4.1.2              passed   

Result: Package existence check passed for "libgcc"

 

Check: Package existence for "libstdc++"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       libstdc++-4.1.2-46.el5    libstdc++-4.1.2           passed   

  racdb01       libstdc++-4.1.2-46.el5    libstdc++-4.1.2           passed   

Result: Package existence check passed for "libstdc++"

 

Check: Package existence for "libstdc++-devel"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       libstdc++-devel-4.1.2-46.el5  libstdc++-devel-4.1.2     passed   

  racdb01       libstdc++-devel-4.1.2-46.el5  libstdc++-devel-4.1.2     passed   

Result: Package existence check passed for "libstdc++-devel"

 

Check: Package existence for "sysstat"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       sysstat-7.0.2-3.el5       sysstat-7.0.2             passed   

  racdb01       sysstat-7.0.2-3.el5       sysstat-7.0.2             passed   

Result: Package existence check passed for "sysstat"

 

Check: Package existence for "ksh"

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       ksh-20080202-14.el5       ksh-20060214              passed   

  racdb01       ksh-20080202-14.el5       ksh-20060214              passed   

Result: Package existence check passed for "ksh"

 

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

 

Check: Current group ID

Result: Current group ID check passed

 

Starting check for consistency of primary group of root user

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racdb02                               passed                 

  racdb01                               passed                 

 

Check for consistency of root user's primary group passed

 

Starting Clock synchronization checks using Network Time Protocol(NTP)...

 

NTP Configuration file check started...

Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes

No NTP Daemons or Services were found to be running

 

Result: Clock synchronization check using Network Time Protocol(NTP) passed

 

Checking Core file name pattern consistency...

Core file name pattern consistency check passed.

 

Checking to make sure user "grid" is not in "root" group

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racdb02       passed                    does not exist         

  racdb01       passed                    does not exist         

Result: User "grid" is not part of "root" group. Check passed

 

Check default user file creation mask

  Node Name     Available                 Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  racdb02       0022                      0022                      passed   

  racdb01       0022                      0022                      passed   

Result: Default user file creation mask check passed

 

Checking consistency of file "/etc/resolv.conf" across nodes

 

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined

File "/etc/resolv.conf" does not have both domain and search entries defined

Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...

domain entry in file "/etc/resolv.conf" is consistent across nodes

Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...

search entry in file "/etc/resolv.conf" is consistent across nodes

Checking file "/etc/resolv.conf" to make sure that only one search entry is defined

All nodes have one search entry defined in file "/etc/resolv.conf"

Checking all nodes to make sure that search entry is "localdomain" as found on node "racdb02"

All nodes of the cluster have same value for 'search'

Checking DNS response time for an unreachable node

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racdb02                               failed                 

  racdb01                               failed                 

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racdb02,racdb01

 

File "/etc/resolv.conf" is not consistent across nodes

 

 

UDev attributes check for OCR locations started...

Result: UDev attributes check passed for OCR locations

 

 

UDev attributes check for Voting Disk locations started...

Result: UDev attributes check passed for Voting Disk locations

 

Check: Time zone consistency

Result: Time zone consistency check passed

Checking VIP configuration.

Checking VIP Subnet configuration.

Check for VIP Subnet configuration passed.

Checking VIP reachability

Check for VIP reachability passed.

 

Checking Oracle Cluster Voting Disk configuration...

 

ASM Running check passed. ASM is running on all specified nodes

 

Oracle Cluster Voting Disk configuration check passed

 

Clusterware version consistency passed

 

Pre-check for cluster services setup was unsuccessful on all the nodes.

 

 

 

Check: Swap space

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racdb02       1.172GB (1228964.0KB)     2.8491GB (2987520.0KB)    failed   

  racdb01       1.172GB (1228964.0KB)     2.8491GB (2987520.0KB)    failed   

Result: Swap space check failed

解决方法: 附录8: 增大swap交换空间

 

Checking for Oracle patch "9413827 or 9706490" in home "/app/product/grid/11.2.0".

  Node Name     Applied                   Required                  Comment  

  ------------  ------------------------  ------------------------  ----------

  racdb02       missing                   9413827 or 9706490        failed   

  racdb01       missing                   9413827 or 9706490        failed   

Result: Check for Oracle patch "9413827 or 9706490" in home "/app/product/grid/11.2.0" failed

 

There are no oracle patches required for home "/app/product/grid/11.2.0.4".

Solution: see Appendix 1 (installing patch 9706490).

 

 

 

Checking consistency of file "/etc/resolv.conf" across nodes

 

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined

File "/etc/resolv.conf" does not have both domain and search entries defined

Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...

domain entry in file "/etc/resolv.conf" is consistent across nodes

Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...

search entry in file "/etc/resolv.conf" is consistent across nodes

Checking file "/etc/resolv.conf" to make sure that only one search entry is defined

All nodes have one search entry defined in file "/etc/resolv.conf"

Checking all nodes to make sure that search entry is "localdomain" as found on node "racdb02"

All nodes of the cluster have same value for 'search'

Checking DNS response time for an unreachable node

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racdb02                               failed                 

  racdb01                               failed                 

PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: racdb02,racdb01

 

File "/etc/resolv.conf" is not consistent across nodes

Solutions:

Option 1:

Bug 11775332 - cluvfy fails with PRVF-5636 with DNS response timeout error [ID 11775332.8]

 

Option 2:

Add the missing domain on the DNS server, or remove the extra search domain from /etc/resolv.conf.

 

[root@racdb02 ~]# vi /etc/resolv.conf

; generated by /sbin/dhclient-script

search localdomain    # change to: search landf.com

nameserver 192.168.1.111

~
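Before rerunning cluvfy it is worth confirming that every node now has exactly one search entry with the same value. A small sketch against a scratch copy of the corrected file (on a real node you would read /etc/resolv.conf itself; the domain value is the one used in this post):

```shell
# Scratch copy mirroring the corrected resolv.conf shown above.
cat > /tmp/resolv.conf.sample <<'EOF'
; generated by /sbin/dhclient-script
search landf.com
nameserver 192.168.1.111
EOF

# cluvfy expects exactly one 'search' line; print its count and value.
grep -c '^search' /tmp/resolv.conf.sample
awk '/^search/ {print $2}' /tmp/resolv.conf.sample
```

Run this on each node and compare: the count must be 1 and the value identical everywhere.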

 

Option 3:

1. Edit /var/named/chroot/etc/named.conf (or /etc/named.conf) and add the following inside the options section:

        recursion       no;

        allow-query     { any; };

        allow-query-cache { any; };

2. Restart the named service:

         service named restart

 

IV. Upgrading Oracle Grid Infrastructure to 11.2.0.4

If you upgrade with a full cluster outage (out-of-place, non-rolling), first stop the Oracle processes on both nodes; as root, run:

[root@racdb01 ~]# /app/product/grid/11.2.0/bin/crsctl stop crs

[root@racdb02 ~]# /app/product/grid/11.2.0/bin/crsctl stop crs

Here, however, we use a rolling upgrade.

 

[root@racdb01 ~]#export DISPLAY=192.168.1.1:0.0

[root@racdb01 ~]#xhost +

su - grid

[grid@racdb01 ~]$ cd /app/product/grid/11.2.0.4/grid/

[grid@racdb01 grid]$ unset ORACLE_HOME

[grid@racdb01 grid]$ unset ORACLE_BASE

[grid@racdb01 grid]$ unset ORACLE_SID

[grid@racdb01 grid]$ ./runInstaller

Run runInstaller to perform the upgrade. When the "Specify Home Details" window appears, select the new CRS_HOME; for everything else, click "Next".

 

 

 

 

 

 

 

At the last step of the installation, run the following script as root on each node:

[root@racdb01 ~]# /app/product/grid/11.2.0.4/rootupgrade.sh

Output: see Appendix 2.

 

[root@racdb02 ~]# /app/product/grid/11.2.0.4/rootupgrade.sh

Output: see Appendix 3.

 

 

 

 

Post-upgrade verification of Oracle Grid Infrastructure:

su - grid

[grid@racdb01 ~]$ crsctl query crs activeversion

[grid@racdb01 ~]$ crsctl query crs releaseversion

[grid@racdb01 ~]$ crsctl query crs softwareversion

[grid@racdb01 ~]$ . oraenv

[grid@racdb01 ~]$ sqlplus / as sysasm

SQL> select version from v$instance ;

 

su - grid

[grid@racdb02 ~]$ crsctl query crs activeversion

[grid@racdb02 ~]$ crsctl query crs releaseversion

[grid@racdb02 ~]$ crsctl query crs softwareversion

[grid@racdb02 ~]$ . oraenv

[grid@racdb02 ~]$ sqlplus / as sysasm

SQL> select version from v$instance ;
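When scripting this verification, the version can be extracted from the crsctl output instead of eyeballed. A sketch parsing a sample line (the bracketed output format is assumed from memory, not captured from this cluster's logs):

```shell
# Sample line in the format `crsctl query crs activeversion` prints
# (assumed format, not taken from this post's output).
sample='Oracle Clusterware active version on the cluster is [11.2.0.4.0]'

# Extract the bracketed version string and compare it to the target.
version=$(printf '%s\n' "$sample" | sed -n 's/.*\[\(.*\)\].*/\1/p')
if [ "$version" = "11.2.0.4.0" ]; then
    echo "active version OK: $version"
else
    echo "unexpected version: $version" >&2
fi
```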

 

Next, upgrade the RDBMS software.

V. Upgrading the Oracle Database Software to 11.2.0.4

 

Here we again use a rolling upgrade.

 

[root@racdb01 ~]#export DISPLAY=192.168.1.1:0.0

[root@racdb01 ~]#xhost +

su - oracle

[oracle@racdb01 ~]$ cd /app/product/grid/11.2.0.4/database/

[oracle@racdb01 database]$ unset ORACLE_HOME

[oracle@racdb01 database]$ unset ORACLE_BASE

[oracle@racdb01 database]$ unset ORACLE_SID

[oracle@racdb01 database]$ ./runInstaller

Run runInstaller to perform the upgrade. When the "Specify Home Details" window appears, select the new ORACLE_HOME; for everything else, click "Next".

 

 

 

 

 

 

 

 

Solution: see Appendix 4.

 

 

 

At the last step of the installation, run the following script as root on each node:

/app/product/oracle/11.2.0.4/db_1/root.sh

 

 

[root@racdb01 ~]# /app/product/oracle/11.2.0.4/db_1/root.sh

Output: see Appendix 5.

 

[root@racdb02 ~]# /app/product/oracle/11.2.0.4/db_1/root.sh

Output: see Appendix 6.

 

 

VI. Upgrading the Database Dictionary to 11.2.0.4

Start the database upgrade with DBUA.

 

 

 

 

Database contains schemas with objects dependent on DBMS_LDAP package. Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.

 

Oracle recommends gathering dictionary statistics prior to upgrading the database. Refer to the Upgrade Guide for instructions to gather statistics prior to upgrading the database.

 

Your database has EVENT or _TRACE_EVENT initialization parameters set. Oracle recommends reviewing any defined events prior to upgrade. DBUA will retain these parameters during upgrade.

 

Solutions: see Appendix 7 (configuring Network ACLs & gathering dictionary statistics).

 

 

 

 

 

 

 

 

The database software installation and the database upgrade are now complete.

VII. Verifying the Upgrade

Now verify the upgrade and perform some post-upgrade tasks.

Edit the environment variables so that the GRID and DB home directories point to the new /app/product/grid/11.2.0.4 and /app/product/oracle/11.2.0.4/db_1; do this on both nodes.

 

 

 

 

source .bash_profile

to make the modified environment variables take effect.
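The change itself is just a pair of home-path edits. A sketch of the relevant .bash_profile lines for the oracle user (the variable names are the conventional ones; only the paths come from this post — the grid user's profile points at the grid home instead):

```shell
# New 11.2.0.4 database home (oracle user). The grid user's
# ORACLE_HOME becomes /app/product/grid/11.2.0.4 instead.
export ORACLE_HOME=/app/product/oracle/11.2.0.4/db_1
export PATH=$ORACLE_HOME/bin:$PATH
echo "ORACLE_HOME is now $ORACLE_HOME"
```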

 

 

 

 

Next, query the versions of all database components to confirm they were upgraded successfully.

 

 

 

 

Next, stop CRS on racdb01 and query from racdb02: all cluster services should have failed over to racdb02.

 

 

 

 

 

Then shut down the cluster completely and reboot both servers.

 

 

 

 

 

 

After the reboot, wait a while and check the cluster services; once the cluster is fully up, start the database manually.

 

There is no need to run the catalog/utlrp upgrade scripts manually; DBUA already ran them in the step above (the logs record exactly which operations were performed).

At this point the upgrade is complete.

 

The database has been successfully upgraded from 11.2.0.1 to Oracle 11.2.0.4.

 

 

 

 

 

Appendix 1: Installing patch 9706490

1, Ensure that the directory containing the opatch script appears in

  your $PATH. Execute "which opatch" to confirm.

  CRS_HOME   = the full path to the crs home.

  RDBMS_HOME = the full path to the server home.

 

su - grid

vi .bash_profile

Add /app/product/grid/11.2.0/OPatch to the PATH environment variable.

 

su - oracle

vi .bash_profile

Add /app/product/oracle/11.2.0/db_1/OPatch to the PATH environment variable.

 

2, Configuration B:   When each node of the cluster has its own CRS Home,

  the patch should be applied as a rolling upgrade. All of the following

  steps should be followed for each node. Do not patch two nodes at once.

 

a. Verify that the Oracle Inventory is properly configured.

su - grid

opatch lsinventory -detail -oh $ORACLE_HOME

 

su - oracle

opatch lsinventory -detail -oh $ORACLE_HOME

 

#  This should list the components and the list of nodes.

#  If the Oracle inventory is not setup correctly this utility will fail.

 

3, Unzip the PSE container file

 [root@racdb01 11.2.0.4]# unzip p9706490_112010_LINUX.zip

 

4,In configuration B, shut down the CRS managed resources running from DB

home followed by CRS daemons on the local node.

   4.1  Stop the CRS managed resources running from DB homes using

              su - oracle

       racdb01<*racdb1*/home/oracle>$/app/product/oracle/11.2.0/db_1/bin/srvctl stop home -o /app/product/oracle/11.2.0/db_1/ -s /home/oracle/stop.log -n racdb01

#      note the status file is created by the process

 

5. Prior to applying this part of the fix, you must invoke this script as root to unlock protected files.

[root@racdb01 ~]# /app/product/grid/11.2.0/crs/install/rootcrs.pl -unlock

#  Note: In configuration A, invoke this only on one node.

 

6. Save the RDBMS home configuration settings

#

#  ( Please review section 6.2 first ) As the RDBMS software owner;

su - oracle

racdb01<*racdb1*/app/product/grid/11.2.0.4/9706490>$cd /app/product/grid/11.2.0.4/9706490/custom/server/9706490/custom/scripts

racdb01<*racdb1*/app/product/grid/11.2.0.4/9706490/custom/server/9706490/custom/scripts>$./prepatch.sh -dbhome $ORACLE_HOME

./prepatch.sh completed successfully.

 

7. Patch the Files

7.1 Patch the CRS home files

#    After unlocking any protected files and saving configuration settings

#    you are now ready to run opatch using the following command.

#    As the Oracle Clusterware (CRS) software owner,

#    from the directory where the patch was unzipped;

su - grid

[grid@racdb01 ~]$ cd /app/product/grid/11.2.0.4/9706490

[grid@racdb01 9706490]$ opatch napply -local -oh $ORACLE_HOME -id 9706490

#

#    Note: In configuration A, invoke this only on one node.

#

#

7.2 Patch the RDBMS home files.

#

#    Note: The RDBMS portion can only be applied to an RDBMS home that

#          has been upgraded to *11.2.0.1.0*.

#    For additional information please read Note.363254.1;

#    Applying one-off Oracle Clusterware patches in a mixed version home

#    environment

#    As the RDBMS software owner,

#    from the directory where the patch was unzipped;

su - oracle

racdb01<*racdb1*/home/oracle>$cd /app/product/grid/11.2.0.4/9706490

opatch napply custom/server/ -local -oh $ORACLE_HOME -id 9706490

#

#    Note: In configuration A, invoke this only on one node.

#

###########################################################################

 

8. Configure the HOME

8.1 Configure the RDBMS HOME

#    After opatch completes, some configuration settings need to be applied

#    to the patched files. As the RDBMS software owner execute the following;

#

su - oracle

racdb01<*racdb1*/home/oracle>$cd /app/product/grid/11.2.0.4/9706490

custom/server/9706490/custom/scripts/postpatch.sh -dbhome $ORACLE_HOME

#

#    Note: In configuration A, invoke this only on one node.

#

###########################################################################

#

Repeat steps 1-8 on the second node.

 

9. Now security settings need to be restored on the CRS Home. This script

#  will also restart the CRS daemons. Invoke it as root:

#      <CRS_HOME>/crs/install/rootcrs.pl -patch

/app/product/grid/11.2.0/crs/install/rootcrs.pl -patch

#  Then restart the CRS managed resources from the DB home:

#      <RDBMS_HOME>/bin/srvctl start home -o <RDBMS_HOME> -s <status file> -n <node>

/app/product/oracle/11.2.0/db_1/bin/srvctl start home -o /app/product/oracle/11.2.0/db_1/ -s /home/oracle/start.log -n racdb01

 

#

#  Note: This script should only be invoked as part of the patch process.

#

#  Note: In configuration A, invoke this on each node. Do not invoke this

#        in parallel on two nodes.

###########################################################################

#

10. On success you can determine whether the patch has been installed by

#  using the following command;

#

#  % opatch lsinventory -detail -oh <CRS_HOME>

#  % opatch lsinventory -detail -oh <RDBMS_HOME>

#

###########################################################################

 

su - grid

opatch lsinventory -detail -oh $ORACLE_HOME

 

su - oracle

opatch lsinventory -detail -oh $ORACLE_HOME

 

 

Appendix 2: Node 1 output

 

 

Appendix 3: Node 2 output

 

 

 

Appendix 4: Handling listener_scan1 in INTERMEDIATE state

1. Check the CRS status on node racdb01.

 

listener_scan1 on node racdb01 is found to be in INTERMEDIATE state.

 

2. On node racdb01, view the details of listener_scan1.

 

3. On node racdb01, remove the manually added listeners_racdb entries from /app/product/grid/11.2.0.4/network/admin/listener.ora.

 

4. Restart listener_scan1.

 

5. Check the CRS status on node racdb01 again.

 

listener_scan1 on node racdb01 is now ONLINE.

 

6. On node racdb01, confirm the details of listener_scan1.

 

 

Appendix 5: Node 1 output

 

Appendix 6: Node 2 output

 

Appendix 7: Configuring Network ACLs & gathering dictionary statistics

To get the users owning UTL_% dependent objects

Schema with dependent objects:
SQL> SELECT DISTINCT owner
     FROM DBA_DEPENDENCIES
     WHERE referenced_name
     IN ('UTL_TCP','UTL_SMTP','UTL_MAIL','UTL_HTTP','UTL_INADDR','DBMS_LDAP')
     AND owner NOT IN ('SYS','PUBLIC','ORDPLUGINS');

Schemas using interMedia that may have dependent objects:

SQL> SELECT DISTINCT owner 
     FROM all_tab_columns
     WHERE data_type 
     IN ('ORDIMAGE', 'ORDAUDIO', 'ORDVIDEO', 'ORDDOC','ORDSOURCE', 'ORDDICOM') 
     AND (data_type_owner = 'ORDSYS' OR data_type_owner = owner) 
     AND (owner != 'PM');

 

 

Connect as one of the users from the above result using SQL*Plus (for instance, connect as mdsys).

Executing code that uses a UTL package will fail with ORA-24247:

SQL> set serveroutput on
SQL> DECLARE
        l_url varchar2(32767);
        l_conn utl_http.req;
     BEGIN
        l_url := 'http://www.oracle.com';
        l_conn := utl_http.begin_request(url => l_url, method => 'POST', http_version=> 'HTTP/1.0');
        dbms_output.put_line('Anonymous Block Executed Successfully');
     END;
      /

SQL> select utl_inaddr.get_host_address('www.oracle.com') from dual;

 

 

 

 

 

 

 

 

 

Connect as SYS as sysdba

Execute the anonymous block below to give access to mdsys and APEX_030200.
Here the privilege has to be 'connect' for the UTL_HTTP package and 'resolve' for the UTL_INADDR package.

Anonymous block granting the connect privilege to APEX_030200:

SQL> set serveroutput on
SQL> DECLARE
        ACL_PATH  VARCHAR2(32767);
     BEGIN

-- Look for the ACL currently assigned to '*' and give APEX_030200
-- the "connect" privilege if APEX_030200 does not have the privilege yet

        SELECT ACL INTO ACL_PATH FROM DBA_NETWORK_ACLS
        WHERE HOST = '*' AND LOWER_PORT IS NULL AND UPPER_PORT IS NULL;

dbms_output.put_line('acl_path = '|| acl_path);
        dbms_output.put_line('ACL already Exists. Checks for Privilege and add the Privilege');

    IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(ACL_PATH,'APEX_030200','connect') IS NULL THEN
      DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(ACL_PATH,'APEX_030200', TRUE, 'connect');
      COMMIT;
    END IF;

    EXCEPTION
-- When no ACL has been assigned to '*'
    WHEN NO_DATA_FOUND THEN

        dbms_output.put_line('APEX_030200 does not have privilege, create ACL now');

        DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('users1.xml',
           'ACL that lets APEX_030200 to use the UTL Package',
           'APEX_030200', TRUE, 'connect');
   
        dbms_output.put_line('APEX_030200 does not have privilege, assign ACL now');

        DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('users1.xml','www.oracle.com');
        COMMIT;
    END;
 /

 

 

 

 

 

 

DECLARE

  ACL_PATH  VARCHAR2(4000);

  CURSOR C1 IS

  SELECT distinct owner FROM DBA_DEPENDENCIES

  WHERE referenced_name IN ('UTL_TCP','UTL_SMTP','UTL_MAIL','UTL_HTTP','UTL_INADDR','DBMS_LDAP')

  AND owner NOT IN ('SYS','PUBLIC','ORDPLUGINS');

BEGIN

  FOR R1 IN C1 LOOP

  BEGIN

  SELECT acl INTO acl_path FROM dba_network_acls

  WHERE host = 'host_name' AND lower_port IS NULL AND upper_port IS NULL;

       IF DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE(acl_path,

                                         r1.owner,'connect') IS NULL THEN

              DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE(acl_path,

                                         r1.owner, TRUE, 'connect');

       END IF;

  EXCEPTION

     WHEN no_data_found THEN

       DBMS_NETWORK_ACL_ADMIN.CREATE_ACL('myACL.xml',

                                         'ACL for network packages',

                                         r1.owner,

                                         TRUE,

                                         'connect');

       DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL('myACL.xml','host_name');

END;

COMMIT;

END LOOP;

END;

/

 

 

 

 

 

 

 

Gather dictionary statistics (e.g., EXEC DBMS_STATS.GATHER_DICTIONARY_STATS; before upgrading).

 

 

Appendix 8: Increasing swap space

Create a swap file (recommended)

Use the following commands to add a 1024 MB swap file:

dd if=/dev/zero of=/opt/.myswap bs=32k count=32768
chmod 600 /opt/.myswap
mkswap /opt/.myswap
swapon /opt/.myswap

 

Notes:

·  The bs parameter sets how many bytes are read and written per block; 32k is a reasonable block size here.

·  The count parameter sets the number of blocks of size bs.

·  The resulting file size is the product of the two: 32 KB * 32768 = 1048576 KB = 1024 MB.

·  if=file reads data from the named file instead of standard input.

·  of=file writes data to the named file instead of standard output (i.e., it specifies the location and name of the new swap file).

·  The mkswap command formats the new file as swap.

·  The swapon command enables the swap file.
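The sizing arithmetic from the notes above can be double-checked in plain shell (nothing Oracle-specific; bs=32k with count=32768 yields the 1024 MB file):

```shell
# The file size written by dd is bs * count.
bs_kb=32
count=32768
total_kb=$(( bs_kb * count ))
echo "${total_kb} KB = $(( total_kb / 1024 )) MB"
```

This prints `1048576 KB = 1024 MB`.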

 

To enable the swap file at boot, also edit /etc/fstab and append a line like the following:

#LABEL=SWAP-sda2 swap swap  defaults 0 0

/dev/sda2   /data   ext3  defaults  1 2  # disk partition

/opt/.myswap swap swap defaults 0 0   # swap file

 

To stop using the newly created swap file, simply run:

# swapoff /opt/.myswap

 

 

Appendix 9: Rebuilding Enterprise Manager in a RAC environment

1. Check the status of dbconsole:

su - oracle
emctl status dbconsole


2. View the dbcontrol configuration of the RAC database:

su - oracle
emca -displayConfig dbcontrol -cluster

 

[oracle@racdb02 ~]$ emca -displayConfig dbcontrol -cluster

STARTED EMCA at Nov 18, 2013 5:54:19 PM

EM Configuration Assistant, Version 11.2.0.3.0 Production

Copyright (c) 2003, 2011, Oracle.  All rights reserved.

 

Enter the following information:

Database unique name: racdb

Service name: racdb

Do you wish to continue? [yes(Y)/no(N)]: y

Nov 18, 2013 5:54:34 PM oracle.sysman.emcp.EMConfig perform

INFO: This operation is being logged at /app/product/oracle/cfgtoollogs/emca/racdb/emca_2013_11_18_17_54_18.log.

Nov 18, 2013 5:54:39 PM oracle.sysman.emcp.EMDBPostConfig showClusterDBCAgentMessage

INFO:

****************  Current Configuration  ****************

 INSTANCE            NODE           DBCONTROL_UPLOAD_HOST

----------        ----------        ---------------------

 

racdb             racdb01             racdb01

racdb             racdb02             racdb01

 

 

Enterprise Manager configuration completed successfully

FINISHED EMCA at Nov 18, 2013 5:54:39 PM


3. Remove the existing dbcontrol configuration of the RAC database

Option 1:

su - oracle
emca -deconfig dbcontrol db -repos drop -cluster

 

Option 2:
SQL> drop user sysman cascade;
SQL> drop role MGMT_USER;
SQL> drop user MGMT_VIEW cascade;
SQL> drop public synonym MGMT_TARGET_BLACKOUTS;
SQL> drop public synonym SETEMVIEWUSERCONTEXT;

 


4. Recreate the db control configuration for the RAC database

su - oracle
emca -config dbcontrol db -repos create -cluster
# or, to drop and recreate the repository in one step:
emca -config dbcontrol db -repos recreate -cluster


Appendix 10: Handling ORA-12514 when rebuilding OEM on 11gR2 RAC

[oracle@racdb01 ~]$ emca -config dbcontrol db -repos create -cluster  

(command to rebuild OEM)

STARTED EMCA at Nov 20, 2013 10:48:24 AM

EM Configuration Assistant, Version 11.2.0.3.0 Production

Copyright (c) 2003, 2011, Oracle.  All rights reserved.

Enter the following information:

Database unique name: racdb

Service name: racdb

Listener ORACLE_HOME [ /app/product/grid/11.2.0.4 ]: /app/product/grid/11.2.0.4

Password for SYS user: racle

Database Control is already configured for the database racdb

You have chosen to configure Database Control for managing the database racdb

This will remove the existing configuration and the default settings and perform a fresh configuration

Do you wish to continue? [yes(Y)/no(N)]: y

Password for DBSNMP user: eracl

Password for SYSMAN user: oracle

Cluster name: rac

Email address for notifications (optional): hhb@citycloud.com.cn

Outgoing Mail (SMTP) server for notifications (optional):

ASM ORACLE_HOME [ /app/product/grid/11.2.0.4 ]: /app/product/grid/11.2.0.4

ASM port [ 1521 ]: 1521

ASM username [ ASMSNMP ]: asmsnmp

ASM user password: racle

Nov 20, 2013 10:49:26 AM oracle.sysman.emcp.util.GeneralUtil initSQLEngineRemotely

WARNING: Error during db connection : ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Nov 20, 2013 10:49:31 AM oracle.sysman.emcp.util.GeneralUtil initSQLEngineRemotely

WARNING: ORA-12541: TNS:no listener

Nov 20, 2013 10:49:32 AM oracle.sysman.emcp.util.GeneralUtil initSQLEngineRemotely

WARNING: Error during db connection : ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

 

The errors above occur because the ASM instance is not registered with the SCAN listener; registering it manually fixes the problem.

Modify the remote_listener parameter:

SQL>

Check the listener status.

 

 

[oracle@racdb01 ~]$ emca -config dbcontrol db -repos recreate -cluster

 

STARTED EMCA at Nov 20, 2013 11:54:58 AM

EM Configuration Assistant, Version 11.2.0.3.0 Production

Copyright (c) 2003, 2011, Oracle.  All rights reserved.

 

Enter the following information:

Database unique name: racdb

Service name: racdb

Listener ORACLE_HOME [ /app/product/grid/11.2.0.4 ]: /app/product/grid/11.2.0.4

Password for SYS user: oracle

Database Control is already configured for the database racdb

You have chosen to configure Database Control for managing the database racdb

This will remove the existing configuration and the default settings and perform a fresh configuration

----------------------------------------------------------------------

WARNING : While repository is dropped the database will be put in quiesce mode.

----------------------------------------------------------------------

Do you wish to continue? [yes(Y)/no(N)]: y

Password for DBSNMP user:  racle

Password for SYSMAN user: cle

Cluster name: rac

Email address for notifications (optional):

Outgoing Mail (SMTP) server for notifications (optional):

ASM ORACLE_HOME [ /app/product/grid/11.2.0.4 ]: /app/product/grid/11.2.0.4

ASM port [ 1521 ]: 1521

ASM username [ ASMSNMP ]:

ASM user password: leac

-----------------------------------------------------------------

 

You have specified the following settings

 

Database ORACLE_HOME ................ /app/product/oracle/11.2.0.4/db_1

 

Database instance hostname ................
Listener ORACLE_HOME ................ /app/product/grid/11.2.0.4

Listener port number ................ 1521

Cluster name ................ rac

Database unique name ................ racdb

Email address for notifications ...............

Outgoing Mail (SMTP) server for notifications ...............

ASM ORACLE_HOME ................ /app/product/grid/11.2.0.4

ASM port ................ 1521

ASM user role ................ SYSDBA

ASM username ................ ASMSNMP

 

-----------------------------------------------------------------

----------------------------------------------------------------------

WARNING : While repository is dropped the database will be put in quiesce mode.

----------------------------------------------------------------------

Do you wish to continue? [yes(Y)/no(N)]: y

Nov 20, 2013 11:55:37 AM oracle.sysman.emcp.EMConfig perform

INFO: This operation is being logged at /app/product/oracle/cfgtoollogs/emca/racdb/emca_2013_11_20_11_54_58.log.

Nov 20, 2013 11:55:41 AM oracle.sysman.emcp.util.PortManager isPortInUse

WARNING: Specified port 5540 is already in use.

Nov 20, 2013 11:55:41 AM oracle.sysman.emcp.util.PortManager isPortInUse

WARNING: Specified port 5520 is already in use.

Nov 20, 2013 11:55:41 AM oracle.sysman.emcp.util.PortManager isPortInUse

WARNING: Specified port 1158 is already in use.

Nov 20, 2013 11:55:41 AM oracle.sysman.emcp.util.PortManager isPortInUse

WARNING: Specified port 3938 is already in use.

Nov 20, 2013 11:55:41 AM oracle.sysman.emcp.util.DBControlUtil stopOMS

INFO: Stopping Database Control (this may take a while) ...

Nov 20, 2013 11:55:48 AM oracle.sysman.emcp.EMReposConfig invoke

INFO: Dropping the EM repository (this may take a while) ...

Nov 20, 2013 12:07:27 PM oracle.sysman.emcp.EMReposConfig invoke

INFO: Repository successfully dropped

Nov 20, 2013 12:07:28 PM oracle.sysman.emcp.EMReposConfig createRepository

INFO: Creating the EM repository (this may take a while) ...

Nov 20, 2013 1:43:29 PM oracle.sysman.emcp.EMReposConfig invoke

INFO: Repository successfully created

Nov 20, 2013 1:44:00 PM oracle.sysman.emcp.EMReposConfig uploadConfigDataToRepository

INFO: Uploading configuration data to EM repository (this may take a while) ...

Nov 20, 2013 1:46:59 PM oracle.sysman.emcp.EMReposConfig invoke

INFO: Uploaded configuration data successfully

Nov 20, 2013 1:47:03 PM oracle.sysman.emcp.EMDBCConfig instantiateOC4JConfigFiles

INFO: Propagating /app/product/oracle/11.2.0.4/db_1/oc4j/j2ee/OC4J_DBConsole_racdb01_racdb to remote nodes ...

Nov 20, 2013 1:47:06 PM oracle.sysman.emcp.EMDBCConfig instantiateOC4JConfigFiles

INFO: Propagating /app/product/oracle/11.2.0.4/db_1/oc4j/j2ee/OC4J_DBConsole_racdb02_racdb to remote nodes ...

Nov 20, 2013 1:47:47 PM oracle.sysman.emcp.EMAgentConfig deployStateDirs

INFO: Propagating /app/product/oracle/11.2.0.4/db_1/racdb01_racdb to remote nodes ...

Nov 20, 2013 1:47:51 PM oracle.sysman.emcp.EMAgentConfig deployStateDirs

INFO: Propagating /app/product/oracle/11.2.0.4/db_1/racdb02_racdb to remote nodes ...

Nov 20, 2013 1:47:55 PM oracle.sysman.emcp.util.DBControlUtil secureDBConsole

INFO: Securing Database Control (this may take a while) ...

Nov 20, 2013 1:48:52 PM oracle.sysman.emcp.util.DBControlUtil startOMS

INFO: Starting Database Control (this may take a while) ...

Source: ITPUB blog, http://blog.itpub.net/28371090/viewspace-1788645/. Please credit the source when reposting.
