
[DG] Oracle Cascaded DG (cascade dg) -- One Primary, One Standby, One Cascaded Standby -- Cascaded Standby

Original. Category: Oracle. Author: lhrbest. Published: 2019-09-05 11:15:38


In a Data Guard "one primary, one standby, one cascaded standby" topology, the primary ships redo to the standby, and the standby then forwards that redo to the cascaded standby; the primary has no direct relationship with the cascaded database. As for data synchronization in Oracle 11g, changes on the primary are normally applied to the standby in near real time, but the cascaded standby cannot receive the redo until the standby archives it. When the primary performs a log switch, the cascaded standby therefore catches up promptly.

A cascaded standby in Oracle 11g does not support real-time apply; redo is applied only after a log switch on the source database. In Oracle 12c a cascaded standby does support real-time apply.


Cascaded standby databases are supported in 11.2 and later: the second standby receives redo from the first standby instead of directly from the primary.

This reduces the load on the primary. The setup is essentially the same as a normal DG build; only a few parameters need to change.

At most 30 cascaded standbys are supported, because there are only 31 LOG_ARCHIVE_DEST_n parameters (n = 1..31).


For more details, see the official documentation: http://docs.oracle.com/database/121/SBYDB/log_transport.htm#SBYDB5126


Oracle cascaded DG deployment and switchover testing:

https://blog.csdn.net/weixin_36239782/article/details/91316703



1 Overview

A standby database that cascades redo to other standby databases can transmit redo directly from its standby redo log file as soon as it is received from the primary database. Cascaded standby databases receive redo in real-time. They no longer have to wait for standby redo log files to be archived before redo is transmitted.


With real-time redo enabled, the cascading standby does not need to wait for its standby redo log files to be archived before transmitting the redo onward to the cascaded standby.


As of Oracle Database 12c Release 1 (12.1), a cascading standby database can either cascade redo in real-time (as it is being written to the standby redo log file) or non-real-time (as complete standby redo log files are being archived on the cascading standby).


Starting with 12c, real-time redo cascading is supported: redo is forwarded as it is being written to the standby redo log, rather than only after the log is archived.


Restrictions:


Only physical standby databases can cascade redo.


Real-time cascading requires a license for the Oracle Active Data Guard option.


Non-real-time cascading is supported on destinations 1 through 10 only. (Real-time cascading is supported on all destinations.)


If you specify ASYNC transport mode on destinations 1 through 10, then redo is shipped in real-time. If you do not specify a transport mode or you specify SYNC on destinations 1 through 10, then redo is shipped in non-real-time. Destinations 11 through 31 operate only in ASYNC (real-time) transport mode.


If ASYNC is specified for LOG_ARCHIVE_DEST_n (n = 1..10) on the cascading standby, cascading is real-time. If no transport mode is specified, or SYNC is specified, cascading is non-real-time.


LOG_ARCHIVE_DEST_n (n = 11..31) supports only ASYNC, i.e. the real-time transport mode.
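As a sketch of the rule above, on the cascading standby (the service name term_stby and destination slot 2 are assumptions for illustration):

```sql
-- ASYNC on destinations 1-10: real-time cascading.
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=term_stby ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=term_stby'
  SCOPE=BOTH;

-- SYNC (or no transport mode) on destinations 1-10: non-real-time cascading.
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=term_stby SYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=term_stby'
  SCOPE=BOTH;

-- Destinations 11-31 accept only ASYNC, so they always cascade in real time.
```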


Oracle 12c supports real-time cascading; 11g does not.






Cascaded Standby Databases in 12c (Doc ID 2179701.1)


Applies to:

Oracle Database - Enterprise Edition - Version 12.1.0.1 and later
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Oracle Database Cloud Exadata Service - Version N/A and later
Information in this document applies to any platform.
***Checked for relevance on 13-Jul-2015***

Purpose

This document explains the enhancements for cascaded standby databases in Oracle 12c.

Details

 

Cascading standby databases in Oracle 12c give users more options. Compared with previous releases, 12c adds the following:

 

  1. Real-time cascading
  2. Far Sync standby databases
  3. Data Guard Broker support for cascaded standby databases

 

However, you can still only cascade another standby database from a physical standby database. Cascading a standby database from a logical standby database is not supported.

 

Real-Time Cascading:

 

Redo can now be forwarded in real-time mode from the first standby database to the cascaded standby database. On the first standby, a redo record is shipped to the cascaded standby as soon as it has been written to a standby redo log.

Non-real-time cascading means that a complete log sequence is shipped to the terminal standby database only after a log switch on the primary database.

 

Prerequisites:

 

  • The first (cascading) standby database must be a physical or Far Sync standby database
  • Standby redo logs must be in place and in use at least on the cascading standby database
  • The Active Data Guard option must be licensed
  • The db_unique_name of the primary, cascading, and cascaded standby databases must appear in the dg_config of log_archive_config on all of the databases

 

Setup:

 

First, build an ordinary Data Guard environment through to the cascading standby database. The log transport mode should be SYNC, and standby redo logs must be configured on the cascading standby. Once the cascaded standby database has been created, you can set up the cascading log transport services. Some notes:

 

  • The db_unique_name of the primary, cascading, and cascaded standby databases must appear in the dg_config of log_archive_config on all of the databases.
  • On the cascading standby, set log_archive_dest_n with the attribute 'valid_for=(STANDBY_LOGFILES,STANDBY_ROLE)' to serve the cascaded (terminal) standby database.
  • You can toggle between real-time and non-real-time cascading via the log transport mode:

ASYNC = real-time cascading

SYNC = non-real-time cascading

  • Only log_archive_dest_1 through log_archive_dest_10 can be used as non-real-time destinations, whereas any log_archive_dest_n on the cascading standby can be used for real-time cascading.
  • The cascading standby database can run in any protection mode.
  • A cascading standby database can serve one or more terminal standby databases.
  • FAL_SERVER on the cascading standby should point to the primary or to any other standby served directly by the primary.
  • FAL_SERVER on the terminal standby should point to the cascading standby or the primary database.
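The hints above can be sketched end-to-end as follows (the db_unique_name values prim, casc, and term, the destination slot, and the FAL choices are assumptions for illustration):

```sql
-- On all three databases: register every db_unique_name in the DG config.
ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,casc,term)' SCOPE=BOTH;

-- On the cascading standby (casc): forward received standby redo to the terminal standby.
-- ASYNC = real-time cascading; SYNC = non-real-time cascading.
ALTER SYSTEM SET log_archive_dest_2=
  'SERVICE=term ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=term'
  SCOPE=BOTH;

-- On the cascading standby: fetch gaps from the primary.
ALTER SYSTEM SET fal_server='prim' SCOPE=BOTH;

-- On the terminal standby (term): fetch gaps from the cascading standby or the primary.
ALTER SYSTEM SET fal_server='casc','prim' SCOPE=BOTH;
```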

 

Far Sync Standby Databases:

 

A Far Sync standby database acts as a redo log repository for a terminal standby database. It contains no datafiles; only the log transport services run on it. Its advantage is that it can serve as a local archive log repository for a primary database operating in Maximum Protection mode, while the physical or logical standby databases run at a remote site. See:

Note 1565071.1: Data Guard 12c New Feature: Far Sync Standby

for further details on Far Sync standby databases and the steps to set one up.

 

 

Data Guard Broker and Cascaded Standby Databases:

 

Data Guard Broker has a new property, 'RedoRoutes', which is used to build and deploy a cascaded Data Guard Broker configuration. Its format is:

 

RedoRoutes = ‘(<Redo Source> : <Redo Destination>)’

 

Redo Source: where the redo comes from. It can be a db_unique_name, or the 'LOCAL' keyword, an alias for the local database name (it cannot be used for a Far Sync standby database).

Redo Destination: where the redo is shipped to from this database. It can be one or more (comma-separated) db_unique_name values, or the 'ALL' keyword, an alias for all possible destinations in the Data Guard Broker configuration. You can optionally specify the transport mode to the destination:

  • SYNC: corresponds to the log_archive_dest_n attributes 'SYNC AFFIRM', i.e. non-real-time cascading
  • ASYNC: corresponds to the log_archive_dest_n attribute 'ASYNC', i.e. real-time cascading
  • FASTSYNC: corresponds to the log_archive_dest_n attributes 'SYNC NOAFFIRM'

 

 

Example:

 

Primary Database:                               prim

Cascading Standby Database:               local_stdby

Cascaded (terminal) Standby Database: remote_stdby

To serve the local standby database with 'SYNC NOAFFIRM' and the remote standby in real-time cascade mode, configure the following:

 

Primary Database (prim)

RedoRoutes = ‘(LOCAL : local_stdby FASTSYNC)’

 -> The primary database ships redo only to the local standby database, but has an archive destination pointing at the remote standby database.

 

Local Standby Database (local_stdby)

RedoRoutes = ‘(prim : remote_stdby ASYNC)’

 -> Here we configure the redo coming from 'prim' to be forwarded to the remote standby database in real-time cascade mode (ASYNC).

 

References

NOTE:1565071.1  - Data Guard 12c New Feature: Far Sync Standby


Cascaded Standby Databases in Oracle 12c (Doc ID 1542969.1)

In this Document


Purpose

Details

Real-Time Cascading

Prerequisites

Setup

Far Sync Standby Database

Data Guard Broker and Cascaded Standby Database

References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 12.1.0.1 and later
Information in this document applies to any platform.
***Checked for relevance on 13-Jul-2015***

PURPOSE

 This document explains the enhancements for Cascaded Standby Databases in Oracle 12c.

DETAILS

 

There are new possibilities for cascading Standby Databases in Oracle 12c. The main differences between Oracle 12c and the previous releases are:

 

  1. Real-Time Cascading
  2. Far Sync Standby Database
  3. Data Guard Broker now supports cascaded Standby Database

 

However, you can still only cascade a Standby Database from a Physical Standby Database. It is not supported to cascade a Standby Database from a Logical Standby Database.

 

Real-Time Cascading

 

It is now possible to forward Redo in Real-Time Mode from the first to the cascaded Standby Database. So the Redo Record is forwarded to the cascaded Standby Database once written into a Standby RedoLog of the first Standby Database.

Non Real-Time Cascading means that the whole Log Sequence is transferred to the terminal Standby Database(s) after a Log Switch on the Primary Database.

 

Prerequisites

 

  • First (Cascading) Standby must be a Physical or Far Sync Standby Database
  • Standby RedoLogs must be in Place and used at least on the Cascading Standby Database
  • Active Data Guard Option must be licensed
  • Primary, Cascading and Cascaded Standby Database db_unique_name must be present in the dg_config of log_archive_config on all the Databases

 

Setup

 

First of all, set up a Data Guard Environment as usual to the cascading Standby Database. The Log Transport Method should be ‘SYNC’ and Standby RedoLogs must be configured on the cascading Standby Database. Once you have created the cascaded Standby Database you can set up the cascading Log Transport Services. Here are some Hints for a correct Setup:

 

  • Primary, Cascading and Cascaded Standby Database db_unique_name must be present in the dg_config of log_archive_config on all the Databases
  • Setup log_archive_dest_n on the cascading Standby Database to serve the cascaded (terminal) Standby Databases using the Attribute ‘valid_for=(STANDBY_LOGFILES,STANDBY_ROLE)’
  • You can toggle between Real-Time and Non Real-Time Cascading using the Log Transport Method.

ASYNC = Real-Time Cascading

SYNC = Non Real-Time Cascading

  • You can only use log_archive_dest_1 through log_archive_dest_10 for Non Real-Time Cascading Destinations, whereas all log_archive_dest_n’s can be used for Real-Time Cascading on the Cascading Standby Database
  • The Cascading Standby Database can be in any Protection Mode
  • A Cascading Standby Database can serve one or multiple terminal Standby Databases
  • FAL_SERVER on the cascading Standby Database should be set to the Primary or any other Standby Database served by the Primary Database directly
  • FAL_SERVER on the terminal Standby Database should be set to the cascading Standby Database or the Primary Database

 

Far Sync Standby Database

 

A Far Sync Standby Database is a cascading Standby Database which acts as a Redo Log Repository for a Terminal Database. It does not contain any Datafiles. Only Log Transport Services are active on a Far Sync Standby Database. The Advantage of a Far Sync Standby Database is that it can be a local ArchiveLog Repository for the Primary Database acting in Maximum Protection Mode where the Physical or Logical Standby Database can be on a far remote Site. See

Note 1565071.1: Data Guard 12c New Feature: Far Sync Standby

for further Details and Setup of a Far Sync Standby Database.

 

 

Data Guard Broker and Cascaded Standby Database

 

There is a new Data Guard Broker Property called ‘RedoRoutes’ used to build and implement a cascaded Data Guard Broker Configuration. It has the following Format:

 

RedoRoutes = ‘(<Redo Source> : <Redo Destination>)’

 

Redo Source: This is the Source the Redo is coming from. It can be a db_unique_name or the ‘LOCAL’-Keyword which is an Alias for the local Database Name (Cannot be used for a Far Sync Standby Database)

Redo Destination: This is the Destination where the Redo is shipped to from this Database. It can be one or more (comma separated) db_unique_name’s or the ‘ALL’-Keyword which is an Alias for all possible Destinations inside the Data Guard Broker Configuration. Optionally, you can also specify the Transport Method to be used to the Destination. This can be

  • SYNC:                   corresponds to log_archive_dest_n Attributes ‘SYNC AFFIRM’ or Non Real Time Cascade
  • ASYNC:                corresponds to log_archive_dest_n Attribute ‘ASYNC’ or Real Time Cascade
  • FASTSYNC :        corresponds to log_archive_dest_n Attributes ‘SYNC NOAFFIRM’

 

 

Example:

 

Primary Database:                               prim

Cascading Standby Database:               local_stdby

Cascaded (terminal) Standby Database: remote_stdby

We want to serve the local Standby Database with ‘SYNC NOAFFIRM’ and the remote Standby Database in Real-Time Cascade Mode. So the Setting would be:

 

Primary Database (prim)

RedoRoutes = ‘(LOCAL : local_stdby FASTSYNC)’

 -> So the Primary Database only ships Redo to the local Standby Database, but has Archive Destination to the remote Standby Database

 

Local Standby Database (local_stdby)

RedoRoutes = ‘(prim : remote_stdby ASYNC)’

 -> Here we configure that the Redo coming from ‘prim’ is forwarded in Real-Time Cascade (ASYNC) to the remote Standby Database
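In DGMGRL, the RedoRoutes settings from this example could be applied as below (a sketch; it assumes an existing broker configuration that already contains all three databases):

```
DGMGRL> EDIT DATABASE prim SET PROPERTY RedoRoutes = '(LOCAL : local_stdby FASTSYNC)';
DGMGRL> EDIT DATABASE local_stdby SET PROPERTY RedoRoutes = '(prim : remote_stdby ASYNC)';
DGMGRL> SHOW CONFIGURATION;
```

SHOW CONFIGURATION should then report the cascaded topology, with remote_stdby listed as receiving redo from local_stdby.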

 

 

REFERENCES

NOTE:1565071.1  - Data Guard 12c New Feature: Far Sync Standby


FAL_SERVER And FAL_CLIENT Settings For Cascaded Standby (Doc ID 358767.1)


In this Document


Goal

Solution

References

This document is being delivered to you via Oracle Support's Rapid Visibility (RaV) process and therefore has not been subject to an independent technical review.

APPLIES TO:

Oracle Database - Enterprise Edition - Version 9.2.0.1 to 11.2.0.4 [Release 9.2 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 27-Sep-2012***
***Checked for relevance on 10-Dec-2015*** 

GOAL

How to configure the FAL_CLIENT and FAL_SERVER parameters in a cascaded standby setup.

FAL_SERVER specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server.

FAL_CLIENT specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database). Given the dependency of FAL_CLIENT on FAL_SERVER, the two parameters should be configured or changed at the same time.

You can read about the cascaded redo log solution in

Note 409013.1: Cascaded Standby Databases in Oracle 10g/11g

SOLUTION

For simplicity, the following three service names are assumed, and all three are assumed to be defined in the same form in the tnsnames.ora file at all three sites (primary, cascaded standby, and remote standby).

 dg_prim -> primary database
 dg_standby_cas -> cascaded standby database
 dg_standby_rem -> the remote standby database.

Assuming the above configuration the parameter needs to be the following:
At primary:

fal_server = ' '
fal_client = ' '

The primary will never have a gap, so no fal* parameters are needed here.

At cascaded standby

fal_server = 'dg_prim'
fal_client = 'dg_standby_cas'

When the cascaded standby has a gap, it can only get the archive logs from the primary database, hence the fal_server setting. It wants the primary to send the FAL response to 'dg_standby_cas', hence the fal_client setting.

At remote standby database:

fal_server = 'dg_standby_cas','dg_prim'
fal_client = 'dg_standby_rem'

When the remote standby has a gap, it can get the archive logs from the primary database or from the cascaded standby database, hence the fal_server setting. It wants the FAL server to send the response to 'dg_standby_rem', hence the fal_client setting.

Note: If the primary receives a FAL request from the remote standby in the above case, it ships the archive logs directly to the remote standby without going via the cascaded standby. FAL_CLIENT is obsolete as of Oracle 11.2.0 and is no longer required.


REFERENCES

NOTE:1537316.1  - Data Guard Gap Detection and Resolution Possibilities
NOTE:409013.1  - Cascaded Standby Databases in Oracle 10g/11g



Cascaded Standby Databases in Oracle 10g/11g (Doc ID 409013.1)


APPLIES TO:

Oracle Database - Enterprise Edition - Version 9.0.1.0 to 11.2.0.1 [Release 9.0.1 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 02-OCT-2014***
** checked for relevance '23-Nov-2015' **


GOAL

The information below replaces  Appendix E Cascaded Destinations of Oracle Data Guard Concepts and Administration 10g Release 2 (10.2) part number B14239.

This information also applies to Oracle10g Release 1 and Oracle9i releases.


For information on Cascaded Destinations in Data Guard 11g Release 1, please see Appendix E  here

For 11g Release 2, see Chapter 6 Redo Transport Services :- http://docs.oracle.com/cd/E11882_01/server.112/e41134/log_transport.htm#SBYDB00400

Please note that as of Version 11.2.0.2 many of the restrictions with cascaded standby databases have been lifted.  Please refer to the documentation link above for up to date information in 11.2.0.2 and cascaded standby databases.

SOLUTION

Summary: 

The significant changes from the previous Oracle Database 10g Release 2 documentation include:

  1. Cascading logical standby databases from a logical standby database is not supported.
  2. Cascading standby databases (logical or physical) from a primary database that is part of an Oracle Real Application Cluster (RAC) is not supported (This restriction has been lifted in 11.2.0.2)
  3. Using Cascaded standby databases in a Data Guard Broker environment is not supported.


Details: 

To reduce the load on your primary system, or to reduce the bandwidth requirements imposed when your standbys are separated from the primary database through a Wide Area Network (WAN), you can implement cascaded destinations, whereby a standby database receives its redo data from another standby database, instead of directly from the primary database. 

In a Data Guard configuration using a cascaded destination, a physical standby database can forward the redo data it receives from the primary database to another standby database. Only a physical standby database can be configured to forward redo data to another standby database. A logical standby database cannot forward redo to another standby database.

You cannot set up a physical standby to forward redo if the primary database is part of an Oracle Real Application Cluster (RAC) (lifted as of 11.2.0.2)  or part of a Data Guard Broker environment.


The following Data Guard configurations using cascaded destinations are supported.

  1. Primary Database > Physical Standby Database with cascaded destination > Physical Standby Database
  2. Primary Database > Physical Standby Database with cascaded destination > Logical Standby Database
While a logical standby database cannot forward redo to another standby database, it can be configured to have its own physical standby database. In such a case, the physical standby database is not considered a Cascaded Destination because it does not receive redo that is forwarded from the primary database. Instead, it is receiving redo generated by the logical standby.  However this can only be used for Rolling Upgrades since a failover to the Logical standby database's Physical standby would not result in a new Logical Standby.  Instead it would become another Primary and no longer be part of the original Data Guard configuration.



A physical standby database can support a maximum of nine (30 as of Version 11.2) remote destinations. When a cascaded destination is defined on a physical standby database, the physical standby will forward redo it receives from the primary to a second standby database after its standby redo log becomes full and is archived. Thus, the second standby database receiving the forwarded redo as a result of a cascaded destination will necessarily lag behind the primary database. 

Oracle recommends that cascaded destinations be used only for offloading reporting or for applications that do not require access to data that is completely up-to-date with the primary system. This is because the very nature of a cascaded destination means that the standby database that is the end-point will be one or more log files behind the primary database. Oracle also recommends that standby databases whose primary role is to be involved in role transitions receive their redo data directly from the primary database. 

The remainder of this note contains information about the following:

  • Configuring a cascaded destination
  • Role transitions in the presence of a cascaded destination
  • Examples of cascaded destinations
  1. Configuring a Cascaded Destination 

    To enable a physical standby database to forward incoming redo data to a cascaded destination perform the following steps:
    • Create standby redo log files on the physical standby database (if not already created).
      • If standby redo log files are not already defined, you can define them dynamically on the standby database. The standby database will begin using them after the next log switch on the primary database.
    • Define a LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set up a physical standby database that will forward redo to a cascaded destination. 
      Define the destination to use:
      • LGWR ASYNC or
      • LGWR SYNC
      Optionally, set the VALID_FOR attribute so that redo forwarding is enabled even after a role transition happens between the original primary database and the intermediate standby database that is forwarding redo. This may be meaningful in cases where the databases are separated over Wide Area Networks.
    • Ensure that archiving is enabled on the physical standby database where the cascaded destinations are defined (the standby database that will forward redo).
    • Configure a LOG_ARCHIVE_DEST_n parameter (on the physical standby that will forward redo data) for each cascaded destination.

    Below are the initialization parameters for a primary database named Boston, which sends redo to a physical standby database named Chicago, that forwards the redo it receives to a cascaded standby database named Denver. In this example, the database named Denver is a logical standby database, but note that a physical standby database can forward redo to either a physical or a logical standby database. 

    When the cascaded destination is a logical standby database, remember that you will create it just as if the logical standby will be directly connected to the primary database (see  Chapter 4 Creating a Logical Standby Database of Oracle Data Guard Concepts and Administration 10g Release 2).

    Boston Database (Primary Role)

    DB_UNIQUE_NAME=boston
    STANDBY_ARCHIVE_DEST=/arch1/boston/
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/boston/ VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'
    LOG_ARCHIVE_DEST_2= 'SERVICE=denver VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'
    LOG_ARCHIVE_DEST_3='SERVICE=chicago VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=chicago'


    Chicago Database (Standby Role)

    DB_UNIQUE_NAME=chicago
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
    STANDBY_ARCHIVE_DEST=/arch1/chicago/
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/chicago/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=chicago'
    LOG_ARCHIVE_DEST_2='SERVICE=denver VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'
    LOG_ARCHIVE_DEST_3='SERVICE=boston VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston'


    Denver Database (Standby Role)

    DB_UNIQUE_NAME=denver
    LOG_ARCHIVE_CONFIG='DG_CONFIG=(chicago,boston,denver)'
    STANDBY_ARCHIVE_DEST=/arch2/denver/
    REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
    LOG_ARCHIVE_DEST_1='LOCATION=/arch1/denver/ VALID_FOR=(ONLINE_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=denver'
    LOG_ARCHIVE_DEST_2='LOCATION=/arch2/denver/ VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=denver'

    In the example parameters above for the standby database Denver, STANDBY_ARCHIVE_DEST is set to /arch2/denver/ because this is a logical standby database. If Denver were a physical standby database, it would not be necessary to change STANDBY_ARCHIVE_DEST - it would match LOG_ARCHIVE_DEST_1. 

    Both the Boston primary database and the Chicago physical standby database define the LOG_ARCHIVE_DEST_2 initialization parameter as 'SERVICE=denver VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE)'. Hence, even if the Boston and Chicago databases switch roles, the redo data will continue to be forwarded to the Denver database. Remember, as part of the original setup of the physical standby database, you should define a local destination as VALID_FOR=(ONLINE_LOGFILE, PRIMARY_ROLE), that will be used for local archiving when the physical standby database transitions to the primary role.
  2. Role Transitions with Cascaded Destinations 

    Oracle recommends that standby databases primarily intended for disaster recovery purposes receive redo data directly from the primary database.  This will result in the optimum level of data protection. A cascaded destination may be used as a second line of defense, but by definition it will always be further behind than a standby database that is receiving redo directly from the primary.

  3. Examples of using Cascaded Destinations 

    The following scenarios demonstrate configuration options and uses for cascaded destinations. 

    Scenario 1: Physical Standby Forwarding Redo to a Remote Physical Standby 

    You have a primary database in your corporate offices and you want to create a standby database at another facility within your metropolitan area to provide zero data loss protection should there be a failure at your primary site. In addition to the local standby, you wish to maintain a geographically remote standby database 2,000 miles away at a disaster recovery site. A small amount of data loss is acceptable should failover to the remote standby be required (an acceptable trade-off in return for the extra protection against events that can impact a large geographic area and cause both the primary site and the local standby database to fail). The remote standby database also provides continuous data protection after a failover to the local standby database and improves security by enabling backups to be created and stored at the remote location, eliminating the need to ship tapes off-site. 

    While you could configure your primary database to ship redo directly to both standby databases, you may want to eliminate the potential overhead of the primary database shipping redo over a WAN to the second standby database. You solve this problem by creating the first physical standby in a local facility within your metropolitan area using the SYNC network transport to achieve zero data loss protection. A cascaded destination is defined on the local physical standby database that will forward redo received from the primary to the remote standby database using ASYNC network transport. Because the local standby manages all communication with the remote standby via a cascaded destination, there is no impact to the primary database to maintain a second standby. 

    Scenario 2: Physical Standby Forwarding Redo to a Logical Standby 

    You have a primary database in a city in the United States and you wish to deploy three complete replicas of this database to be used for end-user query and reporting in three different manufacturing plants in Europe. Your objective is to eliminate the need for users and applications at your European locations to access data that resides in the US to prevent network disruptions from making data unavailable for local access. While you can accept some latency between the time an update is made in the primary and the time it is replicated to all three European sites, you desire the data to be as up-to-date as possible and available to query and to run reports. You require a solution that is completely application transparent, and one where additional replicas can be deployed to sites in Europe if the need arises. A final requirement is the need to make this work with the limited bandwidth and very high network latency of the network connection between your US and European facilities. 

    You address your requirements by first creating a physical standby database in Europe for the primary database located in the US. You then create three logical standby databases, one in each of your European plants, and define each logical standby as a cascaded destination on your physical standby database.  One copy of the redo is shipped over the transatlantic link from the US to the physical standby in Europe. The physical standby in Europe forwards the redo to the three logical standby databases in the Europe manufacturing plants providing local access to corporate data for end-user query and reports. Room for future growth is built in - additional standby databases can be deployed in Europe without any modification to applications, without any additional overhead on your primary system, and without consuming any additional transatlantic bandwidth.

    Configure the physical standby database to forward redo data to the logical standby databases in each of your manufacturing sites as in the example above. The only difference from the example parameters, above, is that you will define two additional LOG_ARCHIVE_DEST_n parameters on the physical standby so that redo will be forwarded to all three logical standby databases.
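The first configuration step above, creating standby redo logs on the forwarding standby (Chicago in this example), can be sketched as follows; the group numbers, file paths, and 50M size are assumptions and should mirror the online redo log size, with one extra group per thread:

```sql
-- On the Chicago physical standby (paths and sizes are illustrative only):
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/arch1/chicago/srl04.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/arch1/chicago/srl05.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/arch1/chicago/srl06.log') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/arch1/chicago/srl07.log') SIZE 50M;

-- Verify the standby redo logs exist:
SELECT group#, bytes, status FROM v$standby_log;
```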

REFERENCES

NOTE:1542969.1  - Cascaded Standby Databases in Oracle 12c



Data Guard broker considerations for cascaded standby databases in 11.2 (Doc ID 2220933.1)


In this Document


Goal

Solution

Creating the configuration:

Performing Role Transitions Using Data Guard Broker

APPLIES TO:

Oracle Database - Enterprise Edition - Version 11.2.0.2 to 11.2.0.4 [Release 11.2]
Information in this document applies to any platform.

GOAL

 To reduce the load on your primary system, or to reduce the bandwidth requirements imposed when your standbys are separated from the primary database through a Wide Area Network (WAN), you can implement cascaded destinations, whereby a standby database receives its redo data from another standby database, instead of directly from the primary database. In a Data Guard configuration using a cascaded destination, a physical standby database can forward the redo data it receives from the primary database to another standby database. Only a physical standby database can be configured to forward redo data to another standby database. Neither a logical standby database nor a snapshot standby database can forward redo to another standby database.

In 11.2 when a cascaded destination is defined on a physical standby database, the physical standby will forward redo it receives from the primary to a second standby database after its standby redo log becomes full and is archived. Thus, the second standby database receiving the forwarded redo as a result of a cascaded destination will necessarily lag behind the primary database. In addition, there is no Data Guard broker support for handling cascaded destinations. In 12c it is now possible to cascade a Standby Database in Real-Time, i.e., the first Standby Database can send Redo from the Standby RedoLogs to the cascaded Standby Database. Also, the Data Guard Broker now supports cascaded standby databases using the RedoRoutes database configuration property. For complete information on 12c Data Guard broker RedoRoutes property and cascaded standby databases refer to the following MAA whitepaper:

http://www.oracle.com/technetwork/database/availability/broker-12c-transport-config-2082184.pdf

While the 11.2 Data Guard broker does not support cascaded standby databases, it is still possible to utilize the broker with some additional manual configuration. The following procedure describes the additional considerations for an 11.2 broker configuration with cascaded standby databases.

SOLUTION

 Consider the following starting configuration (diagram omitted here): database A is the primary, shipping redo to standby databases B and C, and database D is a cascaded standby receiving redo from C.

 

When a switchover or failover occurs between A and C, the desired configuration (diagram omitted here) is: C as the primary, shipping redo to A and D, with A cascading redo to B.

 

Creating the configuration:

1. Create a broker configuration that includes databases A, B, and C. In 11.2 the broker does not support cascaded standby (D) so it will be handled manually.

create configuration 'orcl' as
primary database is 'A'
connect identifier is A;

add database 'B' as
connect identifier is B;

add database 'C' as
connect identifier is C;

enable configuration;

2. Manually configure cascaded redo shipping from database C to database D. On database C:

alter system set log_archive_config='DG_CONFIG=(A,B,C,D)' scope=both;
alter system set log_archive_dest_5='service=D valid_for=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=D' scope=both;

3. On database D, configure the log_archive_config parameter to accept redo from C. Also, configure the fal_server parameter to point to database C:

alter system set log_archive_config='DG_CONFIG=(A,B,C,D)' scope=both;
alter system set fal_server='C' scope=both;

4. On database A configure a log_archive_dest_n parameter that will be used to cascade redo to B when A is operating in the standby role:

alter system set log_archive_dest_5='service=B valid_for=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=B' scope=both;

Performing Role Transitions Using Data Guard Broker

1. Prior to performing the role transition between A and C, remove database B from the broker configuration. This will prevent the broker from configuring the new primary C to ship redo to database B.

DGMGRL> remove database B;

2. Perform the role transition using the Data Guard broker.

DGMGRL> switchover to C;

or

DGMGRL> failover to C;

3. After the role transition to C, add in database D to the broker configuration.

add database 'D' as
connect identifier is D;

4. Database A should automatically begin shipping redo to B using the destination defined in the previous steps. You should adjust log_archive_config accordingly:

alter system set log_archive_config='DG_CONFIG=(A,B,C,D)' scope=both;
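角色切换完成后，可以在各库上核对redo传输目的地的状态。下面是一个示意查询（假设在新主库C或原主库A上以SYSDBA执行，v$archive_dest为Oracle标准动态性能视图）：

```sql
-- 示意：确认各归档目的地的状态与错误信息（切换后redo链路应为 C -> A -> B 以及 C -> D）
select dest_id, status, destination, error
  from v$archive_dest
 where destination is not null;
```

若STATUS不是VALID，可结合ERROR列进一步排查网络或参数配置问题。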








2 搭建级联备库

2.1 准备工作

和正常搭建DG一样,安装数据库软件,创建相应的目录,拷贝参数文件,密码文件等。我这里演示的是,添加第三个级联备库过程。


2.2 主库修改参数

SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(cndba_p,cndba_s,cndba_ss)' scope=both;


2.3 第一备库修改参数

SQL> Alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(cndba_p,cndba_s,cndba_ss)' scope=both;

SQL> alter system set LOG_ARCHIVE_DEST_2= 'SERVICE=cndba_ss VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=cndba_ss' scope=spfile;
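修改后，可以在第一备库上用下面的示意查询确认级联目的地是否有效（v$archive_dest为标准视图，VALID_NOW列用于判断该目的地在当前角色下是否生效）：

```sql
-- 示意：在第一备库(cndba_s)上检查log_archive_dest_2的有效性
select dest_id, status, valid_now, valid_role, destination
  from v$archive_dest
 where dest_id = 2;
```

第一备库处于standby角色时，VALID_NOW应为YES；若不是，请检查VALID_FOR设置。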


2.4 第二备库修改参数

*.DB_UNIQUE_NAME=cndba_ss

*.FAL_SERVER=cndba_s

*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(cndba_p,cndba_s,cndba_ss)'

*.LOG_ARCHIVE_DEST_1='LOCATION= USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=cndba_ss'


2.5 主备库创建TNSNAME


CNDBA_SS =

  (DESCRIPTION =

    (ADDRESS_LIST =

      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.173)(PORT = 1521))

    )

    (CONNECT_DATA =

      (SERVICE_NAME = cndba)

    )

  )

  

[oracle@12cdg-p ~]$ tnsping cndba_ss

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 16-AUG-2017 17:36:54

Copyright (c) 1997, 2014, Oracle.  All rights reserved.

Used parameter files:

/u01/app/oracle/product/12.1.0.2/db_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias

Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.173)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = cndba)))

OK (0 msec)


2.6 将第二备库启动到nomount

SQL> startup nomount pfile='/u01/app/oracle/product/12.1.0.2/db_1/dbs/initcndba.ora';

ORACLE instance started.


Total System Global Area 2348810240 bytes

Fixed Size     2927048 bytes

Variable Size 1409287736 bytes

Database Buffers   922746880 bytes

Redo Buffers    13848576 bytes


2.7 开始DUPLICATE

注意：DUPLICATE仍然是在主库和第二备库之间进行，而不是从第一备库复制。

[oracle@12cdg-p ~]$ rman target [email protected]_p auxiliary [email protected]_ss
Recovery Manager: Release 12.1.0.2.0 - Production on Wed Aug 16 17:38:28 2017
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: CNDBA (DBID=216462194)
connected to auxiliary database: CNDBA (not mounted)
RMAN> duplicate target database for standby from active database nofilenamecheck;
Starting Duplicate Db at 16-AUG-17
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=23 device type=DISK
 
contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwcndba' auxiliary format
 '/u01/app/oracle/product/12.1.0.2/db_1/dbs/orapwcndba'   ;
}
executing Memory Script
 
Starting backup at 16-AUG-17
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1 device type=DISK
Finished backup at 16-AUG-17
 
contents of Memory Script:
{
   restore clone from service  'cndba_p' standby controlfile;
}
executing Memory Script
 
Starting restore at 16-AUG-17
using channel ORA_AUX_DISK_1
 
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
output file name=/u01/app/oracle/oradata/cndba/control01.ctl
output file name=/u01/app/oracle/fast_recovery_area/cndba/control02.ctl
Finished restore at 16-AUG-17
 
contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script
 
sql statement: alter database mount standby database
 
contents of Memory Script:
{
   set newname for tempfile  1 to
 "/u01/app/oracle/oradata/cndba/temp01.dbf";
   set newname for tempfile  2 to
 "/u01/app/oracle/oradata/cndba/pdbseed/pdbseed_temp012017-08-14_12-17-51-PM.dbf";
   set newname for tempfile  3 to
 "/u01/app/oracle/oradata/cndba/sihong/temp012017-08-14_12-17-51-PM.dbf";
   switch clone tempfile all;
   set newname for datafile  1 to
 "/u01/app/oracle/oradata/cndba/system01.dbf";
   set newname for datafile  3 to
 "/u01/app/oracle/oradata/cndba/sysaux01.dbf";
   set newname for datafile  4 to
 "/u01/app/oracle/oradata/cndba/undotbs01.dbf";
   set newname for datafile  5 to
 "/u01/app/oracle/oradata/cndba/pdbseed/system01.dbf";
   set newname for datafile  6 to
 "/u01/app/oracle/oradata/cndba/users01.dbf";
   set newname for datafile  7 to
 "/u01/app/oracle/oradata/cndba/pdbseed/sysaux01.dbf";
   set newname for datafile  8 to
 "/u01/app/oracle/oradata/cndba/sihong/system01.dbf";
   set newname for datafile  9 to
 "/u01/app/oracle/oradata/cndba/sihong/sysaux01.dbf";
   set newname for datafile  10 to
 "/u01/app/oracle/oradata/cndba/sihong/sihong_users01.dbf";
   restore
   from service  'cndba_p'   clone database
   ;
   sql 'alter system archive log current';
}
executing Memory Script
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to /u01/app/oracle/oradata/cndba/temp01.dbf in control file
renamed tempfile 2 to /u01/app/oracle/oradata/cndba/pdbseed/pdbseed_temp012017-08-14_12-17-51-PM.dbf in control file
renamed tempfile 3 to /u01/app/oracle/oradata/cndba/sihong/temp012017-08-14_12-17-51-PM.dbf in control file
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
Starting restore at 16-AUG-17
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/cndba/system01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/cndba/sysaux01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:26
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/cndba/undotbs01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/cndba/pdbseed/system01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/app/oracle/oradata/cndba/users01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00007 to /u01/app/oracle/oradata/cndba/pdbseed/sysaux01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/cndba/sihong/system01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:16
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/cndba/sihong/sysaux01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:25
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service cndba_p
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/cndba/sihong/sihong_users01.dbf
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 16-AUG-17
sql statement: alter system archive log current
contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script
 
datafile 1 switched to datafile copy
input datafile copy RECID=3 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/system01.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=4 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/sysaux01.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=5 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/undotbs01.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=6 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/pdbseed/system01.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=7 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/users01.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=8 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/pdbseed/sysaux01.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=9 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/sihong/system01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=10 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/sihong/sysaux01.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=11 STAMP=952191686 file name=/u01/app/oracle/oradata/cndba/sihong/sihong_users01.dbf
Finished Duplicate Db at 16-AUG-17
2.8 打开第二备库并启用MRP
SQL> alter database open;
Database altered.
SQL> alter database recover managed standby database disconnect;
Database altered.
-- 查看MRP进程
SQL> select process,status from v$managed_standby;
PROCESS   STATUS
--------- ------------
ARCH  CLOSING
ARCH  CLOSING
ARCH  CONNECTED
ARCH  CLOSING
RFS  IDLE
RFS  IDLE
RFS  IDLE
MRP0  WAIT_FOR_LOG
8 rows selected.
-- 数据库状态
SQL> select database_role,open_mode from v$database; 
DATABASE_ROLE OPEN_MODE
---------------- --------------------
PHYSICAL STANDBY READ ONLY WITH APPLY
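也可以用下面的示意查询在第二备库上确认MRP是否处于实时应用(real-time apply)模式（v$archive_dest_status为标准视图）：

```sql
-- 示意：确认第二备库本地目的地的恢复模式
select dest_id, recovery_mode
  from v$archive_dest_status
 where dest_id = 1;
```

启用实时应用时，RECOVERY_MODE一般显示为 MANAGED REAL TIME APPLY。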
2.9 查看日志序列号
主库:
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
    46
第一备库:
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
    46
第二备库:
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
    46
2.9.1 主库切换日志
SQL> alter system switch logfile;
System altered.
-- 再查看日志序列号，全部都为47
SQL> select max(sequence#) from v$archived_log;
MAX(SEQUENCE#)
--------------
    47


3 实验

搭建级联备库参考:https://blog.csdn.net/qianglei6077/article/details/90736799


3.1 查看当前DG配置

SQL> select * from V$DATAGUARD_CONFIG;

DB_UNIQUE_NAME   PARENT_DBUN   DEST_ROLE         CURRENT_SCN     CON_ID
---------------- ------------- ----------------- ----------- ----------
cndba_p          NONE          PRIMARY DATABASE      2122746          0
cndba_s          cndba_p       PHYSICAL STANDBY      2122754          0
cndba_ss         cndba_s       PHYSICAL STANDBY      2112269          0


3.2 查看用于级联(Cascading)备库的参数--启用real-time redo

SQL> show parameter LOG_ARCHIVE_DEST_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=cndba_ss ASYNC NOAFFIRM VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=cndba_ss


可以看到，目的地配置为ASYNC，已启用real-time redo cascade。
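也可以在级联(cascaded)备库cndba_ss上通过v$dataguard_stats（标准视图）观察传输和应用延迟，辅助验证real-time级联（仅为示意）：

```sql
-- 示意：在级联备库上查看传输/应用延迟
select name, value, time_computed
  from v$dataguard_stats
 where name in ('transport lag', 'apply lag');
```

real-time级联正常工作时，两项延迟通常都比较小。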


3.2.1 主库创建表，查看日志序列号

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

-------------

    51


-- 创建表，并插入数据


SQL> create table cndba(id number);

Table created.


SQL> insert into cndba select object_id from dba_objects;

90947 rows created.


SQL> commit;

Commit complete.


SQL> select count(*) from cndba;

  COUNT(*)

----------

     90947


-- 可以看到日志没有发生切换


SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

--------------

    51


3.2.2 查看用于级联(Cascading)备库的表

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

--------------

    51 --日志序列号没有变化,表示没有发生日志切换


SQL>  select count(*) from cndba;

  COUNT(*)

----------

     90947  --由于DG默认启用实时应用(real-time apply)，所以Cascading备库的数据实时传输了过来，下面主要是验证cascaded备库的数据是否也传输过来。


3.2.3 查看级联(cascaded)的备库表

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

--------------

    51   --同样日志序列号没有变化。


SQL> select count(*) from cndba;

  COUNT(*)

----------

     90947  --数据已经传输过来了,符合预期。


从日志中也可以查看出来:

Recovery of Online Redo Log: Thread 1 Group 4 Seq 51 Reading mem 0

Mem# 0: /u01/app/oracle/fast_recovery_area/CNDBA_SS/onlinelog/o1_mf_4_ds84tg8t_.log
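此外，还可以在备库上查询v$standby_log（标准视图）来观察standby redo log的接收情况（仅为示意）：

```sql
-- 示意：STATUS为ACTIVE的日志组正在接收redo
select group#, thread#, sequence#, status
  from v$standby_log;
```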


3.3 修改用于级联(Cascading)备库的参数--启用non-real-time

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=cndba_ss SYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=cndba_ss' scope=both;

System altered.


SQL> show parameter LOG_ARCHIVE_DEST_2

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2                   string      SERVICE=cndba_ss SYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=cndba_ss


3.3.1 主库插入数据

SQL> insert into cndba select object_id from dba_objects;

90947 rows created.


SQL> commit;

Commit complete.


SQL> select count(*) from cndba;

  COUNT(*)

----------

    181894


SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

--------------

    51  


3.3.2 查看用于级联(Cascading)备库的表

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

--------------

    51


SQL> select count(*) from cndba;

  COUNT(*)

----------

    181894


3.3.3 查看级联(cascaded)的备库表

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)

--------------

    51


SQL> select count(*) from cndba;

  COUNT(*)

----------

     90947  --可以看到数据没有同步过来。


从日志中也可以看出来：当前日志序列号是51，要等52号日志到来后，恢复进程才能继续应用。


Media Recovery Waiting for thread 1 sequence 52
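作为验证，可以在主库手工切换日志，待52号日志归档并传输到级联备库后，数据即可同步（以下为示意操作）：

```sql
-- 示意：在主库执行日志切换
alter system switch logfile;

-- 稍后在级联(cascaded)备库上确认已应用的最大日志序列号
select max(sequence#) from v$archived_log where applied = 'YES';
```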


至此，对Real-time redo级联的介绍就结束了。该特性非常有用，可以让数据容灾更加可靠。







About Me

........................................................................................................................

● 本文作者:小麦苗,部分内容整理自网络,若有侵权请联系小麦苗删除


来自 “ ITPUB博客 ” ,链接:http://blog.itpub.net/26736162/viewspace-2656076/,如需转载,请注明出处,否则将追究法律责任。
