[XTTS] XTTS Usage Notes

Repost. Author: 路途中的人2012. Date: 2018-01-11 15:15:25

Migration magic XTTS: Enmotech's "little prince" takes on another 32 TB cross-platform U2L migration

2016-09-27

杨俊 (Yang Jun), technical expert at 云和恩墨 (Enmotech)


XTTS (Cross Platform Transportable Tablespaces) is a physical migration method based on transporting tablespaces. Oracle introduced the technique back in 8i under the name TTS, and it has evolved with every release since. Starting with 11gR2, to cope with ever-larger data volumes under ever-shrinking downtime windows, Oracle released an enhanced solution, XTTS, which uses incremental backups to migrate data across platforms and genuinely slashes downtime. With U2L (Unix-to-Linux) in full swing today, using XTTS to move an Oracle database quickly, efficiently, smoothly, and safely from a "midrange Unix server + centralized storage" environment to an "x86 platform + distributed storage" environment has become something of a superpower.



Case background


A provincial traffic-police core system had grown over six years in production from gigabytes to 32 TB of business data, 27 TB of it LOB photo data. As informatization, "Internet+", and big data became indispensable to traffic-police operations, the system developed serious problems and could no longer support the growth of the business. After investigation, Enmotech proposed its ZDATA distributed storage solution: a fast, safe, stable, high-performance distributed database architecture that retires the 10-year-old HP-UX machines in favor of well-specified x86 servers as compute and storage nodes, providing strong CPU, I/O, and memory headroom as well as online horizontal storage expansion. How to migrate this much data within a limited outage window became the hard part of the project.



1

XTTS Overview



XTTS (Cross Platform Transportable Tablespaces) is a backup-and-migration method, introduced in Oracle 10g, for moving individual tablespaces, or an entire database, from one platform to another. Its ancestor is TTS, a tablespace-transport-based physical migration method introduced back in Oracle 8i, but 8i tablespace transport only worked between identical platforms with identical block sizes, and in that era TTS stayed buried in obscurity. From Oracle 9i, TTS allowed different block sizes within the same platform, and many DBAs began to notice its practical uses; still, every tablespace move meant stopping the database and the business, and 9i TTS remained same-platform only, so compared with the convenience of plain RMAN most people preferred RMAN for backup and data movement. For these reasons Oracle 10g introduced cross-platform tablespace transport, marking the birth of the first-generation XTTS.



Figure 1: TTS vs. XTTS migration



In Oracle 10.1 the first-generation XTTS transported whole tablespaces; from Oracle 11gR1, cross-platform migration could additionally transport individual partitions of a tablespace. But the data move still required the source to be set read only, with the database and business stopped, so systems that could not pause still paid a long outage for a cross-platform physical migration; hence everything before 11gR2 is called first-generation XTTS. From 11gR2, to handle ever-larger data volumes under ever-tighter downtime requirements, Oracle shipped the enhanced XTTS (hereafter simply XTTS), which migrates data across platforms using incremental backups. Strictly speaking, only this incremental, downtime-reducing variant is what we mean by XTTS today.



A comparison of XTTS capabilities by version (Table 1; values follow the release history described above):

Version        Cross-platform   Different block size   Incremental backup
Oracle 8i      No               No                     No
Oracle 9i      No               Yes                    No
Oracle 10g     Yes              Yes                    No
Oracle 10gR2   Yes              Yes                    No
Oracle 11gR1   Yes              Yes                    No
Oracle 11gR2   Yes              Yes                    Yes
Oracle 12c     Yes              Yes                    Yes

2

XTTS Migration Steps



From 11gR2 onward, Oracle's approach is to roll the data files forward: copy them once, then take a series of incremental backups. With block change tracking enabled, each incremental backup is driven by SCN and by a scan of the change tracking file, and each increment is converted and applied to the copies. Only the final increment is taken and applied with the source in read only mode, so total downtime is proportional to the source database's block change rate rather than its size. This shortens the outage dramatically.
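The roll-forward idea above can be sketched as a toy simulation (illustrative only, no Oracle involved): the bulk of the data is copied while the source stays open, repeated "incremental backups" ship only blocks changed since the last recorded SCN, and the read-only window at the end covers just the final, small delta.

```python
def take_increment(source, last_scn):
    """Return the blocks whose change-SCN is newer than last_scn."""
    return {blk: scn for blk, scn in source.items() if scn > last_scn}

def apply_increment(target, increment):
    """Overlay the changed blocks onto the target copy."""
    target.update(increment)

# source maps block-id -> SCN of its last change
source = {1: 100, 2: 100, 3: 100}
target = dict(source)          # initial full copy (the RMAN image copies)
last_scn = 100

# business keeps running: blocks 2 and 3 change
source[2], source[3] = 150, 180
inc = take_increment(source, last_scn)       # only blocks 2 and 3 shipped
apply_increment(target, inc)
last_scn = 180

# final, tiny delta taken while the source is read only
source[1] = 200
final = take_increment(source, last_scn)     # only block 1 shipped
apply_increment(target, final)

assert target == source   # the target has caught up with the source
print(len(inc), len(final))   # → 2 1
```

Downtime in this model is the cost of shipping `final` alone, which is why the loop of incrementals pays off on a high-change-rate database.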




3

XTTS Migration Methods



XTTS is driven by the rman-xttconvert_2.0 script bundle, which mainly contains the Perl driver script xttdriver.pl, used for the backup, convert, and apply phases, and the properties file xtt.properties, which holds the paths and parameters of the XTTS configuration.



The XTTS approach offers cross-platform endian conversion plus full initialization with incremental merge, which makes it ideal for migrations across heterogeneous operating systems; among database practitioners it is the acknowledged first choice for very large cross-platform migrations.


Traditional TTS requires the tablespaces to stay in read only mode for the entire source-to-target transfer, which seriously hurts availability. With XTTS, the physical full initialization, incremental block backups, endian conversion, and incremental block applies all run while the business operates normally, keeping the target in sync with the source without affecting use of the source database. Only after the last incremental apply is the database switched over during a maintenance window, which markedly reduces downtime.



The rman-xttconvert_2.0 parameters are as follows:

Parameter                         Meaning
tablespaces=TS1,TS2               tablespaces to transport
platformid=2                      platform_id of the source, obtained from v$database
srcdir=src1,src2                  source directories holding the data files (dbms_file_transfer method)
dstdir=dst1,dst2                  target directories for the data files (dbms_file_transfer method)
srclink=ttslink                   db_link from the target pointing at the source (dbms_file_transfer method)
dfcopydir=/storage                source directory for the data file copies (rman method)
backupformat=/storage             source directory for the incremental backups (required for either method)
stageondest=/storage              target directory where the data file copies and incremental backups land
storageondest=/oradata/prod/%U    final destination of the data files
backupondest=/storage             output directory for the converted incremental backups
cnvinst_home=                     set when a separate conversion home is used
cnvinst_sid=                      set when a separate conversion SID is used
parallel=3                        default 3
rollparallel=2                    parallel degree for the roll-forward (apply) phase
getfileparallel=8                 default 8; parallelism when using the rman method


Method 1: dbms_file_transfer

DBMS_FILE_TRANSFER is an Oracle-supplied package for copying binary database files within a database or transferring them between databases. In an XTTS migration it transfers and converts the data files, driven by the corresponding parameters.


Method 2: RMAN backup

The RMAN backup method uses the parameters provided by the rman-xttconvert_2.0 package to take tablespace-level backups, writing the backup sets to local disk or an NFS mount. The package's cross-platform conversion routines then convert the data file format, and finally a metadata import script, generated from the recorded tablespace FILE_IDs, is executed over a db_link.



Method 3: manual XTTS migration

The manual-script approach is a set of scripts Enmotech wrote for very large cross-platform tablespace transfers, starting from the principles behind Oracle's rman-xttconvert_2.0 procedure and combining the dbms_file_transfer and RMAN backup methods to match real-world requirements. The scripts cover the source, intermediate, and target databases, fixing each migration activity as a task or sub-task, and together they form an executable migration runbook.
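Whichever method is chosen, the driver reads its settings from xtt.properties. A minimal sketch for the RMAN backup method follows; every path and the tablespace name are example values for illustration, not taken from the case below:

```properties
# minimal xtt.properties sketch (RMAN backup method); example values only
tablespaces=TBS_DATA
platformid=4
dfcopydir=/dump1/enmo/copy
backupformat=/dump1/enmo/incr
stageondest=/dump1/enmo/copy
storageondest=+FLASHDATA
backupondest=/dump1/enmo/incr
parallel=3
rollparallel=2
getfileparallel=8
```

The dbms_file_transfer method would instead set srcdir, dstdir, and srclink in place of dfcopydir.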


4

XTTS Prerequisite Checks


Prerequisites

As outlined above, transportable tablespaces were born in Oracle 8i, and after many releases of refinement they have grown into a powerful tool for large cross-platform migrations; in particular, from Oracle 11.2.0.3 onward XTTS supports cross-platform incremental backup, which sharply reduces downtime when migrating between systems of different endianness. Convenience aside, what preconditions must be met before XTTS can be used?



They are listed in the following table:




Note that the target database in all of the above is assumed to be version 11.2.0.4 or later. If the target is 11.2.0.3 or lower, a separate 11.2.0.4 database must be created on the target side as an intermediate instance to convert the data file format; and with the DBMS_FILE_TRANSFER method the target database version must be 11.2.0.4.



5

XTTS 32 TB Hands-on Case Study



Practice is the sole test of truth. To stay close to reality, the following walks through a real 32 TB cross-platform XTTS migration, showing how the manual scripts are applied to a very large data set. The case starts from XTTS principles and touches on the operating system, NFS storage, RMAN backups, endian conversion, data validation, and networking.


 


System assessment

Host              production is a 3-node cluster, two of the nodes being HP 8640s
Storage           HP XP24000
Application       WebSphere (about 20 hosts, 14 of them HP-UX, the rest Windows)
DB connections    roughly 260-300 per node
DR / backup       no disaster-recovery environment
Data volume       about 32 TB; the XP24000 array is full and cannot be expanded




A.  The 3-node RAC architecture is unreasonable

B.  The centralized storage is over 10 years old and the compute servers are aging

C.  Resources are under-provisioned

D.  Data is growing explosively

E.  Application modules keep multiplying, and database tables store LOB columns

F.  Physical capacity expansion has hit its ceiling

G.  Front-line users complain the system is painful to use


Migration requirements

A.  HP-UX 11.31 to Linux

B.  Total database size: 32 TB

C.  LOB columns: 27 TB

D.  Largest single tablespace: 17 TB

E.  Database version 11.2.0.3 (no patches at all)

F.  Planned cutover window: 8 hours

G.  Planned completion: 15 days

H.  Database accounts and passwords must not change

I.  No application testing (no application program available at all)

J.  No resources for a test environment (storage or hardware)

Choosing a migration approach

After analyzing the requirements — 30-odd TB of data, an 8-hour business outage window, and a cross-platform move — we ruled out the usual options after several rounds of testing. Worth pausing here: faced with such requirements, what would you do as the DBA?


We ultimately chose the most challenging option, XTTS, to take on this 32 TB cross-platform migration.


Resource configuration

Config item        Source                        Target
DB version         11.2.0.3                      11.2.0.4.160419
DB name            orcl                          orcl
Character set      AMERICAN_AMERICA.ZHS16GBK     AMERICAN_AMERICA.ZHS16GBK
Nodes              3-node RAC                    4-node RAC
OS version         HP-UX 11.31                   Linux 6.5
Disk group size    35 TB                         80 TB
Database size      32 TB                         -
Block size         16384                         16384

Step 1: XTTS environment checks

Check item                          Source                                        Target
Time zone match                     UTC+8                                         UTC+8
Character set match                 ZHS16GBK                                      ZHS16GBK
Target patch level                  -                                             apply the latest PSU
Component check                     -                                             must include the source's components
Key-compression IOTs                present                                       rebuild manually
Tablespace naming check             identical data file names in different
                                    disk groups                                   -
TEMP tablespace check               present                                       import manually
db_files parameter on target        1024                                          4096
compatible parameter on source      must not be Windows, and above 10.2.0         11.2.0.4
Tablespace self-containment         violations exist; MOVE objects manually       -
Users, db links, profiles, privs    -                                             create manually

Step 2: enable block change tracking

The block change tracking process records which data blocks have changed since the last level-0 backup and writes that information to a tracking file. RMAN uses the file to decide which changed blocks an incremental backup must include. This greatly improves backup performance and speed, because RMAN no longer scans entire files looking for changes.


SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '+SATADATA/changetracking.chg';


Database altered.


Step 3: mount the NFS storage

The NFS storage is mounted between the source and target databases and is used to transfer the data files and incremental backups, saving file transfer time.


mount -o llock,rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid 10.160.118.236:/dump1 /dump1



Notes on the NFS share:


Both source and target need the 35 TB share mounted to hold the image copies of all data files; mounting the same storage remotely on both ends over NFS avoids a separate copy step and shortens backup transfer time, as shown in the XTTS migration diagram "NFS storage initial mount".



Step 4: record the SCN

The SCN (System Change Number) is a key mechanism in Oracle, central to recovery, Data Guard, Streams replication, RAC inter-node synchronization, and more. Record the current SCN here; it becomes the starting point of the subsequent incremental backups.


alter system checkpoint;


select current_scn from v$database;


Step 5: start the RMAN copy

The RMAN image copies of the data files are written to the mounted NFS directory.


rman target / <<EOF
run {
allocate channel c1 type disk;
allocate channel c2 type disk;
backup as copy datafile 18,19,20,21,22........ format '/dump1/enmo/copy/enmo_%U';
release channel c1;
release channel c2;
}
EOF


Step 6: convert the data files

A.  Convert changes the endianness of the data files

B.  The converted data files are written directly into the new environment's disk group

C.  The conversion consumes CPU on the target side

D.  Watch the free space in the target disk group; a full disk group will make the conversion fail


convert from platform 'HP-UX IA (64-bit)' datafile '/dump1/ccm/vvstart_tabs.dbf' format '+FLASHDATA/ORCL/DATAFILE/vvstart_new_01.dbf';



Step 7: incremental backup

A.  With block change tracking enabled, incrementals are fast, block-based backups

B.  Query and record the current SCN before each incremental backup

C.  The incremental starts from the SCN recorded at the full backup (SCN 1000 in this example)

D.  The incremental backup files are written to the NFS storage

E.  The incremental backups are produced with HP-UX endianness and still need conversion


set until scn=1850

backup incremental from scn 1000 datafile 18,19,20,21,22...... format '/dump1/enmo/incr/copy_%d_%T_%U';


Step 8: convert and apply the incrementals

Converting and applying an incremental backup are two separate operations. First the backup set is converted from HP-UX format to Linux format; only after conversion can the Linux database recognize the backup set.


sys.dbms_backup_restore.backupBackupPiece(bpname => '/dump1/enmo/incr/copy_ORCL_20160707_78ra40o7_1_1',
fname => '/dump1/enmo/incr/copy_ORCL_20160707_78ra40o7_1_1_conv', handle => handle, media => media,
comment => comment, concur => concur, recid => recid, stamp => stamp, check_logical => FALSE, copyno => 1,
deffmt => 0, copy_recid => 0, copy_stamp => 0, npieces => 1, dest => 0, pltfrmfr => 4);


Then the converted backup set is applied, which works on the same principle as rman recover.


sys.dbms_backup_restore.restoreBackupPiece(done => done, params => null, outhandle => outhandle, outtag => outtag, failover => failover);


 


第九歩:循环进行增量备份

循环进行增量备份操作在正式环境的切割之前进行,其目的是为了减少最后一次数据库表空间 readonly 时生产环境的停机时间,需要特别注意的是操作之前务必查询并记录当前 SCN 号。这个 SCN 号是下一次开始增量备份的起点。



如图所示:


图十三:循环增量备份图

 


Step 10: cutover preparation

Cutover preparation is the most important step of the whole XTTS migration. To guarantee data consistency, stop the business on the source and then kill the remaining active database sessions; before setting the tablespaces read only, run several checkpoints and redo log switches so that no data is left in memory. Also disable scheduled JOBs in advance, so that job runs or batch processing cannot introduce inconsistencies.



Figure 14: cutover preparation.


Step 11: the final incremental backup

The last incremental backup is taken with every tablespace of the production source in read only mode, performing the final backup, conversion, and apply from the SCN recorded on the previous pass. After it has been applied, take a flashback restore point or a full backup of the new environment, as the fallback in case the metadata import fails.



As shown in Figure 15: the final incremental backup.


Step 12: metadata export and import

A.  Exclude the system tablespaces from the export

B.  The export can only run while the tablespaces are read only

C.  The import aborts on error

D.  The import may hit missing types

E.  Data Pump is recommended


Export the tablespace metadata on the source:


exp \'/ as sysdba\'


transport_tablespace=y


tablespaces='TBS_NAME'


STATISTICS=none


file=/dump1/enmo/exp/orcl_XTTS_0715.dmp


 


Import the metadata from the exported dmp file:


imp \'/ as sysdba\'


transport_tablespace=y


TABLESPACES='TBS_NAME'


file=/dump1/enmo/exp/orcl_XTTS0715.dmp


log=/dump1/enmo/exp/orcl_imp_XTTS0715.log


datafiles=('+FLASHDATA/orcl/datafile/VIO_DATA_u01',


'+FLASHDATA/orcl/datafile/base_image_fno65')


Step 13: importing the remaining objects

A.  Set the tablespaces read write before starting the import

B.  A db_link allows a direct, no-staging network import

C.  Specify only the schemas you need

D.  Grant privileges manually if the import hits insufficient privileges

E.  Validation can start as soon as the import finishes


For example:

1. Import the schema objects on the migration list

impdp "'/ as sysdba'"  metrics=yes network_link=ENMO_TEST  schemas=VIO EXCLUDE=table,index content=metadata_only  directory=enmo_exp  logfile=imp_full_metadata_`date +"%d%H%M"`.log

2. Import the temporary tables

3. Re-create the index-organized tables

4. Import the remaining users manually

Step 14: data validation and wrap-up

A.  Gather or import optimizer statistics

EXEC DBMS_STATS.gather_schema_stats('DRV_ZW', estimate_percent => 10, degree => 64);

B.  Recheck and recompile invalid objects

sqlplus / as sysdba <<EOF
DECLARE
   threads pls_integer := 150;
BEGIN
   utl_recomp.recomp_parallel(threads);
END;
/
EOF

C.  Compare object counts

select owner,object_type,count(*) from dba_objects
where owner='OWNER_NAME'
group by owner,object_type
order by owner,object_type;

D.  Check primary key constraints

select count(1),a.status from dba_constraints a where a.owner='OWNER_NAME'
and a.constraint_type='P' group by a.status;

E.  Validate the large tables

-- row count check via num_rows
select table_name,num_rows from all_tables where owner='OWNER_NAME' group by table_name,num_rows
having num_rows>500 order by table_name;

-- size check
select owner, segment_name, bytes / 1024 / 1024
  from dba_segments
 where segment_type = 'TABLE'
   and owner = 'OWNER_NAME';
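Once the count queries above have been run on both source and target, the comparison itself is mechanical. A sketch, assuming the GROUP BY results have been loaded into dicts keyed by (owner, object_type); the sample data below is made up for illustration:

```python
def diff_counts(source, target):
    """Return {key: (source_count, target_count)} for every mismatched key."""
    mismatches = {}
    for key in source.keys() | target.keys():   # union: catches missing keys too
        s, t = source.get(key, 0), target.get(key, 0)
        if s != t:
            mismatches[key] = (s, t)
    return mismatches

# hypothetical query results from the two databases
src = {("VIO", "TABLE"): 120, ("VIO", "INDEX"): 340, ("VIO", "LOB"): 95}
tgt = {("VIO", "TABLE"): 120, ("VIO", "INDEX"): 338, ("VIO", "LOB"): 95}

print(diff_counts(src, tgt))   # → {('VIO', 'INDEX'): (340, 338)}
```

A non-empty diff pinpoints exactly which object categories lost objects in the migration, so the fix (e.g. rebuilding the missing indexes) can be targeted.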


F.  Verify account privileges and synonyms

set lines 180
col object_name for a40
select object_name,object_type,status from dba_objects where owner in ('ACCOUNT_NAME') and status<>'VALID';

G.  Check the data file headers

select STATUS,ERROR,TABLESPACE_NAME from V$DATAFILE_HEADER;

H.  Verify tablespaces

Confirm each owner's DEFAULT_TABLESPACE and TEMPORARY_TABLESPACE and every user's quota on the relevant tablespaces, then switch the pre-created owner accounts back to their correct default tablespaces.

I.  Start the database

Start the listener and let the application begin connection tests:

srvctl start scan_listener

Drop the test objects:

drop user enmoXTTStest cascade;

drop public database link enmo_test;
 


The verification results were as follows:

Verification item                                        Result
Test data comparison                                     True
Tablespace count for the migration list                  True
Schema count                                             True
Object count                                             True
Table row counts                                         True
Status of packages, functions, procedures, indexes       True
Privileges                                               True
Synonyms                                                 True
Temporary table count                                    True


Step 15: rollback plan

Trigger: the cutover fails.

A.  Precondition: the switch is performed without compromising the continued availability of the existing production environment

B.  Rollback actions: simply set the source tablespaces back to read write, restore the source's JOB processes to 1000, and start the source listener

C.  Rollback time: the rollback completes within 5 minutes

D.  Impact: this cutover attempt fails, nothing more



6

XTTS Risk Assessment



A.  The first full backup consumes resources on the production source; watch it

B.  The full backup over the NFS mount consumes network bandwidth

C.  Filtering out the system tablespaces demands care

D.  The self-containment check must exclude the system tablespaces

E.  Exporting metadata with exp can hit BUGs, and some BUGs nobody can save you from

F.  During the final incremental, setting the source tablespaces read only may be slowed by active sessions

G.  Always take a full RMAN backup of the target before the cutover, or a failure cannot be rolled back

H.  Record the SCN before every incremental backup



7

XTTS Wrap-up



There are many familiar ways to migrate a database across platforms, each with pros and cons; the right choice depends on the actual requirements and on what best fits the business while keeping the work efficient. Personally I prefer physical migration: it removes much of the tedious data validation, and it tends to be faster, which matters once a database passes the ten-TB mark. The XTTS approach recommended in this chapter shines in large U2L migrations, but it also has plenty of little-known pitfalls. My advice: to use XTTS well, know its internals thoroughly and lean toward hand-written scripts. The official DBMS_FILE_TRANSFER route has so many BUGs that syncs frequently abort with baffling errors, so I do not recommend it. As for the RMAN backup route, rman_xttconvert_2.0.zip automates the copy, conversion, apply, and incremental phases through different driver options, and you must know its flow intimately to avoid confusion; when the number of tablespaces to transport is large, the manual approach is the safer bet.




 XTTS cross-platform tablespace migration test

Introduction
      I have spent the last couple of days studying xtts (the cross-platform tablespace migration method), partly because material on it is scarce online, and partly because I find it a very practical migration technique. Below is the test I ran, shared step by step.

1. Building the test environment

Cross-platform tablespace migration procedure (per official MOS note 1389592.1)

-- The procedure has four parts:
  1. Initial setup phase
  2. Prepare phase
  3. Incremental backup / roll forward phase
  4. Transport phase

Phase 1 - Initial setup
Step 1.1 - Install the database software on the target and create the target database. Version 11.2.0.4 or later is strongly recommended.
Step 1.2 - If needed, configure a separate incremental-convert home and instance.

a. If the software is 11.2.0.4, skip this step.
b. If the database software is 11.2.0.3 or lower, you must install a fresh 11.2.0.4 home as the incremental-convert home and start one 11.2.0.4 instance in nomount state (no database needs to be created).

-- To keep things simple, my whole test environment is 64-bit Linux with database version 11.2.0.4.0 (just focus on the steps), so only the target instance needs creating.

Step 1.3 - Select the tablespaces to transport; the test uses the TEST tablespace on the source.

-- For the test I built a source and a target database, named source and target:
  source: instance tx9ab; create tablespace test;
            create user tx9ab with dba privileges, default tablespace test;
            create test table xttstest and insert one row.
  target: create instance tx9ab and keep it open throughout the migration;
            create user tx9ab with dba privileges.
            创建用户tx9ab,dba权限。

SQL> create tablespace test datafile '/space/oradata/tx9ab/test01.dbf' size 10m autoextend on;

SQL> create user tx9ab identified by tx9ab default tablespace test;
User created.

SQL> grant connect,resource,select any table to tx9ab;
Grant succeeded.

SQL> conn tx9ab/tx9ab
Connected.
SQL> create table xttstest(a number);
Table created.

SQL> select * from xttstest;

            A
          ----------
          100

SQL> select username,default_tablespace from dba_users where username='TX9AB'; -- check the user's default tablespace

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
TX9AB                          TEST

-- List the platform information for each supported OS
SQL> col platform_name for a32
SQL> select * from v$transportable_platform;

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          7 Microsoft Windows IA (32-bit)    Little
         10 Linux IA (32-bit)                Little
          6 AIX-Based Systems (64-bit)       Big
          3 HP-UX (64-bit)                   Big
          5 HP Tru64 UNIX                    Little
          4 HP-UX IA (64-bit)                Big
         11 Linux IA (64-bit)                Little
         15 HP Open VMS                      Little
          8 Microsoft Windows IA (64-bit)    Little

PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- --------------
          9 IBM zSeries Based Linux          Big
         13 Linux x86 64-bit                 Little
         16 Apple Mac OS                     Big
         12 Microsoft Windows x86 64-bit     Little
         17 Solaris Operating System (x86)   Little
         18 IBM Power Based Linux            Big
         19 HP IA Open VMS                   Little
         20 Solaris Operating System (x86-64 Little
            )

         21 Apple Mac OS (x86-64)            Little

20 rows selected.
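The ENDIAN_FORMAT column above is the whole reason the convert step exists: HP-UX and AIX are Big-endian, Linux x86-64 is Little-endian, so every multi-byte field must have its bytes swapped when crossing the two families. A small standalone illustration (not Oracle's actual block layout, just the byte-order principle):

```python
import struct

value = 0x00004000                  # e.g. a 16384-byte block size field

big    = struct.pack(">I", value)   # Big endian, as HP-UX would store it
little = struct.pack("<I", value)   # Little endian, as Linux expects it

assert big == b"\x00\x00\x40\x00"
assert little == big[::-1]          # the conversion is a byte swap

# reading Big-endian bytes as if they were Little-endian yields garbage:
assert struct.unpack("<I", big)[0] != value
```

This is also why transporting between two platforms with the same ENDIAN_FORMAT (e.g. Solaris SPARC to AIX, both Big) needs no convert step at all.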

Step 1.4 - Use RMAN backup (omitted)
Step 1.5 - Create the staging paths for the conversion
- Create the data conversion paths on source and target; the relevant parameters in xtt.properties are backupformat and backupondest. If RMAN backups are used, also set dfcopydir and stageondest.

Step 1.6 - Install the xttconvert scripts on the source
- Download and unzip rman-xttconvert_2.0.zip.
For the test it was unzipped into /space/sys_software/oracle/scripts/xtts_scripts.

Step 1.7 - Configure xtt.properties on the source

[oracle@tx9ab xtts_scripts]$ more xtt.properties
tablespaces=TEST
platformid=13
dfcopydir=/space/sys_software/oracle/xtts_dir/dfcopydir
backupformat=/space/sys_software/oracle/xtts_dir/backup
stageondest=/space/oradata/tx9ab
storageondest=/space/oradata/tx9ab/test
backupondest=/space/sys_software/oracle/xtts_dir/backup

Step 1.8 - Copy the xttconvert scripts to the same location on the target
-- skipped here
Step 1.9 - Set TMPDIR
- Tell the system where the xttconvert scripts live:
[oracle@single01 ~]$ export TMPDIR=/space/sys_software/oracle/scripts/xtts_scripts

Phase 2 - Prepare Phase
Phase 2A - Prepare Phase for dbms_file_transfer Method   <-- not used in this test!!!!
Phase 2B - Prepare Phase for RMAN Backup Method          <-- of the two methods, the test uses RMAN backup

Step 2B.1 - Run xttdriver.pl on the source to prepare the migration

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Starting prepare phase
--------------------------------------------------------------------
Prepare source for Tablespaces:
                  'TEST'  /space/oradata/tx9ab
xttpreparesrc.sql for 'TEST' started at Thu Sep 17 13:54:21 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:22 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:30 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:31 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:31 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:31 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:32 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:32 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:32 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:32 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:33 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:33 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:33 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:33 2015
Prepare source for Tablespaces:
                  ''  /space/oradata/tx9ab
xttpreparesrc.sql for '' started at Thu Sep 17 13:54:34 2015
xttpreparesrc.sql for  ended at Thu Sep 17 13:54:34 2015

--------------------------------------------------------------------
Done with prepare phase
--------------------------------------------------------------------

On the source the prepare script does the following:
       1. Creates copies of the data files of the tablespaces to be migrated and moves them into the dfcopydir path defined in xtt.properties.
       2. Verifies the tablespaces are online, in read write mode, and contain no offline data files.
       3. Generates these files under TMPDIR:
                a. xttplan.txt
                b. rmanconvert.cmd

Step 2B.2 - Transfer the source data files to the target
Copy source:/space/sys_software/oracle/xtts_dir/dfcopydir/TEST_6.tf  --->  target: /space/oradata/tx9ab
Step 2B.3 - Convert the transferred data file copies on the target
Copy source:/space/sys_software/oracle/scripts/xtts_scripts/rmanconvert.cmd ---> target:/space/sys_software/oracle/scripts/xtts_scripts/rmanconvert.cmd
[oracle@tx9ab xtts_scripts]$ export TMPDIR=/space/sys_software/oracle/scripts/xtts_scripts
[oracle@tx9ab xtts_scripts]$ /space/sys_software/oracle/app/product/11.2.0/db_1/perl/bin/perl xttdriver.pl -c

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Performing convert
--------------------------------------------------------------------

--------------------------------------------------------------------
Converted datafiles listed in: /space/sys_software/oracle/scripts/xtts_scripts/xttnewdatafiles.txt
--------------------------------------------------------------------

- The converted data file copies land under the storageondest path defined in xtt.properties.
At this point a converted data file copy, TEST_6.xtf, appears under /space/oradata/tx9ab/test/ on the target.

Phase 3 - Roll Forward Phase

    Generate incremental data on the source, take an incremental backup, and transfer it to the target; on the target, convert the incremental backup's format and then apply the incremental data to the data file copy. This can be repeated any number of times, so the data file copies on the target gradually catch up with the source's production data by applying one increment after another.

Generate some incremental data:

SQL> update xttstest set a=200;
SQL> commit;
SQL> select * from xttstest;
            A
          ----------
          200

Step 3.1 - Take an incremental backup of the tablespace on the source

Important: before the LAST incremental backup of the source, set the tablespaces being migrated to read only, otherwise the later import of the tablespace metadata on the target is certain to fail!!!

SQL> show user
USER is "SYS"
SQL> alter tablespace test read only;
Tablespace altered.

[oracle@single01 xtts_scripts]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Backup incremental
--------------------------------------------------------------------
Prepare newscn for Tablespaces: 'TEST' 
Prepare newscn for Tablespaces: '' 
Prepare newscn for Tablespaces: '' 
Prepare newscn for Tablespaces: '' 
Prepare newscn for Tablespaces: '' 
Prepare newscn for Tablespaces: '' 
Prepare newscn for Tablespaces: '' 
Prepare newscn for Tablespaces: '' 
rman target /  cmdfile /space/sys_software/oracle/scripts/xtts_scripts/rmanincr.cmd

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Sep 17 15:47:40 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: TX9AB (DBID=390378578)

RMAN> set nocfau;
2> host 'echo ts::TEST';
3> backup incremental from scn 1120971 
4>   tag tts_incr_update tablespace 'TEST'  format
5>  '/space/sys_software/oracle/xtts_dir/backup/%U';
6> 
executing command: SET NOCFAU
using target database control file instead of recovery catalog

ts::TEST
host command complete

Starting backup at 17-SEP-2015 15:47:41
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=41 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=38 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=43 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=44 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/space/oradata/tx9ab/test01.dbf
channel ORA_DISK_1: starting piece 1 at 17-SEP-2015 15:47:42
channel ORA_DISK_1: finished piece 1 at 17-SEP-2015 15:47:43
piece handle=/space/sys_software/oracle/xtts_dir/backup/0fqhd10u_1_1 tag=TTS_INCR_UPDATE comment=NONE    <-- the incremental backup set
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 17-SEP-2015 15:47:43

Recovery Manager complete.


--------------------------------------------------------------------
Done backing up incrementals
--------------------------------------------------------------------

The operation above also produces the following files under TMPDIR:
       1. tsbkupmap.txt

       2. incrbackups.txt

[oracle@single01 xtts_scripts]$ ls -trl
total 172
-rwxrwxr-x 1 oracle oinstall    52 May 22 08:30 xttstartupnomount.sql
-rwxrwxr-x 1 oracle oinstall 11549 May 22 08:30 xttprep.tmpl
-rwxrwxr-x 1 oracle oinstall 91722 May 22 08:30 xttdriver.pl
-rwxrwxr-x 1 oracle oinstall    71 May 22 08:30 xttdbopen.sql
-rwxrwxr-x 1 oracle oinstall  1390 May 22 08:30 xttcnvrtbkupdest.sql
-rw-r--r-- 1 oracle oinstall   354 Sep 17 11:01 xtt.properties
-rw-r--r-- 1 oracle oinstall    18 Sep 17 13:54 xttplan.txt
-rw-r--r-- 1 oracle oinstall   181 Sep 17 13:54 rmanconvert.cmd
-rw-r--r-- 1 oracle oinstall 11657 Sep 17 13:54 xttpreparesrc.sql
-rw-r--r-- 1 oracle oinstall    20 Sep 17 15:47 xttplan.txt.new
-rw-r--r-- 1 oracle oinstall     0 Sep 17 15:47 xttprepare.cmd
-rw-r--r-- 1 oracle oinstall 11592 Sep 17 15:47 xttdetnewfromscnsrc.sql
-rw-r--r-- 1 oracle oinstall   169 Sep 17 15:47 rmanincr.cmd
-rw-r--r-- 1 oracle oinstall    25 Sep 17 15:47 tsbkupmap.txt                            <--new file
-rw-r--r-- 1 oracle oinstall    56 Sep 17 15:47 incrbackups.txt                           <--new file

Step 3.2 - Upload the incremental backup and the newly generated files to the target (omitted)
Step 3.3 - Convert and apply the transferred incremental backup on the target

[oracle@tx9ab xtts_scripts]$ perl xttdriver.pl -r
################################## the error hit on the first attempt ##################################
ERROR IN CONVERSION ORA-19624: operation failed, retry possible
ORA-19505:
failed to identify file "/space/oradata/tx9ab/0gqhs56u_1_1"
ORA-27037: unable to
obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional
information: 3
ORA-19600: input file is backup piece
(/space/oradata/tx9ab/0gqhs56u_1_1)
ORA-19601: output file is backup piece
(/space/sys_software/oracle/xtts_dir/backup/xib_0gqhs56u_1_1_6)
CONVERTED BACKUP
PIECE/space/sys_software/oracle/xtts_dir/backup/xib_0gqhs56u_1_1_6

PL/SQL procedure successfully completed.

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Error:
------
/space/sys_software/oracle/scripts/xtts_scripts/xxttconv_0gqhs56u_1_1_6.sql execution failed
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-- it turned out to be a path problem!!!!!!!!!
[oracle@tx9ab xtts_scripts]$ cp /space/sys_software/oracle/xtts_dir/backup/0gqhs56u_1_1 /space/oradata/tx9ab/0gqhs56u_1_1 
-- staged the backup piece into the expected path and tried again

[oracle@tx9ab xtts_scripts]$ perl xttdriver.pl -r
--------------------------------------------------------------------
Parsing properties
--------------------------------------------------------------------
Key: backupondest
Values: /space/sys_software/oracle/xtts_dir/backup
Key: platformid
Values: 13
Key: backupformat
Values: /space/sys_software/oracle/xtts_dir/backup
Key: storageondest
Values: /space/oradata/tx9ab/test
Key: dfcopydir
Values: /space/sys_software/oracle/xtts_dir/dfcopydir
Key: cnvinst_home
Values: /space/sys_software/oracle/app/product/11.2.0/db_1
Key: cnvinst_sid
Values: tx9ab
Key: stageondest
Values: /space/oradata/tx9ab
Key: tablespaces
Values: TEST

--------------------------------------------------------------------
Done parsing properties
--------------------------------------------------------------------

--------------------------------------------------------------------
Checking properties
--------------------------------------------------------------------
ARGUMENT tablespaces
ARGUMENT platformid
ARGUMENT backupformat
ARGUMENT stageondest
ARGUMENT backupondest

--------------------------------------------------------------------
Done checking properties
--------------------------------------------------------------------
ORACLE_SID  : tx9ab
ORACLE_HOME : /space/sys_software/oracle/app/product/11.2.0/db_1

--------------------------------------------------------------------
Start rollforward
--------------------------------------------------------------------
convert instance: /space/sys_software/oracle/app/product/11.2.0/db_1 

convert instance: tx9ab 

ORACLE instance started.

Total System Global Area  217157632 bytes
Fixed Size                  2251816 bytes
Variable Size             159384536 bytes
Database Buffers           50331648 bytes
Redo Buffers                5189632 bytes
rdfno 6

BEFORE ROLLPLAN

datafile number : 6  

datafile name   : /space/oradata/tx9ab/test/TEST_6.xtf

AFTER ROLLPLAN

CONVERTED BACKUP
PIECE/space/sys_software/oracle/xtts_dir/backup/xib_0gqhs56u_1_1_6

PL/SQL procedure successfully completed.
Entering RollForward
After applySetDataFile
Done: applyDataFileTo
Done: applyDataFileTo
Done: RestoreSetPiece
Done: RestoreBackupPiece

PL/SQL procedure successfully completed.

--------------------------------------------------------------------
End of rollforward phase
--------------------------------------------------------------------

Phase 4 - Transport Phase

- Export the Metadata of the tablespace being migrated on the source:
exp \'/ as sysdba\' tablespaces=test transport_tablespace=y file=/space/sys_software/oracle/xtts_dir/backup/test_xtts.dmp

Since I took and rolled forward exactly one incremental backup, importing the tablespace Metadata on the target completes the xtts at this point (even though my two environments happen to be identical).

-- Import the migrated tablespace's Metadata on the target:
[oracle@tx9ab xtts_scripts]$ imp \'/ as sysdba\' tablespaces=test transport_tablespace=y file=/space/sys_software/oracle/scripts/xtts_scripts/test_xtts.dmp datafiles=/space/oradata/tx9ab/test/TEST_6.xtf

Import: Release 11.2.0.4.0 - Production on Wed Sep 23 23:07:00 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path
About to import transportable tablespace(s) metadata...
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing TX9AB's objects into TX9AB
. . importing table                     "XTTSTEST"
. importing SYS's objects into SYS
Import terminated successfully without warnings.

-- Check data consistency between source and target
[oracle@tx9ab dbs]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Wed Sep 23 23:07:38 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from tx9ab.XTTSTEST;

         A
----------
       200



XTTS demystified: a power tool for Oracle database migration and upgrade

 2017-03-20 杨光 

Talk outline:

1. What is XTTS

2. Where it applies

3. The basic XTTS procedure

4. An XTTS case study


Today I want to talk about XTTS. I have seen it discussed online, but if you follow those write-ups literally you will still hit some potholes and never quite land the migration, so today I want to share some hands-on material.


一. What is XTTS


First, what is XTTS? XTTS grew out of TTS. Have you used TTS? TTS is one more way of moving data: you may have moved data with EXP, later with Data Pump export/import, or with backup and restore. TTS, transportable tablespaces, is yet another: you transport a tablespace so that its data moves from one database to another. XTTS updates TTS with two additions: cross-platform support, and support for incremental backups.

    

Traditional TTS had no incremental support; imagine copying a tablespace from one database to another, and you can see that data the business writes during the move would be lost.


二. Where it applies


1. Data Pump


There are roughly three tools for data migration. The first is Data Pump. To avoid losing data you must stop the application first; once nothing is updating, all business tables are consistent. It is the simplest option to operate and best suited to smaller volumes, say under 5 TB, where Data Pump is much more convenient.


2. GoldenGate


Next, when migrating large, critical systems we often use GoldenGate. Its outage is very short: where Data Pump required stopping the application up front (or accepting data loss during the move), with GoldenGate the data stays synchronized throughout the preparation phase, and only at the formal cutover do you stop the business and repoint the connect strings to the new database, so the downtime is minimal.


3. XTTS


XTTS works much like GoldenGate: there is a preparation phase with an initial data load, after which incremental backup and restore keep rolling the initialized copy forward through the accumulated changes, and at the final switchover the last small increment is applied. That keeps the cutover window short.


Minimal downtime with minimal data loss is what the customer asks for. Comparing the three: Data Pump, GoldenGate, and XTTS all support cross-version and cross-platform moves; GoldenGate has the shortest downtime, Data Pump the longest, and XTTS sits in between. We will dig deeper below; once you see the actual XTTS steps you will see exactly where the time goes.

三. The basic XTTS procedure


For many large enterprises the backdrop today is the de-IOE movement. The O is the hard part to remove, but the I and the E are things many vendors have begun replacing, and a crucial point there is crossing platforms: workloads move from the old midrange Unix boxes to x86, so the cross-platform problem must be addressed.

    
1. Basic TTS steps


A. Set the source tablespaces to READ ONLY mode.

B. Transfer the data files to the target system.

C. Convert the data files to the target system's endianness.

D. Export the metadata on the source and import it on the target.

E. Set the target tablespaces to READ WRITE.


Before XTTS, let's review TTS, which many of you will have used. TTS is simple: first set the source tablespaces READ ONLY, then transfer the data files to the target (a plain copy), deal with the endian conversion, export the metadata on the source and import it on the target, at which point the target can see the user data, and finally set the target tablespaces READ WRITE.


2. Basic XTTS steps


A. Transfer the source data files to the target system.

B. Convert the data files to the target system's endianness.

C. Create an incremental backup on the source and transfer it to the target.

D. Restore the incremental backup on the target.

E. Repeat steps C and D as needed.

F. Set the source tablespaces to READ ONLY mode.

G. Run steps C and D one last time.

H. Export the metadata on the source and import it on the target.

I. Set the target tablespaces to READ WRITE.


At first glance XTTS has many more steps than TTS, but little actually changes: step E just repeats C and D, as does G; the genuinely new work is taking an incremental backup on the source, transferring it, and restoring it on the target.

    
Now consider what is wrong with plain TTS. Setting a source tablespace READ ONLY stops normal business on it. For a 5x8 system the impact is small and acceptable; for a 7x24 system you cannot casually stop the source during a migration, so the business must stay available as much as possible throughout, and XTTS keeps the source fully usable right up to the cutover.


3. XTTS parameter settings



The XTTS parameters look numerous at first, but they fall into two groups, because XTTS supports two data-transfer methods. My personal recommendation is the DBMS_FILE_TRANSFER package method.


The first parameter is the platform id, which can be queried from the source database. The second is the source datafile directory, the third a directory on the target, and the fourth a db_link created on the target. The remaining ones are the source-side backup staging path, the target-side backup staging path, and the target-side incremental backup path. What distinguishes the last two target-side paths? The target storage path is where the datafiles live after migration: once the data has been moved over, the new datafiles sit in that directory. backupondest, by contrast, is the incremental backup directory; the incremental backups are produced on the source after the initial load, covering the changes made in the meantime, and those backup sets land in this directory.
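A minimal xtt.properties along these lines might look as follows (all paths, object names, and the platform id are illustrative placeholders):

```
tablespaces=TS1,TS2
platformid=2                  # source platform id from v$transportable_platform
srcdir=SOURCEDIR              # source directory object (dbms_file_transfer method)
dstdir=DESTDIR                # target directory object
srclink=TTSLINK               # db link on the target pointing back at the source
dfcopydir=/stage_src          # source-side staging (RMAN backup method)
backupformat=/stage_src       # source-side incremental backup path
stageondest=/stage_dest       # target-side staging for transferred files
storageondest=+DATA           # final datafile location on the target
backupondest=/stage_dest/incr # converted incremental backups on the target
```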

    

XTTS case study: preparation


Here is a simple example of what operating XTTS looks like. The first step runs on the source; execute the following command:

  • Run the perl script on the source:

$ORACLE_HOME/perl/bin/perl xttdriver.pl -S

This generates two files: xttnewdatafiles.txt and getfile.sql.


XTTS case study: datafile copy


First fill in the configuration file with the required information, including which tablespaces to transport; with that done, run the -S step on the source, which generates the two files above. Then transfer those two files to the target and run the -G step there, which pulls the source datafiles over to the target.


  • Command to run on the target:


XTTS case study: the first incremental backup


Because the datafile pull is usually time-consuming, the data written during that window must be caught up. To do so, take an incremental backup on the source; it covers the data added between the initial datafile pull and the moment this backup is taken. Run the command below. Once the backup finishes, three files appear in the directory; for the incremental restore, those files and the incremental backup set must all be copied to the target.


$ORACLE_HOME/perl/bin/perl xttdriver.pl -i


  • This command takes an incremental backup of the tablespaces listed in the xtt.properties parameter file and generates three files: tsbkupmap.txt, incrbackups.txt, and xttplan.txt.


  • The backup starts from the SCN recorded in xttplan.txt when xttdriver.pl -S was run. After it completes, transfer these three files together with the incremental backup set to the target.


XTTS case study: the first incremental restore


After copying everything to the target, run the -r step there, which applies the incremental backup on the target. Then go back to the source and run the xttdriver.pl -s step to confirm the increment has been applied, so that the next incremental backup continues from the point where the previous one ended.



  • If the host runs several instances, check the environment variables before this step: confirm the current ORACLE_SID is the one you intend, otherwise the restore may be run against another instance. (It is not a real restore, since that instance has nothing to do with the backup set, but the restore procedure plays out against the other instance anyway, including shutting down and restarting the database, and the incremental-restore logs appear on that other database.) If this happens, don't panic: fix the environment variables and run perl xttdriver.pl -r again. The instance touched by mistake is unaffected.


XTTS case study: advancing the SCN


A word on the -s step. Suppose the SCN at datafile initialization was 100 and the incremental backup was taken at SCN 200, covering the changes between 100 and 200. If you run -s before the next incremental backup, its starting SCN becomes 200. If you do not run -s, perhaps because the restore on the target hit a problem and never completed, the next incremental backup still starts from 100.


$ORACLE_HOME/perl/bin/perl xttdriver.pl -s


  • This command updates FROM_SCN, which determines the starting point of the next incremental backup.

It is best to run this command on the [source] right after every recover on the [target], so it is never forgotten.


XTTS case study: the final incremental backup and restore


The crucial part of the last backup-and-restore round is setting the source tablespaces to READ ONLY first, then running one more backup and restore; that completes the datafile transfer. What remains is the metadata import, and this last part is almost identical to TTS. The export and import commands are simple.


  • [Source] set the tablespaces to READ ONLY

alter tablespace XXXX read only;

  • [Source] take the final incremental backup

perl xttdriver.pl -i

  • [Target] apply the final incremental restore

perl xttdriver.pl -r

  • After the restore completes, the script restarts the target database automatically with no manual intervention; if anything goes wrong around the mount state, run the remaining commands by hand as the situation requires.


That was the idealized lab walkthrough, which should give you a feel for the whole XTTS flow, and it shows that XTTS is actually a simple process: write the configuration file with the system information (the platform id, which tablespaces to transport, the source and target paths), then run the script options one after another. All straightforward operations. Now let's consider some real-world cases.

    

IV. XTTS case study


1. A carrier environment


The constraints: the target side is itself a production database; the source holds 20 TB+ of data and generates 1 TB+ of archive logs per day; local free space is under 2 TB; the network moves 35 MB/s per process; and the allowed business outage is under three hours. With downtime that short and data that large, GoldenGate is the first thing that comes to mind, since it replicates data in real time and keeps the final outage brief.


When you enter this industry you hear that a DBA's life is a leisurely one. That is everyone's wish on day one, rarely the reality, and certainly not with requirements like these. Using GoldenGate here, you would have to split the database's users into groups, export each group, restore them one by one on the target, bring up replication for a few users at a time, and repeat. Add the fact that each pass can move only about 1 TB over a slow network, and both the elapsed time and the labor are considerable: realistically at least two weeks just to stand GoldenGate up, plus a stabilization period before the business would authorize the formal cutover. Many of you know this pain well. XTTS, by contrast, solves this rather painlessly.


First, although the volume is 20 TB, with the DBMS_FILE_TRANSFER package we can push the source datafiles to the target with no human babysitting: let it copy over a weekend. Come Monday, apply an increment covering the Friday-to-Monday changes, and as long as the outstanding delta is small on migration day, the cutover fits the window.


But XTTS is not perfect. Its restore step, as we saw, restarts the database, and in this case the target is also a production database. How do you get around that? Simple.



We only need to create a pfile, so that recovering the datafiles never affects the production environment. The other question is how to compress the final incremental backup and restore into the controllable window (three hours).
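One way to realize the pfile trick described above is to run the recovery under a throwaway convert instance rather than the real production target, so the automatic restarts performed by xttdriver.pl never touch production. A sketch only; the SID, pfile contents, and compatible setting are illustrative:

```
# Start a scratch NOMOUNT instance used solely for the XTTS recover step
export ORACLE_SID=xttconv
cat > $ORACLE_HOME/dbs/initxttconv.ora <<EOF
db_name=xttconv
compatible=11.2.0.4.0
EOF
sqlplus / as sysdba <<EOF
startup nomount pfile=$ORACLE_HOME/dbs/initxttconv.ora
EOF
```

Only at the metadata-import step do you switch back to the real target database.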


2. Speeding XTTS up


Look at the overall XTTS timeline: the preparation phase, the initialization, and the N rounds of incremental backup and restore all happen before the migration, so only the actual business outage counts. That outage consists of setting the tablespaces read only, the final incremental backup and restore, the metadata import, and data validation. Setting tablespaces read only is fast and the validation time is fixed; the critical items are the incremental backup and restore time, and the metadata import time.



Here is a real case. With 1 TB of archive generated per day, backing up 24 hours of changes took 6 hours; given the network speed, transferring that increment took another 6 hours, and applying it took 4 hours. 6+6+4 gives 16 hours for a single backup-and-restore round, a very long time. In other words, there is no way to compress this to under 3 hours as-is.
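The raw arithmetic behind this estimate can be checked directly. Shipping a full 1 TB increment at 35 MB/s actually comes to about 8 hours, slightly above the 6 hours quoted, presumably because the real backup set was somewhat under a full terabyte; the backup and restore figures are taken from the case as given.

```shell
# Back-of-the-envelope timing for one incremental round (numbers from the case above)
BACKUP_H=6                            # hours to take the incremental backup
RESTORE_H=4                           # hours to apply it on the target
RATE_MB_S=35                          # single-process network throughput, MB/s
TB_MB=$((1024 * 1024))                # 1 TB expressed in MB
TRANSFER_S=$((TB_MB / RATE_MB_S))     # seconds to ship the backup set
TRANSFER_H=$((TRANSFER_S / 3600))     # truncated to whole hours
TOTAL_H=$((BACKUP_H + TRANSFER_H + RESTORE_H))
echo "transfer: ${TRANSFER_H}h, total: ${TOTAL_H}h"
```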

    

We can exploit some Oracle features here, for example BCT (block change tracking). With BCT enabled, an incremental backup scans only the blocks that have changed, which speeds the backup up considerably. Enabling it takes a single command; note that the tracking file's directory must be on shared storage.
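The enabling command itself did not survive into this transcript; the standard statement is shown below (the tracking-file path is a placeholder, and in a RAC environment it must sit on shared storage, as noted above):

```sql
-- Enable block change tracking so incremental backups scan only changed blocks
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/shared/bct/bct_trace.f';

-- Verify it is enabled
SELECT status, filename FROM v$block_change_tracking;
```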


The next problem is the transfer itself. The incremental backups sit on the source, and shipping them also takes time. As we saw, the network in this environment moves 35 MB/s, and 24 hours of changes produce roughly 1 TB of backup; dividing 1 TB by 35 MB/s gives a very long transfer. How do we optimize it? There is a trick: enable parallelism in RMAN. A tablespace is then split into up to that many sections, and with the sections in hand, separate processes can ship the pieces concurrently.



Why does this matter? Because in production we found that some tablespaces are far larger than the rest; capacity is very uneven, and a single tablespace can exceed the others many times over. Even when migrating a dozen or so tablespaces at once, that one tablespace dominates the transfer time. With parallelism enabled, it is split into 8 sections, and transferring the sections in groups speeds the whole thing up.
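In RMAN terms, the tuning described above looks roughly like the following (a hedged sketch: the channel count, section size, SCN, and paths are illustrative values, and with the xtt scripts the degree is normally set via the parallel parameter in xtt.properties):

```
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 8;
RMAN> BACKUP INCREMENTAL FROM SCN 1234567
        SECTION SIZE 64G
        TABLESPACE ts1 FORMAT '/stage_src/ts1_%U';
```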



Then there is the metadata import time. Normally about 73% of it goes into importing optimizer statistics. The trick is to exclude statistics from the first metadata import; with them excluded, that pass finishes quickly, and a second import is then run with parallelism. That second pass also brings over the objects the first pass skipped: procedures, views, packages, triggers and so on. The transported tablespaces contain only tables and indexes, while those other objects live in the SYSTEM tablespace, which XTTS does not migrate, so if this pass is skipped that part of the data is lost.
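A hedged sketch of the two-pass import just described, as Data Pump parameter files (names, paths, dump files, and the object list are illustrative placeholders):

```
# Pass 1 - imp_xtts.par: plug in the tablespaces, skipping optimizer statistics
directory=dump_dir
dumpfile=xtts_meta.dmp
exclude=table_statistics,index_statistics

# Pass 2 - imp_rest.par: PL/SQL objects, views, triggers etc. that live in
# SYSTEM and are not carried by the transported tablespaces
directory=dump_dir
dumpfile=rest_meta.dmp
include=procedure,function,package,view,trigger,sequence
parallel=8
```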


3. Pre-migration preparation


Finally, let's look at the groundwork XTTS needs. First, an object inventory: confirm with the business which data must be transported and which tablespaces are in scope. Second, check the source character set. Third, check tables for empty segments. Fourth, check for invalid objects. Fifth, check for XMLType objects based on XMLSchema. Sixth, create the db_link on the target used for verification. Seventh, check whether the source and target databases have tablespaces with duplicate names. Last, check for application users built on the SYSTEM, SYSAUX, or USERS tablespaces.


Then run the tablespace self-containment check to confirm every tablespace in scope is self-contained; compare roles and profiles between the old and new environments; compare and create the users in the new environment; generate the scripts that restore each user's default and temporary tablespaces; create the non-default temp tablespaces; and finally upload the software packages.

    

V. Summary


To sum up briefly: XTTS supports cross-platform migration and is very convenient to operate. With XTTS a DBA can get closer to that idealized easy life and complete a migration with little effort. And its downtime is short.


When planning the rounds, strike a balance: the more incremental rounds you run, the less data each increment carries and the shorter the final outage, but the more rounds, the easier it is to make a mistake. Thank you!

   

Q&A


Q1: During the incremental-backup phase, if a recovery is needed, can it be done by applying archive logs instead?

A1: Not within XTTS.

    

Q2: You speed the transfer up by enabling parallelism. With 20 TB, could that parallelism affect the running production system?

A2: Correct. Parallelism can be enabled for the initial datafile transfer and, separately, for the incremental backups; the two are configured independently. The impact is mainly on the network side, and it depends on the actual bandwidth.

    

Q3: With parallelism the transfer is faster, but since production is not stopped, production may be affected and show wait events.

A3: Before using parallelism, test how much the network and disks can sustain, then run at the maximum minus N to leave headroom for production.

    

Q4: An EXP export may hit ORA-01555.

A4: True, but the metadata export window is very short, and by then the tablespaces are already set to read only and are not being modified, so ORA-01555 does not occur.

    

Q5: I didn't quite follow the part about the restart on the target during the transfer.

A5: When xttdriver.pl runs the restore step, it automatically restarts the target database into the NOMOUNT state to recover the datafiles. During that phase you can create a separate instance to do the recovery, and only switch to the real target database when you reach the metadata-import step.



We've all been misled: every cross-platform migration can be done with XTTS


 2017-06-13 Li Zhenxu 

Since the XTTS incremental U2L migration tests in early 2015, many people in China have adopted this approach for cross-platform database migration, almost always through the perl scripts that Oracle packages for it. The Oracle MOS note 11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1) states explicitly that the destination environment must be Linux.


Here is the relevant passage from that document:

The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met. The destination system must be Linux, either 64-bit Oracle Linux or RedHat Linux, as long as it is a certified version. The typical use case is expected to be migrating data from a big endian platform, such as IBM AIX, HP-UX, or Solaris SPARC, to 64-bit Oracle Linux, such as Oracle Exadata Database Machine running Oracle Linux.


This is easy to misread: Oracle is not saying other destination platforms are unsupported, only that its packaged perl scripts do not support them. Performing the XTTS operations by hand works perfectly well, and my own testing confirms it. What follows is a test of the XTTS incremental migration method from HP-UX Itanium to Solaris SPARC, for reference.


1. First create the test tablespace and test table on the source.

-- Create the test tablespace and table

create tablespace xtts datafile '+data' size 100m;

create table test0504 as select * from dba_objects where 1=2;

alter table test0504 move tablespace xtts;


2. Back up the xtts tablespace datafile and transfer it to the target (Solaris).

3. Convert the file format on the target:

convert from platform 'HP-UX IA (64-bit)' datafile '/tmp/xtts.dbf' format '+DATA/test/datafile/xtts_new.dbf';

 

4. Take an SCN-based incremental backup on the source (block change tracking was not enabled here, since this is only a test tablespace).


5. Create the backup set, transfer it to the target, and convert its format by hand (Solaris).

Save the script as xtts_conv1.sql and run it; the script contents are as follows:


The execution output:

SQL> start xtts_conv1.sql;

PL/SQL procedure successfully completed.


6. Apply the first increment (Solaris).

Note: to verify that the incremental data actually reaches the target, before taking the incremental backup


I first ran:

SQL> insert into test0504 select * from dba_objects where rownum < 101;

SQL> commit;


Save the following script as apply_incr1.sql and run it:


The execution output:

7. Set the source tablespace to read-only mode:

SQL> alter tablespace xtts read only;


8. Take the final incremental backup.


9. Transfer the backup set to the target and convert it. Steps omitted (same as above).


10. Apply the final incremental backup. Steps omitted (same as above).


11. Export the metadata on the source.

Save the following as exp_xtts.par:


Run the following command to export the metadata of the xtts tablespace:


12. Import the metadata on the target.

Save the following as imp_xtts.par:


Run the following command to import the metadata.


13. Check the data.


We can see that the xtts tablespace has been migrated over, and the incremental data has been synchronized as well.

So the point I want to make is this: any cross-platform migration can be performed with the XTTS incremental backup capability, whatever the destination platform. And when both sides share the same endianness, you can simply use the convert database capability instead.


Addendum:

When applying an increment, you may hit the following error:


If you hit it, just shut the instance down, start it to the NOMOUNT state, and run the script there.





11G - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (文档 ID 1389592.1)


Type: REFERENCE
Status: PUBLISHED
Last Major Update: 2016-06-03
Last Update: 2016-06-03
Language: English


In this Document

Purpose
  Traditional Cross Platform Transportable Tablespaces
  Reduce Downtime using Cross Platform Incremental Backup
Scope
Details
  Overview
  Cross Platform Incremental Backup Supporting Scripts
  Requirements and Recommendations
  Prerequisites
  Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance
  Troubleshooting
  Known Issues
  Transport Tablespaces with Reduced Downtime using Cross Platform Incremental Backup
  Conventions Used in This Document
  Phase 1 - Initial Setup
  Step 1.1 - Install the Destination Database Software and Create the Destination Database
  Step 1.2 - If necessary, Configure the Incremental Convert Home and Instance
  Step 1.3 - Identify Tablespaces to be Transported
  Step 1.4 - If Using dbms_file_transfer Prepare Phase Method, then Configure Directory Objects and Database Links
  Step 1.5 - Create Staging Areas
  Step 1.6 - Install xttconvert Scripts on the Source System
  Step 1.7 - Configure xtt.properties on the Source System
  Step 1.8 - Copy xttconvert Scripts and xtt.properties to the Destination System
  Step 1.9 - Set TMPDIR
  Phase 2 - Prepare Phase
  Phase 2A - Prepare Phase for dbms_file_transfer Method
  Step 2A.1 - Run the Prepare Step on the Source System
  Step 2A.2 - Transfer the Datafiles to the Destination System
  Phase 2B - Prepare Phase for RMAN Backup Method
  Step 2B.1 - Run the Prepare Step on the Source System
  Step 2B.2 - Transfer Datafile Copies to the Destination System
  Step 2B.3 - Convert the Datafile Copies on the Destination System
  Phase 3 - Roll Forward Phase
  Step 3.1 - Create an Incremental Backup of the Tablespaces being Transported on the Source System
  Step 3.2 - Transfer Incremental Backup to the Destination System
  Step 3.3 - Convert the Incremental Backup and Apply to the Datafile Copies on the Destination System
  Step 3.4 - Determine the FROM_SCN for the Next Incremental Backup
  Step 3.5 - Repeat the Roll Forward Phase (Phase 3) or Move to the Transport Phase (Phase 4)
  Phase 4 - Transport Phase
  Step 4.1 - Make Source Tablespaces READ ONLY in the Source Database
  Step 4.2 - Create the Final Incremental Backup, Transfer, Convert, and Apply It to the Destination Datafiles
  Step 4.3 - Import Object Metadata into Destination Database
  Step 4.4 - Make the Tablespace(s) READ WRITE in the Destination Database
  Step 4.5 - Validate the Transported Data
  Phase 5 - Cleanup
  Appendix
  Description of Perl Script xttdriver.pl Options
  Description of Parameters in Configuration File xtt.properties
  Change History
References


APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 12.1.0.2 [Release 10.2 to 12.1]
Linux x86-64

PURPOSE


When using Cross Platform Transportable Tablespaces (XTTS) to migrate data between systems that have different endian formats, the amount of downtime required can be substantial because it is directly proportional to the size of the data set being moved.  However, combining XTTS with Cross Platform Incremental Backup can significantly reduce the amount of downtime required to move data between platforms.

Traditional Cross Platform Transportable Tablespaces

The high-level steps in a typical XTTS scenario are the following:

  1. Make tablespaces in source database READ ONLY
  2. Transfer datafiles to destination system
  3. Convert datafiles to destination system endian format
  4. Export metadata of objects in the tablespaces from source database using Data Pump
  5. Import metadata of objects in the tablespaces into destination database using Data Pump
  6. Make tablespaces in destination database READ WRITE

Because the data transported must be made read only at the very start of the procedure, the application that owns the data is effectively unavailable to users for the entire duration of the procedure.  Due to the serial nature of the steps, the downtime required for this procedure is proportional to the amount of data.  If data size is large, datafile transfer and convert times can be long, thus downtime can be long.

Reduce Downtime using Cross Platform Incremental Backup

To reduce the amount of downtime required for XTTS, Oracle has enhanced RMAN's ability to roll forward datafile copies using incremental backups, to work in a cross-platform scenario.  By using a series of incremental backups, each smaller than the last, the data at the destination system can be brought almost current with the source system, before any downtime is required.  The downtime required for datafile transfer and convert when combining XTTS with Cross Platform Incremental Backup is now proportional to the rate of data block changes in the source system.

The Cross Platform Incremental Backup feature does not affect the amount of time it takes to perform other actions for XTTS, such as metadata export and import.  Hence, databases that have very large amounts of metadata (DDL) will see limited benefit from Cross Platform Incremental Backup since migration time is typically dominated by metadata operations, not datafile transfer and conversion.


Only those database objects that are physically located in the tablespaces that are being transported will be copied to the destination system. If you need for other objects to be transported, that are located in different tablespaces (such as, for example, pl/sql objects, sequences, etc., that are located in the SYSTEM tablespace), you can use data pump to copy those objects to the destination system.


The high-level steps using the cross platform incremental backup capability are the following:

  1. Prepare phase (source data remains online)
    1. Transfer datafiles to destination system
    2. Convert datafiles, if necessary, to destination system endian format
  2. Roll Forward phase (source data remains online - Repeat this phase as many times as necessary to catch destination datafile copies up to source database)
    1. Create incremental backup on source system
    2. Transfer incremental backup to destination system
    3. Convert incremental backup to destination system endian format and apply the backup to the destination datafile copies
  3. Transport phase (source data is READ ONLY)
    1. Make tablespaces in source database READ ONLY
    2. Repeat the Roll Forward phase one final time
      • This step makes destination datafile copies consistent with source database.
      • Time for this step is significantly shorter than traditional XTTS method when dealing with large data because the incremental backup size is smaller.
    3. Export metadata of objects in the tablespaces from source database using Data Pump
    4. Import metadata of objects in the tablespaces into destination database using Data Pump
    5. Make tablespaces in destination database READ WRITE

The purpose of this document is to provide an example of how to use this enhanced RMAN cross platform incremental backup capability to reduce downtime when transporting tablespaces across platforms.

SCOPE

The source system may be any platform provided the prerequisites referenced and listed below for both platform and database are met. The destination system must be Linux, either 64-bit Oracle Linux or RedHat Linux, as long as it is a certified version. The typical use case is expected to be migrating data from a big endian platform, such as IBM AIX, HP-UX, or Solaris SPARC, to 64-bit Oracle Linux, such as Oracle Exadata Database Machine running Oracle Linux.

If you are migrating from a little endian platform to Oracle Linux, then the migration method that should receive first consideration is Data Guard.  See Note 413484.1 for details about heterogeneous platform support for Data Guard between your current little endian platform and Oracle Linux.

This method can also be used with 12c databases, however, for an alternative method for 12c see:

Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

NOTE:  Neither method supports 12c multitenant databases.  Enhancement bug 22570430 addresses this limitation.  

  

DETAILS

Overview

This document provides a procedural example of transporting two tablespaces called TS1 and TS2 from an Oracle Solaris SPARC system to an Oracle Exadata Database Machine running Oracle Linux, incorporating Oracle's Cross Platform Incremental Backup capability to reduce downtime.

After performing the Initial Setup phase, moving the data is performed in the following three phases:

Prepare phase
During the Prepare phase, datafile copies of the tablespaces to be transported are transferred to the destination system and converted.  The application being migrated is fully accessible during the Prepare phase.  The Prepare phase can be performed using RMAN backups or dbms_file_transfer.  Refer to the Selecting the Prepare Phase Method section for details about choosing the Prepare phase method.

Roll Forward phase
During the Roll Forward phase, the datafile copies that were converted during the Prepare phase are rolled forward using incremental backups taken from the source database.  By performing this phase multiple times, each successive incremental backup becomes smaller and faster to apply, allowing the data at the destination system to be brought almost current with the source system.  The application being migrated is fully accessible during the Roll Forward phase.

Transport phase
During the Transport phase, the tablespaces being transported are put into READ ONLY mode, and a final incremental backup is taken from the source database and applied to the datafile copies on the destination system, making the destination datafile copies consistent with source database.  Once the datafiles are consistent, the tablespaces are TTS-exported from the source database and TTS-imported into the destination database.  Finally, the tablespaces are made READ WRITE for full access on the destination database. The application being migrated cannot receive any updates during the Transport phase.

Cross Platform Incremental Backup Supporting Scripts

The Cross Platform Incremental Backup core functionality is delivered in Oracle Database 11.2.0.4 and later.  See the Requirements and Recommendations section for details.  In addition, a set of supporting scripts in the file rman-xttconvert_2.0.zip are attached to this document that are used to manage the procedure required to perform XTTS with Cross Platform Incremental Backup.  The two primary supporting scripts files are the following:

  • Perl script xttdriver.pl - the script that is run to perform the main steps of the XTTS with Cross Platform Incremental Backup procedure.
  • Parameter file xtt.properties - the file that contains your site-specific configuration.


Requirements and Recommendations

This section contains the following subsections:

  • Prerequisites
  • Selecting the Prepare Phase Method
  • Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance

Prerequisites

The following prerequisites must be met before starting this procedure:

  • The limitations and considerations for transportable tablespaces must still be followed.  They are defined in the following manuals:
  • In addition to the limitations and considerations for transportable tablespaces, the following conditions must be met:
    • The source system cannot be Windows.
    • The source database must have its COMPATIBLE parameter set to 10.2.0 or higher.
    • The source database's COMPATIBLE parameter must not be greater than the destination database's COMPATIBLE parameter.
    • The source database must be in ARCHIVELOG mode.
    • The destination system must be Linux, either 64-bit Oracle Linux or a certified version of RedHat Linux.
    • RMAN's default device type should be configured to DISK
    • RMAN on the source system must not have DEVICE TYPE DISK configured with COMPRESSED.
    • The set of tablespaces being moved must all be online, and contain no offline data files.  Tablespaces must be READ WRITE.  Tablespaces that are READ ONLY may be moved with the normal XTTS method.  There is no need to incorporate Cross Platform Incremental Backups to move tablespaces that are always READ ONLY.
  • All steps in this procedure are run as the oracle user that is a member of the OSDBA group. OS authentication is used to connect to both the source and destination databases.
  • If the Prepare Phase method selected is dbms_file_transfer, then the destination database must be 11.2.0.4.  See the Selecting the Prepare Phase Method section for details.
  • If the Prepare Phase method selected is RMAN backup, then staging areas are required on both the source and destination systems.  See the Selecting the Prepare Phase Method section for details.
  • If the destination database version is 11.2.0.3 or lower, then a separate database home containing 11.2.0.4 running an 11.2.0.4 instance on the destination system is required to perform the incremental backup conversion.  See the Destination Database 11.2.0.3 and Earlier Requires a Separate Incremental Convert Home and Instance section for details. If using ASM for 11.2.0.4 Convert Home, then ASM needs to be on 11.2.0.4, else error ORA-15295 (e.g. ORA-15295: ASM instance software version 11.2.0.3.0 less than client version 11.2.0.4.0) is raised.
Whole Database Migration

If Cross Platform Incremental Backups will be used to reduce downtime for a whole database migration, then the steps in this document can be combined with the XTTS guidance provided in the MAA paper Platform Migration Using Transportable Tablespaces: Oracle Database 11g.

This method can also be used with 12c databases, however, for an alternative method for 12c see:
Note 2005729.1 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup.

Selecting the Prepare Phase Method

During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script.  There are two possible methods:

  1. Using dbms_file_transfer (DFT) transfer (using xttdriver.pl -S and -G options)
  2. Using Recovery Manager (RMAN) RMAN backup (using xttdriver.pl -p and -c options)

The dbms_file_transfer method uses the dbms_file_transfer.get_file() subprogram to transfer the datafiles from the source system to the target system over a database link.  The dbms_file_transfer method has the following advantages over the RMAN method: 1) it does not require staging area space on either the source or destination system; 2) datafile conversion occurs automatically during transfer - there is not a separate conversion step.  The dbms_file_transfer method requires the following:

  • A destination database running 11.2.0.4.  Note that an incremental convert home or instance do not participate in dbms_file_transfer file transfers.
  • A database directory object in the source database from where the datafiles are copied.
  • A database directory object in the destination database to where the datafiles are placed.
  • A database link in the destination database referencing the source database.

The RMAN backup method runs RMAN on the source system to create backups on the source system of the datafiles to be transported.  The backups files must then be manually transferred over the network to the destination system.  On the destination system the datafiles are converted by RMAN, if necessary.  The output of the RMAN conversion places the datafiles in their final location where they will be used by the destination database.  In the original version of xttdriver.pl, this was the only method supported.  The RMAN backup method requires the following:

  • Staging areas are required on both the source and destination systems for the datafile copies created by RMAN.  The staging areas are referenced in the xtt.properties file using the parameters dfcopydir and stageondest.  The final destination where converted datafiles are placed is referenced in the xtt.properties file using the parameter storageondest.  Refer to the Description of Parameters in Configuration File xtt.properties section for details and sizing guidelines.

Details of using each of these methods are provided in the instructions below.  The recommended method is the dbms_file_transfer method.


Destination Database 11.2.0.3 or Earlier Requires a Separate Incremental Convert Home and Instance

The Cross Platform Incremental Backup core functionality (i.e. incremental backup conversion) is delivered in Oracle Database 11.2.0.4 and later.  If the destination database version is 11.2.0.4 or later, then the destination database can perform this function.  However, if the destination database version is 11.2.0.3 or earlier, then, for the purposes of performing incremental backup conversion, a separate 11.2.0.4 software home, called the incremental convert home, must be installed, and an instance, called the incremental convert instance, must be started in NOMOUNT state using that home.  The incremental convert home and incremental convert instance are temporary and are used only during the migration.

Note that because the dbms_file_transfer Prepare Phase method requires destination database 11.2.0.4, which can be used to perform the incremental backup conversions function (as stated above), an incremental convert home and incremental convert instance are usually only applicable when the Prepare Phase method is RMAN backup.

For details about setting up a temporary incremental convert instance, see instructions in Phase 1.


Troubleshooting

To enable debug mode, either run xttdriver.pl with the -d flag, or set environment variable XTTDEBUG=1 before running xttdriver.pl.  Debug mode enables additional screen output and causes all RMAN executions to be performed with the debug command line option.

Known Issues

  1. If the source database contains nested IOTs with key compression, then the fix for Bug 14835322 must be installed in the destination database home (where the tablespace plug operation occurs).
  2. If you wish to utilize block change tracking on the source database when incremental backups are created, then the fix for Bug 16850197 must be installed in the source database home.
  3. If the roll forward phase (xttdriver.pl -r) fails with the following errors, then verify RMAN DEVICE TYPE DISK is not configured COMPRESSED.

    Entering RollForward
    After applySetDataFile
    Done: applyDataFileTo
    Done: RestoreSetPiece
    DECLARE
    *
    ERROR at line 1:
    ORA-19624: operation failed, retry possible
    ORA-19870: error while restoring backup piece
    /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup
    ORA-19608: /dbfs_direct/FS1/xtts/incrementals/xtts_incr_backup is not a backup
    piece
    ORA-19837: invalid blocksize 0 in backup piece header
    ORA-06512: at "SYS.X$DBMS_BACKUP_RESTORE", line 2338
    ORA-06512: at line 40




Transport Tablespaces with Reduced Downtime using Cross Platform Incremental Backup

The XTTS with Cross Platform Incremental Backups procedure is divided into the following four phases:

  • Phase 1 - Initial Setup phase
  • Phase 2 - Prepare phase
  • Phase 3 - Roll Forward phase
  • Phase 4 - Transport phase

Conventions Used in This Document

  • All command examples use bash shell syntax.
  • Commands prefaced by the shell prompt string [oracle@source]$ indicate commands run as the oracle user on the source system.
  • Commands prefaced by the shell prompt string [oracle@dest]$ indicate commands run as the oracle user on the destination system.

Phase 1 - Initial Setup

Perform the following steps to configure the environment to use Cross Platform Incremental Backups:

Step 1.1 - Install the Destination Database Software and Create the Destination Database

Install the desired Oracle Database software on the destination system that will run the destination database.  It is highly recommended to use Oracle Database 11.2.0.4 or later.  Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

Identify (or create) a database on the destination system to transport the tablespace(s) into and create the schema users required for the tablespace transport.

Per generic TTS requirement, ensure that the schema users required for the tablespace transport exist in the destination database.
Step 1.2 - If necessary, Configure the Incremental Convert Home and Instance

See the Destination Database 11.2.0.3 and Earlier Requires a Separate Incremental Convert Home and Instance section for details.

Skip this step if the destination database software version is 11.2.0.4 or later.  Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

If the destination database is 11.2.0.3 or earlier, then you must configure a separate incremental convert instance by performing the following steps:

    • Install a new 11.2.0.4 database home on the destination system.  This is the incremental convert home.
    • Using the incremental convert home startup an instance in the NOMOUNT state.  This is the incremental convert instance.  A database does not need to be created for the incremental convert instance.  Only a running instance is required.

      The following steps may be used to create an incremental convert instance named xtt running out of incremental convert home /u01/app/oracle/product/11.2.0.4/xtt_home:
      [oracle@dest]$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/xtt_home

      [oracle@dest]$ export ORACLE_SID=xtt

      [oracle@dest]$ cat << EOF > $ORACLE_HOME/dbs/init$ORACLE_SID.ora
      db_name=xtt
      compatible=11.2.0.4.0
      EOF

      [oracle@dest]$ sqlplus / as sysdba
      SQL> startup nomount
If ASM storage is used for the xtt.properties parameter backupondest (described below), then the COMPATIBLE initialization parameter setting for this instance must be equal to or higher than the rdbms.compatible setting for the ASM disk group used.
        
Step 1.3 - Identify Tablespaces to be Transported

Identify the tablespace(s) in the source database that will be transported. Tablespaces TS1 and TS2 will be used in the examples in this document.  As indicated above, the limitations and considerations for transportable tablespaces must still be followed.

Step 1.4 - If Using dbms_file_transfer Prepare Phase Method, then Configure Directory Objects and Database Links

Note that the dbms_file_transfer Prepare Phase method requires the destination database to be 11.2.0.4.

If using dbms_file_transfer as the Prepare Phase method, then three database objects must be created:

    1. A database directory object in the source database from where the datafiles are copied
    2. A database directory object in the destination database to where the datafiles are placed
    3. A database link in the destination database referencing the source database

The source database directory object references the location where the datafiles in the source database currently reside.  For example, to create directory object sourcedir that references datafiles in ASM location +DATA/prod/datafile, connect to the source database and run the following SQL command:

SQL@source> create directory sourcedir as '+DATA/prod/datafile';

The destination database directory object references the location where the datafiles will be placed on the destination system.  This should be the final location where the datafiles will reside when in use by the destination database.  For example, to create directory object dstdir that will place transferred datafiles in ASM location +DATA/prod/datafile, connect to the destination database and run the following SQL command:

SQL@dest> create directory destdir as '+DATA/prod/datafile';

The database link is created in the destination database, referencing the source database.  For example, to create a database link named ttslink, run the following SQL command:

SQL@dest> create public database link ttslink connect to system identified by <password> using '<source_tns_alias>';

Verify the database link can properly access the source system:

SQL@dest> select * from dual@ttslink;
Step 1.5 - Create Staging Areas

Create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: backupformat, backupondest.

Also, if using RMAN backups in the Prepare phase, create the staging areas on the source and destinations systems as defined by the following xtt.properties parameters: dfcopydir, stageondest.
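As a minimal sketch, the staging areas can be pre-created and checked for space with ordinary OS commands. The /tmp-based paths below are placeholders; use the actual locations you set in xtt.properties:

```shell
# Placeholder staging paths; substitute your xtt.properties values
# (dfcopydir/backupformat on the source, stageondest/backupondest on the destination).
STAGE_SOURCE=${STAGE_SOURCE:-/tmp/stage_source}
STAGE_DEST=${STAGE_DEST:-/tmp/stage_dest}
mkdir -p "$STAGE_SOURCE" "$STAGE_DEST"
# Verify the directories exist and check free space before copying datafiles.
ls -ld "$STAGE_SOURCE" "$STAGE_DEST"
df -P "$STAGE_SOURCE" "$STAGE_DEST"
```

Remember that these locations must be large enough to hold copies of all datafiles being transported (source side) and the incremental backups for one iteration (both sides).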

Step 1.6 - Install xttconvert Scripts on the Source System

On the source system, as the oracle software owner, download and extract the supporting scripts attached to this document as rman-xttconvert_2.0.zip.

[oracle@source]$ pwd
/home/oracle/xtt

[oracle@source]$ unzip rman-xttconvert_2.0.zip

Archive: rman-xttconvert_2.0.zip
inflating: xttcnvrtbkupdest.sql
inflating: xttdbopen.sql
inflating: xttdriver.pl
inflating: xttprep.tmpl
inflating: xtt.properties
inflating: xttstartupnomount.sql

Step 1.7 - Configure xtt.properties on the Source System

Edit the xtt.properties file on the source system with your site-specific configuration.   For more information about the parameters in the xtt.properties file, refer to the Description of Parameters in Configuration File xtt.properties section in the Appendix below.
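For illustration, a filled-in xtt.properties might look like the fragment below. Every value is an example assembled from the parameter descriptions in the Appendix, not a recommended setting:

```
# Example xtt.properties (illustrative values; see the Appendix for details)
tablespaces=TS1,TS2
platformid=2
dfcopydir=/stage_source
backupformat=/stage_source
stageondest=/stage_dest
storageondest=+DATA
backupondest=+RECO
# asm_home/asm_sid are needed only when backupondest is an ASM location
asm_home=/u01/app/11.2.0.4/grid
asm_sid=+ASM1
parallel=3
rollparallel=2
getfileparallel=4
```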

Step 1.8 - Copy xttconvert Scripts and xtt.properties to the Destination System

As the oracle software owner, copy all xttconvert scripts and the modified xtt.properties file to the destination system.

[oracle@source]$ scp -r /home/oracle/xtt dest:/home/oracle/xtt
Step 1.9 - Set TMPDIR

In the shell environment on both source and destination systems, set environment variable TMPDIR to the location where the supporting scripts exist.  Use this shell to run the Perl script xttdriver.pl as shown in the steps below.  If TMPDIR is not set, output files are created in and input files are expected to be in /tmp.

[oracle@source]$ export TMPDIR=/home/oracle/xtt

[oracle@dest]$ export TMPDIR=/home/oracle/xtt


  

Phase 2 - Prepare Phase

During the Prepare phase, datafiles of the tablespaces to be transported are transferred to the destination system and converted by the xttdriver.pl script.  There are two possible methods:

  1. Phase 2A - dbms_file_transfer Method
  2. Phase 2B - RMAN Backup Method

Select and use one of these methods based upon the information provided in the Requirements and Recommendations section above.

NOTE:  For a large number of files, dbms_file_transfer has been found to be the fastest method for transferring datafiles to the destination.

Phase 2A - Prepare Phase for dbms_file_transfer Method

Only use the steps in Phase 2A if the Prepare Phase method chosen is dbms_file_transfer and the setup instructions have been completed, particularly those in Step 1.4.

During this phase datafiles of the tablespaces to be transported are transferred directly from source system and placed on the destination system in their final location to be used by the destination database.  If conversion is required, it is performed automatically during transfer.  No separate conversion step is required.  The steps in this phase are run only once.  The data being transported is fully accessible in the source database during this phase.

Step 2A.1 - Run the Prepare Step on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -S
The prepare step performs the following actions on the source system:
  • Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
  • Creates the following files used later in this procedure:
    • xttnewdatafiles.txt
    • getfile.sql
The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE.  The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY.  If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process.  No incremental apply is needed for those files.
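The checks that the Prepare step performs can be previewed manually. The sketch below is a hypothetical helper using the standard DBA_TABLESPACES and DBA_DATA_FILES views; TS1 and TS2 are the example tablespaces from this document:

```shell
# Hypothetical pre-check: confirm the tablespaces are ONLINE/READ WRITE
# and contain no offline datafiles before running xttdriver.pl -S.
precheck_tablespaces() {
  sqlplus -s / as sysdba <<'SQL'
select tablespace_name, status
  from dba_tablespaces
 where tablespace_name in ('TS1','TS2');
select file_name, online_status
  from dba_data_files
 where tablespace_name in ('TS1','TS2');
SQL
}
```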
Step 2A.2 - Transfer the Datafiles to the Destination System
On the destination system, log in as the oracle user and set the environment (ORACLE_HOME and ORACLE_SID environment variables) to point to the destination database (do not point it at the incremental convert instance). Copy the xttnewdatafiles.txt and getfile.sql files created in step 2A.1 from the source system and run the -G get_file step as follows:
  
NOTE: This step copies all datafiles being transported from the source system to the destination system.  The length of time for this step to complete is dependent on datafile size, and may be substantial.  Use the getfileparallel option for parallelism.
  

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttnewdatafiles.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/getfile.sql /home/oracle/xtt

# MUST set environment to destination database
[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -G

When this step is complete, the datafiles being transported will reside in the final location where they will be used by the destination database.  Note that endian conversion, if required, is performed automatically during this step.

Proceed to Phase 3 to create and apply incremental backups to the datafiles.

Phase 2B - Prepare Phase for RMAN Backup Method

Only use the steps in Phase 2B if the Prepare Phase method chosen is RMAN backup and the setup instructions have been completed, particularly those in Step 1.5.

During this phase datafile copies of the tablespaces to be transported are created on the source system, transferred to the destination system, converted, and placed in their final location to be used by the destination database.  The steps in this phase are run only once.  The data being transported is fully accessible in the source database during this phase.

Step 2B.1 - Run the Prepare Step on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the prepare step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -p

The prepare step performs the following actions on the source system:
  • Creates datafile copies of the tablespaces that will be transported in the location specified by the xtt.properties parameter dfcopydir.
  • Verifies the tablespaces are online, in READ WRITE mode, and do not contain offline datafiles.
  • Creates the following files used later in this procedure:
    • xttplan.txt
    • rmanconvert.cmd

The set of tablespaces being transported must all be online, contain no offline data files, and must be READ WRITE.  The Prepare step will signal an error if one or more datafiles or tablespaces in your source database are offline or READ ONLY.  If a tablespace is READ ONLY and will remain so throughout the procedure, then simply transport those tablespaces using the traditional cross platform transportable tablespace process.  No incremental apply is needed for those files.

Step 2B.2 - Transfer Datafile Copies to the Destination System
On the destination system, logged in as the oracle user, transfer the datafile copies created in the previous step from the source system.  Datafile copies on the source system are created in the location defined in xtt.properties parameter dfcopydir.  The datafile copies must be placed in the location defined by xtt.properties parameter stageondest.

Any method of transferring the datafile copies from the source system to the destination system that results in a bit-for-bit copy is supported.

If the dfcopydir location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the datafile copies are already available in the expected location on the destination system.
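If a shared NFS location is used for dfcopydir/stageondest, the client mount options matter for Oracle file access. The fstab entry below is only an assumption for a Linux NFS client; the authoritative option list is in Note 359515.1:

```
# Example /etc/fstab entry on the destination host (verify against Note 359515.1)
nfs-server:/stage_source  /stage_dest  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```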

In the example below, scp is used to transfer the copies created by the previous step from the source system to the destination system.
[oracle@dest]$ scp oracle@source:/stage_source/* /stage_dest

Note that due to current limitations with cross-endian support in DBMS_FILE_TRANSFER and ASMCMD, you must use OS-level commands, such as scp or ftp, to transfer the copies from the source system to the destination system.
Step 2B.3 - Convert the Datafile Copies on the Destination System
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the rmanconvert.cmd file created in step 2B.1 from the source system and run the convert datafiles step as follows:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/rmanconvert.cmd /home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -c


The convert datafiles step converts the datafile copies in the stageondest location to the endian format of the destination system.  The converted datafile copies are written to the location specified by the xtt.properties parameter storageondest.  This is the final location where the datafiles will be accessed when they are used by the destination database.

When this step is complete, the datafile copies in stageondest location are no longer needed and may be removed.

Phase 3 - Roll Forward Phase

During this phase an incremental backup is created from the source database, transferred to the destination system, converted to the destination system endian format, then applied to the converted destination datafile copies to roll them forward.  This phase may be run multiple times. Each successive incremental backup should take less time than the prior incremental backup, and will bring the destination datafile copies more current with the source database.  The data being transported is fully accessible during this phase.

Step 3.1 - Create an Incremental Backup of the Tablespaces being Transported on the Source System

On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the create incremental step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

The create incremental step executes RMAN commands to generate incremental backups for all tablespaces listed in xtt.properties.  It creates the following files used later in this procedure:

  • tsbkupmap.txt
  • incrbackups.txt
Step 3.2 - Transfer Incremental Backup to the Destination System

Transfer the incremental backup(s) created during the previous step to the stageondest location on the destination system.  The list of incremental backup files to copy is found in the incrbackups.txt file on the source system.

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stage_dest

If the backupformat location on the source system and the stageondest location on the destination system refer to the same NFS storage location, then this step can be skipped since the incremental backups are already available in the expected location on the destination system.
Step 3.3 - Convert the Incremental Backup and Apply to the Datafile Copies on the Destination System

On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, copy the xttplan.txt and tsbkupmap.txt files created in step 3.1 from the source system and run the rollforward datafiles step as follows:

[oracle@dest]$ scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt
[oracle@dest]$ scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r

The rollforward datafiles step connects to the incremental convert instance as SYS, converts the incremental backups, then connects to the destination database and applies the incremental backups for each tablespace being transported.


Note that you must copy the xttplan.txt and tsbkupmap.txt files each time that this step is executed, because their content is different each iteration.
Step 3.4 - Determine the FROM_SCN for the Next Incremental Backup
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, run the determine new FROM_SCN step as follows:

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -s

The determine new FROM_SCN step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1.
Step 3.5 - Repeat the Roll Forward Phase (Phase 3) or Move to the Transport Phase (Phase 4)

At this point there are two choices:

  1. If you need to bring the files at the destination database closer in sync with the production system, then repeat the Roll Forward phase, starting with step 3.1.
  2. If the files at the destination database are as close as desired to the source database, then proceed to the Transport phase.
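One pass through Phase 3 can be scripted end to end. The wrapper below is a hypothetical sketch run from the destination host; it assumes passwordless ssh between the hosts, the scripts installed in /home/oracle/xtt on both sides, and stageondest=/stage_dest, so adjust hosts and paths for your site:

```shell
# Hypothetical Phase 3 wrapper (steps 3.1-3.4); adjust hosts and paths.
roll_forward_once() {
  # 3.1 create the incremental backup on the source
  ssh oracle@source '$ORACLE_HOME/perl/bin/perl /home/oracle/xtt/xttdriver.pl -i' &&
  # 3.2 ship the backup pieces listed in incrbackups.txt to stageondest
  ssh oracle@source 'cd /home/oracle/xtt && scp `cat incrbackups.txt` oracle@dest:/stage_dest' &&
  # 3.3 recopy xttplan.txt/tsbkupmap.txt (their content changes every pass),
  #     then convert and apply on the destination
  scp oracle@source:/home/oracle/xtt/xttplan.txt /home/oracle/xtt &&
  scp oracle@source:/home/oracle/xtt/tsbkupmap.txt /home/oracle/xtt &&
  "$ORACLE_HOME"/perl/bin/perl /home/oracle/xtt/xttdriver.pl -r &&
  # 3.4 compute the FROM_SCN for the next iteration back on the source
  ssh oracle@source '$ORACLE_HOME/perl/bin/perl /home/oracle/xtt/xttdriver.pl -s'
}
```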

Phase 4 - Transport Phase

During this phase the source data is made READ ONLY and the destination datafiles are made consistent with the source database by creating and applying a final incremental backup. After the destination datafiles are made consistent, the normal transportable tablespace steps are performed to export object metadata from the source database and import it into the destination database.  The data being transported is accessible only in READ ONLY mode until the end of this phase.

Step 4.1 - Make Source Tablespaces READ ONLY in the Source Database
On the source system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the source database, make the tablespaces being transported READ ONLY.

system@source/prod SQL> alter tablespace TS1 read only;

Tablespace altered.

system@source/prod SQL> alter tablespace TS2 read only;

Tablespace altered.
Step 4.2 - Create the Final Incremental Backup, Transfer, Convert, and Apply It to the Destination Datafiles
Repeat steps 3.1 through 3.3 one last time to create, transfer, convert, and apply the final incremental backup to the destination datafiles. 

[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -i

[oracle@source]$ scp `cat incrbackups.txt` oracle@dest:/stage_dest

[oracle@source]$ scp xttplan.txt oracle@dest:/home/oracle/xtt
[oracle@source]$ scp tsbkupmap.txt oracle@dest:/home/oracle/xtt

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -r
Step 4.3 - Import Object Metadata into Destination Database
On the destination system, logged in as the oracle user with the environment (ORACLE_HOME and ORACLE_SID environment variables) pointing to the destination database, run the generate Data Pump TTS command step as follows:

[oracle@dest]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl -e

The generate Data Pump TTS command step creates a sample Data Pump network_link transportable import command in the file xttplugin.txt with the transportable tablespaces parameters TRANSPORT_TABLESPACES and TRANSPORT_DATAFILES correctly set.  Note that network_link mode initiates an import over a database link that refers to the source database.  A separate export or dump file is not required.  If you choose to perform the tablespace transport with this command, then you must edit the import command to replace import parameters DIRECTORY, LOGFILE, and NETWORK_LINK with site-specific values.

The following is an example network mode transportable import command:

[oracle@dest]$ impdp directory=DATA_PUMP_DIR logfile=tts_imp.log network_link=ttslink \ 
transport_full_check=no \ 
transport_tablespaces=TS1,TS2 \ 
transport_datafiles='+DATA/prod/datafile/ts1.285.771686721', \ 
'+DATA/prod/datafile/ts2.286.771686723', \ 
'+DATA/prod/datafile/ts2.287.771686743'

After the object metadata being transported has been extracted from the source database, the tablespaces in the source database may be made READ WRITE again, if desired.

Database users that own objects being transported must exist in the destination database before performing the transportable import.

If you do not use network_link import, then perform the tablespace transport by running transportable mode Data Pump Export on the source database to export the object metadata being transported into a dump file, then transfer the dump file to the destination system, then run transportable mode Data Pump Import to import the object metadata into the destination database.  Refer to the Oracle Database Administrator's Guide and Oracle Database Utilities manuals for details.
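The dump-file alternative can be sketched as follows. These are hypothetical wrapper functions; the credentials, directory objects, and datafile names are placeholders taken from this document's examples and must be replaced with site-specific values:

```shell
# Hypothetical dump-file alternative to the network_link import.
# Run export on the source (tablespaces must already be READ ONLY, step 4.1).
export_tts_metadata() {
  expdp system/password directory=DATA_PUMP_DIR dumpfile=tts_exp.dmp \
        logfile=tts_exp.log transport_tablespaces=TS1,TS2 transport_full_check=no
}
# Transfer tts_exp.dmp to the destination DATA_PUMP_DIR, then import there.
import_tts_metadata() {
  impdp system/password directory=DATA_PUMP_DIR dumpfile=tts_exp.dmp \
        logfile=tts_imp.log \
        transport_datafiles='+DATA/prod/datafile/ts1.285.771686721','+DATA/prod/datafile/ts2.286.771686723','+DATA/prod/datafile/ts2.287.771686743'
}
```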

Step 4.4 - Make the Tablespace(s) READ WRITE in the Destination Database

The final step is to make the destination tablespace(s) READ WRITE in the destination database.

system@dest/prod SQL> alter tablespace TS1 read write; 

Tablespace altered. 

system@dest/prod SQL> alter tablespace TS2 read write; 

Tablespace altered.
Step 4.5 - Validate the Transported Data
Perform application specific validation to verify the transported data.  If this validation is run before step 4.4, the transported data is still READ ONLY in the destination database.

Also, run RMAN to check for physical and logical block corruption by running VALIDATE TABLESPACE as follows:

RMAN> validate tablespace TS1, TS2 check logical;


Phase 5 - Cleanup

If a separate incremental convert home and instance were created for the migration, then the instance may be shutdown and the software removed.
Files created by this process are no longer required and may now be removed.  They include the following:
  • dfcopydir location on the source system
  • backupformat location on the source system
  • stageondest location on the destination system
  • backupondest location on the destination system
  • $TMPDIR location in both destination and source systems

Appendix


Description of Perl Script xttdriver.pl Options

 The following table describes the options available for the main supporting script xttdriver.pl.

Option Description
-S prepare source for transfer

-S option is used only when Prepare phase method is dbms_file_transfer.

Prepare step is run once on the source system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.  This step creates files xttnewdatafiles.txt and getfile.sql.

-G get datafiles from source

-G option is used only when Prepare phase method is dbms_file_transfer.

Get datafiles step is run once on the destination system during Phase 2A with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.  The -S option must be run beforehand and files xttnewdatafiles.txt and getfile.sql transferred to the destination system.

This option connects to the destination database and runs script getfile.sql.  getfile.sql invokes dbms_file_transfer.get_file() subprogram for each datafile to transfer it from the source database directory object (defined by parameter srcdir) to the destination database directory object (defined by parameter dstdir) over a database link (defined by parameter srclink).

-p prepare source for backup

-p option is used only when Prepare phase method is RMAN backup.

Prepare step is run once on the source system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step connects to the source database and runs the xttpreparesrc.sql script once for each tablespace to be transported, as configured in xtt.properties.  xttpreparesrc.sql does the following:

  1. Verifies the tablespace is online, in READ WRITE mode, and contains no offline datafiles.
  2. Identifies the SCN that will be used for the first iteration of the incremental backup step and writes it into file $TMPDIR/xttplan.txt.
  3. Creates the initial datafile copies on the source system in the location specified by the parameter dfcopydir set in xtt.properties.  These datafile copies must be transferred manually to the destination system.
  4. Creates RMAN script $TMPDIR/rmanconvert.cmd that will be used to convert the datafile copies to the required endian format on the destination system.
-c convert datafiles

-c option is used only when Prepare phase method is RMAN backup.

Convert datafiles step is run once on the destination system during Phase 2B with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step uses the rmanconvert.cmd file created in the Prepare step to convert the datafile copies to the proper endian format.  Converted datafile copies are written on the destination system to the location specified by the parameter storageondest set in xtt.properties.

-i create incremental

Create incremental step is run one or more times on the source system with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.

This step reads the SCNs listed in $TMPDIR/xttplan.txt and generates an incremental backup that will be used to roll forward the datafile copies on the destination system.
-r rollforward datafiles

Rollforward datafiles step is run once for every incremental backup created, with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step connects to the incremental convert instance using the parameters cnvinst_home and cnvinst_sid, converts the incremental backup pieces created by the Create Incremental step, then connects to the destination database and rolls forward the datafile copies by applying the incremental for each tablespace being transported.
-s determine new FROM_SCN

Determine new FROM_SCN step is run one or more times with the environment (ORACLE_HOME and ORACLE_SID) set to the source database.
This step calculates the next FROM_SCN, records it in the file xttplan.txt, then uses that SCN when the next incremental backup is created in step 3.1. It reports the mapping of the new FROM_SCN to wall clock time to indicate how far behind the changes in the next incremental backup will be.
-e generate Data Pump TTS command

Generate Data Pump TTS command step is run once on the destination system with the environment (ORACLE_HOME and ORACLE_SID) set to the destination database.

This step creates the template of a Data Pump Import command that uses a network_link to import metadata of objects that are in the tablespaces being transported.
-d debug

The -d option enables debug mode for xttdriver.pl and the RMAN commands it executes.  Debug mode can also be enabled by setting environment variable XTTDEBUG=1.


Description of Parameters in Configuration File xtt.properties

The following table describes the parameters defined in the xtt.properties file that is used by xttdriver.pl.

Parameter Description Example Setting
tablespaces

Comma-separated list of tablespaces to transport from the source database to the destination database.  Must be a single line; any subsequent lines will not be read.

tablespaces=TS1,TS2

platformid

Source database platform id, obtained from V$DATABASE.PLATFORM_ID.

platformid=2
srcdir

Directory object in the source database that defines where the source datafiles currently reside. Multiple locations can be used separated by ",". The srcdir to dstdir mapping can either be N:1 or N:N. i.e. there can be multiple source directories and the files will be written to a single destination directory, or files from a particular source directory can be written to a particular destination directory.

This parameter is used only when Prepare phase method is dbms_file_transfer.

srcdir=SOURCEDIR

srcdir=SRC1,SRC2

dstdir

Directory object in the destination database that defines where the destination datafiles will be created.  If multiple source directories are used (srcdir), then multiple destinations can be defined so a particular source directory is written to a particular destination directory.  

This parameter is used only when Prepare phase method is dbms_file_transfer.

dstdir=DESTDIR

dstdir=DST1,DST2

srclink

Database link in the destination database that refers to the source database.  Datafiles will be transferred over this database link using dbms_file_transfer.

This parameter is used only when Prepare phase method is dbms_file_transfer.

srclink=TTSLINK
dfcopydir

Location on the source system where datafile copies are created during the "-p prepare" step. 

This location must have sufficient free space to hold copies of all datafiles being transported.

This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.  See Note 359515.1 for mount option guidelines.

This parameter is used only when Prepare phase method is RMAN backup.

dfcopydir=/stage_source
backupformat

Location on the source system where incremental backups are created.

This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.

This location may be an NFS-mounted filesystem that is shared with the destination system, in which case it should reference the same NFS location as the stageondest parameter for the destination system.
backupformat=/stage_source
stageondest

Location on the destination system where datafile copies are placed by the user when they are transferred manually from the source system.

This location must have sufficient free space to hold copies of all datafiles being transported.

This is also the location from where datafiles copies and incremental backups are read when they are converted in the "-c conversion of datafiles" and "-r roll forward datafiles" steps.

This location may be a DBFS-mounted filesystem.

This location may be an NFS-mounted filesystem that is shared with the source system, in which case it should reference the same NFS location as the dfcopydir and backupformat parameters for the source system.  See Note 359515.1 for mount option guidelines.
stageondest=/stage_dest
storageondest

Location on the destination system where the converted datafile copies will be written during the "-c conversion of datafiles" step. 

This location must have sufficient free space to permanently hold the datafiles that are transported.

This is the final location of the datafiles where they will be used by the destination database.

This parameter is used only when Prepare phase method is RMAN backup.

storageondest=+DATA
- or -
storageondest=/oradata/prod/%U
backupondest

Location on the destination system where converted incremental backups will be written during the "-r roll forward datafiles" step.

This location must have sufficient free space to hold the incremental backups created for one iteration through the process documented above.

NOTE: If this is set to an ASM location then define properties asm_home and asm_sid below. If this is set to a file system location, then comment out asm_home and asm_sid parameters below.
backupondest=+RECO
cnvinst_home

Only set this parameter if a separate incremental convert home is in use.

ORACLE_HOME of the incremental convert instance that runs on the destination system.

cnvinst_home=/u01/app/oracle/product/11.2.0.4/xtt_home
cnvinst_sid

Only set this parameter if a separate incremental convert home is in use.

ORACLE_SID of the incremental convert instance that runs on the destination system.

cnvinst_sid=xtt
asm_home

ORACLE_HOME for the ASM instance that runs on the destination system.

NOTE: If backupondest is set to a file system location, then comment out both asm_home and asm_sid.
asm_home=/u01/app/11.2.0.4/grid
asm_sid

ORACLE_SID for the ASM instance that runs on the destination system.

asm_sid=+ASM1
parallel

Defines the degree of parallelism set in the RMAN CONVERT command file rmanconvert.cmd. This file is created during the prepare step and used by RMAN in the convert datafiles step to convert the datafile copies on the destination system.  If this parameter is unset, xttdriver.pl uses parallel=8.

NOTE: RMAN parallelism used for the datafile copies created in the RMAN Backup prepare phase and the incremental backup created in the rollforward phase is controlled by the RMAN configuration on the source system. It is not controlled by this parameter. 


parallel=3
rollparallel

Defines the level of parallelism for the -r roll forward operation.

rollparallel=2
getfileparallel

Defines the level of parallelism for the -G get datafiles operation.

Default value is 1. Maximum supported value is 8.

getfileparallel=4


Change History


2015-Apr-20 - rman-xttconvert_2.0.zip released: adds support for multiple source and destination directories.

2014-Nov-14 - rman-xttconvert_1.4.2.zip released: adds parallelism support for the -G get file from source operation.

2014-Feb-21 - rman-xttconvert_1.4.zip released: removes the staging area requirement, adds parallel rollforward, and eliminates the conversion instance requirement when using 11.2.0.4.

2013-Apr-10 - rman-xttconvert_1.3.zip released: improves handling of large databases with a large number of datafiles.

REFERENCES


NOTE:359515.1 - Mount Options for Oracle files when used with NFS on NAS devices
NOTE:413484.1 - Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration
NOTE:1166564.1 - Master Note for Transportable Tablespaces (TTS) -- Common Questions and Issues
NOTE:1454872.1 - Transportable Tablespace (TTS) Restrictions and Limitations: Details, Reference, and Version Where Applicable
NOTE:2005729.1 - 12C - Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup 
NOTE:2100369.1 - Cross Platform Incremental Failed with "Error in executing xttstartupnomount.sql"
NOTE:2141314.1 - Running Roll forward Phase of XTT Procedure in Note 1389592.1 Returns Error ORA-19848
NOTE:733824.1 - How To Recreate A Database Using TTS (Transportable TableSpace)






About Me

...............................................................................................................................

● This article was compiled from material found online

● It is published simultaneously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest), and the personal WeChat public account (xiaomaimiaolhr)

● itpub address of this article: http://blog.itpub.net/26736162/abstract/1/

● cnblogs address of this article: http://www.cnblogs.com/lhrbest

● PDF version and Xiaomaimiao cloud-drive link: http://blog.itpub.net/26736162/viewspace-1624453/

● Database written-exam and interview question bank with answers: http://blog.itpub.net/26736162/viewspace-2134706/

● QQ group: 230161599; WeChat group: contact me privately

● To contact me, add QQ friend 646634621 and state your reason for adding me

● Written in Shanghai between 2017-07-01 09:00 and 2017-07-31 22:00

● The content comes from Xiaomaimiao's study notes, partly compiled from the web; please excuse any infringement or inaccuracies

● All rights reserved; you are welcome to share this article, but please retain the source when reposting

...............................................................................................................................

