Oracle | Author: lfree | 2019-09-12 20:35:46

[20190911] 12c dml redo optimization 2.txt

--//This evening I read the test at https://jonathanlewis.wordpress.com/2019/09/08/quiz-night-34/. It turns out this DML redo optimization is already implemented in 12cR2.
--//I had always assumed it only arrived in 18c; see http://blog.itpub.net/267265/viewspace-2220303/ => [20181116] 18c DML redo log optimization.txt.
--//However, the author also shows an exception where the redo actually increases, so I repeated the test myself.

1. Environment:
SCOTT@test01p> @ ver1
PORT_STRING                    VERSION        BANNER                                                                               CON_ID
------------------------------ -------------- -------------------------------------------------------------------------------- ----------
IBMPC/WIN_NT64-9.1.0           12.2.0.1.0     Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production              0

SYS@test> Alter database drop supplemental log data;
Database altered.

--//Supplemental logging must be disabled, otherwise this behaviour cannot be observed!!
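--//A quick check before testing that supplemental logging really is off (a sketch; these are standard v$database columns):
select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui from v$database;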

create table t1 as with generator as ( select rownum id from dual connect by rownum <= 1e4)
select
    rownum      id,
    lpad(rownum,10,'x')     small_vc,
--  lpad(rownum,10,'0')     small_vc,
    'Y'     flag
from
    generator   v1
where rownum <= 1e3
;

begin
    dbms_stats.gather_table_stats(
    ownname      => user,
    tabname      =>'T1',
    method_opt   => 'for all columns size 1'
    );
end;
/

2. Test:
--//Create the test script a.txt:
column member new_value v_member
column member noprint
set numw 12
--//pause alter system switch logfile ;
--//pause alter system archive log current;
--//12c does not allow this command to be run inside a pluggable database; it can be executed in another session and then you continue here (see the note after this script).
--//SELECT  member FROM v$log a, v$logfile b WHERE a.group#(+) = b.group# and a.STATUS='CURRENT' and rownum=1;

column curr1 new_value v_curr1
select current_scn curr1 from v$database;

--//The statements being measured:
@ viewsess redo%size
update t1 set small_vc = upper(small_vc);
commit ;
@ viewsess redo%size

column curr2 new_value v_curr2
select current_scn curr2 from v$database;

prompt alter system dump logfile '&&v_member' scn min &&v_curr1 scn max &&v_curr2;
--//alter system dump logfile '&&v_member' scn min &&v_curr1 scn max &&v_curr2;
alter system dump redo scn min &&v_curr1 scn max &&v_curr2;
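--//Note: a sketch of the workaround for the pause above. Before running a.txt, in a second SYS session connected to the CDB root (not the PDB), run:
--//    alter system archive log current;
--//then return to this session and continue.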

3. Run the test script a.txt
SCOTT@test01p> @ a.txt

       CURR1
------------
    10711432

NAME                               STATISTIC#        VALUE          SID
---------------------------------- ---------- ------------ ------------
redo size                                 288         1308          261

1000 rows updated.

Commit complete.
NAME                               STATISTIC#        VALUE          SID
---------------------------------- ---------- ------------ ------------
redo size                                 288       111668          261
IMU Redo allocation size                  658        45828          261


       CURR2
------------
    10711436
--//redo size =  111668-1308 = 110360.
--//110360/1024 = 107.77 kb
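--//As an aside, the before/after subtraction can be scripted with the same new_value trick a.txt uses for the SCNs (a sketch; 'redo size' is the standard statistic name in v$statname):
column redo1 new_value v_redo1 noprint
select a.value redo1 from v$mystat a, v$statname b where a.statistic# = b.statistic# and b.name = 'redo size';
--//... run the statement under test here ...
select a.value - &&v_redo1 redo_delta from v$mystat a, v$statname b where a.statistic# = b.statistic# and b.name = 'redo size';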

$ grep -n OP  D:\APP\ORACLE\diag\rdbms\test\test\trace\test_ora_6284.trc | sed "s/CON_ID.*OP/ OP/" | sed "s/ ENC.*$//" | gawk "{ print $NF}" | sort | uniq -c | sort -nr
     54 OP:5.1    ---//Update undo block
     52 OP:11.19  ---//Array update ?
      8 OP:5.2
      2 OP:5.4
      1 OP:17.28
      1 OP:13.22
      1 OP:11.2
      1 OP:10.2

--//54 occurrences of OP:5.1 and 52 of OP:11.19, i.e. the rows are modified in batches (array updates) rather than one row at a time.

$ egrep -n -e "OP:" -e "Array Update" D:\APP\ORACLE\diag\rdbms\test\test\trace\test_ora_6284.trc | sed "s/CON_ID.*OP/ OP/" | sed "s/ ENC.*$//"
41:CHANGE #1  OP:5.2
44:CHANGE #2  OP:5.1
64:CHANGE #3  OP:11.2
89:CHANGE #1  OP:13.22
98:CHANGE #1  OP:5.1
114:CHANGE #2  OP:10.2
129:CHANGE #1  OP:5.4
131:CHANGE #2 MEDIA RECOVERY MARKER  OP:17.28
140:CHANGE #1  OP:11.19
145:Array Update of 20 rows:
286:CHANGE #2  OP:5.2
289:CHANGE #3  OP:11.19
294:Array Update of 20 rows:
435:CHANGE #4  OP:11.19
440:Array Update of 20 rows:
581:CHANGE #5  OP:11.19
586:Array Update of 20 rows:
727:CHANGE #6  OP:11.19
732:Array Update of 20 rows:
873:CHANGE #7  OP:11.19
878:Array Update of 20 rows:
1019:CHANGE #8  OP:11.19
1024:Array Update of 20 rows:
1165:CHANGE #9  OP:11.19
1170:Array Update of 20 rows:
1311:CHANGE #10  OP:5.2
1314:CHANGE #11  OP:11.19
1319:Array Update of 20 rows:
1460:CHANGE #12  OP:11.19
1465:Array Update of 20 rows:
1606:CHANGE #13  OP:11.19
1611:Array Update of 20 rows:
1752:CHANGE #14  OP:11.19
1757:Array Update of 20 rows:
1898:CHANGE #15  OP:11.19
1903:Array Update of 20 rows:
2044:CHANGE #16  OP:11.19
2049:Array Update of 20 rows:
2190:CHANGE #17  OP:11.19
2195:Array Update of 20 rows:
2336:CHANGE #18  OP:5.2
2339:CHANGE #19  OP:11.19
2344:Array Update of 20 rows:
2485:CHANGE #20  OP:11.19
2490:Array Update of 13 rows:
2582:CHANGE #21  OP:11.19
2587:Array Update of 20 rows:
2728:CHANGE #22  OP:11.19
2733:Array Update of 20 rows:
2874:CHANGE #23  OP:11.19
2879:Array Update of 20 rows:
3020:CHANGE #24  OP:11.19
3025:Array Update of 20 rows:
3166:CHANGE #25  OP:11.19
3171:Array Update of 20 rows:
3312:CHANGE #26  OP:11.19
3317:Array Update of 20 rows:
3458:CHANGE #27  OP:5.2
3465:CHANGE #28  OP:11.19
3470:Array Update of 20 rows:
3611:CHANGE #29  OP:11.19
3616:Array Update of 20 rows:
3757:CHANGE #30  OP:11.19
3762:Array Update of 20 rows:
3903:CHANGE #31  OP:11.19
3908:Array Update of 20 rows:
4049:CHANGE #32  OP:11.19
4054:Array Update of 20 rows:
4195:CHANGE #33  OP:11.19
4200:Array Update of 20 rows:
--//The dense run of OP:11.19 above is a consequence of IMU: the changes are batched in IMU and the private redo strand before being written out.
4341:CHANGE #34  OP:5.1
4356:Array Update of 20 rows:
4497:CHANGE #35  OP:5.1
4509:Array Update of 20 rows:
4650:CHANGE #36  OP:5.1
4662:Array Update of 20 rows:
4803:CHANGE #37  OP:5.1
4815:Array Update of 20 rows:
4956:CHANGE #38  OP:5.1
4968:Array Update of 20 rows:
5109:CHANGE #39  OP:5.1
5121:Array Update of 20 rows:
5262:CHANGE #40  OP:5.1
5274:Array Update of 20 rows:
5415:CHANGE #41  OP:5.1
5427:Array Update of 20 rows:
5568:CHANGE #42  OP:5.1
5580:Array Update of 20 rows:
5721:CHANGE #43  OP:5.1
5733:Array Update of 20 rows:
5874:CHANGE #44  OP:5.1
5886:Array Update of 20 rows:
6027:CHANGE #45  OP:5.1
6039:Array Update of 20 rows:
6180:CHANGE #46  OP:5.1
6192:Array Update of 20 rows:
6333:CHANGE #47  OP:5.1
6345:Array Update of 20 rows:
6486:CHANGE #48  OP:5.1
6498:Array Update of 20 rows:
6639:CHANGE #49  OP:5.1
6651:Array Update of 20 rows:
6792:CHANGE #50  OP:5.1
6804:Array Update of 13 rows:
6896:CHANGE #51  OP:5.1
6908:Array Update of 20 rows:
7049:CHANGE #52  OP:5.1
7061:Array Update of 20 rows:
7202:CHANGE #53  OP:5.1
7214:Array Update of 20 rows:
7355:CHANGE #54  OP:5.1
7367:Array Update of 20 rows:
7508:CHANGE #55  OP:5.1
7520:Array Update of 20 rows:
7661:CHANGE #56  OP:5.1
7673:Array Update of 20 rows:
7814:CHANGE #57  OP:5.1
7826:Array Update of 20 rows:
7967:CHANGE #58  OP:5.1
7979:Array Update of 20 rows:
8120:CHANGE #59  OP:5.1
8132:Array Update of 20 rows:
8273:CHANGE #60  OP:5.1
8285:Array Update of 20 rows:
8426:CHANGE #61  OP:5.1
8438:Array Update of 20 rows:
8579:CHANGE #62  OP:5.1
8591:Array Update of 20 rows:
8735:CHANGE #1  OP:5.2
--//From here on the transaction has grown too large for IMU to hold it entirely, so it falls back to the conventional mechanism and OP:5.1 / OP:11.19 alternate (an IMU statistics query is sketched after this listing).
8738:CHANGE #2  OP:5.1
8750:Array Update of 20 rows:
8891:CHANGE #3  OP:11.19
8896:Array Update of 20 rows:
9040:CHANGE #1  OP:5.1
9052:Array Update of 20 rows:
9193:CHANGE #2  OP:11.19
9198:Array Update of 20 rows:
9342:CHANGE #1  OP:5.1
9354:Array Update of 20 rows:
9495:CHANGE #2  OP:11.19
9500:Array Update of 20 rows:
9644:CHANGE #1  OP:5.1
9656:Array Update of 20 rows:
9797:CHANGE #2  OP:11.19
9802:Array Update of 20 rows:
9946:CHANGE #1  OP:5.1
9958:Array Update of 9 rows:
10022:CHANGE #2  OP:11.19
10027:Array Update of 9 rows:
10094:CHANGE #1  OP:5.1
10106:Array Update of 20 rows:
10247:CHANGE #2  OP:11.19
10252:Array Update of 20 rows:
10396:CHANGE #1  OP:5.1
10408:Array Update of 20 rows:
10549:CHANGE #2  OP:11.19
10554:Array Update of 20 rows:
10698:CHANGE #1  OP:5.1
10710:Array Update of 20 rows:
10851:CHANGE #2  OP:11.19
10856:Array Update of 20 rows:
11000:CHANGE #1  OP:5.2
11003:CHANGE #2  OP:5.1
11015:Array Update of 20 rows:
11156:CHANGE #3  OP:11.19
11161:Array Update of 20 rows:
11305:CHANGE #1  OP:5.1
11317:Array Update of 20 rows:
11458:CHANGE #2  OP:11.19
11463:Array Update of 20 rows:
11607:CHANGE #1  OP:5.1
11619:Array Update of 20 rows:
11760:CHANGE #2  OP:11.19
11765:Array Update of 20 rows:
11909:CHANGE #1  OP:5.1
11921:Array Update of 20 rows:
12062:CHANGE #2  OP:11.19
12067:Array Update of 20 rows:
12211:CHANGE #1  OP:5.1
12223:Array Update of 20 rows:
12364:CHANGE #2  OP:11.19
12369:Array Update of 20 rows:
12513:CHANGE #1  OP:5.1
12525:Array Update of 20 rows:
12666:CHANGE #2  OP:11.19
12671:Array Update of 20 rows:
12815:CHANGE #1  OP:5.1
12827:Array Update of 20 rows:
12968:CHANGE #2  OP:11.19
12973:Array Update of 20 rows:
13117:CHANGE #1  OP:5.2
13120:CHANGE #2  OP:5.1
13132:Array Update of 20 rows:
13273:CHANGE #3  OP:11.19
13278:Array Update of 20 rows:
13422:CHANGE #1  OP:5.1
13434:Array Update of 20 rows:
13575:CHANGE #2  OP:11.19
13580:Array Update of 20 rows:
13724:CHANGE #1  OP:5.1
13736:Array Update of 20 rows:
13877:CHANGE #2  OP:11.19
13882:Array Update of 20 rows:
14026:CHANGE #1  OP:5.1
14038:Array Update of 20 rows:
14179:CHANGE #2  OP:11.19
14184:Array Update of 20 rows:
14328:CHANGE #1  OP:5.1
14340:Array Update of 20 rows:
14481:CHANGE #2  OP:11.19
14486:Array Update of 20 rows:
14630:CHANGE #1  OP:5.1
14642:Array Update of 20 rows:
14783:CHANGE #2  OP:11.19
14788:Array Update of 20 rows:
14932:CHANGE #1  OP:5.1
14944:Array Update of 9 rows:
15008:CHANGE #2  OP:11.19
15013:Array Update of 9 rows:
15080:CHANGE #1  OP:5.1
15092:Array Update of 9 rows:
15156:CHANGE #2  OP:11.19
15161:Array Update of 9 rows:
15228:CHANGE #1  OP:5.4
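--//To gauge how much of the transaction went through in-memory undo before the flush noted above, the session's IMU statistics can be listed (a sketch; the names are as they appear in v$statname):
select b.name, a.value
  from v$mystat a, v$statname b
 where a.statistic# = b.statistic#
   and b.name like 'IMU%'
   and a.value > 0;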

4. Retest:
--//Recreate table t1, commenting out lpad(rownum,10,'x') small_vc and uncommenting lpad(rownum,10,'0') small_vc.

drop table t1 purge ;
create table t1 as with generator as ( select rownum id from dual connect by rownum <= 1e4)
select
    rownum      id,
--    lpad(rownum,10,'x')     small_vc,
      lpad(rownum,10,'0')     small_vc,
    'Y'     flag
from
    generator   v1
where rownum <= 1e3
;

begin
    dbms_stats.gather_table_stats(
    ownname      => user,
    tabname      =>'T1',
    method_opt   => 'for all columns size 1'
    );
end;
/

SCOTT@test01p> @ a.txt

       CURR1
------------
    10711546

NAME                               STATISTIC#        VALUE          SID
---------------------------------- ---------- ------------ ------------
redo size                                 288         1336          261
1000 rows updated.
Commit complete.

NAME                               STATISTIC#        VALUE          SID
---------------------------------- ---------- ------------ ------------
redo size                                 288       216896          261
IMU Redo allocation size                  658        46852          261

       CURR2
------------
    10711555
--//redo size = 216896-1336 = 215560. The second update generates far more redo than the first run (which was under 108 KB).

$ grep -n OP  D:\APP\ORACLE\diag\rdbms\test\test\trace\test_ora_3248.trc | sed "s/CON_ID.*OP/ OP/" | sed "s/ ENC.*$//" | gawk "{ print $NF}" | sort | uniq -c | sort -nr
   1002 OP:5.1
   1000 OP:11.4
     10 OP:5.2
      2 OP:5.4
      1 OP:17.28
      1 OP:11.2
      1 OP:10.2
--//1002 occurrences of OP:5.1 and 1000 of OP:11.4 (lock row piece).
--//In other words OP:11.4 (lock row piece) is generated one row at a time: because upper() leaves these all-digit strings unchanged, the update degenerates into per-row locking instead of array updates, so this redo optimization brings no benefit here.
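--//A quick sanity check (a sketch) that the second data set really is untouched by upper(), which is why only row locks are generated:
select count(*) from t1 where small_vc = upper(small_vc);
--//Expected: 1000, since upper() does not change digit-only strings.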

--//Also, with supplemental logging enabled, the effect above cannot be reproduced at all. Only the final results are pasted here:
Alter database add supplemental log data;

--//The lpad(rownum,10,'x') small_vc case:
$ grep -n OP  D:\APP\ORACLE\diag\rdbms\test\test\trace\test_ora_6476.trc | sed "s/CON_ID.*OP/ OP/" | sed "s/ ENC.*$//" | gawk "{ print $NF}" | sort | uniq -c
      1 OP:10.2
      1 OP:11.2
   1000 OP:11.5
      1 OP:17.28
      2 OP:24.4
      1 OP:4.1
   1002 OP:5.1
     17 OP:5.2
      2 OP:5.20
      2 OP:5.4
--//redo size = 308316-5644 = 302672.

--//The lpad(rownum,10,'0') small_vc case:
$ grep -n OP  D:\APP\ORACLE\diag\rdbms\test\test\trace\test_ora_5052.trc | sed "s/CON_ID.*OP/ OP/" | sed "s/ ENC.*$//" | gawk "{ print $NF}" | sort | uniq -c
      1 OP:10.2
      1 OP:11.2
   1000 OP:11.5   -->update row piece; no OP:11.19 appears here.
      1 OP:17.28
      2 OP:24.4
   1002 OP:5.1    -->Update undo block
     17 OP:5.2
      2 OP:5.20
      2 OP:5.4
--//redo size = 304684-2052  = 302632.
--//The two cases differ very little, and OP:11.4 does not appear at all.
--//Supplementary test: replace update t1 set small_vc = upper(small_vc); with select * from t1 for update;:

$ grep -n OP D:\APP\ORACLE\diag\rdbms\test\test\trace\test_ora_2912.trc | sed "s/CON_ID.*OP/ OP/" | sed "s/ ENC.*$//" | gawk "{ print $NF}" | sort | uniq -c | sort -nr
   1000 OP:5.1
   1000 OP:11.4
     12 OP:5.2
      1 OP:5.4
      1 OP:5.20
      1 OP:24.4
--//redo size = 238440-968 = 237472, close to the earlier result (215560).
--//So this optimization seems to be of limited value for many production systems, since many applications still run with supplemental logging enabled.
--//Also, in a test I ran on 18c, when an update modifies several columns and one of them is left unchanged by the update, redo generation is still reduced.
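--//A sketch of that kind of multi-column test (reusing t1; flag is already 'Y' for every row, so that column is unchanged by the update):
update t1 set small_vc = upper(small_vc), flag = 'Y';
commit;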

--//Redo OP codes reference: https://jonathanlewis.wordpress.com/2017/07/25/redo-op-codes/
--//The viewsess.sql script is attached below.
set verify off
column name format a70
SELECT b.NAME, a.statistic#, a.VALUE,a.sid
  FROM v$mystat a, v$statname b
 WHERE lower(b.NAME) like lower('%&1%') AND a.statistic# = b.statistic#
 and a.value>0;
