
Tuning Considerations When Import Is Slow

Original post by playwawa, 2005-06-01 15:51:43

There is very little consolidated information on how to speed up import 
when it is unbearably slow.  Obviously import will take whatever time it 
needs to complete, but there are several things that can be done to 
shorten that time.


System Level Changes 

-------------------- 

 

- Create and use one large rollback segment; take all other rollback 
  segments offline.  One rollback segment approximately 50% of the size 
  of the largest table being imported should be large enough.  Import 
  basically does an 'insert into tabX values (...)' for every row in 
  your database, so the rollback generated for insert statements is only 
  the rowid for each row inserted.  Also create the rollback segment with 
  the minimum of 2 extents, of equal size.
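
  A minimal sketch of this setup (the segment and tablespace names and 
  the sizes are illustrative assumptions, not from the original note):

  SQL> create rollback segment rbs_big tablespace rbs
    2  storage (initial 500M next 500M minextents 2);
  SQL> alter rollback segment rbs_big online;
  SQL> alter rollback segment rbs01 offline;
  SQL> alter rollback segment rbs02 offline;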

 

- Put the database in NOARCHIVELOG mode until the import is complete. 

  This will reduce the overhead of creating and managing archive logs.
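
  For example (this assumes you can briefly shut the instance down, and 
  that you are connected as a privileged user):

  SQL> shutdown immediate
  SQL> startup mount
  SQL> alter database noarchivelog;
  SQL> alter database open;

  Remember to switch back to ARCHIVELOG mode and take a backup once the 
  import is complete.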

  

- As with the rollback segment, create several large redo log files, the 

  larger the better.  Take any small redo logs off line.  The larger the 

  log files, the fewer log switches that are needed.  Check the alert 

  log for messages like 'Thread 1 cannot allocate new log, sequence 17, 

  Checkpoint not complete'.  This indicates the log files need to be bigger 

  or you need more of them. 
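
  For example (the group number, file path, and 500M size are 
  illustrative assumptions):

  SQL> alter database add logfile group 4
    2  ('/u03/oradata/PROD/redo04.log') size 500M;
  SQL> alter database drop logfile group 1;

  Note that a log group can only be dropped once its status is INACTIVE; 
  check v$log first.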

 

- If possible, make sure that rollback, table data, and redo log files 

  are all on separate disks.  This increases throughput by reducing 

  I/O contention. 

 

- Make sure there is no I/O contention occurring.  If possible, don't run 
  other jobs which may compete with the import for system resources.

- Make sure there are no statistics on data dictionary tables.
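
  A quick way to check for, and remove, dictionary statistics (the table 
  in the ANALYZE example is only an illustration):

  SQL> select table_name from dba_tables
    2  where owner = 'SYS' and last_analyzed is not null;
  SQL> analyze table sys.obj$ delete statistics;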

- Check the sqlnet.ora in the TNS_ADMIN location.  Make sure that

  TRACE_LEVEL_CLIENT = OFF
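
  That is, in the sqlnet.ora under $TNS_ADMIN (TRACE_LEVEL_SERVER is a 
  related setting worth checking too, though the note above only 
  mentions the client side):

  TRACE_LEVEL_CLIENT = OFF
  TRACE_LEVEL_SERVER = OFF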

- Increase DB_BLOCK_SIZE when recreating the database, if possible.  The 

  larger the block size, the smaller the number of I/O cycles needed. 

  This change is permanent, so consider all effects it will have before 

  changing it. 
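
  In init.ora, before the database is created (8192 is only an 
  illustrative value):

  db_block_size = 8192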

Init.ora Parameter Changes 
--------------------------

- Set LOG_CHECKPOINT_INTERVAL to a number that is larger than the size 

  of the redo log files.  This number is in OS blocks (512 bytes on most 

  Unix systems).  This reduces checkpoints to a minimum (only at log 

  switch time). 
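
  A worked example: with 100 MB redo logs and 512-byte OS blocks, 
  100 * 1024 * 1024 / 512 = 204800 blocks, so any setting above that 
  defers checkpoints to log switches (the value below is an illustrative 
  choice):

  log_checkpoint_interval = 300000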

 

- Increase SORT_AREA_SIZE.  Indexes are not being built yet, but any 

  unique or primary key constraints will be.  The increase depends on what 

  other activity is on the machine and how much free memory is available. 

  Try 5-10 times the normal setting.  If the machine starts swapping and 

  paging, you have set it too high. 
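
  For example, in init.ora (the value is an illustrative assumption; 
  scale it to your free memory):

  sort_area_size = 10485760    # 10 MB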

 

- Try increasing DB_BLOCK_BUFFERS and SHARED_POOL_SIZE.
  The shared pool holds cached dictionary information and objects such as 
  cursors, procedures, triggers, etc.  Dictionary information and cursors 
  created on the import's behalf (there may be many, since it is always 
  working on a new table) can clog the pool: the stale entries sit around 
  until the aging/flush mechanism kicks in on a per-request basis, when a 
  request cannot be satisfied from the lookaside lists.  ALTER SYSTEM 
  FLUSH SHARED_POOL throws out *all* currently unused objects in one 
  operation and hence defragments the pool.

  If you can restart the instance with a bigger SHARED_POOL_SIZE prior

  to importing, that would definitely help. When it starts to slow down,

  at least you can see what's going on by doing the following:

  SQL> set serveroutput on size 2000
  SQL> begin
    2    dbms_shared_pool.sizes(2000);
    3  end;
    4  /

  The dbms_shared_pool package is created by $ORACLE_HOME/rdbms/admin/dbmspool.sql.
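
  And if the pool is badly fragmented, defragmenting it is a single 
  statement:

  SQL> alter system flush shared_pool;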

Import Options Changes 
----------------------

- Use COMMIT=N.  This will cause import to commit after each object (table), 

  not after each buffer.  This is why one large rollback segment is needed. 

 

- Use a large BUFFER size.  This value also depends on system activity, 
  database size, etc.  Several megabytes is usually enough, but if you 
  have the memory it can go higher.  Again, check for paging and swapping 
  at the OS level to see if it is too high.  A larger buffer reduces the 
  number of times the import program has to go to the export file for 
  data; each time, it fetches one buffer's worth of data.
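
  A sketch combining the two options above (the user, file names, and 
  10 MB buffer are illustrative assumptions):

  imp system/manager file=full.dmp full=y commit=n buffer=10485760 log=imp.log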

 

- Consider using INDEXES=N during import.  The user-defined indexes will be 
  created after the table has been created and populated, but if the primary 
  objective of the import is to get the data in as fast as possible, 
  then importing with INDEXES=N will help.  The indexes can then be created 
  at a later date when time is not a factor. 
  If this approach is chosen, you will need to use the INDEXFILE option 
  to extract the DDL for the index creation, or to re-run the import with 
  INDEXES=Y and ROWS=N.
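
  A sketch of the two-pass approach (file names are illustrative 
  assumptions):

  imp system/manager file=full.dmp full=y indexes=n
  imp system/manager file=full.dmp full=y indexfile=create_idx.sql

  Then edit and run create_idx.sql once the data load is complete; or, 
  instead of using INDEXFILE, re-run with INDEXES=Y ROWS=N.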

REMEMBER THE RULE OF THUMB: import should take a minimum of 2 to 2.5 
times as long as the export.

    

Large Imports of LOB Data: 
--------------------------

Generally speaking, a good formula for determining a target elapsed 
time for a table import versus the elapsed time for the table export is: 
  import elapsed time = export elapsed time x 4

- Eliminate indexes.  This affects total import time significantly. 
  The existence of LOB data requires special consideration: the LOB 
  locator has a primary key that cannot be explicitly dropped or 
  ignored during the import process.

- Make certain that sufficient space, in sufficiently large contiguous 
  chunks, is available to complete the data load.  The following should 
  provide an accurate picture of the space available in the target 
  tablespace (MAILINDX is the example tablespace name): 
  alter tablespace mailindx coalesce; 
  select bytes 
  from dba_free_space 
  where tablespace_name = 'MAILINDX' 
  order by bytes desc;


Large Imports of LONG Data:

---------------------------

Importing a table with a LONG column may cause a higher rate of I/O and disk 

utilization than importing a table without a LONG column.  

There are no parameters available within IMP to help optimize the import of these 

data types.

                                                                                                         
