
Learning expdp!

Original post, Linux category, by nmgzw, 2019-06-23 20:27:05

While working with expdp, I ran into a job in dba_datapump_jobs that could not be cleared. I found the following article on Metalink and am reposting it here for study.


Subject: HOW TO CLEANUP ROWS IN DBA_DATAPUMP_JOBS FOR STOPPED EXP/IMP JOBS WHEN DUMPFILE IS NOT THERE OR CORRUPTED
Doc ID: Note 294618.1    Type: PROBLEM
Last Revision Date: 03-JAN-2005    Status: MODERATED

The information in this document applies to:

Oracle Server - Enterprise Edition - Version: 10.1.0.2 to 10.1.0.3
This problem can occur on any platform.

Errors

ORA-31640 unable to open dump file %s for read
ORA-39000 bad dump file specification
ORA-39002 invalid operation

Symptoms

We started a Data Pump job and stopped it partway through. The dump file was then removed from the directory location. We are not able to attach to the job, and we are not able to remove the row from dba_datapump_jobs; the row for the job stays there forever.

Cause

When we run the Data Pump utility for export or import, it runs as a job. This job can be seen in dba_datapump_jobs during its lifetime. If we stop the job partway through (or the job aborts for some reason), the job remains in dba_datapump_jobs until we resume it or kill it. To kill or complete the job we need to attach to it.

For a successful attach, the uncorrupted dump file must be in the proper location; otherwise we will not be able to attach.
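Before attempting an attach, the job's current state can be checked with a simple query (a sketch using the documented columns of dba_datapump_jobs):

```sql
-- Sketch: list Data Pump jobs and their states before attaching
SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs
ORDER  BY owner_name, job_name;
```

A stopped job shows up with STATE = 'NOT RUNNING' and no attached sessions, as demonstrated later in this note.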


SQL> drop directory MYDIR;
Directory dropped.

SQL> create directory MYDIR as 'D:\DP';
Directory created.

SQL> grant read, write on directory MYDIR to public;
Grant succeeded.


-- Start the export job from one command prompt session

D:\>expdp bh/sh DIRECTORY=MYDIR DUMPFILE=e.dmp LOGFILE=e.log JOB_NAME=BHEXP
Export: Release 10.1.0.2.0 - Production on Thursday, 30 December, 2004 16:25
Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity.
Starting "BH"."BHEXP": bh/******** DIRECTORY=MYDIR DUMPFILE=e.dmp LOGFILE=e.log JOB_NAME=BHEXP
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
ORA-39014: One or more workers have prematurely exited.
Job "BH"."BHEXP" stopped due to fatal error at 16:26

D:\>

-- Attach to the running job and stop it from another command prompt session


D:\>expdp bh/sh attach=BHEXP

Export: Release 10.1.0.2.0 - Production on Thursday, 30 December, 2004 16:25

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options

Job: BHEXP
Owner: BH
Operation: EXPORT
Creator Privs: FALSE
GUID: 45DD2D9C04D8457C874C4AF0ADC93E6E
Start Time: Thursday, 30 December, 2004 16:25
Mode: SCHEMA
Instance: orcl
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND bh/******** DIRECTORY=MYDIR DUMPFILE=e.dmp LOGFILE=e.log JOB_NAME=BHEXP
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY MYDIR
LOG_FILE_NAME e.log
TABLE_CONSISTENCY 0
USER_METADATA 1
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\DP\E.DMP
bytes written: 4,096

Worker 1 Status:
State: EXECUTING

Export> stop_job
Are you sure you wish to stop this job ([y]/n): y

D:\>


Now we see the stopped job in dba_datapump_jobs.

SQL> select * from dba_datapump_jobs;

OWNER_NAME JOB_NAME OPERATION JOB_MODE STATE DEGREE ATTACHED_SESSIONS
---------- ---------- ---------- ---------- -------------- ------ -----------------
BH BHEXP EXPORT SCHEMA NOT RUNNING 0 0

We can attach to this job and resume or kill it.
While attaching, Data Pump refers to the job's master table.
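As an alternative to the expdp client, the same attach-and-kill can be done from PL/SQL with the DBMS_DATAPUMP API (a sketch using the job name and owner from this example; note that ATTACH is subject to the same dump-file check described in this note):

```sql
DECLARE
  h NUMBER;
BEGIN
  -- Attach to the stopped job (equivalent of "expdp ... attach=BHEXP")
  h := DBMS_DATAPUMP.ATTACH(job_name => 'BHEXP', job_owner => 'BH');

  -- Stop the job; keep_master => 0 drops the master table
  -- (the equivalent of the interactive KILL_JOB command), so the
  -- row disappears from dba_datapump_jobs.
  DBMS_DATAPUMP.STOP_JOB(handle => h, immediate => 1, keep_master => 0);
END;
/
```

Passing keep_master => 1 instead would retain the master table so the job can be restarted later, mirroring STOP_JOB in the interactive client.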

Master Table
------------
While the data and metadata are being transferred, a master table is used to track the progress
within a job. The master table is implemented as a user table within the database. The specific
function of the master table for export and import jobs is as follows:

For export jobs, the master table records the location of database objects within a dump file
set. Export builds and maintains the master table for the duration of the job. At the end of
an export job, the content of the master table is written to a file in the dump file set.

For import jobs, the master table is loaded from the dump file set and is used to control the
sequence of operations for locating objects that need to be imported into the target database.

The master table is created in the schema of the current user performing the export or import
operation. Therefore, that user must have sufficient tablespace quota for its creation. The name of
the master table is the same as the name of the job that created it. Therefore, you cannot explicitly
give a Data Pump job the same name as a preexisting table or view. For all operations, the information
in the master table is used to restart a job.

The master table is either retained or dropped, depending on the circumstances, as follows:
o Upon successful job completion, the master table is dropped.
o If a job is stopped using the STOP_JOB interactive command, the master table is retained for
use in restarting the job.
o If a job is killed using the KILL_JOB interactive command, the master table is dropped and the
job cannot be restarted.
o If a job terminates unexpectedly, the master table is retained. You can delete it if you do not
intend to restart the job.
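Given this behaviour, a retained master table can be located by matching dba_datapump_jobs against dba_objects (a sketch; it simply joins the two documented views to show the table object backing each stopped job):

```sql
-- Sketch: match NOT RUNNING jobs to their master table objects
SELECT o.status, o.object_type,
       o.owner || '.' || o.object_name AS master_table
FROM   dba_objects o, dba_datapump_jobs j
WHERE  o.owner       = j.owner_name
AND    o.object_name = j.job_name
AND    j.state       = 'NOT RUNNING';
```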


A sql_trace / 10046 trace also confirms that the master table (BHEXP in this case) is referenced during attach. The master table records the location (directory) and name of the partially written dump file, and if that file is missing or corrupted we will NOT be able to attach at all.

D:\DP>dir

Directory of D:\DP

12/30/2004 04:25p 4,096 E.DMP
12/30/2004 04:59p 1,100 e.log
2 File(s) 5,196 bytes

D:\DP>rename e.dmp ee.dmp

D:\DP>dir

Directory of D:\DP

12/30/2004 04:59p 1,100 e.log
12/30/2004 04:25p 4,096 ee.dmp

D:\>expdp bh/sh attach=BHEXP

Export: Release 10.1.0.2.0 - Production on Thursday, 30 December, 2004 17:13

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "D:\DP\e.dmp" for read
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

Now the definition of dba_datapump_jobs is:

SELECT j.owner_name, j.job_name, j.operation, j.job_mode, j.state, j.workers,
       NVL((SELECT COUNT(*)
              FROM SYS.GV$DATAPUMP_SESSION s
             WHERE j.job_id = s.job_id
             GROUP BY s.job_id), 0)
  FROM SYS.GV$DATAPUMP_JOB j
UNION ALL /* Not Running - Master Tables */
SELECT u.name, o.name, SUBSTR(c.comment$, 24, 30), SUBSTR(c.comment$, 55, 30),
       'NOT RUNNING', 0, 0
  FROM sys.obj$ o, sys.user$ u, sys.com$ c
 WHERE SUBSTR(c.comment$, 1, 22) = 'Data Pump Master Table'
   AND RTRIM(SUBSTR(c.comment$, 24, 30)) IN
         ('EXPORT', 'ESTIMATE', 'IMPORT', 'SQL_FILE', 'NETWORK')
   AND RTRIM(SUBSTR(c.comment$, 55, 30)) IN
         ('FULL', 'SCHEMA', 'TABLE', 'TABLESPACE', 'TRANSPORTABLE')
   AND o.obj# = c.obj#
   AND o.type# = 2
   AND u.user# = o.owner#
   AND NOT EXISTS (SELECT 1
                     FROM SYS.GV$DATAPUMP_JOB
                    WHERE owner_name = u.name
                      AND job_name = o.name)
/


This view refers to the dynamic instance views and also to sys.obj$, sys.user$, and sys.com$.
So every time we query it, it will list the stopped job, and the row persists across instance restarts.

Fix

The master table is a normal table in the schema of the user who initiated the export/import. If the dump file is still available, we can place it back in the correct location and attach.

D:\DP>rename ee.dmp e.dmp

D:\>expdp bh/sh attach=BHEXP

Export: Release 10.1.0.2.0 - Production on Thursday, 30 December, 2004 17:29

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options

Job: BHEXP
Owner: BH
Operation: EXPORT
Creator Privs: FALSE
GUID: 45DD2D9C04D8457C874C4AF0ADC93E6E
Start Time: Thursday, 30 December, 2004 17:29
Mode: SCHEMA
Instance: orcl
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND bh/******** DIRECTORY=MYDIR DUMPFILE=e.dmp LOGFILE=e.log JOB_NAME=BHEXP
DATA_ACCESS_METHOD AUTOMATIC
ESTIMATE BLOCKS
INCLUDE_METADATA 1
LOG_FILE_DIRECTORY MYDIR
LOG_FILE_NAME e.log
TABLE_CONSISTENCY 0
USER_METADATA 1
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: D:\DP\e.dmp
bytes written: 4,096

Worker 1 Status:
State: UNDEFINED

Export> kill_job
Are you sure you wish to stop this job ([y]/n): y

D:\>

SQL> select * from dba_datapump_jobs;
no rows selected

If we do not have the dump file, or the dump file is corrupted, then the only way left is to drop the job's master table from the schema.

SQL> conn bh/sh
Connected.
SQL> drop table bhexp purge;

SQL> select * from dba_datapump_jobs;
no rows selected
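For the general case, the same cleanup can be sketched as a PL/SQL loop (assumes DBA privileges; double-check each job first, since a NOT RUNNING job with a retained master table may have been stopped deliberately for a later restart):

```sql
-- Sketch: drop the master tables of stranded NOT RUNNING jobs
BEGIN
  FOR r IN (SELECT owner_name, job_name
            FROM   dba_datapump_jobs
            WHERE  state = 'NOT RUNNING'
            AND    attached_sessions = 0) LOOP
    EXECUTE IMMEDIATE 'DROP TABLE "' || r.owner_name || '"."' ||
                      r.job_name || '" PURGE';
  END LOOP;
END;
/
```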


Source: ITPUB博客, http://blog.itpub.net/25702/viewspace-464247/ (please credit the source when reposting).
