
ORACLE 10G OCA 042 笔记(二)

Original post by zhengbao_jun, 2009-02-25 08:16:41

************$8 Managing Consistency and Concurrency*********************************
Oracle 10g retains the undo data needed to reconstruct data that has been modified since a transaction began. Oracle's GUI tool, EM Database Control, makes undo management and sizing easy. The Undo Advisor collects statistics that help the administrator size the undo tablespace so that a transaction's DML statements can run to completion while SELECT statements do not return "snapshot too old" errors.
Purposes of undo: rolling back changes to the database, supporting read consistency, supporting Flashback Query, and database recovery (rolling back uncommitted data).
A database can have more than one undo tablespace, but only one can be active at a time.
sql>select * from v$rollname;  -- lists the names of the currently active undo segments
The V$TRANSACTION dynamic performance view shows the relationship between a transaction and its undo segment.
for example:
sql>set transaction name 'update clerk salaries';
sql>update hr.employees set salary = salary * 1.25 where job_id like '%CLERK';
SQL>select xid,status,start_time,xidusn seg_num,r.name seg_name from v$transaction t join v$rollname r on t.xidusn=r.usn where t.name='update clerk salaries';
Note: an INSERT statement uses almost no space in the undo tablespace; only pointers to the new rows are stored there.
Read consistency can also be applied to an entire transaction, as shown here:
sql>set transaction read only;
Until the transaction is rolled back or committed, all queries in it see only changes that other transactions committed before it began. In other words, a read-only transaction cannot contain any statement that modifies table data, regardless of where the table resides; an ALTER USER statement, for example, is not permitted because it modifies the data dictionary.
sql>alter system set undo_management = auto scope=spfile; -- or MANUAL
sql>alter system set undo_tablespace=undo_atch;
sql>alter system set undo_retention = 43200; -- in seconds; 43200 = 12 hours
The UNDO_RETENTION parameter:
Undo information falls into three categories: uncommitted undo, committed undo, and expired undo.
 
 
Monitoring the Undo Tablespace
In OEM (the Oracle 10g chart shows the longest query in minutes; possibly a display issue), use the Undo Advisor.
SQL>SELECT to_char(begin_time,'yyyy-mm-dd hh24:mi:ss') starttime,to_char(end_time,'yyyy-mm-dd hh24:mi:ss') endtime,undoblks,maxquerylen maxqrylen from v$undostat;
 
 
Sizing the Undo Tablespace Manually Using V$UNDOSTAT
undo_tablespace_size = UR * UPS * blocksize  (UR = UNDO_RETENTION in seconds; UPS = peak undo blocks generated per second, from UNDOBLKS)
For example: undo_tablespace_size = 43200 * 626 * 8192 bytes ≈ 206 GB, then add 10-20% headroom.
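The manual arithmetic above can be computed directly from the views; a sketch (the inline-view shapes are an assumption, and V$UNDOSTAT must already contain a representative peak workload):

```sql
-- Required undo size in GB = UNDO_RETENTION (seconds)
--   * peak undo blocks per second * DB block size.
SELECT (ur * ups * dbs) / 1024 / 1024 / 1024 AS undo_size_gb
FROM (SELECT value AS ur FROM v$parameter WHERE name = 'undo_retention'),
     (SELECT undoblks / ((end_time - begin_time) * 86400) AS ups
      FROM   v$undostat
      WHERE  undoblks = (SELECT MAX(undoblks) FROM v$undostat)),
     (SELECT value AS dbs FROM v$parameter WHERE name = 'db_block_size');
```

Each V$UNDOSTAT row covers a ten-minute interval, so `(end_time - begin_time) * 86400` converts the interval to seconds before deriving the per-second rate.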
 
 
Guaranteeing Undo Retention
sql>alter tablespace undotbs1 retention guarantee;
sql>alter tablespace undotbs1 retention noguarantee;
sql>select tablespace_name,contents,retention from dba_tablespaces;
 
 
Monitoring Locks and Resolving Lock Conflicts
Lock modes (the book's wording is hard to follow; revisit later):
ROW SHARE
ROW EXCLUSIVE
SHARE
SHARE ROW EXCLUSIVE
EXCLUSIVE
 
 
SQL>LOCK table hr.employees,hr.departments in EXCLUSIVE MODE;
sql>select * from hr.employees where manager_id = 100 for update;
sql>lock table hr.employees in share row exclusive mode nowait;
 
 
Detecting lock conflicts: OEM -> Performance -> Instance Locks is very intuitive; explore it yourself. Kill Session handles blockers (and deadlocks).
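Blocking sessions can also be located and terminated from SQL*Plus; a sketch (the SID and SERIAL# values in the KILL SESSION command are hypothetical):

```sql
-- Find sessions that are blocked and which session is blocking them.
SELECT sid, serial#, blocking_session, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- Terminate the blocker; '142,3' is a made-up SID,SERIAL# pair.
ALTER SYSTEM KILL SESSION '142,3';
```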
 
 

Deadlock example
Time   Session 1                                        Session 2
11:29  update employees set salary=salary*1.2           update employees set manager=100
       where employee_id=102;                           where employee_id=190;
11:44  update employees set salary=salary*1.2           update employees set manager=100
       where employee_id=190;                           where employee_id=102;
11:45  ORA-00060: Deadlock detected while               still waiting; its statement completes only
       waiting for resource                             after session 1 commits or rolls back
After receiving ORA-00060, session 1 should commit or roll back.
 
 
************$9 Proactive Database Maintenance and Performance Monitoring**************************
New features in Oracle 10g:
Automatic Workload Repository(AWR)
Automated Database Diagnostic Monitoring (ADDM)
Oracle 10g Tuning and Diagnostic advisor
DBAs monitor system management and performance problems in two ways:
reactive monitoring (反应性监视)
proactive monitoring (前瞻性监视)
Oracle gathers data from many sources (the data dictionary, dynamic performance views, the operating system, and so on) and also makes extensive use of cost-based optimizer statistics in its proactive monitoring.
AWR: two background processes cooperate to collect the data, MMON (memory monitor) and MMNL (memory monitor light). MMON wakes up every 60 minutes, gathers data from the data dictionary views, dynamic performance views, and the optimizer, and stores it in the AWR.
The tables that store this data (the AWR) are owned by SYSMAN and reside in the SYSAUX tablespace.
Enabling the AWR feature: set the STATISTICS_LEVEL parameter to BASIC, TYPICAL, or ALL.
Changing the snapshot interval:
sql>execute dbms_workload_repository.modify_snapshot_settings(interval=>60,retention=>43200); -- both in minutes: 1-hour interval, 30-day retention
sql>begin DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(14400,60); end; -- positional: retention 14400 minutes (10 days), interval 60 minutes
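The settings changed above can be confirmed afterward; a quick check:

```sql
-- Show the current AWR snapshot interval and retention period.
SELECT snap_interval, retention
FROM   dba_hist_wr_control;
```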
 
 
After each AWR snapshot completes, ADDM automatically analyzes the collected statistics and compares them with those of the two previous AWR snapshots. The analysis results are stored in the AWR.
ADDM recommends possible corrective actions; its goal is to minimize DB time. DB time = CPU time + wait time, accumulated across all non-idle database user sessions.
 
 

Setting the OEM interface language:
set NLS_LANG = american_america.ZHS16GBK
SET nls_lang =SIMPLIFIED CHINESE_CHINA.ZHS16GBK
 
 
Browse OEM regularly; I do not yet know how to interpret some of the charts.
 
 
Viewing the ADDM Analysis Using Data Dictionary Views
ADDM Data Dictionary Views
View Name :Description
DBA_ADVISOR_FINDINGS :Describes the findings identified by the ADDM analysis
DBA_ADVISOR_OBJECTS : Describes the objects that are referenced in the ADDM findings and recommendations
DBA_ADVISOR_RECOMMENDATIONS:Describes the recommendations made based on ADDM findings
DBA_ADVISOR_RATIONALE :Describes the rationale behind each ADDM finding
 
 
The following SQL statement shows a sample query on the DBA_ADVISOR_FINDINGS data dictionary view that identifies the type of performance problem that is causing the most impact
on the database:
SQL> SELECT task_id, type, message FROM dba_advisor_findings WHERE impact = (SELECT MAX(impact) FROM dba_advisor_findings);
 
 
The TASK_ID value returned by the query on DBA_ADVISOR_FINDINGS can then be used against DBA_ADVISOR_OBJECTS. A query on that view, using the TASK_ID of 164 returned by the ADDM session with the greatest potential database impact, returns the SQL statements shown here:
sql> SELECT attr4 FROM dba_advisor_objects WHERE task_id = 164;
 
 
The following query shows the recommendations for correcting the performance issues associated with TASK_ID 164, which was identified earlier as being the costliest database activity:
SQL> SELECT TRIM(attr1) ATTR1, TRIM(attr2) ATTR2, TRIM(attr3) ATTR3 FROM dba_advisor_actions WHERE task_id = 164;
 
 
The following example shows a sample query on the DBA_ADVISOR_RATIONALE view using the TASK_ID of 164 identified earlier:
SQL> SELECT message FROM dba_advisor_rationale WHERE task_id = 164;
 
 
ADDM Advisors (to test)
- SQL Tuning Advisor
- SQL Access Advisor
- Memory Advisor
- Mean Time To Recover Advisor
- Segment Advisor
- Undo Management Advisor
 
 
ADDM Alerts
To test: sending alert notifications by e-mail...
To test: collecting baseline metrics...
How ADDM computes thresholds: the AWR keeps a high value and a low value for each metric. The computation depends on the comparison operator specified for the threshold and on those high and low values.
 
 
Performance Monitoring:
A DBA should accurately measure how the system is currently performing (baseline metrics and so on) before beginning any tuning effort.
Sources of Tuning Information
- The alert log
- Background and user trace files
- Dynamic performance views
- Data dictionary views
The alert log: The Oracle alert log records informational and error messages for a variety of activities that have occurred against the database during its operation.
Background and user trace files: Oracle trace files are text files that contain session information for the process that created them.
Dynamic performance views: V$SGASTAT, V$EVENT_NAME, V$SYSTEM_EVENT, V$SESSION_EVENT, V$SESSION_WAIT, V$STATNAME, V$SYSSTAT, V$SESSTAT, V$SESSION, V$WAITSTAT
OEM makes extensive use of these views, so I will not go through them one by one.
Data dictionary views: DBA_TABLES, DBA_INDEXES, INDEX_STATS, DBA_DATA_FILES, DBA_SEGMENTS, DBA_HISTOGRAMS, DBA_OBJECTS
Identifying Unusable Objects Using the Data Dictionary
SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';
ALTER PACKAGE my_package COMPILE;
ALTER PACKAGE my_package COMPILE BODY;
ALTER PROCEDURE my_procedure COMPILE;
ALTER FUNCTION my_function COMPILE;
ALTER TRIGGER my_trigger COMPILE;
ALTER VIEW my_view COMPILE;
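The individual ALTER ... COMPILE statements above can also be generated in bulk from DBA_OBJECTS; a sketch:

```sql
-- Generate a COMPILE statement for every invalid PL/SQL object and view.
-- Caveat: invalid PACKAGE BODY objects need ALTER PACKAGE ... COMPILE BODY instead.
SELECT 'ALTER ' || object_type || ' ' || owner || '.' || object_name || ' COMPILE;' AS fix_sql
FROM   dba_objects
WHERE  status = 'INVALID'
AND    object_type IN ('PACKAGE', 'PROCEDURE', 'FUNCTION', 'TRIGGER', 'VIEW');
```

Spool the output and run it as a script to recompile everything in one pass.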
 
 
SQL> SELECT owner, index_name, index_type FROM dba_indexes WHERE status = 'UNUSABLE';
SQL> ALTER INDEX hr.job_id_pk REBUILD;
OEM makes it easy to find and rebuild unusable indexes. The rebuild can run offline or online, and you can also specify a scratch tablespace for storing intermediate results.
 
 
Storing Database Statistics in the Data Dictionary
Some columns in the DBA views are not populated with data until the table or index referenced
by the view is analyzed.
The cost-based optimizer (CBO) uses these statistics to formulate efficient execution plans
for each SQL statement that is issued by application users.
Because of this, the statistics gathered and stored in the data dictionary views are sometimes
called optimizer statistics.
Automatic Collection of Statistics: databases created by DBCA collect statistics on a schedule by default.
Manual Collection of Statistics (not recommended; a scheduled job already exists)
OEM: omitted.
Manually Gathering Statistics Using DBMS_STATS: you can back up old statistics so they can be restored later, run the analysis in parallel to gather statistics faster, and so on...
GATHER_TABLE_STATS GATHER_INDEX_STATS GATHER_SCHEMA_STATS GATHER_DATABASE_STATS
For example, the following shows how the DBMS_STATS package can be used to gather statistics on the PRODUCT_HISTORY table in the SH schema:
SQL> EXECUTE DBMS_STATS.GATHER_TABLE_STATS('SH','PRODUCT_HISTORY');
The presence of accurate optimizer statistics has a big impact on two important measures of overall system performance: throughput and response time.
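A slightly fuller sketch of the same call, using named parameters (the DEGREE and sampling choices here are illustrative, not from the original notes):

```sql
-- Gather table statistics with automatic sampling, in parallel,
-- and cascade to the table's indexes as well.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SH',
    tabname          => 'PRODUCT_HISTORY',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => 4,       -- parallel degree for the analysis
    cascade          => TRUE);   -- also gather index statistics
END;
/
```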
 
 
Important Performance Metrics
Throughput is another example of a statistical performance metric. Throughput is the amount of processing that a computer or system can perform in a given amount of time.
Throughput is an important measure when considering the scalability of the system. Scalability refers to the degree to which additional users can be added to the system without
system performance declining significantly.
Another important metric related to performance is response time. Response time is the amount of time that it takes for a single user’s request to return the desired result when using
an application
Telling ADDM about the server's I/O capability:
Run an I/O test; our disk subsystem defaults to a value of 10 milliseconds.
 
 
Summary
Oracle 10g provides many tools for proactively identifying and fixing potential performance
and management problems in the database. At the core of the monitoring system is the Automatic
Workload Repository (AWR), which uses the MMON background process to gather statistics
from the SGA and store them in a collection of tables owned by the user SYSMAN.
Following each AWR statistics collection interval, the Automatic Database Diagnostic Monitoring
(ADDM) feature examines the newly gathered statistics and compares them with the
two previous AWR statistics to establish baselines in an attempt to identify poorly performing
components of the database. The ADDM then summarizes these findings on the EM Database
Control main and Performance screens. Using these screens, you can identify and examine the
SQL statements that are contributing the most to DB Time. You can further explore the opportunities
for improving the performance or manageability of your database using the EM Database
Control advisors, which include the SQL Tuning Advisor, SQL Access Advisor, Memory
Advisor, Mean Time To Recover Advisor, Segment Advisor, and Undo Management Advisor.
Using the SQL Tuning Advisor, you can identify the SQL statements that have had the greatest
performance impact on the database. You can then examine these statements using the SQL
Access Advisor to determine if adjustments can be made to improve the execution paths for
these statements and therefore minimize their impact on total DB Time.
The Memory Advisor suggests changes that can potentially improve Oracle’s use of memory
within the SGA and PGA.
The Mean Time To Recover Advisor helps you determine if your database is properly configured
to meet service-level agreements for instance recovery in the event of a server failure or
an instance crash.
The Segment Advisor helps you determine which segments are using excess storage space and
which might benefit from a shrink operation. Shrinking these segments not only frees storage
space for use by other segments, but also minimizes the number of physical I/Os required to
access the segments.
Using the Undo Management Advisor, you can monitor and manage undo segments to minimize
the likelihood of ORA-01555 (snapshot too old) errors and improve the application's
overall read consistency.
You can also configure ADDM alerts to notify you via the EM Database Control or e-mail
whenever the performance of the database varies from established baselines or target levels.
Available storage space, excessive wait times, and high I/O activity are all examples of events
that you can monitor using alerts.
In addition to EM Database Control, you can find indicators of database performance in the
database Alert log, user and background trace files, data dictionary views, and dynamic performance
views. Some data dictionary views do not contain accurate information about the
segments in the database until after statistics are collected on those objects. Therefore, you can
automatically collect segment statistics through the use of EM Database Control jobs.
Invalid and unusable database objects also have a negative impact on performance and manageability.
You can monitor and repair invalid and unusable objects using the data dictionary
and the EM Database Control Administration screen.
EM Database Control summarizes several important performance metrics on the EM Database
Control main screen. These metrics include performance statistics for the host server, user
sessions, and instance throughput.
 
 
************$10 Implementing Database Backups************************************
 
 
Implementing Database Backups
As a database administrator, your primary goal is to keep the database open and available
for users, usually 24 hours a day, 7 days a week. Your partnership with the server’s system
administrator includes the following tasks:

- Proactively solving common causes of failures
- Increasing the mean time between failures (MTBF)
- Ensuring a high level of hardware redundancy
- Increasing availability by using Oracle options such as Real Application Clusters (RAC) and Oracle Streams (an advanced replication technology)
- Decreasing the mean time to recover (MTTR) by setting the appropriate Oracle initialization parameters and ensuring that backups are readily available in a recovery scenario
- Minimizing or eliminating loss of committed transactions by using archived redo logs, standby databases, and Oracle Data Guard
RAC, Streams, Data Guard, and standby databases are beyond the scope of this
book, but are covered in more detail in advanced Oracle courseware.
 
 
Understanding and Configuring Recovery Components
The control files maintain the list of database files in the database, along with a record of the most recent database backups (if you are using RMAN for your backups). The checkpoint (CKPT) background process works in concert with the database writer (DBWn) process to manage the amount of time required for instance recovery; during instance recovery, the redo log files are used to synchronize the datafiles. For more serious types of failures such as media failures, archived redo log files are applied to a restored backup copy of a datafile to synchronize the datafiles and ensure that no committed transactions are lost. Finally, the Flash Recovery area, new to Oracle 10g , is a common area for all recovery-related files that makes your job much easier when backing up or recovering your database.
 
 
Control Files
The control file records, among other things: the current log sequence number (a unique identifier that is incremented and recorded when an online redo log file is switched), tablespace names, and the most recent checkpoint information.
When you add a new file to the database or relocate a file, an Oracle server process immediately updates the information in the control file. Back up the control file after any structural
changes. The log writer (LGWR) process updates the control file with the current log sequence number. CKPT updates the control file with the recent checkpoint information. When the database is in ARCHIVELOG mode, the archiver (ARCn) processes update the control file with information such as the archive log filename and log sequence number.
Multiplexing Control Files Using an SPFILE
1. Alter the SPFILE while the database is still open:
SQL> ALTER SYSTEM SET CONTROL_FILES =
'/ora01/oradata/MYDB/ctrlMYDB01.ctl',
'/ora02/oradata/MYDB/ctrlMYDB02.ctl',
'/ora03/oradata/MYDB/ctrlMYDB03.ctl',
'/ora04/oradata/MYDB/ctrlMYDB04.ctl' SCOPE=SPFILE;
This parameter change takes effect only after the next instance restart by using the
SCOPE=SPFILE qualifier. The contents of the binary SPFILE are changed immediately, but
the old specification of CONTROL_FILES is used until the instance is restarted.
2. Shut down the database:
SQL> SHUTDOWN NORMAL
3. Copy an existing control file to the new location:
$ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora04/oradata/MYDB/ctrlMYDB04.ctl
4. Start the instance:
SQL> STARTUP
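After the restart, the new control file locations can be verified; a quick check:

```sql
-- All four multiplexed copies should now be listed.
SELECT name FROM v$controlfile;
```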
 
 
Checkpoints
During a checkpoint, CKPT updates the control file and the header of the datafiles to reflect the last successful transaction by recording the last system change number (SCN). The SCN, which is a number sequentially assigned to each transaction in the database, is also recorded in the control file against the datafile name that is taken offline or made read-only.
A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and datafiles.
Advancing the checkpoint reduces the MTTR.
 
 
Redo Log File
The LGWR process writes redo information from the redo log buffer to the online redo log files under a variety of circumstances:
- When a user commits a transaction, even if this is the only transaction in the log buffer.
- When the redo log buffer becomes one-third full.
- When the buffer contains approximately 1MB of changed records. This total does not include deleted or inserted records.
If you have a RAC configuration (one database with multiple instances), each instance has its own online redo thread writing to its own set of online redo log files, so Oracle must keep track of which instance a database change came from.
Log switches: The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as the active log files. The other log files are known as inactive.
Tip: Keep the redo log files on a separate disk for better performance. If you have to store a datafile on the same disk as a redo log file, do not put the SYSTEM, UNDOTBS, SYSAUX, or any very active data or index tablespace file on this disk. A commit cannot complete until the transaction's information has been written to the redo logs, so maximizing the throughput of the redo log files is a top priority.
The size of the redo log affects the checkpoint performance. If the size of the redo log is smaller compared with the number of transactions, a log switch occurs often, and so does the checkpoint.
You can adjust checkpoints primarily by using the initialization parameter FAST_START_MTTR_TARGET.
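The parameter mentioned above can be set and its effect observed; a sketch (60 seconds is an illustrative target, not a recommendation):

```sql
-- Target an instance recovery time of 60 seconds.
ALTER SYSTEM SET fast_start_mttr_target = 60 SCOPE = BOTH;

-- Compare the target with Oracle's current estimate of recovery time.
SELECT target_mttr, estimated_mttr
FROM   v$instance_recovery;
```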
 
 
CREATE DATABASE "MYDB01"
. . .
LOGFILE
GROUP 1 ('/ora02/oradata/MYDB01/redo0101.log',
'/ora03/oradata/MYDB01/redo0102.log') SIZE 10M,
GROUP 2 ('/ora02/oradata/MYDB01/redo0201.log',
'/ora03/oradata/MYDB01/redo0202.log') SIZE 10M;
 
 
ALTER DATABASE ADD LOGFILE
GROUP 3 ('/ora02/oradata/MYDB01/redo0301.log',
'/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
 
 
ALTER DATABASE ADD LOGFILE
('/ora02/oradata/MYDB01/redo0301.log',
'/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
 
 
ALTER DATABASE ADD LOGFILE
'/ora02/oradata/MYDB01/redo0301.log' REUSE;
 
 
ALTER DATABASE ADD LOGFILE MEMBER
'/ora04/oradata/MYDB01/redo0203.log' TO GROUP 2;
 
 
ALTER DATABASE DROP LOGFILE GROUP 3;
 
 
ALTER DATABASE DROP LOGFILE MEMBER
'/ora04/oradata/MYDB01/redo0203.log';
 
 
ALTER DATABASE CLEAR LOGFILE GROUP 3;
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- take a full database backup immediately afterward, because the unarchived redo log is no longer usable for recovery
 
 
Archived Redo Log Files
Setting the Archive Destination
LOG_ARCHIVE_DEST_n = "null_string" | ((SERVICE = tnsnames_name | LOCATION = 'directory_name') [MANDATORY | OPTIONAL] [REOPEN [= integer]])
For example: LOG_ARCHIVE_DEST_1 = ((LOCATION='/archive/MYDB01') MANDATORY REOPEN = 60)
LOG_ARCHIVE_DEST_2 = (SERVICE=STDBY01) OPTIONAL REOPEN;
LOG_ARCHIVE_MIN_SUCCEED_DEST: the minimum number of destinations that must be written to successfully.
LOG_ARCHIVE_FORMAT:
%s Log sequence number
%t Thread number
%r Resetlogs ID: ensures uniqueness even after using advanced recovery techniques that resets the log sequence numbers
%d Database ID
Setting ARCHIVELOG mode:
ALTER DATABASE ARCHIVELOG;
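The database must be mounted, not open, for that statement to succeed; a sketch of the full sequence:

```sql
-- Switch the database into ARCHIVELOG mode.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST   -- confirm "Database log mode: Archive Mode"
```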
 
 
The Flash Recovery Area
Flash Recovery Area Occupants
Control files, archived log files, flashback logs, control file and SPFILE autobackups, datafile copies, and RMAN backup sets.
You must define two initialization parameters to set up the Flash Recovery area: DB_RECOVERY_FILE_DEST_SIZE and DB_RECOVERY_FILE_DEST.
SQL> alter system set db_recovery_file_dest_size = 8g scope=both;
SQL> alter system set db_recovery_file_dest = '/OraFlash' scope=both;
 
 

Performing Backups
Your backup strategy depends on the activity of your database, the level of availability required by your service-level agreements (SLAs), and how much downtime you can tolerate during a recovery effort.
Understanding Backup Terminology (some important concepts)
Whole backups and partial backups are known as Oracle backup strategies. The backup type can be divided into two general categories: full backups and incremental backups. Depending on whether you make your database backups when the database is open or closed, backups can be further categorized into the backup modes known as consistent and inconsistent backups.
Strategies: whole backups and partial backups
Types: full backups and incremental backups
Modes: consistent and inconsistent backups
Whole database A whole database backup includes all datafiles and at least one control file.Online redo log files are never backed up;
Partial database A partial database backup includes zero or more tablespaces, which in turn includes zero or more datafiles;a control file is optional in a partial database backup.
Full A full backup includes all blocks of every datafile backed up in a whole or partial database backup.
Incremental An incremental backup makes a copy of all data blocks that have changed since a previous backup.Oracle 10g supports five levels of incremental backups, from 0 to 4.
(Some books appear to say only levels 0 and 1 are used.)
Consistent A consistent backup, also known as an offline backup, is performed while the database is not open.
Inconsistent Although the term inconsistent backup may sound like something you might avoid in a database, it is a way to maintain availability of the database while performing backups.The backup is inconsistent because the SCN in the control file is most likely out of synch with the SCN in the header of the datafiles.
 
 
Backing Up the Control File
The trace backup is created in the directory specified by the initialization parameter USER_DUMP_DEST, and its filename has the format sid_ora_pid.trc.
SQL> alter database backup controlfile to trace;
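Besides the readable trace copy, a binary copy of the control file can be made; a sketch (the destination path is hypothetical):

```sql
-- Binary control file backup; REUSE overwrites an existing copy.
ALTER DATABASE BACKUP CONTROLFILE TO '/ora01/backup/control.bkp' REUSE;
```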
 
 
Using RMAN to Create Backups
RMAN is not used to back up the online redo log files, the password file, or a text-based init.ora initialization file.
 
 
Configuring RMAN Backup Settings(oem,略)
Creating Full and Incremental Backups
The Oracle recommended backup strategy uses RMAN to make a one-time whole-database,baseline incremental level 0 online backup weekly, and then a level 1 incremental backup for the other days of the week. You can easily fine-tune this strategy for your own needs by making, for example, a level 2 incremental backup at noon during the weekdays if heavy DML (Data Manipulation Language) is occurring in the database.
RMAN> backup incremental level 0 as compressed backupset database;
RMAN> backup incremental level 1 as compressed backupset database;
Test this.
 
 
Another variation on the strategy is to create an incrementally updated backup.
Script from the book:
run
{
recover copy of database with tag 'inc_upd_img';
backup incremental level 1 for
recover of copy with tag 'inc_upd_img' database;
}
Terminology: COPY OF
Applies incremental backups to the specified image copy to roll it forward to any time equal to or before the most recent incremental backup of the file...... in conjunction with the BACKUP... FOR RECOVER OF COPY syntax.
 
 
An incrementally updated backup uses an incremental backup and updates the changed blocks in an existing image copy as if the entire image copy were backed up.
Experimenting with this feature in OEM:
1. Take a LEVEL 0 full backup (as a backup set).
2. Take a whole-database backup as an image copy (AS COPY):
rman> backup device type disk tag '%TAG' database include current controlfile; -- OEM was already set to default to AS COPY; I forgot to set the tag. A full image copy of the database now exists.
3. Change the default back to backup sets, take a LEVEL 1 backup, and check the box to also update the most recent copy...
4. Done!
The script OEM ran for step 3:
backup incremental level 1 cumulative device type disk tag '%TAG' database include current controlfile;
recover copy of database;
backup device type disk tag '%TAG' archivelog all not backed up;
 
 

I tested incrementally updating a single datafile (it kept failing; to be fixed later):
RMAN> backup as copy datafile 4 tag 'users';
------- modify some data ------- commit
RMAN> run {
recover copy of datafile 4 with tag 'users';
backup incremental level 1 for
recover of copy with tag 'users' datafile 4;
}
?? Copy not found? An error?
 
 

Managing Backups--Catalog Maintenance
Catalog Additional Files Adds any image copy backups made outside RMAN to the RMAN catalog.
Crosscheck All Double-checks the backup files listed in the catalog against the actual files on disk to make sure that they are all available.
Delete All Obsolete Deletes all backup files not needed to satisfy the existing retention policy.
Delete All Expired Deletes the catalog entry for any backups not found when a crosscheck was performed.
 
 
From eygle's blog: how Oracle 10g frees flash_recovery_area space and resolves the ORA-19815 error.
To free flash_recovery_area space, use the RMAN CROSSCHECK and DELETE EXPIRED commands.
sys as sysdba>SELECT substr(name, 1, 30) name, space_limit AS quota,
space_used AS used,
space_reclaimable AS reclaimable,
number_of_files AS files
FROM v$recovery_file_dest;
Log in with RMAN and run the crosscheck:
RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
Oracle notes that running the following command:
RMAN> backup recovery area;
to back up the flash recovery area contents to tertiary media also resolves the problem.
Noted for the record.
 
 
Freeing space (files that no longer satisfy the retention policy):
rman>delete noprompt obsolete  -- obsolete files
SQL> select name, space_limit max_size,
space_used used, space_reclaimable obsolete,
number_of_files num_files
from v$recovery_file_dest;
************$11 Implementing Database Recovery*************************************
 
 
Implementing Database Recovery
In a nutshell, your job is to increase the mean time between failures (MTBF) by providing redundant components where possible and leveraging other Oracle high-availability features such as Real Application Clusters (RAC) and Streams (an advanced replication technology).
 
 
The six general categories of failures are as follows:
Statement: A single database operation fails, such as a DML (Data Manipulation Language) statement (INSERT, UPDATE, and so on).
User process: A single database connection fails.
Network: A network component between the client and the database server fails, and the session is disconnected from the database.
User error: An error message is not generated, but the operation's result, such as dropping a table, is not what the user intended.
Instance: The database instance fails unexpectedly.
Media: One or more of the database files is lost, deleted, or corrupted.
 
 
Statement Failures
Although granting user privileges or additional quotas within a tablespace solves many of
these problems, also consider whether there are any gaps in the user education process that
might lead to some of these problems in the first place.
 
 
User Process Failures
The PMON (process monitor) background process periodically checks all user processes to ensure that the session is still connected.
.......... training users to terminate the application gracefully before shutting down their workstation.
 
 
Network Failures
To guard against these kinds of failures, you can provide redundant network paths from your clients to the server, as well as additional listener connections on the Oracle server and redundant network cards on the server.
 
 
User Error Failures
If a COMMIT has already been performed, you have a number of options at your disposal, such as using data in the undo tablespace for a Flashback Query or using data in the archived and online redo logs with the LogMiner utility, available as a command-line or GUI interface.
Test the GUI version of LogMiner later.
 
 
Instance Failures
An instance failure occurs when the instance shuts down without synchronizing all the database files to the same system change number (SCN), requiring a recovery operation the next time the instance is started
If the cause of the instance failure is related to an Oracle background process failure, you can use the alert log and process-specific trace files to debug the problem.
The EM Database Control makes it easy to review the contents of the alert log and any other alerts generated right before the point of failure.
 
 
Media Failures
 
 
Understanding Instance Startup
SHUTDOWN
NOMOUNT
MOUNT
OPEN
You will have to use the dynamic performance view V$RECOVER_FILE to display a list of all files that need attention.
SQL> select file#, error from v$recover_file;
SQL> select file#, name from
v$datafile join v$recover_file using (file#);
 
 
Recovering from Instance Failure
Instance recovery occurs in two distinct phases: the first phase uses the online redo log files to restore the datafiles to the state before instance failure in a roll forward operation; after this step is completed, Oracle uses the undo tablespace to roll back any uncommitted transactions. The roll forward operation includes data in the undo tablespace; without a consistent undo tablespace, the roll back operation cannot succeed.
The amount of time required for instance recovery depends on how long it takes to bring the datafiles up-to-date from the last checkpoint position to the latest SCN in the control file.
To prevent performance problems, the distance between the checkpoint position and the end of the redo log group cannot be more than 90 percent of the size of the redo log group.
You can tune instance recovery by setting an MTTR target, in seconds, using the initialization parameter FAST_START_MTTR_TARGET.
 
 
Recovering from User Errors
Using Flashback Query
Make sure that there is an undo tablespace in the database that is large enough to retain changes made by all users for a specified period of time.
Specify how long the undo information will be retained for use by flashback queries by using the initialization parameter UNDO_RETENTION.
SQL> select employee_id, last_name, email
from hr.employees
as of timestamp (systimestamp - interval '15' minute)
where employee_id = 101;
>select systimestamp from dual; -- check the current system time
You can just as easily specify an absolute time of day to retrieve the contents of the row at that time, as in this example:
SQL> select employee_id, last_name, email
from hr.employees
as of timestamp
(to_timestamp ('01-Sep-04 16:18:57.845993','DD-Mon-RR HH24:MI:SS.FF'))
where employee_id = 101;
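Closely related, though not covered in the original notes, is a Flashback Version Query, which lists every committed version of a row in a time window; a sketch:

```sql
-- Show each committed version of employee 101 over the last 15 minutes,
-- including which operation (I/U/D) produced it.
SELECT versions_starttime, versions_operation, last_name, email
FROM   hr.employees
       VERSIONS BETWEEN TIMESTAMP (systimestamp - INTERVAL '15' MINUTE)
                    AND systimestamp
WHERE  employee_id = 101;
```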
 
 

Using Flashback Drop and the Recycle Bin
The recycle bin, new to Oracle 10g, is implemented as a data dictionary table.
A dropped object still belongs to its owner and still counts against the owner's quota in the tablespace; in fact, the table itself is still directly accessible from the recycle bin, as you will see in subsequent examples.
Retrieving Dropped Tables from the Recycle Bin
SQL> flashback table order_items to before drop;
SQL> drop table order_items;
SQL> flashback table order_items to before drop rename to order_items_old_version;
Restrictions:
Only non-SYSTEM locally managed tablespaces can have a recycle bin.
A table’s dependent objects are saved in the recycle bin when the table is dropped, except for bitmap join indexes, referential integrity constraints (foreign key constraints), and
materialized view logs.
Indexes are protected only if the table is dropped first; explicitly dropping an index does not
place the index into the recycle bin.
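The recycle bin itself can be inspected and emptied; a sketch:

```sql
-- List this user's dropped objects (SHOW RECYCLEBIN does the same in SQL*Plus).
SELECT object_name, original_name, type, droptime
FROM   recyclebin;

-- Permanently remove one dropped table, or empty the bin entirely.
PURGE TABLE order_items;
PURGE RECYCLEBIN;
```

Purged objects can no longer be recovered with FLASHBACK TABLE ... TO BEFORE DROP.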
 
 
Using Flashback Table
Flashback Table is different from Flashback Drop; Flashback Table undoes recent transactions to an existing table, whereas Flashback Drop recovers a dropped table.
Flashback Table uses data in the undo tablespace, whereas Flashback Drop uses the recycle bin.
To be able to flashback a table, you must enable row movement for the table.Because DML operations are used to bring the table back to its former state, the ROWIDs in the table change.
For example, suppose rows have been deleted from two tables:
SQL> alter table hr.employees enable row movement;
SQL> alter table hr.departments enable row movement;
SQL> flashback table hr.employees, hr.departments
to timestamp systimestamp - interval '15' minute;
Integrity constraints are not violated when one or more tables are flashed back; this is why you typically group tables related by integrity constraints or parent-child relationships in the
FLASHBACK TABLE command
The SQL shown in OEM:
FLASHBACK TABLE HR.EMPLOYEES, HR.JOBS, HR.DEPARTMENTS, HR.LOCATIONS TO TIMESTAMP TO_TIMESTAMP('2004-09-12 01:15:25 PM','YYYY-MM-DD HH:MI:SS AM');
 
 
Using LogMiner
LogMiner extracts all DDL and DML activity from the redo log files for viewing via the dynamic performance view V$LOGMNR_CONTENTS.
LogMiner works differently from Oracle’s Flashback Query feature. The Flashback Query feature allows a user to see the contents of a table at a specified time in the past; LogMiner can
search a time period for all DDL against the table. A Flashback Query uses the undo information stored in the undo tablespace; LogMiner uses redo logs, both online and archived. Both
tools can be useful for tracking down how and when changes to database objects took place.
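A minimal command-line sketch of a LogMiner session (the redo log path is hypothetical; DICT_FROM_ONLINE_CATALOG reads the dictionary from the open database):

```sql
-- Register a redo log and start a LogMiner session.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u08/oradata/ord/redo01.log', options => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Mine the captured statements, then end the session.
SELECT username, sql_redo, sql_undo FROM v$logmnr_contents;
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```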
 
 
Command-line configuration and use of LogMiner omitted here; I have notes on it from before.
Where is the graphical GUI tool? I could not find it. A forum user says it ships with Grid Control; the standalone console does not have it.
 
 
SQL> alter system set control_files = '/u02/oradata/ord/control01.ctl','/u06/oradata/ord/control02.ctl' scope = spfile;
Recovering from Loss of a Redo Log File
The dynamic performance view V$LOGFILE provides the status of each member of each redo
log file member of each redo log group; the STATUS column is defined as follows:
INVALID: The file is corrupted or missing.
STALE: This redo log file member is new and has never been used.
DELETED: The file is no longer being used.
(blank): The redo log file is in use and is not corrupted.
 
 

1. Verify which redo log file group member is missing.
2. Archive the log file group’s contents; if you clear this log file group before archiving it, you must back up the full database to ensure maximum recoverability of the database in the case
of the loss of a datafile. Use the command ALTER SYSTEM ARCHIVE LOG GROUP groupnum; to force the archive operation.
3. Clear the log group to re-create the missing redo log file members using the command ALTER DATABASE CLEAR LOGFILE GROUP groupnum;.
SQL> select * from v$logfile order by group#;
SQL> ! rm /u08/oradata/ord/redo01.log
SQL> select * from v$logfile order by group#;
SQL> alter system archive log group 1;
SQL> alter database clear logfile group 1; -- re-creates the missing member
SQL> select * from v$logfile order by group#;
 
 
Recovering from Loss of a System-Critical Datafile
The loss of a system-critical datafile in NOARCHIVELOG mode requires complete restoration of the database, including the control files and all datafiles, not just the missing datafiles.
The recovery of a system-critical datafile in ARCHIVELOG mode cannot proceed while the database is open; recovery must be performed while the database is in the MOUNT state.
 
 
Recovering from Loss of a Non–System-Critical Datafile
As with a system-critical datafile, the loss of a non–system-critical datafile in NOARCHIVELOG mode requires complete restoration of the database, including the control files and all datafiles, not just the missing datafiles.
The loss of a non–system-critical datafile in ARCHIVELOG mode affects only objects that are in the missing file, and recovery can proceed while the rest of the database is online.

Source: ITPUB blog, http://blog.itpub.net/235507/viewspace-557663/
