How to reduce DB File Sequential Read waits :)
stripe the data across multiple disks?
db file sequential read is caused by INDEXED reads, by the way -- it is single-block
IO, not the result of a full scan. So.... you may well be looking in the
wrong place. The p1, p2, p3 info in v$session_wait can be used to determine the
actual object being waited on. A sequential read is usually a single-block
read, although it is possible to see sequential reads for more than one block
(see P3). This wait may also be seen for reads from datafile headers (P2=1
indicates a file header read).
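As a sketch of that lookup (the SID here is a placeholder for the session you are investigating), you can read p1/p2/p3 from v$session_wait and then map the file and block numbers to a segment via dba_extents:

```sql
-- What is session 123 waiting on right now?
-- For db file sequential read: p1 = file#, p2 = block#, p3 = blocks
SELECT event, p1 AS file#, p2 AS block#, p3 AS blocks
FROM   v$session_wait
WHERE  sid = 123;

-- Map the file#/block# pair to the object being read
-- (note: scanning dba_extents can be slow on databases with many extents)
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = :file#
AND    :block# BETWEEN block_id AND block_id + blocks - 1;
```

Remember the special case from above: if p2 = 1, the read is against a datafile header rather than an ordinary segment block.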
Block reads are fairly inevitable, so the aim should be to minimise unnecessary
IO. This is best achieved by good application design and efficient execution
plans. Changes to execution plans can yield orders-of-magnitude improvements in
performance; tweaking at the system level usually only achieves percentage gains.
The following points may help:
* Check for SQL using unselective index scans
* A larger buffer cache can (not will, "might") help
* A less obvious issue that can affect IO rates is how well the data is physically clustered. For example, assume you frequently fetch rows from a table where a column is between two values via an index range scan. If there are 100 rows in each index block, then the two extremes are:
1. Each of the table rows is in a different physical block (100 table blocks need to be read for each index block).
2. The table rows are all located in a few adjacent blocks (a handful of blocks need to be read for each index block).
Pre-sorting or re-organising the data can help tackle this in severe situations.
* See if partitioning can be used to reduce the amount of data you need to look at.
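The clustering situation described above can be measured from the dictionary: an index's CLUSTERING_FACTOR close to the table's block count corresponds to the well-clustered case (2), while a value close to the table's row count corresponds to the scattered case (1). A sketch, assuming a hypothetical table named ORDERS with current statistics:

```sql
-- Compare clustering_factor against the table's blocks and num_rows
SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows
FROM   user_indexes i
JOIN   user_tables  t ON t.table_name = i.table_name
WHERE  i.table_name = 'ORDERS';
```

Indexes whose clustering factor sits near NUM_ROWS are the ones most likely to drive heavy db file sequential read activity during range scans.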
From the "ITPUB blog", link: http://blog.itpub.net/26706/viewspace-64591/. If reposting, please credit the source; otherwise legal liability may be pursued.