If an application often selects rows based on an expression, a function-based index (an index on a function) can help avoid full table scans. A function-based index is built on an expression: an arithmetic expression, a built-in function, or a user-defined expression.
Let's look at an example:
SQL> SELECT sal FROM emp
WHERE sal + comm * (sal - 100) < 1500;
Without a function-based index, Oracle performs a full table scan for this query.
Create the function-based index:
SQL> CREATE INDEX calc_on_emp ON emp (sal + comm * (sal - 100));
Enable query rewrite so the optimizer can use the function-based index:
SQL> ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
SQL> ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
Check that the function-based index is used:
SQL> ANALYZE INDEX calc_on_emp COMPUTE STATISTICS;
SQL> set autotrace on explain;
SQL> SELECT sal FROM emp
WHERE sal + comm * (sal - 100) < 1500;
Execution Plan
-------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
(Cost=10 Card=21690 Bytes=108450)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
(Cost=10 Card=21690 Bytes=108450)
2 1 INDEX (RANGE SCAN) OF 'CALC_ON_EMP'
(NON-UNIQUE) (Cost=2 Card=21690)
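Conceptually, the function-based index precomputes the expression for every row and stores the results sorted alongside row pointers, so a range predicate becomes a binary search instead of a scan of every row. The following Python sketch illustrates the idea only (it is not how Oracle stores indexes; the sample rows and values are made up):

```python
import bisect

# Hypothetical emp rows: (sal, comm)
rows = [(1000, 50), (2000, 10), (500, 0), (3000, 100)]

def expr(sal, comm):
    # The indexed expression: sal + comm * (sal - 100)
    return sal + comm * (sal - 100)

# "Index build": precompute the expression for every row and keep the
# results sorted, remembering which row each value came from (like a ROWID).
index = sorted((expr(sal, comm), i) for i, (sal, comm) in enumerate(rows))
keys = [k for k, _ in index]

# "Range scan": binary-search the sorted keys instead of evaluating the
# expression against every row (the full-table-scan alternative).
def sal_where_expr_less_than(limit):
    cut = bisect.bisect_left(keys, limit)
    return [rows[i][0] for _, i in index[:cut]]

print(sal_where_expr_less_than(1500))  # → [500]
```

The payoff is the same as in the execution plan above: the work is proportional to the number of matching rows, not the size of the table.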
Here is a more advanced example
We use an application that deletes records from a fact table containing in excess of 20 million rows.
It takes the following form:
DELETE measurements
WHERE last_modified_by
LIKE '%#'||'&1'||'#'||'&2'||'#%';
This process currently takes two to three hours each time it runs. Is there a way we can speed this up? Note that &1 is of fixed length and &2 is of varying length and mixed case. The character in front of &1 is #.
Let's say the &1 is after the first # and the &2 is after the second, up to the third. Then I would recommend a function-based index (an index on a function—a capability added in Oracle8i). You can make an index on some function of the last_modified_by column that will allow you to find all of the rows you want to delete via the index—as opposed to the full scan that is occurring now. The index you want to create would take the form:
create index t_idx on
t(substr(last_modified_by,
instr(last_modified_by,'#',1,1)+1,
instr(last_modified_by,'#',1,3)-
instr(last_modified_by,'#',1,1)-1 ))
/
Index created.
If I'm off on the placement of the #'s, you just need to adjust the fourth parameter to instr in the above—I'm looking for the first and third ones. The index you create will pick off just the fields you are interested in—in effect, it will index &1#&2 for you. You would then create a view to delete from, so that you can avoid typos in the function.
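To see what the SUBSTR/INSTR expression actually indexes, here is a small Python sketch that mimics Oracle's 1-based INSTR and SUBSTR semantics and applies the same arithmetic (the sample last_modified_by value is hypothetical):

```python
def instr(s, sub, pos=1, occurrence=1):
    """Oracle-style INSTR: 1-based position of the n-th occurrence, 0 if absent."""
    idx = pos - 1
    for _ in range(occurrence):
        idx = s.find(sub, idx)
        if idx == -1:
            return 0
        idx += 1  # move past this occurrence; also converts to 1-based on return
    return idx

def substr(s, start, length):
    """Oracle-style SUBSTR with 1-based start and explicit length."""
    return s[start - 1:start - 1 + length]

def indexed_value(last_modified_by):
    # The indexed expression: everything between the first '#' and the third '#',
    # i.e. '&1#&2'.
    return substr(last_modified_by,
                  instr(last_modified_by, '#', 1, 1) + 1,
                  instr(last_modified_by, '#', 1, 3)
                  - instr(last_modified_by, '#', 1, 1) - 1)

print(indexed_value('someuser#amper1_data#amper2_data#trailing'))
# → amper1_data#amper2_data
```

That extracted string is exactly what the equality predicate on idx_col compares against in the delete below.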
create or replace view t_view
as
select t.*,
substr(last_modified_by,
instr(last_modified_by,'#',1,1)+1,
instr(last_modified_by,'#',1,3)-
instr(last_modified_by,'#',1,1)-1 ) idx_col
from t
/
View created.
This new view will use the index once the necessary session or system parameters are set (they may be set in your init.ora to make them the default):
alter session set QUERY_REWRITE_ENABLED=TRUE;
Session altered.
alter session set QUERY_REWRITE_INTEGRITY=TRUSTED;
Session altered.
set autotrace on explain
delete /*+ index( t_view t_idx ) */
from t_view
where idx_col = 'amper1_data#amper2_data';
1 row deleted.
Execution Plan
--------------------------------------------
0 DELETE STATEMENT Optimizer=CHOOSE (Cost=1 Card=1
1 0 DELETE OF 'T'
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=1
3 2 INDEX (RANGE SCAN) OF 'T_IDX' (NON-
UNIQUE)
SQL> set autotrace off
This process can make use of an index and, if the rows to be deleted are a small fraction of the 20 million rows present, will definitely speed up the operation.
Source: ITPUB blog, http://blog.itpub.net/936/viewspace-60590/. Please cite the source when reposting.