The ALTER SESSION accelerates the subsequent SELECT (during a full table scan, the system then reads 128 instead of only 8 or 32 blocks for each I/O request from the disk). Since the parameter affects the cost accounting, it can only be set to 128 here, and not generally.
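As a sketch, assuming the parameter in question is Oracle's db_file_multiblock_read_count (the note does not quote the statement itself), the session-level setting could look like this:

```sql
-- Hypothetical example: raise the multiblock read count for this session only,
-- so a full table scan reads up to 128 blocks per I/O request.
ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT = 128;

-- A subsequent full table scan in the same session benefits from the setting
-- (table name is an arbitrary example):
SELECT COUNT(*) FROM mara;
```

Because the change is made with ALTER SESSION rather than ALTER SYSTEM, it affects only the current session and does not distort the cost calculation for other sessions.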
As of Release 4.6C, you can use transaction TAANA to determine the value distribution of columns.
Caution: A large number of blocks are imported with these methods, which can increase the I/O load. Do not execute these actions on a large scale when the system has a heavy load.
d) Index fragmentation
To prevent a large number of block accesses being caused by a fragmented index, check the index fragmentation as described in Note 771929.
e) Source of the SQL statement
Using the report name and the source code button in the shared cursor cache, you can navigate to the call point in the ABAP code. If it is still unclear in which context the source code runs, a where-used list can provide further information.
Information on the triggering transaction can also be determined by capturing a current query in SM66 or by a permanent SQL trace restricted to the table using ST05.
If no report name is displayed in the shared cursor cache, the statement was called from a tool or script outside the R/3 system. However, a number of external tools can be identified by a separate name.
If branching using a source code button fails, you may have dynamically generated source code.
If several reports execute an identical statement, the report that first executed the statement is always displayed.
You cannot branch to the source code for statements that originated in the R/3 kernel.
f) Block accesses for a specific statement execution
To test alternative access plans or partial accesses of a join, SQL statements can be executed at Oracle level. To ensure that it is easy to find their key performance indicators (such as the number of buffer gets and disk reads for each execution) in the shared cursor cache, you should add a unique and easily identifiable comment to the statement to be executed:
SELECT /* */ ...
Now you can easily search for the statement by using the search term in the Shared Cursor Cache.
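As a sketch (the comment text and table are arbitrary examples, not taken from this note), the tagged statement and the subsequent search could look like this:

```sql
-- Execute the statement with a unique, easily identifiable comment:
SELECT /* MY_TEST_0815 */ COUNT(*) FROM vbak;

-- Afterwards, locate its key performance indicators in the shared cursor
-- cache (V$SQL) by searching for that comment:
SELECT sql_text, executions, buffer_gets, disk_reads
  FROM v$sql
 WHERE sql_text LIKE '%MY_TEST_0815%';
```

Because the comment makes the SQL text unique, the statement receives its own entry in the shared cursor cache instead of being merged with existing, identical statements.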
g) Access paths for modified SQL statements
In various cases it can be useful to determine the access path for a modified SQL statement, for instance:
- Effect of more or fewer OR concatenations or IN list entries
- Effect of changed Oracle parameters
- Determining the costs for partial statements
You can use the "Enter SQL statement" button to determine the access path for any statement in transaction ST05. However, you can only change dynamic Oracle parameters with newer releases and Support Packages.
16. How can I optimize a processing-intensive SQL statement?
Before carrying out technical tuning of a processing-intensive SQL statement, you should always check from the application view to what extent the SQL statement is useful at all. For example, if the SQL statement is the result of an inadequately filled input template, a poorly qualified SE16 SELECT (report /1BCDWB/DB*), an incorrect query (report AQ*), or a design error in a customer-specific development, the statement must be optimized from a non-technical point of view or avoided completely. If the statement comes from the standard SAP system, a search for SAP Notes may show that a general solution is already available.
Below is a general overview of classes of expensive SQL statements and possible technical tuning approaches:
- Accesses to objects of the Oracle DDIC (DBA views)
Queries on DBA views are often very processing-intensive. For statements on DBA views, it does not make sense to optimize the access by creating new indexes or specifying hints.
These accesses are usually caused by monitoring tools (transaction RZ20, BR*TOOLS, external monitoring tools). You must therefore first determine the initializing tool and check to what extent the query can be completely deactivated or the frequency of the execution can be reduced.
If long runtimes and a high number of BUFFER GETs occur during DBA_SEGMENT access in particular, then refer to Note 871455.
- Accesses within the F4 search help (M_V* views)
Search help runtimes may vary depending on the selection criteria. Since not all possible queries can be optimized to the same extent, it is important to restrict the optimization to the most common query constellations. Furthermore, you can implement a &SUBSTITUTE LITERALS& hint in accordance with Note 1008433. In some cases, this hint results in optimal accesses even if there are no histograms.
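In Open SQL, such a hint is passed with the %_HINTS addition; a minimal sketch assuming the syntax from Note 1008433 (table, fields, and conditions are hypothetical):

```abap
* Hypothetical example: request literal substitution for this query so the
* optimizer can evaluate the actual selection values even without histograms.
SELECT matnr FROM mara INTO TABLE lt_matnr
  WHERE mtart = 'FERT'
  %_HINTS ORACLE '&SUBSTITUTE LITERALS&'.
```

With literal substitution, the optimizer sees the concrete values instead of bind variables and can therefore judge the selectivity of the conditions more accurately.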
- Large number of executions
If an SQL statement is processing intensive, primarily due to the large number of executions, you must check whether the number of executions can be reduced in the application.
If the statement in question contains a FOR ALL ENTRIES expression, it may also be possible to increase the blocking factors as described in Note 881083.
- Large number of processed rows for each execution
If an SQL statement reads a very large number of data records with each execution, you must check in the application whether the scope of the dataset can be reduced (for example, by specifying more conditions in the WHERE part).
- Large number of buffer gets in each execution
Check if the use of resources by Oracle is optimal. Consider the following issues:
- Does the Oracle Optimizer fail to choose the current best possible access path?
If a detailed analysis of the SQL statement means that Oracle does not decide on the optimal access path, you should first check whether the CBO statistics correctly reflect the current dataset. To make sure, you can generate new statistics that are as accurate as possible (Note 588668). You should also ensure that a statistical run is scheduled at least once a week as described in Note 132861.
If this corrects the access path and no side effects occur, you can also create statistics as described in Note 756335 for tables that, according to Note 122718, do not usually receive any statistics.
If the access path is still not optimal with good statistics, this is often due to "features" of the RBO and CBO, which are described in Note 176754. In such cases you can improve Oracle's access by using hints (see Note 130480) or by adjusting the statistics values (see Note 724545).
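A database hint in Open SQL is specified with the %_HINTS addition described in Note 130480; a minimal sketch (table, fields, and the index name are hypothetical):

```abap
* Hypothetical example: force the access via a specific index of table VBAK.
SELECT vbeln FROM vbak INTO TABLE lt_vbeln
  WHERE erdat = sy-datum
  %_HINTS ORACLE 'INDEX("VBAK" "VBAK~Z01")'.
```

Hints should be a last resort: they fix the access path permanently and must be rechecked after release upgrades or significant data changes.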
Additional reasons for an incorrect access path are the wrong Oracle parameter settings or the use of an obsolete patch set. For more information, see Note 618868.
To understand the CBO decisions better, also refer to Note 750631, which contains approximation formulas for the cost accounting.
Under certain conditions, the displayed access path is also incorrect because Oracle performs another access internally. For more information, see Note 723879.
- Can the access be optimized by creating an additional index?
If the existing selective conditions are not sufficiently supported by an index, a corresponding index can be created for the optimization or - if no side effects are possible - an existing index can be adjusted.
Note that in the R/3 system, some central tables are accessed using special index tables (Note 185530 for SD, Note 191492 for MM/WM, and Note 187906 for PP/PM). In the cases described, the access should be performed using an index table, so that no additional index must be created. The same applies to accesses on the table BSEG, which is accessed using index tables such as BSIS, BSAS, BSID, and BSAD.
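Where a new index is justified, it is created at the database level (in practice via the ABAP Dictionary so that it is known to the R/3 system); a purely illustrative example with hypothetical table and column names:

```sql
-- Hypothetical example: support a selective WHERE condition on the columns
-- mandt and kunnr of the customer table ztab with an additional index.
CREATE INDEX "ZTAB~Z01" ON ztab (mandt, kunnr);
```

Every additional index increases the cost of INSERT, UPDATE, and DELETE operations on the table, which is why adjusting an existing index can be preferable when no side effects are possible.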
- Is the number of buffer gets inexplicably high?
If possible, roughly calculate how many blocks must be read for the statement. A maximum of 5 block accesses are required per execution for each INDEX UNIQUE SCAN (or, more specifically: BLEVEL of index + 1).
If a significantly greater number of blocks is read with INDEX RANGE SCANs than your estimates would lead you to expect, this may be due to index fragmentation. For more information, see Note 771929.
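The BLEVEL needed for such an estimate can be read from the Oracle dictionary; for example (the index name is an arbitrary placeholder):

```sql
-- An INDEX UNIQUE SCAN requires at most BLEVEL + 1 block accesses.
SELECT index_name, blevel, leaf_blocks
  FROM dba_indexes
 WHERE index_name = 'VBAK~0';
```

LEAF_BLOCKS is also useful here: for an INDEX RANGE SCAN, the number of leaf blocks that match the condition, plus the table blocks accessed via ROWID, gives a rough upper bound for the expected buffer gets.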
If changes are made to data in the same block while it is being accessed, the consistent read mechanism and its associated access to CR blocks ("Consistent Read") may lead to significantly increased block accesses. Refer to Note 913247 for more detailed information.
Another possible reason for a large number of Buffer Gets is a high volume of network communication between the database and SAP. This effect is usually negligible, however.
When you use LIKE conditions, a placeholder at the beginning or in the middle of the search pattern may cause a greater number of buffer gets, since Oracle can only make optimal use of placeholders that are at the end of the search pattern.
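The difference can be illustrated with two contrasting conditions (table and search term are arbitrary examples):

```sql
-- A trailing placeholder allows an index range scan on the search column:
SELECT COUNT(*) FROM makt WHERE maktx LIKE 'PUMP%';

-- A leading placeholder cannot be supported by the index in the same way;
-- Oracle must scan far more index or table blocks, causing more buffer gets:
SELECT COUNT(*) FROM makt WHERE maktx LIKE '%PUMP';
```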
If you execute mass-data operations ("... FROM TABLE" expressions in the ABAP statement), the underlying SQL statement is executed once for each data record of the internal table, while the system records only a single execution in the shared cursor cache. This results in an apparently very high number of block accesses for each execution.
In individual cases, the high number of buffer gets is caused by the actual run schedule differing from the displayed run schedule. This can be checked as described in Note 723879.
- Large number of disk reads per execution
A high number of disk reads is often associated with full table scans, because blocks read during a full table scan are displaced from the buffer pool more quickly than blocks that were read during an index access. In such a case, you must use appropriate measures to force an index access (create a new index, adjust the statistics, hints, and so on).
A large number of disk reads often occurs in connection with index range scans and subsequent table accesses with ROWID if the clustering factor of the index is high. For more information, see Note 832343.
To avoid blocks of a critical table being displaced from the buffer pool after some time and later having to be read from disk again, you can set up a KEEP pool as described in Note 762808.
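At Oracle level, the assignment to the KEEP pool is made with an ALTER TABLE statement; a sketch with a hypothetical table name:

```sql
-- Hypothetical example: assign a critical table to the KEEP buffer pool so
-- that its blocks are not displaced by other segments (see Note 762808).
ALTER TABLE ztab STORAGE (BUFFER_POOL KEEP);
```

The KEEP pool must first be sized and activated via the corresponding Oracle buffer cache parameters; assigning more segments than the pool can hold defeats its purpose.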
If the application needs to read a lot, or all, of the entries of a table repeatedly, and the table is not too large, consider buffering the table completely in the SAP system. You must ensure that the delayed SAP buffer synchronization will cause no problems for the application, and that the table's change rate is not too high.
Otherwise the same applies during the analysis of statements with a high number of disk reads as for statements with a high number of buffer gets.
- High CPU consumption per execution
The usual cause of a high CPU load is a large number of BUFFER GETs. Note 712624 contains analysis options and other typical reasons for increased CPU consumption.
- High elapsed time per execution
In many cases, a high elapsed time that cannot be explained by DISK READs or BUFFER GETs is caused by lock situations, especially enqueues (refer to Note 745639). If necessary, you can use ORADEBUG (refer to Note 613872) to create a trace of the statement in question to obtain more information.