
Page 1: Oracle Statspack Survival

Oracle Statspack Survival   Guide

STATSPACK is a performance diagnosis tool, available since Oracle8i. STATSPACK can be considered BSTAT/ESTAT’s successor, incorporating many new features. STATSPACK is a diagnosis tool for instance-wide performance problems; it also supports application tuning activities by providing data which identifies high-load SQL statements. STATSPACK can be used both proactively to monitor the changing load on a system, and also reactively to investigate a performance problem.

Remember to set timed_statistics to true for your instance. Setting this parameter provides timing data which is invaluable for performance tuning.
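For example, you can enable it for the running instance and, if you use an spfile, persist it across restarts (a sketch; adjust SCOPE to your pfile/spfile setup):

```sql
-- enable timing data for the running instance
ALTER SYSTEM SET timed_statistics = TRUE;

-- make it permanent when using an spfile
ALTER SYSTEM SET timed_statistics = TRUE SCOPE = SPFILE;
```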

The «more is better» approach is not always better!

The single most common misuse of STATSPACK is the «more is better» approach. Often STATSPACK reports spans hours or even days. The times between the snapshots (the collection points) should, in general, be measured in minutes, not hours and never days.

The STATSPACK reports we like are from 15-minute intervals during a busy or peak time, when the performance is at its worst. That provides a very focused look at what was going wrong at that exact moment in time. The problem with a very large STATSPACK snapshot window, where the time between the two snapshots is measured in hours, is that the events that caused serious performance issues for 20 minutes during peak processing don’t look so bad when they’re spread out over an 8-hour window. It’s also true with STATSPACK that measuring things over too long a period tends to level them out over time. Nothing will stand out and strike you as being wrong. So, when taking snapshots, schedule them about 15 to 30 minutes (maximum) apart. You might wait 3 or 4 hours between these two observations, but you should always do them in pairs and within minutes of each other.
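One way to automate paired snapshots is a DBMS_JOB that runs statspack.snap at a fixed interval (Oracle ships spauto.sql for an hourly job). The 15-minute interval below is an assumption; pick whatever matches your peak window:

```sql
-- run as PERFSTAT: take a snapshot every 15 minutes
variable jobno number;
begin
  dbms_job.submit(:jobno,
                  'statspack.snap;',
                  sysdate,
                  'sysdate + 15/1440');   -- 15 minutes, expressed in days
  commit;
end;
/
```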

«Having a history of the good times is just as important as having a history of the bad; you need both»

Another common mistake with STATSPACK is to gather snapshots only when there is a problem. That is fine to a point, but how much better would it be to have a STATSPACK report from when things were going well to compare with when things are bad. A simple STATSPACK report that shows a tremendous increase in physical I/O activity or table scans (long tables) could help you track down that missing index. Or, if you see your soft parse percentage value went from 99% to 70%, you know that someone introduced a new feature into the system that isn’t using bind variables (and is killing you). Having a history of the good times is just as important as having a history of the bad; you need both.


Architecture

To fully understand the STATSPACK architecture, we have to look at the basic nature of the STATSPACK utility. The STATSPACK utility is an outgrowth of the Oracle UTLBSTAT and UTLESTAT utilities, which have been used with Oracle since the very earliest versions.

UTLBSTAT – UTLESTAT

The BSTAT-ESTAT utilities capture information directly from Oracle’s in-memory structures and then compare the information from two snapshots in order to produce an elapsed-time report showing the activity of the database. If we look inside utlbstat.sql and utlestat.sql, we see the SQL that samples directly from the V$SYSSTAT view:

insert into stats$begin_stats select * from v$sysstat;
insert into stats$end_stats   select * from v$sysstat;

STATSPACK

When a snapshot is executed, the STATSPACK software will sample from the RAM in-memory structures inside the SGA and transfer the values into the corresponding STATSPACK tables. These values are then available for comparing with other snapshots.

Note that in most cases, there is a direct correspondence between the v$ view in the SGA and the corresponding STATSPACK table. For example, we see that the stats$sysstat table is similar to the v$sysstat view.

SQL> desc v$sysstat;
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 STATISTIC#                                         NUMBER
 NAME                                               VARCHAR2(64)
 CLASS                                              NUMBER
 VALUE                                              NUMBER
 STAT_ID                                            NUMBER

SQL> desc stats$sysstat;
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 SNAP_ID                                   NOT NULL NUMBER
 DBID                                      NOT NULL NUMBER
 INSTANCE_NUMBER                           NOT NULL NUMBER
 STATISTIC#                                NOT NULL NUMBER
 NAME                                      NOT NULL VARCHAR2(64)
 VALUE                                              NUMBER


It is critical to your understanding of the STATSPACK utility that you realize the information captured by a STATSPACK snapshot is accumulated values. The V$ views start collecting database information at instance startup and continue to add to the values until the instance is shut down. In order to get a meaningful elapsed-time report, you must run a STATSPACK report that compares two snapshots. It is equally important to understand that a report will be invalid if the database is shut down between snapshots. This is because all of the accumulated values will be reset, causing the second snapshot to have smaller values than the first snapshot.
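Because the values are cumulative, any ad-hoc analysis must subtract the begin snapshot from the end snapshot. A sketch of such a delta query (the snap ids 5 and 6 are just example values):

```sql
SELECT e.name,
       e.value - b.value AS delta          -- activity between the two snapshots
FROM   stats$sysstat b,
       stats$sysstat e
WHERE  b.snap_id         = 5
AND    e.snap_id         = 6
AND    b.statistic#      = e.statistic#
AND    b.dbid            = e.dbid
AND    b.instance_number = e.instance_number
AND    e.value - b.value > 0
ORDER  BY delta DESC;
```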

Installing and Configuring STATSPACK

Create PERFSTAT Tablespace

The STATSPACK utility requires an isolated tablespace to hold all of its objects and data. For uniformity, it is suggested that the tablespace be called PERFSTAT, the same name as the schema owner for the STATSPACK tables. It is important to closely watch the STATSPACK data to ensure that the stats$sql_summary table is not taking an inordinate amount of space.
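A quick way to watch that growth is to list the largest segments in the PERFSTAT tablespace (requires DBA privileges):

```sql
SELECT segment_name,
       ROUND(bytes/1024/1024) AS mb
FROM   dba_segments
WHERE  tablespace_name = 'PERFSTAT'
ORDER  BY bytes DESC;
```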

SQL> CREATE TABLESPACE perfstat
     DATAFILE '/u01/oracle/db/AKI1_perfstat.dbf' SIZE 1000M REUSE
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
     SEGMENT SPACE MANAGEMENT AUTO
     PERMANENT
     ONLINE;

Run the Create Scripts

Now that the tablespace exists, we can begin the installation process of the STATSPACK software. Note that you must have performed the following before attempting to install STATSPACK.

Run catdbsyn.sql as SYS
Run dbmspool.sql as SYS

$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus "/ as sysdba"
SQL> start spcreate.sql

Choose the PERFSTAT user's password
-----------------------------------
Not specifying a password will result in the installation FAILING

Enter value for perfstat_password: perfstat

3

Page 4: Oracle Statspack Survival

Choose the Default tablespace for the PERFSTAT user
---------------------------------------------------
Below is the list of online tablespaces in this database which can
store user data. Specifying the SYSTEM tablespace for the user's
default tablespace will result in the installation FAILING, as
using SYSTEM for performance data is not supported.

Choose the PERFSTAT user's default tablespace. This is the tablespace
in which the STATSPACK tables and indexes will be created.

TABLESPACE_NAME                CONTENTS  STATSPACK DEFAULT TABLESPACE
------------------------------ --------- ----------------------------
PERFSTAT                       PERMANENT
SYSAUX                         PERMANENT *
USERS                          PERMANENT

Pressing <return> will result in STATSPACK's recommended default
tablespace (identified by *) being used.

Enter value for default_tablespace: PERFSTAT

Choose the Temporary tablespace for the PERFSTAT user
-----------------------------------------------------
Below is the list of online tablespaces in this database which can
store temporary data (e.g. for sort workareas). Specifying the SYSTEM
tablespace for the user's temporary tablespace will result in the
installation FAILING, as using SYSTEM for workareas is not supported.

Choose the PERFSTAT user’s Temporary tablespace.

TABLESPACE_NAME                CONTENTS  DB DEFAULT TEMP TABLESPACE
------------------------------ --------- --------------------------
TEMP                           TEMPORARY *

Pressing <return> will result in the database's default Temporary
tablespace (identified by *) being used.

Enter value for temporary_tablespace: TEMP

...
Creating Package STATSPACK...

Package created.


No errors.
Creating Package Body STATSPACK...

Package body created.

No errors.

NOTE:
SPCPKG complete. Please check spcpkg.lis for any errors.

Check the Logfiles: spcpkg.lis, spctab.lis, spcusr.lis

Adjusting the STATSPACK Collection Level

STATSPACK has two types of collection options, level and threshold. The level parameter controls the type of data collected from Oracle, while the threshold parameter acts as a filter for the collection of SQL statements into the stats$sql_summary table.
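You can inspect the currently configured level and thresholds in the stats$statspack_parameter table (the column list below is abridged and based on a typical installation; describe the table on your own instance to confirm):

```sql
SELECT snap_level,
       executions_th,
       disk_reads_th,
       buffer_gets_th
FROM   stats$statspack_parameter;
```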

SQL> SELECT * FROM stats$level_description ORDER BY snap_level;

Level 0 This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and Latch information.

Level 5 This level includes capturing high resource usage SQL Statements, along with all data captured by lower levels.

Level 6 This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL Statements, along with all data captured by lower levels.

Level 7 This level captures segment level statistics, including logical and physical reads, row lock, itl and buffer busy waits, along with all data captured by lower levels.

Level 10 This level includes capturing Child Latch statistics, along with all data captured by lower levels.

You can change the default level of a snapshot with the statspack.snap function. Passing i_modify_parameter => 'true' makes the new level the permanent default for all future snapshots.


SQL> exec statspack.snap(i_snap_level => 6, i_modify_parameter => 'true');

Create, View and Delete Snapshots

sqlplus perfstat/perfstat
SQL> exec statspack.snap;
SQL> select name, snap_id, to_char(snap_time,'DD.MM.YYYY:HH24:MI:SS') "Date/Time"
     from stats$snapshot, v$database;

NAME         SNAP_ID Date/Time
--------- ---------- -------------------
AKI1               4 14.11.2004:10:56:01
AKI1               1 13.11.2004:08:48:47
AKI1               2 13.11.2004:09:00:01
AKI1               3 13.11.2004:09:01:48

SQL> @?/rdbms/admin/sppurge;
Enter the Lower and Upper Snapshot ID

Create the Report

sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/spreport.sql

Statspack at a Glance

What if you have this long STATSPACK report and you want to figure out if everything is running smoothly? Here, we will review what we look for in the report, section by section. We will use an actual STATSPACK report from our own Oracle 10g system.

Statspack Report Header

STATSPACK report for

DB Name         DB Id    Instance     Inst Num Release     RAC Host
------------ ----------- ------------ -------- ----------- --- ----------------
AKI1          2006521736 AKI1                1 10.1.0.2.0  NO  akira

              Snap Id     Snap Time      Sessions Curs/Sess Comment
            --------- ------------------ -------- --------- -------------------
Begin Snap:         5 14-Nov-04 11:18:00       15      14.3
  End Snap:         6 14-Nov-04 11:33:00       15      10.2
   Elapsed:               15.00 (mins)


Cache Sizes (end)
~~~~~~~~~~~~~~~~~
    Buffer Cache:        24M   Std Block Size:       4K
Shared Pool Size:       764M       Log Buffer:   1,000K

Note that this section may appear slightly different depending on your version of Oracle. For example, the Curs/Sess column, which shows the number of open cursors per session, is new with Oracle9i (an 8i Statspack report would not show this data).

Here, the item we are most interested in is the elapsed time. We want that to be large enough to be meaningful, but small enough to be relevant (15 to 30 minutes is OK). If we use longer times, we begin to lose the needle in the haystack.

Statspack Load Profile

Load Profile
~~~~~~~~~~~~                     Per Second       Per Transaction
                            ---------------       ---------------
         Redo size:              425,649.84         16,600,343.64
      Logical reads:               1,679.69             65,508.00
      Block changes:               2,546.17             99,300.45
     Physical reads:                  77.81              3,034.55
    Physical writes:                  78.35              3,055.64
         User calls:                   0.24                  9.55
             Parses:                   2.90                113.00
        Hard parses:                   0.16                  6.27
              Sorts:                   0.76                 29.82
             Logons:                   0.01                  0.36
           Executes:                   4.55                177.64
       Transactions:                   0.03

 % Blocks changed per Read:  151.59    Recursive Call %:  99.56
Rollback per transaction %:    0.00       Rows per Sort:  65.61

Here, we are interested in a variety of things, but if we are looking at a “health check”, three items are important:

Hard parses (we want very few of them)

Executes (how many statements we are executing per second / transaction)

Transactions (how many transactions per second we process)

This gives an overall view of the load on the server. In this case, we are looking at a very good hard parse number and a fairly light system load (1 – 4 transactions per second is low).


Statspack Instance Efficiency Percentage

Next, we move onto the Instance Efficiency Percentages section, which includes perhaps the only ratios we look at in any detail:

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:   99.99
            Buffer  Hit   %:   95.39    In-memory Sort %:  100.00
            Library Hit   %:   99.42        Soft Parse %:   94.45
         Execute to Parse %:   36.39         Latch Hit %:  100.00
Parse CPU to Parse Elapsd %:   59.15     % Non-Parse CPU:   99.31

Shared Pool Statistics        Begin   End
                              ------  ------
            Memory Usage %:   10.28   10.45
   % SQL with executions>1:   70.10   71.08
 % Memory for SQL w/exec>1:   44.52   44.70

The three most important are Library Hit, Soft Parse % and Execute to Parse. All of these have to do with how well the shared pool is being utilized. Time after time, we find this to be the area of greatest payback, where we can achieve some real gains in performance.

Here, in this report, we are quite pleased with the Library Hit and the Soft Parse % values. If the Library Hit ratio was low, it could be indicative of a shared pool that is too small, or just as likely, that the system did not make correct use of bind variables in the application. It would be an indicator to look at issues such as those.

OLTP System

The Soft Parse % value is one of the most important (if not the only important) ratio in the database. For a typical OLTP system, it should be as near to 100% as possible. You quite simply do not hard parse after the database has been up for a while in your typical transactional / general-purpose database. The way you achieve that is with bind variables. In a regular system like this, we are doing many executions per second, and hard parsing is something to be avoided.
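As an illustration of the difference (the table and column names below are made up), compare dynamic SQL built from literals with the bind-variable form:

```sql
-- hard parses a new statement for every distinct value (kills OLTP):
execute immediate
  'select ename from emp where empno = ' || to_char(v_empno) into v_ename;

-- parsed once, then shared and reused for every value:
execute immediate
  'select ename from emp where empno = :1' into v_ename using v_empno;
```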

Data Warehouse

In a data warehouse, we would like to generally see the Soft Parse ratio lower. We don’t necessarily want to use bind variables in a data warehouse. This is because they typically use materialized views, histograms, and other things that are easily thwarted by bind variables. In a data warehouse, we may have many seconds between executions, so hard parsing is not evil; in fact, it is good in those environments.


The moral of this is …

… to look at these ratios and look at how the system operates. Then, using that knowledge, determine if the ratio is okay given the conditions. If we just said that the execute-to-parse ratio for your system should be 95% or better, that would be unachievable in many web-based systems. If you have a routine that will be executed many times to generate a page, you should definitely parse once per page and execute it over and over, closing the cursor if necessary before your connection is returned to the connection pool.

Statspack Top 5 Timed Events

Moving on, we get to the Top 5 Timed Events section (in Oracle9i Release 2 and later) or Top 5 Wait Events (in Oracle9i Release 1 and earlier).

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                % Total
Event                                        Waits    Time (s)  Call Time
-------------------------------------- ----------- ----------- ---------
CPU time                                                    122     91.65
db file sequential read                      1,571            2      1.61
db file scattered read                       1,174            2      1.59
log file sequential read                       342            2      1.39
control file parallel write                    450            2      1.39
          -------------------------------------------------------------
Wait Events  DB/Inst: AKI1/AKI1  Snaps: 5-6

-> s  - second
-> cs - centisecond -     100th of a second
-> ms - millisecond -    1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

This section is among the most important and relevant sections in the Statspack report. Here is where you find out what events (typically wait events) are consuming the most time. In Oracle9i Release 2, this section is renamed and includes a new event: CPU time.

CPU time is not really a wait event (hence, the new name), but rather the sum of the CPU used by this session, or the amount of CPU time used during the snapshot window. In a heavily loaded system, if the CPU time event is the biggest event, that could point to some CPU-intensive processing (for example, forcing the use of an index when a full scan should have been used), which could be the cause of the bottleneck.

Db file sequential read - This wait event signifies single-block reads, typically index lookups; see the detailed discussion under Resolving Your Wait Events below. (Waits while writing to TEMP space during direct loads or Parallel DML (PDML) such as parallel updates show up as direct path waits instead; you may tune the PGA_AGGREGATE_TARGET parameter to reduce those.)

Db file scattered read - Next is the db file scattered read wait value. That generally happens during a full scan of a table. You can use the Statspack report to help identify the query in question and fix it.

SQL ordered by Gets

Here you will find the SQL statements with the highest buffer gets (logical reads); these are typically also the biggest CPU consumers.

SQL ordered by Gets  DB/Inst: AKI1/AKI1  Snaps: 5-6
-> Resources reported for PL/SQL code includes the resources used by all SQL
   statements called by the code.
-> End Buffer Gets Threshold:   10000   Total Buffer Gets:   720,588
-> Captured SQL accounts for     3.1% of Total Buffer Gets
-> SQL reported below exceeded   1.0% of Total Buffer Gets

                                                     CPU      Elapsd     Old
  Buffer Gets    Executions  Gets per Exec  %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
         16,926            1       16,926.0    2.3     2.36      3.46 1279400914
Module: SQL*Plus
create table test as select * from all_objects

Tablespace IO Stats

Tablespace
------------------------------
               Av      Av     Av                      Av        Buffer Av Buf
        Reads Reads/s Rd(ms) Blks/Rd       Writes Writes/s      Waits  Wt(ms)
------------- ------- ------ ------- ------------ -------- ---------- ------
TAB     1,643       4    1.0    19.2       16,811       39          0    0.0
UNDO      166       0    0.5     1.0        5,948       14          0    0.0
SYSTEM    813       2    2.5     1.6          167        0          0    0.0
STATSPACK 146       0    0.3     1.1          277        1          0    0.0
SYSAUX     18       0    0.0     1.0           29        0          0    0.0
IDX        18       0    0.0     1.0           18        0          0    0.0
USER       18       0    0.0     1.0           18        0          0    0.0
          -------------------------------------------------------------

Rollback Segment Stats

-> A high value for "Pct Waits" suggests more rollback segments may be required


-> RBS stats may not be accurate between begin and end snaps when using Auto Undo
   management, as RBS may be dynamically created and dropped as needed

          Trans Table      Pct      Undo Bytes
RBS No          Gets     Waits         Written    Wraps  Shrinks  Extends
------ ------------- --------- --------------- -------- -------- --------
     0           8.0      0.00               0        0        0        0
     1       3,923.0      0.00      14,812,586       15        0       14
     2       5,092.0      0.00      19,408,996       19        0       19
     3         295.0      0.00         586,760        1        0        0
     4       1,312.0      0.00       4,986,920        5        0        5
     5           9.0      0.00               0        0        0        0
     6           9.0      0.00               0        0        0        0
     7           9.0      0.00               0        0        0        0
     8           9.0      0.00               0        0        0        0
     9           9.0      0.00               0        0        0        0
    10           9.0      0.00               0        0        0        0
          -------------------------------------------------------------

Rollback Segment Storage

-> Optimal Size should be larger than Avg Active

RBS No    Segment Size      Avg Active    Optimal Size    Maximum Size
------ --------------- --------------- --------------- ---------------
     0         364,544               0                         364,544
     1      17,952,768       8,343,482                      17,952,768
     2      25,292,800      11,854,857                      25,292,800
     3       4,321,280         617,292                       6,418,432
     4       8,515,584       1,566,623                       8,515,584
     5         126,976               0                         126,976
     6         126,976               0                         126,976
     7         126,976               0                         126,976
     8         126,976               0                         126,976
     9         126,976               0                         126,976
    10         126,976               0                         126,976
          -------------------------------------------------------------

Generate Execution Plan for given SQL statement

If you have identified one or more problematic SQL statements, you may want to check the execution plan. Remember the "Old Hash Value" from the report above (1279400914), then execute the script to generate the execution plan.


sqlplus perfstat/perfstat
SQL> @?/rdbms/admin/sprepsql.sql
Enter the Hash Value, in this example: 1279400914

SQL Text
~~~~~~~~
create table test as select * from all_objects

Known Optimizer Plan(s) for this Old Hash Value
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shows all known Optimizer Plans for this database instance, and the Snap Id's
they were first found in the shared pool.  A Plan Hash Value will appear
multiple times if the cost has changed
-> ordered by Snap Id

    First        First          Plan
  Snap Id     Snap Time     Hash Value        Cost
--------- --------------- ------------ ----------
        6 14 Nov 04 11:26   1386862634         52

Plans in shared pool between Begin and End Snap Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Shows the Execution Plans found in the shared pool between the begin and end
snapshots specified.  The values for Rows, Bytes and Cost shown below are those
which existed at the time the first-ever snapshot captured this plan - these
values often change over time, and so may not be indicative of current values
-> Rows indicates Cardinality, PHV is Plan Hash Value
-> ordered by Plan Hash Value

--------------------------------------------------------------------------------
| Operation                      | PHV/Object Name     |  Rows | Bytes|   Cost |
--------------------------------------------------------------------------------
|CREATE TABLE STATEMENT          |----- 1386862634 ----|       |      |     52 |
|LOAD AS SELECT                  |                     |       |      |        |
| VIEW                           |                     |     1K|  216K|     44 |
|  FILTER                        |                     |       |      |        |
|   HASH JOIN                    |                     |     1K|  151K|     38 |
|    TABLE ACCESS FULL           |USER$                |    29 |  464 |      2 |
|    TABLE ACCESS FULL           |OBJ$                 |     3K|  249K|     35 |
|   TABLE ACCESS BY INDEX ROWID  |IND$                 |     1 |    7 |      2 |
|    INDEX UNIQUE SCAN           |I_IND1               |     1 |      |      1 |
|   NESTED LOOPS                 |                     |     5 |  115 |     16 |
|    INDEX RANGE SCAN            |I_OBJAUTH1           |     1 |   10 |      2 |
|    FIXED TABLE FULL            |X$KZSRO              |     5 |   65 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |


|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   FIXED TABLE FULL             |X$KZSPR              |     1 |   26 |     14 |
|   VIEW                         |                     |     1 |   13 |      2 |
|    FAST DUAL                   |                     |     1 |      |      2 |
--------------------------------------------------------------------------------

Resolving Your Wait Events

The following are 10 of the most common causes for wait events, along with explanations and potential solutions:

1. DB File Scattered Read

This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list.
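For a small, frequently scanned lookup table (the table name here is hypothetical), you can ask Oracle to keep its blocks at the warm end of the LRU list, or place it in a configured KEEP pool:

```sql
ALTER TABLE lookup_codes CACHE;

-- alternatively, with a KEEP buffer pool configured
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);
```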

2. DB File Sequential Read

This event generally indicates a single block read (an index read, for example). A large number of waits here could indicate poor joining orders of tables, or unselective indexing. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. You should correlate this wait statistic with other known issues within the Statspack report, such as inefficient SQL. Check to ensure that index scans are necessary, and check join orders for multiple table joins. The DB_CACHE_SIZE will also be a determining factor in how often these waits show up. Problematic hash-area joins should show up in the PGA memory, but they're also memory hogs that could cause high wait numbers for sequential reads. They can also show up as direct path read/write waits.

3. Free Buffer

This indicates your system is waiting for a buffer in memory, because none is currently available. Waits in this category may indicate that you need to increase the DB_CACHE_SIZE, if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough; the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency. To address this, you may want to consider accelerating incremental checkpointing, using more DBWR processes, or increasing the number of physical disks.

4. Buffer Busy

This is a wait for a buffer that is being used in an unshareable way or is being read into the buffer cache. Buffer busy waits should not be greater than 1 percent. Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out if the wait is on a segment header. If this is the case, increase the freelist groups or increase the pctused to pctfree gap. If the wait is on an undo header, you can address this by adding rollback segments; if it’s on an undo block, you need to reduce the data density on the table driving this consistent read or increase the DB_CACHE_SIZE. If the wait is on a data block, you can move data to another block to avoid this hot block, increase the freelists on the table, or use Locally Managed Tablespaces (LMTs). If it’s on an index block, you should rebuild the index, partition the index, or use a reverse key index. To prevent buffer busy waits related to data blocks, you can also use a smaller block size: fewer records fall within a single block in this case, so it’s not as “hot.” When a DML (insert/update/ delete) occurs, Oracle Database writes information into the block, including all users who are “interested” in the state of the block (Interested Transaction List, ITL). To decrease waits in this area, you can increase the initrans, which will create the space in the block to allow multiple ITL slots. You can also increase the pctfree on the table where this block exists (this writes the ITL information up to the number specified by maxtrans, when there are not enough slots built with the initrans that is specified).
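A sketch of the block-level remedies mentioned above (the object names are hypothetical; note that an INITRANS change only affects newly formatted blocks):

```sql
-- allow more concurrent transactions (ITL slots) per block
ALTER TABLE orders INITRANS 5;
ALTER TABLE orders PCTFREE 20;

-- spread sequential inserts across index blocks with a reverse key index
ALTER INDEX orders_pk REBUILD REVERSE;
```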


5. Latch Free

Latches are low-level queuing mechanisms (they’re accurately referred to as mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA). Latches are like locks on memory that are very quickly obtained and released. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. Most latch problems are related to the failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chain). There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case. When latch miss ratios are greater than 0.5 percent, you should investigate the issue.
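You can check the miss ratio per latch directly from V$LATCH:

```sql
SELECT name,
       gets,
       misses,
       ROUND(misses / gets * 100, 2) AS miss_pct
FROM   v$latch
WHERE  gets > 0
ORDER  BY misses DESC;
```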

6. Enqueue

An enqueue is a lock that protects a shared resource. Locks protect shared resources, such as data in a record, to prevent two people from updating the same data at the same time. An enqueue includes a queuing mechanism, which is FIFO (first in, first out). Note that Oracle’s latching mechanism is not FIFO. Enqueue waits usually point to the ST enqueue, the HW enqueue, the TX4 enqueue, and the TM enqueue. The ST enqueue is used for space management and allocation for dictionary-managed tablespaces. Use LMTs, or try to preallocate extents or at least make the next extent larger for problematic dictionary-managed tablespaces. HW enqueues are used with the high-water mark of a segment; manually allocating the extents can circumvent this wait. TX4s are the most common enqueue waits. TX4 enqueue waits are usually the result of one of three issues. The first issue is duplicates in a unique index; you need to commit/rollback to free the enqueue. The second is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment. The third and most likely issue is when multiple users are updating the same block. If there are no free ITL slots, a block-level lock could occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots and/or by increasing the pctfree on the table. Finally, TM enqueues occur during DML to prevent DDL to the affected object. If you have foreign keys, be sure to index them to avoid this general locking issue.
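As an illustration of the last point (EMP/DEPT are the usual demo schema names, not anything mandated here), indexing a foreign key avoids the TM enqueue waits that unindexed foreign keys can cause during parent-table DML:

```sql
-- EMP.DEPTNO references DEPT.DEPTNO; without this index, some DML on
-- DEPT takes a full table lock on EMP.
CREATE INDEX emp_deptno_idx ON emp (deptno);
```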

7. Log Buffer Space

This wait occurs because you are writing the log buffer faster than LGWR can write it to the redo logs, or because log switches are too slow. To address this problem, increase the size of the log files, or increase the size of the log buffer, or get faster disks to write to. You might even consider using solid-state disks, for their high speed.

8. Log File Switch

All commit requests are waiting for “log file switch (archiving needed)” or “log file switch (checkpoint incomplete).” Ensure that the archive disk is not full or slow. DBWR may be too slow because of I/O. You may need to add more or larger redo logs, and you may potentially need to add database writers if DBWR is the problem.

9. Log File Sync

When a user commits or rolls back data, the LGWR flushes the session’s redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete. To reduce wait events here, try to commit more records (try to commit a batch of 50 instead of one at a time, for example). Put redo logs on a faster disk, or alternate redo logs on different physical disks, to reduce the archiving effect on LGWR. Don’t use RAID 5, since it is very slow for applications that write a lot; potentially consider using file system direct I/O or raw devices, which are very fast at writing information.
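The batching advice can be sketched in PL/SQL as follows; the table name T and the batch size of 50 are illustrative only:

```sql
BEGIN
  FOR i IN 1 .. 5000 LOOP
    INSERT INTO t (n) VALUES (i);
    IF MOD(i, 50) = 0 THEN
      COMMIT;   -- one log file sync per 50 rows instead of per row
    END IF;
  END LOOP;
  COMMIT;       -- flush the final partial batch
END;
/
```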

10. Idle Events

There are several idle wait events listed after the output; you can ignore them. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table.
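To see exactly which events your STATSPACK installation classifies as idle, you can query that table directly:

```sql
SELECT event FROM stats$idle_event ORDER BY event;
```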

Remove STATSPACK from the Database

After a STATSPACK session you want to remove the STATSPACK tables.

sqlplus “/ as sysdba”
SQL> @?/rdbms/admin/spdrop.sql
SQL> DROP TABLESPACE perfstat INCLUDING CONTENTS AND DATAFILES;


Optimizing a SQL Statement

Introduction

As a DBA, you are responsible for optimizing SQL statements with the EXPLAIN PLAN statement when performance problems arise. Your job’s responsibilities dictate that you should at least be informed of the following fundamental subjects:

Using the EXPLAIN PLAN statement
Creating the PLAN_TABLE table
Submitting a SQL statement using the EXPLAIN PLAN statement
Using the SET STATEMENT_ID clause
Recalling the execution plan from the PLAN_TABLE table
Understanding the following operations: TABLE ACCESS FULL, TABLE ACCESS BY INDEX, INDEX UNIQUE SCAN, NESTED LOOPS, MERGE JOIN, FILTER, SORT AGGREGATE
Commands: START %ORACLE_HOME%, EXPLAIN PLAN, SET STATEMENT_ID=

Hands-on

In this exercise you will learn how to use the explain plan statement to determine how the optimizer will execute the query in question.

First, let’s connect to SQL*Plus as the ISELF user.

SQL> CONNECT iself/schooling

Check to see if the PLAN_TABLE exists in the user’s schema.

SQL> DESC plan_table


Create PLAN_TABLE

In order to optimize a SQL statement, you execute the EXPLAIN PLAN statement to populate PLAN_TABLE with the statement’s plan of execution. Then you write a SQL statement against the table to query the plan of execution generated by EXPLAIN PLAN.

If PLAN_TABLE does not exist, run the utlxplan.sql script provided in the rdbms folder to create the PLAN_TABLE table.

SQL> START %ORACLE_HOME%

The PLAN_TABLE table has now been created.

Check the number of records in the table.

SQL> SELECT count(1)

FROM plan_table

/

There should be no records in the table.

Evaluate a SQL statement

Submit a query to the database using the EXPLAIN PLAN statement, so that the database will list the plan of execution. Use the SET STATEMENT_ID clause to identify the plan for later review. You should have a unique statement_id for each SQL statement that you want to optimize.

SQL> EXPLAIN PLAN

SET STATEMENT_ID=’MY_FIRST_TEST’

INTO plan_table FOR

SELECT last_name, trade_date,

sum(shares_owned*current_price) portfolio_value

FROM customers, portfolio, stocks s

WHERE id = customer_id and stock_symbol = symbol

AND trade_date = (SELECT max(trade_date) FROM stocks


WHERE symbol = s.symbol)

GROUP BY last_name, trade_date;

Check the number of records in the table again.

SQL> SELECT count(1)

FROM plan_table

/

Now, there should be at least 13 records in the table.

Display the result of the SQL statement evaluation

Now, recall the execution plan from the PLAN_TABLE table.

SQL> SELECT id, parent_id,

lpad(‘ ‘, 2*(level-1)) || operation || ‘ ‘ ||

options || ‘ ‘ || object_name || ‘ ‘ ||

decode (id, 0, ‘Cost = ‘ || position) “Query_Plan”

FROM plan_table

START WITH id = 0 and STATEMENT_ID = ‘MY_FIRST_TEST’

CONNECT BY PRIOR ID = PARENT_ID

AND STATEMENT_ID = ‘MY_FIRST_TEST’

/

How to read PLAN_TABLE

Now, assuming the following is the output of the above query, let’s learn how to read the report. Notice that the PARENT_ID and ID columns show a parent-child relationship.

ID PARENT_ID Query_Plan


— ———- ———————————————-

0 SELECT STATEMENT Cost =

SORT GROUP BY

“SORT GROUP BY” means Oracle will perform a sort on the data obtained for the user.

1 0 SORT GROUP BY

FILTER

“FILTER” means that this is an operation that adds selectivity to a TABLE ACCESS FULL operation, based on the contents of the where clause.

2 1 FILTER

NESTED LOOPS

“NESTED LOOPS” indicates that the join statement is occurring.

3 2 NESTED LOOPS

MERGE JOIN

“MERGE JOIN” indicates that the join statement is occurring.

4 3 MERGE JOIN

SORT JOIN

“SORT JOIN” indicates that the join statement is sorting. “TABLE ACCESS FULL” means that Oracle will look at every row in the table (slowest way).

5 4 SORT JOIN

6 5 TABLE ACCESS FULL STOCKS

7 4 SORT JOIN

8 7 TABLE ACCESS FULL PORTFOLIO

TABLE ACCESS BY INDEX

“TABLE ACCESS BY INDEX” means that Oracle will use the ROWID method to find a row in the table. It is very fast.


9 3 TABLE ACCESS BY INDEX ROWID CUSTOMERS

INDEX UNIQUE SCAN

“INDEX UNIQUE SCAN” means Oracle will use the primary or unique key. This is the most efficient way to search an index.

10 9 INDEX UNIQUE SCAN SYS_C003126

SORT AGGREGATE

“SORT AGGREGATE” means Oracle will apply an aggregate function (here, max(trade_date)) to the data retrieved by the subquery.

11 2 SORT AGGREGATE

12 11 TABLE ACCESS FULL STOCKS

Read your output and determine what the problem with the query is


Why tuning, and what is the Granule unit

Introduction

As a DBA, you are also responsible for detecting performance problems in your organization’s database. You need to know how to start investigating a performance problem and then solve it. Your job’s responsibilities dictate that you should at least be informed of the following fundamental subjects:

Hands-on

In this exercise you will learn about the GRANULE unit, and how to perform performance tuning.

Performance Tuning Steps

When your clients complain about application performance, investigate the problem in the following sequence.

1. SQL statement tuning
2. Optimizing sorting operations
3. Memory allocation
   a. Operating system memory size
   b. Oracle-allocated memory size (SGA, System Global Area)
4. I/O contention
5. Latches and locks
6. Network load

Granule Unit

Remember that the SGA components are allocated and de-allocated in units of contiguous memory called granules. It is therefore very important that the amount of memory you allocate be an integer multiple of the granule size.

If the SGA is less than 128MB, then a granule is 4MB. If the SGA is larger than 128MB, then a granule is 16MB.

The minimum number of granules allocated at startup is: 1 for the buffer cache, 1 for the shared pool, and 1 for the fixed SGA, which includes redo buffers.
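On releases that support the dynamic SGA (9i and later), you can confirm the granule size your instance is actually using; the view below exists from 9i onward:

```sql
-- Every dynamic SGA component reports the same granule size.
SELECT component, granule_size
  FROM v$sga_dynamic_components;
```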


Oracle Tools and Utilities: TKPROF (Transient Kernel Profiling)

(Source Oracle Docs)

Understanding TKPROF

You can run the TKPROF program to format the contents of the trace file and place the output into a readable output file. TKPROF can also:

Create a SQL script that stores the statistics in the database
Determine the execution plans of SQL statements

———————————————————————
Note: If the cursor for a SQL statement is not closed, TKPROF output does not automatically include the actual execution plan of the SQL statement. In this situation, you can use the EXPLAIN option with TKPROF to generate an execution plan.
———————————————————————

TKPROF reports each statement executed with the resources it has consumed, the number of times it was called, and the number of rows which it processed. This information lets you easily locate those statements that are using the greatest resource. With experience or with baselines available, you can assess whether the resources used are reasonable given the work done.

Using the SQL Trace Facility and TKPROF

Step 1: Setting Initialization Parameters for Trace File Management

When the SQL Trace facility is enabled for a session, Oracle generates a trace file containing statistics for traced SQL statements for that session. When the SQL Trace facility is enabled for an instance, Oracle creates a separate trace file for each process. Before enabling the SQL Trace facility:

Check the settings of the TIMED_STATISTICS, MAX_DUMP_FILE_SIZE, and USER_DUMP_DEST initialization parameters.

TIMED_STATISTICS


This enables and disables the collection of timed statistics, such as CPU and elapsed times, by the SQL Trace facility, as well as the collection of various statistics in the dynamic performance tables. The default value of false disables timing. A value of true enables timing. Enabling timing causes extra timing calls for low-level operations. This is a dynamic parameter. It is also a session parameter.

MAX_DUMP_FILE_SIZE

When the SQL Trace facility is enabled at the instance level, every call to the server produces a text line in a file in the operating system’s file format. The maximum size of these files (in operating system blocks) is limited by this initialization parameter. The default is 500. If you find that the trace output is truncated, then increase the value of this parameter before generating another trace file. This is a dynamic parameter. It is also a session parameter.

USER_DUMP_DEST

This must fully specify the destination for the trace file according to the conventions of the operating system. The default value is the default destination for system dumps on the operating system. This value can be modified with ALTER SYSTEM SET USER_DUMP_DEST = newdir. This is a dynamic parameter. It is also a session parameter.
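Putting the three parameters together, a minimal session-level setup before tracing might look like this (the directory path is an illustrative example, not a required value):

```sql
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET max_dump_file_size = UNLIMITED;

-- USER_DUMP_DEST is changed instance-wide; only do this if the
-- default destination is unsuitable.
ALTER SYSTEM SET user_dump_dest = '/u01/app/oracle/admin/orcl/udump';
```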

See Also:

“Interpreting Statistics” for considerations when setting the STATISTICS_LEVEL, DB_CACHE_ADVICE, TIMED_STATISTICS, or TIMED_OS_STATISTICS initialization parameters
“Setting the Level of Statistics Collection” for information about STATISTICS_LEVEL settings
Oracle Database Reference for information on the STATISTICS_LEVEL initialization parameter
Oracle Database Reference for information about the dynamic performance V$STATISTICS_LEVEL view

Devise a way of recognizing the resulting trace file.

Be sure you know how to distinguish the trace files by name. Oracle writes them to the user dump destination specified by USER_DUMP_DEST. However, this directory can soon contain many hundreds of files, usually with generated names. It might be difficult to match trace files back to the session or process that created them. You can tag trace files by including in your programs a statement like SELECT ‘program_name’ FROM DUAL. You can then trace each file back to the process that created it.


You can also set the TRACEFILE_IDENTIFIER initialization parameter to specify a custom identifier that becomes part of the trace file name. For example, you can add my_trace_id to subsequent trace file names for easy identification with the following:

ALTER SESSION SET TRACEFILE_IDENTIFIER = ‘my_trace_id’;

See Also:

Oracle Database Reference for information on the TRACEFILE_IDENTIFIER initialization parameter

If the operating system retains multiple versions of files, then be sure that the version limit is high enough to accommodate the number of trace files you expect the SQL Trace facility to generate.

The generated trace files can be owned by an operating system user other than yourself. This user must make the trace files available to you before you can use TKPROF to format them.

See Also:

“Setting the Level of Statistics Collection” for information about STATISTICS_LEVEL settings
Oracle Database Reference for information on the STATISTICS_LEVEL initialization parameter

Step 2: Enabling the SQL Trace Facility

Enable the SQL Trace facility for the session by using one of the following:

The DBMS_SESSION.SET_SQL_TRACE procedure
ALTER SESSION SET SQL_TRACE = TRUE;

————————————————————————————————
Caution:

Because running the SQL Trace facility increases system overhead, enable it only when tuning SQL statements, and disable it when you are finished.

You might need to modify an application to contain the ALTER SESSION statement. For example, to issue the ALTER SESSION statement in Oracle Forms, invoke Oracle Forms using the -s option, or invoke Oracle Forms (Design) using the statistics option. For more information on Oracle Forms, see the Oracle Forms Reference.

————————————————————————————————


To disable the SQL Trace facility for the session, enter:

ALTER SESSION SET SQL_TRACE = FALSE;

The SQL Trace facility is automatically disabled for the session when the application disconnects from Oracle.

You can enable the SQL Trace facility for an instance by setting the value of the SQL_TRACE initialization parameter to TRUE in the initialization file.

SQL_TRACE = TRUE

After the instance has been restarted with the updated initialization parameter file, SQL Trace is enabled for the instance and statistics are collected for all sessions. If the SQL Trace facility has been enabled for the instance, you can disable it for the instance by setting the value of the SQL_TRACE parameter to FALSE.

—————————————————————————————————-
Note: Setting SQL_TRACE to TRUE can have a severe performance impact. For more information, see Oracle Database Reference.

—————————————————————————————————-

Step 3: Formatting Trace Files with TKPROF

TKPROF accepts as input a trace file produced by the SQL Trace facility, and it produces a formatted output file. TKPROF can also be used to generate execution plans.

After the SQL Trace facility has generated a number of trace files, you can:

Run TKPROF on each individual trace file, producing a number of formatted output files, one for each session.
Concatenate the trace files, and then run TKPROF on the result to produce a formatted output file for the entire instance.
Run the trcsess command-line utility to consolidate tracing information from several trace files, then run TKPROF on the result. See “Using the trcsess Utility”.

TKPROF does not report COMMITs and ROLLBACKs that are recorded in the trace file.

Sample TKPROF Output

Sample output from TKPROF is as follows:

SELECT * FROM emp, dept
WHERE emp.deptno = dept.deptno;


Call Count CPU Elapsed Disk Query Current Rows

Parse 1 0.16 0.29 3 13 0 0

Execute 1 0.00 0.00 0 0 0 0

Fetch 1 0.03 0.26 2 2 4 14

Misses in library cache during parse: 1
Parsing user id: (8) SCOTT

Rows Execution Plan

14 MERGE JOIN

4 SORT JOIN

4 TABLE ACCESS (FULL) OF ‘DEPT’

14 SORT JOIN

14 TABLE ACCESS (FULL) OF ‘EMP’

For this statement, TKPROF output includes the following information:

The text of the SQL statement
The SQL Trace statistics in tabular form
The number of library cache misses for the parsing and execution of the statement
The user initially parsing the statement
The execution plan generated by EXPLAIN PLAN

TKPROF also provides a summary of user-level statements and recursive SQL calls for the trace file.

Syntax of TKPROF

TKPROF is run from the operating system prompt. The syntax is:

tkprof filename1 filename2 [waits=yes|no] [sort=option] [print=n] [aggregate=yes|no] [insert=filename3] [sys=yes|no] [table=schema.table] [explain=user/password] [record=filename4] [width=n]

The input and output files are the only required arguments. If you invoke TKPROF without arguments, then online help is displayed. Use the arguments in Table 20-2 with TKPROF.
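For example, a typical invocation (the trace and output file names are hypothetical) that explains the plans, omits SYS recursive SQL, and lists only the ten statements with the highest execute and fetch elapsed times would be:

```
tkprof orcl_ora_12345.trc orcl_ora_12345.prf explain=scott/tiger sys=no sort=exeela,fchela print=10
```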

Table 20-2 TKPROF Arguments

filename1
Specifies the input file, a trace file containing statistics produced by the SQL Trace facility. This file can be either a trace file produced for a single session, or a file produced by concatenating individual trace files from multiple sessions.

filename2
Specifies the file to which TKPROF writes its formatted output.

WAITS
Specifies whether to record a summary for any wait events found in the trace file. Values are YES or NO. The default is YES.

SORT
Sorts traced SQL statements in descending order of the specified sort option before listing them in the output file. If more than one option is specified, then the output is sorted in descending order by the sum of the values specified in the sort options. If you omit this parameter, then TKPROF lists statements in the output file in order of first use. Sort options are listed as follows:

PRSCNT: Number of times parsed.
PRSCPU: CPU time spent parsing.
PRSELA: Elapsed time spent parsing.
PRSDSK: Number of physical reads from disk during parse.
PRSQRY: Number of consistent mode block reads during parse.
PRSCU: Number of current mode block reads during parse.
PRSMIS: Number of library cache misses during parse.
EXECNT: Number of executes.
EXECPU: CPU time spent executing.
EXEELA: Elapsed time spent executing.
EXEDSK: Number of physical reads from disk during execute.
EXEQRY: Number of consistent mode block reads during execute.
EXECU: Number of current mode block reads during execute.
EXEROW: Number of rows processed during execute.
EXEMIS: Number of library cache misses during execute.
FCHCNT: Number of fetches.
FCHCPU: CPU time spent fetching.
FCHELA: Elapsed time spent fetching.
FCHDSK: Number of physical reads from disk during fetch.
FCHQRY: Number of consistent mode block reads during fetch.
FCHCU: Number of current mode block reads during fetch.
FCHROW: Number of rows fetched.
USERID: Userid of the user that parsed the cursor.

PRINT
Lists only the first integer sorted SQL statements from the output file. If you omit this parameter, then TKPROF lists all traced SQL statements. This parameter does not affect the optional SQL script. The SQL script always generates insert data for all traced SQL statements.

AGGREGATE
If you specify AGGREGATE = NO, then TKPROF does not aggregate multiple users of the same SQL text.

INSERT
Creates a SQL script that stores the trace file statistics in the database. TKPROF creates this script with the name filename3. This script creates a table and inserts a row of statistics for each traced SQL statement into the table.

SYS
Enables and disables the listing of SQL statements issued by the user SYS, or recursive SQL statements, into the output file. The default value of YES causes TKPROF to list these statements. The value of NO causes TKPROF to omit them. This parameter does not affect the optional SQL script. The SQL script always inserts statistics for all traced SQL statements, including recursive SQL statements.

TABLE
Specifies the schema and name of the table into which TKPROF temporarily places execution plans before writing them to the output file. If the specified table already exists, then TKPROF deletes all rows in the table, uses it for the EXPLAIN PLAN statement (which writes more rows into the table), and then deletes those rows. If this table does not exist, then TKPROF creates it, uses it, and then drops it.

The specified user must be able to issue INSERT, SELECT, and DELETE statements against the table. If the table does not already exist, then the user must also be able to issue CREATE TABLE and DROP TABLE statements. For the privileges to issue these statements, see the Oracle Database SQL Reference.

This option allows multiple individuals to run TKPROF concurrently with the same user in the EXPLAIN value. These individuals can specify different TABLE values and avoid destructively interfering with each other’s processing on the temporary plan table.

If you use the EXPLAIN parameter without the TABLE parameter, then TKPROF uses the table PROF$PLAN_TABLE in the schema of the user specified by the EXPLAIN parameter. If you use the TABLE parameter without the EXPLAIN parameter, then TKPROF ignores the TABLE parameter.

If no plan table exists, TKPROF creates the table PROF$PLAN_TABLE and then drops it at the end.

EXPLAIN
Determines the execution plan for each SQL statement in the trace file and writes these execution plans to the output file. TKPROF determines execution plans by issuing the EXPLAIN PLAN statement after connecting to Oracle with the user and password specified in this parameter. The specified user must have CREATE SESSION system privileges. TKPROF takes longer to process a large trace file if the EXPLAIN option is used.

RECORD
Creates a SQL script with the specified filename4 with all of the nonrecursive SQL in the trace file. This can be used to replay the user events from the trace file.

WIDTH
An integer that controls the output line width of some TKPROF output, such as the explain plan. This parameter is useful for post-processing of TKPROF output.

TRCSESS

The syntax for the trcsess utility is:

trcsess [output=output_file_name] [session=session_id] [clientid=client_id] [service=service_name] [action=action_name] [module=module_name] [trace_files]

Where

output specifies the file where the output is generated. If this option is not specified, then standard output is used.
session consolidates the trace information for the specified session. The session identifier is a combination of session index and session serial number, such as 21.2371. You can locate these values in the V$SESSION view.
clientid consolidates the trace information for the given client id.
service consolidates the trace information for the given service name.
action consolidates the trace information for the given action name.
module consolidates the trace information for the given module name.
trace_files is a list of all the trace file names, separated by spaces, in which trcsess should look for trace information. The wildcard character * can be used to specify the trace file names. If trace files are not specified, all the files in the current directory are taken as input to trcsess.

One of the session, clientid, service, action, or module options must be specified. If more than one of these options is specified, then the trace files that satisfy all the specified criteria are consolidated into the output file.
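As a concrete (hypothetical) example, consolidating all trace files for a given service and then formatting the result with TKPROF:

```
trcsess output=combined.trc service=orcl ora_1234.trc ora_5678.trc
tkprof combined.trc combined.prf sys=no
```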


Explain Plan

Understanding EXPLAIN PLAN

The EXPLAIN PLAN statement displays execution plans chosen by the Oracle optimizer for SELECT, UPDATE, INSERT, and DELETE statements. A statement’s execution plan is the sequence of operations Oracle performs to run the statement.

The row source tree is the core of the execution plan. It shows the following information:

An ordering of the tables referenced by the statement
An access method for each table mentioned in the statement
A join method for tables affected by join operations in the statement
Data operations like filter, sort, or aggregation

In addition to the row source tree, the plan table contains information about the following:

Optimization, such as the cost and cardinality of each operation
Partitioning, such as the set of accessed partitions
Parallel execution, such as the distribution method of join inputs

The EXPLAIN PLAN results let you determine whether the optimizer selects a particular execution plan, such as a nested loops join. They also help you to understand the optimizer’s decisions, such as why it chose a nested loops join instead of a hash join, and let you understand the performance of a query.
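As a quicker alternative to the manual CONNECT BY query shown earlier, releases 9.2 and later ship the DBMS_XPLAN package for formatting the plan table; a minimal sketch using the EMP demo table:

```sql
EXPLAIN PLAN FOR
  SELECT ename FROM emp WHERE deptno = 10;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```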

With the query optimizer, execution plans can and do change as the underlying optimizer inputs change. EXPLAIN PLAN output shows how Oracle runs the SQL statement when the statement was explained. This can differ from the plan during actual execution for a SQL statement, because of differences in the execution environment and explain plan environment.

Execution plans can differ due to the following:

Different Schemas

The execution and explain plan happen on different databases.
The user explaining the statement is different from the user running the statement. Two users might be pointing to different objects in the same database, resulting in different execution plans.
Schema changes (usually changes in indexes) between the two operations.

Different Costs

Even if the schemas are the same, the optimizer can choose different execution plans if the costs are different. Some factors that affect the costs include the following:


Data volume and statistics
Bind variable types and values
Initialization parameters, set globally or at the session level

Looking Beyond Execution Plans

The execution plan operation alone cannot differentiate between well-tuned statements and those that perform poorly. For example, an EXPLAIN PLAN output that shows that a statement uses an index does not necessarily mean that the statement runs efficiently. Sometimes indexes can be extremely inefficient. In this case, you should examine the following:

The columns of the index being used
Their selectivity (the fraction of the table being accessed)

It is best to use EXPLAIN PLAN to determine an access plan, and then later prove that it is the optimal plan through testing. When evaluating a plan, examine the statement’s actual resource consumption.

Using V$SQL_PLAN Views

In addition to running the EXPLAIN PLAN command and displaying the plan, you can use the V$SQL_PLAN views to display the execution plan of a SQL statement:

After the statement has executed, you can display the plan by querying the V$SQL_PLAN view. V$SQL_PLAN contains the execution plan for every statement stored in the cursor cache. Its definition is similar to the PLAN_TABLE. See “PLAN_TABLE Columns”.

The advantage of V$SQL_PLAN over EXPLAIN PLAN is that you do not need to know the compilation environment that was used to execute a particular statement. For EXPLAIN PLAN, you would need to set up an identical environment to get the same plan when executing the statement.

The V$SQL_PLAN_STATISTICS view provides the actual execution statistics for every operation in the plan, such as the number of output rows and elapsed time. All statistics, except the number of output rows, are cumulative. For example, the statistics for a join operation also includes the statistics for its two inputs.

The statistics in V$SQL_PLAN_STATISTICS are available for cursors that have been compiled with the STATISTICS_LEVEL initialization parameter set to ALL.

The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparisons of the estimates that the optimizer provides for the number of rows and elapsed time. This view combines both V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.
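A sketch of comparing estimates with actuals (column names as of Oracle 10g; the &sql_id substitution variable is a placeholder you supply from V$SQL):

```sql
ALTER SESSION SET statistics_level = ALL;

-- Run the statement of interest, find its sql_id in V$SQL, then:
SELECT id, operation, object_name,
       cardinality      AS est_rows,
       last_output_rows AS act_rows
  FROM v$sql_plan_statistics_all
 WHERE sql_id = '&sql_id'
 ORDER BY id;
```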


Installing the Trace Analyzer

Download the latest version of this tool from TRCA.zip

Unzip into a dedicated directory on either the database server, or a client that can connect to the database server.

Connect to SQL*Plus as the USER that created the Raw SQL Trace to be analyzed.

If using Oracle Apps, connect as APPS USER.

Execute script to create Staging Repository and Package to be used by the Trace Analyzer:

sqlplus scott/tiger

SQL> START TRCACREA.sql;

When the Raw SQL Trace resides under the UDUMP directory where it was created, there is no need to create any other Directory Alias. But if the Raw SQL Trace was moved into any other directory on the database server, a new Directory Alias pointing to this non-UDUMP directory must be created:

sqlplus system/<system_pwd>

SQL> START TRCADIRA.sql my_directory D:\ORACLE\ADMIN\SRIDEVI\UDUMP\ SCOTT;

If you get some PLS-00201 errors while installing the Staging Repository or executing the Trace Analyzer, you may need to create some GRANTs for the USER which will be using the Trace Analyzer. If this is the case, use the example below. You would have to connect into SQL*Plus as SYS or SYSTEM.

SQL> START TRCAGRNT.sql scott;

If you are on RDBMS 9.0.x and you got some ‘ORA-00942: table or view does not exist’ errors, you need to install this TRCA tool in a dictionary-managed tablespace. Create a dictionary-managed tablespace using the example below, and modify the script TRCAREPO.sql to use this new tablespace.

# sqlplus scott/tiger

SQL> create tablespace TRCA datafile ‘/oracle/em40db/oradata/em40/trca01.dbf’ size 100M autoextend on next 100M permanent
default storage (initial 1M next 1M)
extent management dictionary;


Trace Analyzer reports, by default, all SQL commands executed while tracing was active, including recursive SYS commands. If for any reason you want to exclude the recursive SQL executed by user SYS from the Trace Analyzer report, use the TRCAISYS.sql script provided. When executed with a parameter value of NO, this script provides the same functionality as TKPROF with the sys=no parameter. To reset to the default behavior (sys=yes), execute it with a value of YES.

# sqlplus apps/<apps_pwd>
SQL> START TRCAISYS.sql NO;

——————————————————————————–

Maintaining the Staging Repository

The Trace Analyzer automatically keeps up to 14 days of data related to analyzed Raw SQL Traces in the Staging Repository. The space used in the tablespace where the Staging Repository was installed is equivalent to the size of the Raw SQL Traces analyzed during the last 14 days. If you need to purge the Staging Repository in order to keep less than 14 days, use the example below (keeping 3 days of data), connecting to SQL*Plus as the USER which installed the Trace Analyzer.

SQL> START TRCAPURG.sql 3;

Truncating the Staging Repository provides the best performance when executing the Trace Analyzer, and it recovers all used space in the tablespace where the Staging Repository was installed. Use the command below, connecting to SQL*Plus as the USER which installed the Trace Analyzer.

SQL> START TRCATRNC.sql;

The Trace Analyzer staging repository includes several objects with the prefix trca$.
