Performance tuning additional info - Oracle



My readings all in one place


If my query is slow all of a sudden

If a query is slow all of a sudden, the query might have been modified. If it is a new query, it may have been tested in dev with indexes that were never created in prod. Then check the following.

Database-level slowness:
- Check CPU utilization
- Check jobs running at the database level
- Blocking sessions
- Locks on objects
- Wait events
- AWR: CPU load; instance efficiency percentages should all be 85 to 90% for the instance to be in good condition; buffer cache hit ratio > 95% is good; shared pool statistics > 95% is good

MMAN defines the size of the buffer cache; based on the statistics it gathers, MMAN automatically resizes the SGA components.

Explain plan:
An explain plan can be generated using the procedure below. First create the plan table by running $ORACLE_HOME/rdbms/admin/utlxplan.sql (Table created). Then:

SQL> EXPLAIN PLAN FOR <paste the query here>;
SQL> SELECT * FROM TABLE(dbms_xplan.display);

Reading the output:
- TABLE ACCESS FULL: a new index might be needed
- INDEX FULL SCAN: a more suitable index should be created
- TABLE ACCESS BY INDEX ROWID: a good sign
- INDEX RANGE SCAN: a good sign
- db file sequential read is a good sign
- db file scattered read: check the indexes and the statistics of the table
- A full table scan is acceptable if an UPDATE statement is running; an index scan is good for SELECT and INSERT statements

Statistics-gathering information is stored in DBA_TAB_STATISTICS, and index statistics in DBA_IND_STATISTICS. The number of inserts/updates since the last gather is recorded in ALL_TAB_MODIFICATIONS. If the join is a nested loop, consider adding a hash join HINT.

Two ways to gather statistics: 1) ANALYZE, 2) DBMS_STATS.GATHER_SCHEMA_STATS. ANALYZE gathers only the database-level statistics; DBMS_STATS collects information about the CPU as well as the database. Gather stats and rebuild indexes if the design has been modified; if the issue still exists, add hints.

Hints:
- ORDERED hint: tables are joined in the order they appear in the FROM clause
- PARALLEL hint, e.g. degree 4
- Index hints
- Optimizer parameters (OPT_PARAM); three related hints/parameters: i. ALL_ROWS, ii. OPTIMIZER_INDEX_CACHING, iii. OPTIMIZER_INDEX_COST_ADJ
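A quick illustration of the hash join hint mentioned above (a minimal sketch; the EMP/DEPT tables and join columns are the standard demo schema, not from these notes):

SQL> SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
  2  FROM emp e, dept d
  3  WHERE e.deptno = d.deptno;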

An example of the explain plan table columns is shown below:

Id | Operation | Name | Rows | Bytes | TempSpc | Cost (%CPU) | Time

If TempSpc shows, say, 136 MB, that means sorting operations are going on and the query is using the temp tablespace instead of memory; then create an index or a profile, gather stats, and do an index rebuild. For the SQL Tuning Advisor we have to go with profile creation: we create profiles for the given query. A profile is essentially an extended version of statistics.

TKPROF:
The TKPROF program converts Oracle trace files into a more readable form. If you have a problem query, you can use TKPROF to get more information. To get the most out of the utility you must enable timed statistics, either by setting the init.ora parameter or by running the following:

SQL> ALTER SYSTEM SET TIMED_STATISTICS = TRUE;
SQL> @$ORACLE_HOME/rdbms/admin/utlxplan.sql
SQL> CREATE PUBLIC SYNONYM PLAN_TABLE FOR SYS.PLAN_TABLE;
SQL> GRANT SELECT, INSERT, UPDATE, DELETE ON SYS.PLAN_TABLE TO PUBLIC;

Now we can trace a statement:

SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> SELECT COUNT(*) FROM dual;
SQL> ALTER SESSION SET sql_trace = FALSE;

The resulting trace file will be located in the USER_DUMP_DEST directory. This can then be interpreted using TKPROF at the command prompt as follows:

tkprof <trace_file> <output_file> explain=user/password@service table=sys.plan_table

Histograms
Histograms tell the optimizer about the distribution of data within a column. By default (without a histogram), the optimizer assumes a uniform distribution of rows across the distinct values in a column. If the data distribution in that column is not uniform (i.e., a data skew), then the cardinality estimate will be incorrect. In order to accurately reflect a non-uniform data distribution, a histogram is required on the column. The presence of a histogram changes the formula used by the optimizer to estimate the cardinality, and allows it to generate a more accurate execution plan. Oracle automatically determines the columns that need histograms based on the column usage information (SYS.COL_USAGE$) and the presence of a data skew. For example, Oracle will not automatically create a histogram on a unique column if it is only seen in equality predicates. There are two types of histograms, frequency and height-balanced. Oracle determines the type of histogram to be created based on the number of distinct values in the column.
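A minimal end-to-end sketch of the TKPROF workflow described above (the trace and output file names are hypothetical; the actual trace file name depends on your instance and process id):

SQL> ALTER SESSION SET timed_statistics = TRUE;
SQL> ALTER SESSION SET sql_trace = TRUE;
SQL> SELECT /* problem query */ COUNT(*) FROM sales;
SQL> ALTER SESSION SET sql_trace = FALSE;

$ cd $ORACLE_BASE/diag/rdbms/orcl/orcl/trace    # USER_DUMP_DEST location in 11g
$ tkprof orcl_ora_12345.trc tkprof_report.txt explain=scott/tiger table=sys.plan_table sys=no

The sys=no option suppresses recursive SYS statements so the report stays focused on the application SQL.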

Frequency histograms
Frequency histograms are created when the number of distinct values in the column is 254 or fewer; each distinct value gets its own bucket.

Height-balanced histograms
Height-balanced histograms are created when the number of distinct values in the column is greater than 254. In a height-balanced histogram, column values are divided into buckets so that each bucket contains approximately the same number of rows.

You can use the v$event_histogram view and the dba_hist_system_event table to plot the distribution of physical disk read speed.
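A hedged sketch of requesting a histogram and checking what Oracle created (the SALES table and CUST_ID column reuse names that appear later in these notes):

SQL> EXEC dbms_stats.gather_table_stats(user, 'SALES', method_opt => 'FOR COLUMNS SIZE 254 CUST_ID')

SQL> SELECT column_name, num_distinct, histogram
  2  FROM user_tab_col_statistics
  3  WHERE table_name = 'SALES';

With 1000 distinct CUST_ID values, the HISTOGRAM column would show HEIGHT BALANCED on a 10g/11g-era database; with 254 or fewer distinct values it would show FREQUENCY.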

Gathering Statistics
For database objects that are constantly changing, statistics must be regularly gathered so that they accurately describe the database object. The PL/SQL package DBMS_STATS is Oracle's preferred method for gathering statistics, and replaces the now-obsolete ANALYZE command for collecting statistics. The DBMS_STATS package contains over 50 different procedures for gathering and managing statistics, but the most important of these are the GATHER_*_STATS procedures. These procedures can be used to gather table, column, and index statistics. You will need to be the owner of the object or have the ANALYZE ANY system privilege or the DBA role to run these procedures. The parameters used by these procedures are nearly identical, so this section will focus on the GATHER_TABLE_STATS procedure.

GATHER_TABLE_STATS
The DBMS_STATS.GATHER_TABLE_STATS procedure allows you to gather table, partition, index, and column statistics. Although it takes 15 different parameters, only the first two or three need to be specified to run the procedure, and they are sufficient for most customers.
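A minimal call along the lines described above (SH.SALES is the standard sample schema; the parameters beyond the first two are shown only to illustrate common options):

SQL> BEGIN
  2    dbms_stats.gather_table_stats(
  3      ownname          => 'SH',
  4      tabname          => 'SALES',
  5      estimate_percent => dbms_stats.auto_sample_size,
  6      cascade          => TRUE);   -- also gather stats on the table's indexes
  7  END;
  8  /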

Automatic Statistics Gathering Job
Oracle will automatically collect statistics for all database objects which are missing statistics or have stale statistics by running an Oracle AutoTask task during a predefined maintenance window (10pm to 2am on weekdays and 6am to 2am at the weekend). This AutoTask gathers optimizer statistics by calling the internal procedure DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC. This procedure operates in a very similar fashion to the DBMS_STATS.GATHER_DATABASE_STATS procedure using the GATHER AUTO option. The primary difference is that Oracle internally prioritizes the database objects that require statistics, so that those objects which most need updated statistics are processed first. You can verify that the automatic statistics gathering job exists by querying the DBA_AUTOTASK_CLIENT_JOB view or through Enterprise Manager.
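A quick way to check the task from SQL*Plus (a sketch; DBA_AUTOTASK_CLIENT is the 11g AutoTask view, so on 10g you would instead look for GATHER_STATS_JOB in DBA_SCHEDULER_JOBS):

SQL> SELECT client_name, status
  2  FROM dba_autotask_client
  3  WHERE client_name = 'auto optimizer stats collection';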

Restoring Statistics
From Oracle Database 10g onwards, when you gather statistics using DBMS_STATS, the original statistics are automatically kept as a backup in dictionary tables, and can easily be restored by running DBMS_STATS.RESTORE_TABLE_STATS if the newly gathered statistics lead to any kind of problem. The dictionary view DBA_TAB_STATS_HISTORY contains a list of timestamps when statistics were saved for each table.

The example below restores the statistics for the table SALES to what they were yesterday, and automatically invalidates all of the cursors referencing the SALES table in the shared pool. We want to invalidate all of the cursors because we are restoring yesterday's statistics and want them to impact any cursor instantaneously. The value of the NO_INVALIDATE parameter determines whether the cursors referencing the table will be invalidated or not.

SQL> BEGIN
  2    dbms_stats.restore_table_stats(
  3      ownname         => 'SH',
  4      tabname         => 'SALES',
  5      as_of_timestamp => SYSTIMESTAMP - 1,
  6      force           => FALSE,
  7      no_invalidate   => FALSE);
  8  END;
  9  /

Table and Column Statistics
Table statistics include information on the number of rows in the table, the number of data blocks used for the table, and the average row length in the table. The optimizer uses this information, in conjunction with other statistics, to compute the cost of various operations in an execution plan and to estimate the number of rows the operation will produce. Table statistics can be viewed in USER_TAB_STATISTICS; you can view column statistics in the dictionary view USER_TAB_COL_STATISTICS.

Concurrent Statistics Gathering
In Oracle Database 11g Release 2 (11.2.0.2), a concurrent statistics gathering mode was introduced to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. Gathering statistics on multiple tables and (sub)partitions concurrently can reduce the overall time it takes to gather statistics by allowing Oracle to fully utilize a multi-processor environment.
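To find a valid timestamp to restore to, the history view mentioned above can be queried first (a small sketch):

SQL> SELECT table_name, stats_update_time
  2  FROM dba_tab_stats_history
  3  WHERE table_name = 'SALES'
  4  ORDER BY stats_update_time;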

Exporting / Importing Statistics
The following example creates a statistics table called MYSTATS, exports the statistics from the SH schema into it, and then moves it with Data Pump:

SQL> CREATE OR REPLACE DIRECTORY stats_dir AS '/home/oracle/maria';

SQL> BEGIN
  2    dbms_stats.create_stat_table('SH', 'MYSTATS');
  3  END;
  4  /

SQL> BEGIN
  2    dbms_stats.export_schema_stats(ownname => 'SH', stattab => 'MYSTATS');
  3  END;
  4  /

$ expdp sh/sh tables=mystats directory=stats_dir dumpfile=schema_stats.dmp logfile=schema_stats.log
$ impdp sh/sh tables=mystats directory=stats_dir dumpfile=schema_stats.dmp logfile=schema_stats.log
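Importing the dump only brings the MYSTATS table across; on the target database the statistics still have to be loaded from it into the dictionary. A hedged sketch of that final step:

SQL> BEGIN
  2    dbms_stats.import_schema_stats(ownname => 'SH', stattab => 'MYSTATS');
  3  END;
  4  /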

Extended Statistics
In Oracle Database 11g, extensions to column statistics were introduced. Extended statistics encompass two additional types of statistics: column groups and expression statistics.

Index Statistics
Index statistics provide information on the number of distinct values in the index (DISTINCT_KEYS), the depth of the index (BLEVEL), the number of leaf blocks in the index (LEAF_BLOCKS), and the clustering factor. The optimizer uses this information in conjunction with other statistics to determine the cost of an index access. For example, the optimizer will use BLEVEL, LEAF_BLOCKS and the table statistic NUM_ROWS to determine the cost of an index range scan (when all predicates are on the leading edge of the index).

EVENTS
db file sequential read - a single-block read (i.e., an index fetch by ROWID). If the top segments are going with physical reads, then we have to create indexes and do rebuilds. The Oracle process wants a block that is currently not in the SGA, and it is waiting for the database block to be read into the SGA from disk. A sequential read is a single-block read; single-block I/Os are usually the result of using indexes. The actual object being waited on can be determined from the P1, P2, P3 info in v$session_wait. The two important numbers to look for are the TIME_WAITED and AVERAGE_WAIT by individual sessions. To reduce this wait event, follow the points below:
- Tune Oracle: tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time.
- Tune physical devices: distribute (stripe) the data on different disks to reduce the I/O. Logical distribution is useless; "physical" I/O performance is only governed by independence of devices.
- Faster disks: buy faster disks to serve the unavoidable I/O requests more quickly.
- Increase db_block_buffers: a larger buffer cache can (not will, "might") help.
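A hedged sketch of mapping the P1/P2 values mentioned above to the segment being read (for 'db file sequential read', P1 is the file# and P2 the block#):

SQL> SELECT sid, event, p1 AS file#, p2 AS block#
  2  FROM v$session_wait
  3  WHERE event = 'db file sequential read';

SQL> SELECT owner, segment_name, segment_type
  2  FROM dba_extents
  3  WHERE file_id = &p1
  4  AND &p2 BETWEEN block_id AND block_id + blocks - 1;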

db file scattered read - a multiblock read (a full-table scan, OPQ, sorting), read into scattered (non-contiguous) buffers in the buffer cache. A db file scattered read issues a scatter-read to read the data into multiple discontinuous memory locations; a scattered read is usually a multiblock read. It may be caused by insufficient indexes or by the unavailability of up-to-date statistics. The db file scattered read wait event identifies that a full table scan is occurring; this is why the corresponding wait event is called 'db file scattered read'. Multiblock reads (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) due to full table scans into the buffer cache show up as waits for 'db file scattered read'.
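To see how large the multiblock reads can be and how often long tables are being fully scanned, something like the following can be checked (a sketch; the statistic names are as they appear in v$sysstat):

SQL> SHOW PARAMETER db_file_multiblock_read_count

SQL> SELECT name, value
  2  FROM v$sysstat
  3  WHERE name IN ('table scans (long tables)', 'table scans (short tables)');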

db file parallel read
'db file parallel read' occurs during recovery. The data blocks that need to be changed are read from various datafiles and are placed in non-contiguous buffer blocks. The server process waits until all the blocks are read into the buffer. If we are doing a lot of partition activity then expect to see this wait event; it could be a table or index partition. Tune SQL, tune indexing, tune disk I/O, increase the buffer cache.

db file parallel write
'db file parallel write' occurs when the database writer (DBWR) is performing a parallel write to files and blocks. Check the AVERAGE_WAIT in V$SYSTEM_EVENT; if it is greater than 10 milliseconds, it signals slow I/O throughput. Tuning options: the main blocker for this wait event is the OS I/O subsystem, so use OS monitoring tools (sar -d, iostat) to check the write performance. To improve the AVERAGE_WAIT time you can consider the following: if the data files reside on raw devices, use asynchronous writes; however, if the data files reside on cooked file systems, use synchronous writes with direct I/O. Note: if the AVERAGE_WAIT time for db file parallel write is high, then you may see the system waiting on the free buffer waits event.

control file parallel write
This event occurs while the session is writing physical blocks to all control files: the session starts a control file transaction (to make sure that the control files are up to date in case the session crashes before committing the control file transaction), then the session commits a transaction to a control file. This may be a case where too many checkpoints are generated as a result of excessive log switches. Use v$loghist to check how many log switches have been done. Add the /*+ APPEND */ hint and NOLOGGING to INSERT statements; this can reduce how quickly the log files fill. Recreate larger log files using something like the following:

SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 '/oracle/oradata/CURACAO9/redo03.log' SIZE 500M;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM CHECKPOINT;
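Checking the average wait for these events, as suggested above (note that V$SYSTEM_EVENT reports AVERAGE_WAIT in centiseconds, so 10 ms corresponds to a value of 1):

SQL> SELECT event, total_waits, average_wait
  2  FROM v$system_event
  3  WHERE event IN ('db file parallel write', 'control file parallel write');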

Troubleshooting: First, ensure that you have placed your control files on disks that are not under excessively heavy loads. Also consider using operating system mirroring instead of Oracle multiplexing of the control files, take and verify your control file backups daily, and create a new backup control file whenever a structural change is made. The common practice is to have multiple copies of the control files, keep them on separate physical disks, and not put them on heavily accessed disks. Reduce frequent log switches by finding the optimal time and size for a log switch.
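To find how frequently log switches actually happen (a sketch for sizing the redo logs; roughly 3-4 switches per hour is a common rule of thumb):

SQL> SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour, COUNT(*) AS switches
  2  FROM v$log_history
  3  GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
  4  ORDER BY 1;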

log file sync
The Oracle "log file sync" wait event is triggered when a user session issues a commit (or a rollback). The user session signals LGWR to write the log buffer to the redo log file. When LGWR has finished writing, it posts the user session. The wait is entirely dependent on LGWR writing out the necessary redo blocks and sending confirmation of its completion back to the user session. The wait time includes the writing of the log buffer and the post, and is sometimes called "commit latency". Troubleshooting: the application might be committing after every row rather than batching COMMITs; consider moving the online redo logs to super-fast SSD storage and increasing the LOG_BUFFER size above 10 megabytes (it is automatically set in 11g and beyond). To reduce the log file sync wait event, one consultant advised us to reduce the LOG_BUFFER init parameter to 1 MB from the default of 25 MB; in 11g this parameter is automatically taken care of by Oracle.

buffer busy waits
This occurs when a session wants to access a database block in the buffer cache but cannot, because the buffer is busy: another session is modifying the block, and the session modifying the block marks the block header with a flag letting other users know a change is taking place and to wait until the complete change is applied. The two main cases where this wait can occur are: another session is reading the block into the buffer, or another session holds the buffer in an incompatible mode to our request. Reduce buffer busy waits by tuning the SQL to access rows with fewer block reads (adding indexes), adjusting the database writer, or adding freelists to tables and indexes; it can occur even with a huge db_cache_size.

direct path read
This occurs when a session reads buffers from disk directly into the PGA (when reading from temp, the wait is 'direct path read temp', which is closely related). If the I/O subsystem doesn't support asynchronous I/O, then each wait corresponds to a physical read request. If the I/O subsystem supports asynchronous I/O, then the process overlaps read requests with processing the blocks already in the PGA. When the process attempts to access a block in the PGA that has not yet been read from disk, it issues a wait call and updates the statistics for this event. So the number of waits is not always the same as the number of read requests.
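A hedged PL/SQL sketch of the commit-batching point above (table T is hypothetical): committing once per batch instead of once per row cuts the number of log file sync waits dramatically.

SQL> BEGIN
  2    FOR i IN 1 .. 100000 LOOP
  3      INSERT INTO t VALUES (i);
  4      IF MOD(i, 1000) = 0 THEN
  5        COMMIT;               -- one commit per 1000 rows, not per row
  6      END IF;
  7    END LOOP;
  8    COMMIT;                   -- pick up the remainder
  9  END;
 10  /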

Using optimizer_index_cost_adj
The optimizer_index_cost_adj parameter was created to allow us to change the relative costs of full-scan versus index operations. This is the most important parameter of all, and the default setting of 100 is incorrect for most Oracle systems. For some OLTP systems, re-setting this parameter to a smaller value (between 10 and 30) may result in huge performance gains!
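The parameter is session-modifiable, so it can be tested safely before any system-wide change (a sketch):

SQL> ALTER SESSION SET optimizer_index_cost_adj = 20;
SQL> -- re-run the problem query and compare the plan and elapsed time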

Oracle hash join tips
In cases where a very small table is being joined to a large table, the Oracle hash join will often dramatically speed up the query. Hash joins are far faster than nested loop joins in certain cases, often where your SQL is joining a large table to a small table. However, in a production database with very large tables, it is not always easy to get your database to invoke hash joins without increasing the RAM regions that control hash joins. For large tables, hash joins require lots of RAM. The rules are quite different depending on your release, and you need to focus on either the hash_area_size or the pga_aggregate_target parameter.

Unfortunately, the Oracle hash join is more memory-intensive than a nested loop join. To be faster than a nested loop join, we must set the hash_area_size large enough to hold the entire hash table in memory (about 1.6 times the size of the rows in the driving table). If the Oracle hash join overflows the hash_area_size memory, the hash join will page into the TEMP tablespace, severely degrading its performance. A script can be used to dynamically set the proper hash_area_size for your SQL query in terms of the size of your hash-join driving table.
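The script itself was not preserved in these notes; a minimal sketch of the idea, with a hypothetical driving table SMALL_DIM (sizes in bytes, and assuming manual workarea sizing is in effect):

SQL> SELECT ROUND(bytes * 1.6) AS suggested_hash_area
  2  FROM user_segments
  3  WHERE segment_name = 'SMALL_DIM';

SQL> ALTER SESSION SET workarea_size_policy = MANUAL;
SQL> ALTER SESSION SET hash_area_size = 100000000;  -- value taken from the query above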

Indexes
An index is basically used for faster access to tables. Over a period of time the index gets fragmented because of the many DMLs running on the table. When the index gets fragmented, the data inside the index is scattered, rows per block reduce, the index consumes more space, and scanning the index takes more time and more blocks for the same set of queries. To talk in index terminology: we still have a single root block, but as fragmentation increases there will be a larger number of branch blocks and leaf blocks, and the height of the index will increase. To fix this, we go for an index rebuild. During an index rebuild, the data inside the index is reorganized and compressed to fit into the minimum number of blocks, the height of the index is reduced to the minimum possible level, and the performance of queries increases: your search becomes faster and your query reads fewer blocks.

There are two methods to rebuild an index:
1) Offline index rebuild - ALTER INDEX <index_name> REBUILD;
With an offline index rebuild, the table and index are locked in exclusive mode, preventing any transactions on the table. This is the most intrusive method and is rarely used in production unless we know for sure that modules are not going to access the table and we have complete downtime.
2) Online index rebuild - ALTER INDEX <index_name> REBUILD ONLINE;
With an online index rebuild, transactions can still access the table and index. The lock on the table is acquired (by the index rebuild operation) only for a very short time; for the rest of the time the table and index are available for transactions.

During index creation you can use CREATE INDEX ... ONLINE to create an index without placing an exclusive lock over the table. The CREATE INDEX ... ONLINE statement can speed things up, as it works even while reads or updates are happening on the table.

ALTER INDEX ... REBUILD ONLINE can be used to rebuild the index, resume failed operations, perform batch DML, add stop words to an index, or optimize the index.
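A small sketch using the SALES/SALES_IDX names that appear later in these notes:

SQL> CREATE INDEX sales_idx ON sales(id) ONLINE;
SQL> ALTER INDEX sales_idx REBUILD ONLINE;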

Table Locks: A table lock is required on the base table of the index at the start of the CREATE or REBUILD process, to guarantee consistent data dictionary (DD) information. A lock at the end of the process is also required, so as to merge changes into the final index structure.

The time taken to complete the indexing process will increase, because the indexing process will hang if there is an active transaction on the base table of the index being created or rebuilt at the time one of these locks is required. Another important issue to consider is that any other transaction on the base table that starts after the indexing process will also be locked until the indexing process releases its lock.

This issue can have a serious impact on response time in highly concurrent systems. The backlog of locked transactions can be quite significant, depending on the time taken by the initial transactions to commit or roll back.

Oracle 11g
Oracle 11g has provided enormous improvements in the locking implications of creating or rebuilding indexes online. Creating or rebuilding indexes online in Oracle 11g also requires two associated locks on the base table: one lock at the start of the indexing process and the other at the end. The indexing process still hangs until all prior transactions have completed if there is an active transaction on the base table at the time one of these locks is required. However, if the indexing process has been locked out and subsequent transactions on the base table start afterwards, those transactions will no longer be locked and will complete successfully. In Oracle 11g the indexing process no longer affects other concurrent transactions on the base table; the indexing process itself is the only one potentially left hanging while it waits to acquire its associated lock resource.

Oracle Global Index vs. Local Index
Global index: a global index is a one-to-many relationship, allowing one index partition to map to many table partitions. The docs say that a "global index can be partitioned by the range or hash method, and it can be defined on any type of partitioned, or non-partitioned, table".
Local index: a local index is a one-to-one mapping between an index partition and a table partition. In general, local indexes allow for a cleaner divide-and-conquer approach to generating fast SQL execution plans with partition pruning.

1. What is a local index? Local partitioned indexes are easier to manage, and each partition of a local index is associated with one table partition. They also offer greater availability and are common in DSS environments. When we take any action (MERGE, SPLIT, EXCHANGE, etc.) on a partition, it impacts only that partition of a local index, and the other partitions remain available. We cannot explicitly add a local index partition for a new table partition: the local index partition is added implicitly when we add the new partition to the table. Likewise, we cannot drop the local index partition explicitly; it is dropped automatically when we drop the partition from the underlying table. Local indexes can be unique when the partition key is part of the composite index. Unique local indexes are useful for OLTP environments. We can create bitmap indexes on partitioned tables, with the restriction that the bitmap indexes must be local to the partitioned table; they cannot be global indexes.

2. What is a global index? Global indexes are used in OLTP environments and offer efficient access to any individual record. We have two types of global index: global non-partitioned indexes and global partitioned indexes. Global non-partitioned indexes behave just like a non-partitioned index. A global partitioned index's partition key is independent of the table's partition key. The highest partition of a global index must have a partition bound all of whose values are MAXVALUE; if you want to add a new partition, you always need to split the MAX partition. If a global index partition is empty, you can explicitly drop it by issuing the ALTER INDEX ... DROP PARTITION statement. If a global index partition contains data, dropping the partition causes the next highest partition to be marked unusable. You cannot drop the highest partition of a global index.
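A hedged DDL sketch of the two index types (SALES_PART and its columns are hypothetical; assume the table is range-partitioned):

SQL> CREATE INDEX sales_local_idx ON sales_part(time_id) LOCAL;

SQL> CREATE INDEX sales_global_idx ON sales_part(cust_id)
  2  GLOBAL PARTITION BY RANGE (cust_id)
  3  (PARTITION p1000 VALUES LESS THAN (1000),
  4   PARTITION pmax  VALUES LESS THAN (MAXVALUE));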

Row Chaining and Row Migration
Concepts: the data for a row in a table may become too large to fit into a single data block. There are two circumstances when this can occur: row chaining and row migration.

Row chaining occurs when the row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.

Row migration occurs when a row that originally fitted into one data block is updated so that the overall row length increases, and the block's free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row: the ROWID of a migrated row does not change.

When a row is chained or migrated, performance associated with this row decreases, because Oracle must scan more than one data block to retrieve the information for the row:
- INSERT and UPDATE statements that cause migration and chaining perform poorly, because they perform additional processing.
- SELECTs that use an index to select migrated or chained rows must perform additional I/Os.

Row migration is typically caused by UPDATE operations; row chaining is typically caused by INSERT operations. SQL statements which create or query these chained/migrated rows will degrade in performance due to the extra I/O work. To diagnose chained/migrated rows, use the ANALYZE command and query the V$SYSSTAT view. To remove chained/migrated rows, rebuild the table with a higher PCTFREE using ALTER TABLE ... MOVE.

Detection: migrated and chained rows in a table or cluster can be identified by using the ANALYZE command with the LIST CHAINED ROWS option. This command collects information about each migrated or chained row and places this information into a specified output table. To create the table that holds the chained rows, execute the script UTLCHAIN.SQL.

SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> SELECT * FROM chained_rows;

You can also detect migrated and chained rows by checking the 'table fetch continued row' statistic in the v$sysstat view:

SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

NAME                          VALUE
table fetch continued row       308

Although migration and chaining are two different things, internally they are represented by Oracle as one. When detecting migration and chaining of rows you should analyze carefully what you are dealing with.

Resolving:
- In most cases chaining is unavoidable, especially when it involves tables with large columns such as LONGs, LOBs, etc. When you have a lot of chained rows in different tables and the average row length of these tables is not that large, you might consider rebuilding the database with a larger block size. E.g.: you have a database with a 2K block size, and different tables have multiple large VARCHAR columns with an average row length of more than 2K. This means you will have a lot of chained rows, because your block size is too small. Rebuilding the database with a larger block size can give you a significant performance benefit.
- Migration is caused by PCTFREE being set too low: there is not enough room in the block for updates. To avoid migration, all tables that are updated should have their PCTFREE set so that there is enough space within the block for updates. You need to increase PCTFREE to avoid migrated rows; if you leave more free space available in the block for updates, the row will have more room to grow.

SQL script to eliminate row migration; get the name of the table with migrated rows:

ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '

Tips:
1. Analyze the table and check the chained count for that particular table (e.g. a chain count of 8671).
2. Increase PCTFREE to 30:

SQL> ALTER TABLE tbl_tmp_transaction_details PCTFREE 30;
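A hedged sketch of the full fix: changing PCTFREE only affects future inserts, so the table has to be reorganized to fix existing migrated rows, and a MOVE leaves its indexes UNUSABLE, so they must be rebuilt afterwards (the index name is hypothetical):

SQL> ALTER TABLE tbl_tmp_transaction_details MOVE PCTFREE 30;
SQL> ALTER INDEX tbl_tmp_td_idx REBUILD;   -- indexes go UNUSABLE after a MOVE
SQL> ANALYZE TABLE tbl_tmp_transaction_details LIST CHAINED ROWS;  -- re-check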

Differences between STATSPACK and AWR
1. AWR is the next evolution of the STATSPACK utility.
2. AWR is a 10g new feature, but STATSPACK can still be used in 10g.
3. The AWR repository holds all the statistics available in STATSPACK, as well as some additional statistics which are not (10g new features).
4. STATSPACK does not store the ASH statistics which are available in the AWR DBA_HIST_ACTIVE_SESS_HISTORY view.
5. An important difference is that STATSPACK doesn't store history for the new metric statistics introduced in Oracle 10g. The key AWR views are DBA_HIST_SYSMETRIC_HISTORY and DBA_HIST_SYSMETRIC_SUMMARY.
6. AWR contains views such as DBA_HIST_SERVICE_STAT, DBA_HIST_SERVICE_WAIT_CLASS and DBA_HIST_SERVICE_NAME.
7. The latest version of STATSPACK included with 10g contains specific tables which track the history of statistics that reflect the performance of the Oracle Streams feature. These tables are STATS$STREAMS_CAPTURE, STATS$STREAMS_APPLY_SUM, STATS$BUFFERED_SUBSCRIBERS and STATS$RULE_SET.
8. AWR does not contain specific tables that reflect Oracle Streams activity. Therefore, if a DBA relies on Oracle Streams, it would be useful to monitor its performance using the STATSPACK utility.
9. AWR snapshots are scheduled every 60 minutes by default.
10. STATSPACK snapshot purges must be scheduled manually, but AWR snapshots are purged automatically by MMON every night.

Fractured block in Oracle
A fractured block is a block in which the header and footer are not consistent at a given SCN. In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file. It is possible for the operating system utility to read a block in a half-updated state, so that the block copied to the backup media is updated in its first half while the second half contains older data. In this case, the block is fractured. For non-RMAN backups, the ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP command is the solution to the fractured block problem. When a tablespace is in backup mode and a change is made to a data block, the database logs a copy of the entire block image before the change, so that the database can reconstruct the block if media recovery finds that it was fractured. The block that the operating system reads can be split, that is, the top of the block is written at one point in time while the bottom of the block is written at another point in time. If you restore a file containing a fractured block and Oracle reads the block, then the block is considered corrupt.
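A minimal user-managed hot backup sketch illustrating the commands above (paths are hypothetical):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
$ cp /u01/oradata/orcl/users01.dbf /backup/
SQL> ALTER TABLESPACE users END BACKUP;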

Difference between Execution Plan and Explain Plan
Note, however, that there are cases where the "execution plan" shown by "explain plan" differs from the actual "execution plan" when the statement is actually executed.

There are several possible reasons for this to happen: Usage of bind variables and bind variable peeking functionality, different optimizer related session settings, statement caching in the SGA etc.

The actual execution plan can be retrieved from V$SQL_PLAN, or by using the DBMS_XPLAN.DISPLAY_CURSOR function introduced in Oracle 10g. We can think of EXPLAIN PLAN as the CBO's guess at how it would handle the query if you executed it, whereas the EXECUTION PLAN is the actual set of actions that were performed to execute the query.
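A hedged sketch of pulling the actual plan of the last statement run in the current session (the ALLSTATS LAST format needs plan statistics, e.g. the gather_plan_statistics hint or STATISTICS_LEVEL=ALL):

SQL> SELECT /*+ gather_plan_statistics */ COUNT(*) FROM sales;
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));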

In the SQL trace TKPROF report, if you have the ROW SOURCE OPERATION section you can trust that those steps were done to execute that specific query.

Snapshot Too Old Error in Oracle
Oracle holds undo information in case you need to roll back a transaction, and also to keep a read-consistent version of data. Long-running queries may need the read-consistent versions of the data in undo segments, because the data may not be at the same System Change Number (SCN) as the version currently in memory (it may have been changed since the start of the query). If the undo segment holding the original data is overwritten, the user receives the dreaded Snapshot Too Old error. The ORA-01555 is caused by Oracle's read consistency mechanism. If you have a long-running SQL that starts at 10:30 AM, Oracle ensures that all rows are as they appeared at 10:30 AM, even if the query runs until noon! Oracle does this by reading the "before image" of changed rows from the online undo segments. If you have lots of updates, long-running SQL and a too-small UNDO, the ORA-01555 error will appear. From the docs we see that the ORA-01555 error relates to insufficient undo storage or a too-small value for the UNDO_RETENTION parameter. Action: if in Automatic Undo Management mode, increase the setting of UNDO_RETENTION; otherwise, use larger rollback segments. You can get an ORA-01555 error with a too-small UNDO_RETENTION even with a large undo tablespace; however, you can also set a super-high value for UNDO_RETENTION and still get an ORA-01555 error. Also see the important notes on commit frequency and the ORA-01555 error.

Lock, Latch, Enqueues
Locks are used to protect data or resources from simultaneous use by multiple sessions, which might leave them in an inconsistent state. Locks are an external mechanism: users can also set locks on objects by using various Oracle statements. Latches serve the same purpose but work at the internal level: latches are used to protect and control access to internal data structures such as the various SGA buffers. They are handled and maintained by Oracle and we cannot access or set them; this is the main difference. Locks are held at the object level; latches are the locking mechanism protecting Oracle's memory structures. Enqueues are the low-level serialization locking mechanism which serializes access to resources.
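A quick hedged check of the two knobs mentioned above (V$UNDOSTAT.MAXQUERYLEN gives the longest-running query in seconds per interval, a common input for sizing UNDO_RETENTION):

SQL> SHOW PARAMETER undo_retention
SQL> SELECT MAX(maxquerylen) AS longest_query_secs FROM v$undostat;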

About Opening with the RESETLOGS Option
The RESETLOGS option is always required after incomplete media recovery or recovery using a backup control file. Resetting the redo log does the following:
- Archives the current online redo logs (if they are accessible) and then erases the contents of the online redo logs and resets the log sequence number to 1. For example, if the current online redo logs are sequence 1000 and 1001 when you open RESETLOGS, then the database archives logs 1000 and 1001 and then resets the online logs to sequence 1 and 2.
- Creates the online redo log files if they do not currently exist.
- Reinitializes the control file metadata about online redo logs and redo threads.
- Updates all current datafiles and online redo logs and all subsequent archived redo logs with a new RESETLOGS SCN and time stamp.

Because the database will not apply an archived log to a datafile unless the RESETLOGS SCN and time stamps match, the RESETLOGS prevents you from corrupting datafiles with archived logs that are not from direct parent incarnations of the current incarnation. In prior releases, it was recommended that you back up the database immediately after the RESETLOGS. Because you can now easily recover a pre-RESETLOGS backup like any other backup, making a new database backup is optional. In order to perform recovery through RESETLOGS you must have all archived logs generated since the last backup and at least one control file (current, backup, or created).

Figure 18-1 (in the Oracle documentation) shows the case of a database that can only be recovered to log sequence 2500 because an archived redo log is missing. When the online redo log is at sequence 4000, the database crashes. You restore the sequence 1000 backup and prepare for complete recovery. Unfortunately, one of your archived logs is corrupted. The log before the missing log contains sequence 2500, so you recover to this log sequence and open RESETLOGS. As part of the RESETLOGS, the database archives the current online logs (sequence 4000 and 4001) and resets the log sequence to 1. You generate changes in the new incarnation of the database, eventually reaching log sequence 4000. The changes between sequence 2500 and sequence 4000 for the new incarnation of the database are different from the changes between sequence 2500 and sequence 4000 for the old incarnation. You cannot apply logs generated after 2500 in the old incarnation to the new incarnation, but you can apply the logs generated before sequence 2500 in the old incarnation to the new incarnation. The logs from after sequence 2500 are said to be orphaned in the new incarnation because they are unusable for recovery in that incarnation.

http://docs.oracle.com/cd/E25178_01/server.1111/e16638/technique.htm
http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e14496/psr_trouble.htm

http://www.oracle.com/technetwork/database/bi-datawarehousing/pres-what-to-expect-from-optimizer--128499.pdf

With SQL Plan Management:
- A SQL statement is parsed for the first time and a plan is generated
- Check the log to see if this is a repeatable SQL statement
- Add the SQL statement signature to the log and execute it
- Plan performance is still verified by execution

- The SQL statement is parsed again and a plan is generated
- Check the log to see if this is a repeatable SQL statement
- Create a plan history and use the current plan as the SQL plan baseline
- Plan performance is verified by execution

Something changes in the environment:
- The SQL statement is parsed again and a new plan is generated
- The new plan is not the same as the baseline; the new plan is not executed but marked for verification
- The known plan baseline is executed instead; its performance is verified by history

Verifying the new plan:
- Non-baseline plans will not be used until verified
- The DBA can verify a plan at any time

Invoke or schedule verification:
- The optimizer checks if the new plan is as good as or better than the old plan
- Plans which perform as well as or better than the original plan are added to the plan baseline
- Plans which don't perform as well as the original plan stay in the plan history and are marked unaccepted

SQL Plan Management, the details
Controlled by two init.ora parameters:
- optimizer_capture_sql_plan_baselines: controls auto-capture of SQL plan baselines for repeatable statements; set to FALSE by default in 11gR1
- optimizer_use_sql_plan_baselines: controls the use of existing SQL plan baselines by the optimizer; set to TRUE by default in 11gR1

Monitoring SPM
- Dictionary view DBA_SQL_PLAN_BASELINES
- Via the SQL Plan Control page in EM DBControl

Managing SPM
- PL/SQL package DBMS_SPM, or via SQL Plan Control in EM DBControl
- Requires the ADMINISTER SQL MANAGEMENT OBJECT privilege
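A hedged sketch of the basic SPM round trip using the parameters and package named above:

SQL> ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
SQL> -- run the repeatable statement at least twice so a baseline is captured

SQL> SELECT sql_handle, plan_name, enabled, accepted
  2  FROM dba_sql_plan_baselines;

SQL> -- evolve (verify) any unaccepted plans for a given handle
SQL> SELECT dbms_spm.evolve_sql_plan_baseline(sql_handle => '&sql_handle') FROM dual;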

Viewing plans of old SQLs

Previously I wrote about how to view the plan of a SQL. Today I will tell you about a good feature, the DBMS_XPLAN.DISPLAY_AWR function, which comes with Oracle 10g and helps you view the plan of an old SQL. If you have licenses for the Tuning Pack and Diagnostics Pack, you can get historical information about the old SQLs which ran on your database. For more info about the licensing of these packs, refer to the Oracle Database Licensing Information 10g Release 1 (10.1) manual. DBMS_XPLAN.DISPLAY_AWR displays the contents of an execution plan stored in the AWR. The syntax is:

DBMS_XPLAN.DISPLAY_AWR(
  sql_id          IN VARCHAR2,
  plan_hash_value IN NUMBER   DEFAULT NULL,
  db_id           IN NUMBER   DEFAULT NULL,
  format          IN VARCHAR2 DEFAULT 'TYPICAL');

If the db_id parameter is not specified, the function will use the id of the local DB. If you don't specify the plan_hash_value parameter, the function will bring back all the stored execution plans for the given sql_id. The format parameter has many capabilities; you can get the list from the manual. (All tests were done with 10.2.0.1 Express Edition.) You can also use the DBA_HIST_SQL_PLAN table for viewing the historic plan info.
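A hedged usage sketch (the sql_id can be found first in AWR views such as DBA_HIST_SQLTEXT):

SQL> SELECT sql_id, sql_text
  2  FROM dba_hist_sqltext
  3  WHERE sql_text LIKE '%problem query%';

SQL> SELECT * FROM TABLE(dbms_xplan.display_awr('&sql_id'));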

Restoring old Optimizer Statistics for troubleshooting purposes
We deliver an Oracle Database 10g (sic!) New Features course this week in our education center in Düsseldorf. Many customers are still on 10g productively, and some are even just upgrading from 9i to 10g, still being in the process of evaluating 11g. Many new features in the 10g version are related to the optimizer. Some are quite well known to the public, like the obsolescence of the rule-based optimizer and the presence of a scheduler job that gathers optimizer statistics every night out of the box. Others lead a more quiet life; one of them is the possibility to restore old optimizer statistics (i.e. as a troubleshooting measure), should there be issues with the newly gathered ones. Since 10g, we do not simply overwrite old optimizer statistics by gathering new ones. Instead, the old optimizer statistics are automatically historized and kept for one month by default, so that they can be restored easily if that should be desired. Now did you know that? Here we go with a short demonstration of restoring historic optimizer statistics:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.

SQL> create table sales as
  2  select rownum as id,
  3         mod(rownum,5) as channel_id,
  4         mod(rownum,1000) as cust_id,
  5         5000 as amount_sold,
  6         sysdate as time_id
  7  from dual connect by level <= 30000000;

Table created.

SQL> create index sales_idx on sales(id) nologging;

Index created.

SQL> select segment_name,bytes/1024/1024 as mb from user_segments;

SEGMENT_NAME                     MB
------------------------ ----------
SALES                           942
SALES_IDX                       566

I just created a demo user, a table and an index on that table. Notice that the two segments take about 1.5 gig of space, should you like to reproduce the demo yourself. Right now, there are no optimizer statistics for the table:

SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
(NULL values here)

I am now going to gather statistics on the table manually; the same would be done automatically by the standard scheduler job during the night:

SQL> exec dbms_stats.gather_table_stats('ADAM','SALES')

PL/SQL procedure successfully completed.

SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585

SQL> select count(*) from sales;

  COUNT(*)
----------
  30000000

As we can see, the statistics are quite accurate, reflecting well the actual size of the table. The index is used for the following query, as we can tell by the runtime already:

SQL> set timing on
SQL> select amount_sold from sales where id=4711;

AMOUNT_SOLD
-----------
       5000

Elapsed: 00:00:00.00

I am now going to introduce a problem with the optimizer statistics artificially, by just setting them very inaccurately. A real-world problem caused by new optimizer statistics is a little harder to come up with; probably you will never encounter it during your career.

SQL> exec dbms_stats.set_table_stats('ADAM','SALES',numrows=>100,numblks=>1)

PL/SQL procedure successfully completed.

SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
       100          1

With the above (completely misleading) statistics, the optimizer will think that a full table scan of the sales table is fairly cheap. Please notice that I ask for id 4712 and not 4711 again, because it could happen that the already computed execution plan (index range scan) is still in the library cache, available for reuse. I could also flush the shared pool here to make sure that a new execution plan has to be generated for the id 4711.

SQL> select amount_sold from sales where id=4712;

AMOUNT_SOLD
-----------
       5000

Elapsed: 00:00:01.91

We can tell by the runtime of almost 2 seconds here that this was a full table scan. Proof would be to retrieve the execution plan from the library cache; I leave that to your studies. Please be aware that the autotrace feature might be misleading here. For our scope, it is enough to say that we have an issue caused by the generation of new optimizer statistics. We want to get back our good old statistics! Therefore, we look at the historic optimizer statistics:

SQL> alter session set NLS_TIMESTAMP_TZ_FORMAT='yyyy-mm-dd:hh24:mi:ss';

Session altered.

SQL> select table_name,stats_update_time from user_tab_stats_history;

TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ --------------------------------------
SALES                          2010-05-18:09:47:16
SALES                          2010-05-18:09:47:38

We see two rows, representing the old statistics of the sales table. The first is from the time when there were NULL entries (before the first gather_table_stats). The second row represents the accurate statistics. I am going to restore them:

SQL> begin
  2    dbms_stats.restore_table_stats('ADAM','SALES',
  3      to_timestamp('2010-05-18:09:47:38','yyyy-mm-dd:hh24:mi:ss'));
  4  end;
  5  /

PL/SQL procedure successfully completed.

SQL> select num_rows,blocks from user_tables;

  NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585

How to get the past execution plan for a particular query?
Suppose I have a query which used to run faster, and after 15 days I found that the query execution is very slow. Now I want to compare the present execution plan and the past execution plan. Can someone guide me: how can I extract (from what views/tables) the old execution plan?

Re: As far as my knowledge goes, if I know the sql_id, we can get it from v$sql. (Also, AWR maintains history for SQL plan execution.) What if I don't know the sql_id?

Re: Do you have a SQL profile? Check this view: DBA_SQL_PROFILES.

Re: Look at http://psoug.org/reference/dbms_xplan.html

dbms_xplan.display_awr seems to be what you are looking for.

What if I don't know the sql_id? Have a look at v$sqlstats, dba_hist_sqltext and the other dba_hist_% views.

19.1.3.1 Using V$SQL_PLAN Views
In addition to running the EXPLAIN PLAN command and displaying the plan, you can use the V$SQL_PLAN views to display the execution plan of a SQL statement. After the statement has executed, you can display the plan by querying the V$SQL_PLAN view. V$SQL_PLAN contains the execution plan for every statement stored in the cursor cache. Its definition is similar to the PLAN_TABLE. See "PLAN_TABLE Columns".

The advantage of V$SQL_PLAN over EXPLAIN PLAN is that you do not need to know the compilation environment that was used to execute a particular statement. For EXPLAIN PLAN, you would need to set up an identical environment to get the same plan when executing the statement.

The V$SQL_PLAN_STATISTICS view provides the actual execution statistics for every operation in the plan, such as the number of output rows and elapsed time. All statistics, except the number of output rows, are cumulative. For example, the statistics for a join operation also include the statistics for its two inputs. The statistics in V$SQL_PLAN_STATISTICS are available for cursors that have been compiled with the STATISTICS_LEVEL initialization parameter set to ALL.

The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparisons of the optimizer's estimates of the number of rows and elapsed time. This view combines both V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.

(Max query length: see V$UNDOSTAT.)

Optimizer
The optimizer is one of the most fascinating components of the Oracle Database, since it is essential to the processing of every SQL statement. The optimizer determines the most efficient execution plan for each SQL statement based on the structure of the given query, the available statistical information about the underlying objects, and all the relevant optimizer and execution features.
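A hedged sketch of pulling a plan straight from V$SQL_PLAN for a known sql_id:

SQL> SELECT id, operation, options, object_name
  2  FROM v$sql_plan
  3  WHERE sql_id = '&sql_id'
  4  ORDER BY id;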

The RULE (and CHOOSE) OPTIMIZER_MODE has been deprecated and desupported in 11g. (The only way to get rule-based behavior in 11g is by using the RULE hint in a query, which is not supported either). In general, using the RULE hint is not recommended, but for individual queries that need it, it is there. Consult with Oracle support before using the RULE hint in 11g.In 11g, the cost-based optimizer has two modes: NORMAL and TUNING.In NORMAL mode, the cost-based optimizer considers a very small subset of possible execution plans to determine which one to choose. The number of plans considered is far smaller than in past versions of the database in order to keep the time to generate the execution plan within strict limits. SQL profiles (statistical information) can be used to influence which plans are considered.The TUNING mode of the cost-based optimizer can be used to perform more detailed analysis of SQL statements and make recommendations for actions to be taken and for auxiliary statistics to be accepted into a SQL profile for later use when running under NORMAL mode. TUNING mode is also known as the Automatic Tuning Optimizer mode, and the optimizer can take several minutes for a single statement (good for testing). See the Oracle Database Performance Tuning Guide Automatic SQL Tuning (Chapter 17 in the 11.2 docs).Oracle states that the NORMAL mode should provide an acceptable execution path for most SQL statements. SQL statements that do not perform well in NORMAL mode may be tuned in TUNING mode for later use in NORMAL mode. This should provide a better performance balance for queries that have defined SQL profiles, with the majority of the optimizer work for complex queries being performed in TUNING mode once, rather than repeatedly, each time the SQL statement is parsed.

With each new release the optimizer evolves to take advantage of new functionality and new statistical information to generate better execution plans. Oracle Database 12c takes this evolution a step further with the introduction of a new adaptive approach to query optimization.

Adaptive Query Optimization
By far the biggest change to the optimizer in Oracle Database 12c is Adaptive Query Optimization. Adaptive Query Optimization is a set of capabilities that enable the optimizer to make run-time adjustments to execution plans and to discover additional information that can lead to better statistics. This new approach is extremely helpful when existing statistics are not sufficient to generate an optimal plan. There are two distinct aspects of Adaptive Query Optimization: adaptive plans, which focus on improving the initial execution of a query, and adaptive statistics, which provide additional information to improve subsequent executions.

Adaptive Plans
Adaptive plans enable the optimizer to defer the final plan decision for a statement until execution time. The optimizer instruments its chosen plan (the default plan) with statistics collectors so that at runtime it can detect whether its cardinality estimates differ greatly from the actual number of rows seen by the operations in the plan. If there is a significant difference, then the plan, or a portion of it, can be automatically adapted to avoid suboptimal performance on the first execution of a SQL statement.
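A hedged sketch of inspecting an adaptive plan in 12c; the '+ADAPTIVE' format of DBMS_XPLAN shows the inactive (switched-off) operations of the default plan alongside the final one:

SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(format => '+ADAPTIVE'));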

Adaptive Statistics
The quality of the execution plans determined by the optimizer depends on the quality of the statistics available. However, some query predicates become too complex to rely on base table statistics alone, and the optimizer can now augment these statistics with adaptive statistics.

How to Check the Optimizer Version

SQL> show parameter optim

object_cache_optimal_size     integer     102400
optimizer_dynamic_sampling    integer     2
optimizer_features_enable     string      10.2.0.3
optimizer_index_caching       integer     60
optimizer_index_cost_adj      integer     20
optimizer_mode                string      ALL_ROWS
optimizer_secure_view_merging boolean     TRUE
plsql_optimize_level          integer     2

How Optimization Looks at the Data
Rule-based optimization is Oracle-centric, whereas cost-based optimization is data-centric. The optimizer mode under which the database operates is set via the initialization parameter OPTIMIZER_MODE. The possible optimizer modes are as follows.

ALL_ROWS: gets all rows faster (generally forces index suppression). This is good for untuned, high-volume batch systems, and it is the default. ALL_ROWS attempts to optimize the query to get the very last row as fast as possible. This makes sense in a stored procedure, for example, where the client does not regain control until the stored procedure completes: you don't care if you have to wait to get the first row, if the last row gets back to you twice as fast. In a client-server/interactive application you may well care about that.

SQL> SELECT /*+ ALL_ROWS */ employee_id, last_name, salary, job_id
  2  FROM employees
  3  WHERE employee_id = 107;

If you specify either the ALL_ROWS or the FIRST_ROWS hint in a SQL statement, and if the data dictionary does not have statistics about tables accessed by the statement, then the optimizer uses default statistical values, such as allocated storage for such tables, to estimate the missing statistics and to subsequently choose an execution plan. These estimates might not be as accurate as those gathered by the DBMS_STATS package, so you should use the DBMS_STATS package to gather statistics.

If you specify hints for access paths or join operations along with either the ALL_ROWS or FIRST_ROWS hint, then the optimizer gives precedence to the access paths and join operations specified by the hints.

Syntax: OPTIMIZER_MODE = { first_rows_[1 | 10 | 100 | 1000] | first_rows | all_rows }

Default value: ALL_ROWS
Modifiable: ALTER SESSION, ALTER SYSTEM
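For instance, both of the following are valid ways to change the mode (a sketch; pick values appropriate to your workload):

ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;   -- session scope only
ALTER SYSTEM  SET optimizer_mode = ALL_ROWS;        -- instance-wide default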

OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance.

FIRST_ROWS - Gets the first row faster (generally forces index use). This is good for untuned systems that process lots of single transactions.

FIRST_ROWS (1|10|100|1000) - Gets the first n rows faster. This is good for applications that routinely display partial results to users, such as paging data to a user in a web application.

In TOAD or SQL Navigator, when we select data it displays immediately, but that does not mean it is faster: if we scroll down, the tool may still be fetching data in the background. FIRST_ROWS is the best fit for OLTP environments, and also for reporting environments where the user wants to see the initial data first and the rest of the data later. When we run a query inside a stored procedure, FIRST_ROWS is not a good choice; ALL_ROWS is the better option there, because there is no benefit in fetching the first few records immediately inside the stored procedure.

FIRST_ROWS_N - The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).

FIRST_ROWS - The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows. FIRST_ROWS attempts to optimize the query to get the very first row back to the client as fast as possible, which is good for an interactive client-server environment where the client runs a query, shows the user the first 10 rows or so, and waits for them to page down for more.

When do we use FIRST_ROWS? We can use FIRST_ROWS when the user wants to see the first few rows immediately. It is mostly used in OLTP and in some reporting environments.

ALL_ROWS - The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement).

When do we use ALL_ROWS? We can use ALL_ROWS when the user wants to process all the rows before seeing any output. It is mostly used in OLAP. ALL_ROWS uses fewer resources compared to FIRST_ROWS.

Important factors in FIRST_ROWS:
1. It prefers index scans.
2. It prefers nested loop joins over hash joins, because a nested loop join returns rows as they are selected, whereas a hash join first hashes the data into a hash table, which takes time.
3. Good for OLTP.
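As an illustrative sketch (the table and column names are hypothetical), the same preference can be requested per statement with the FIRST_ROWS(n) hint:

-- Ask the optimizer to optimize for delivery of the first 10 rows only
SELECT /*+ FIRST_ROWS(10) */ order_id, status
FROM   orders
WHERE  customer_id = 42
ORDER  BY order_id;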

Important factors in ALL_ROWS:

1. It uses both index scans and full table scans, depending on how many blocks the optimizer would have to read from the table.
2. Good for OLAP.
3. It is most likely to use hash joins, again depending on other factors.

Optimizer parameters (opt_param) - there are 3 related hints:
i. ALL_ROWS hint
ii. optimizer_index_caching
iii. optimizer_index_cost_adj

OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ Wrap-up

The setting of the OPTIMIZER_INDEX_CACHING and OPTIMIZER_INDEX_COST_ADJ parameters will not make the plans run faster. It just affects which plan is chosen. It is important to remember that setting these parameter values does not affect how much of the index is actually cached or how expensive a single-block I/O truly is in relation to multiblock I/O.

Note added by me right now: that is important. It does not CHANGE how the query is processed at all. It does not cause more of the index to be cached; it lets you tell Oracle how much of the index you estimate will be cached. Just changing these parameters will not cause the same plan to go faster (or slower). It will just change the COSTS associated with the plan. It might result in a DIFFERENT PLAN being chosen, and that is where you would see performance differences!

Rather, this allows you to pass the information you have learned on to the CBO so it can make better decisions on your system. It also points out why just looking at the cost of a query plan in an attempt to determine which plan is going to be faster is an exercise in futility: take two identical plans with two different costs. Which one is faster? Neither.

The effect of adjusting these two parameters is profound and immediate: they radically change the costing assigned to various steps, which in turn dramatically affects the plans generated. Therefore, you want to test the effects of these parameters thoroughly on your test system first! I've seen systems go from nonfunctional to blazingly fast simply by adjusting these two knobs. Out of all of the Oracle initialization parameters, these two are the most likely to be defaulted inappropriately for your system. Adjusting them is likely to greatly change your opinion of the CBO's abilities.
http://www.dba-oracle.com/art_so_optimizer_index_caching.htm
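A sketch of adjusting them, either session-wide or scoped to a single statement via the OPT_PARAM hint (the values and table are illustrative, not recommendations):

ALTER SESSION SET optimizer_index_caching  = 90;
ALTER SESSION SET optimizer_index_cost_adj = 25;

-- Or per query, without touching the session:
SELECT /*+ opt_param('optimizer_index_cost_adj', 25) */ *
FROM   orders
WHERE  customer_id = 42;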

http://docs.oracle.com/cd/E25178_01/server.1111/e16638/technique.htm
http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e14496/psr_trouble.htm

Two further modes round out the list of optimizer modes above:

CHOOSE - Now obsolete and unsupported but still allowed. Uses cost-based optimization for all analyzed tables. This is a good mode for well-built and well-tuned systems (for advanced users). This option is not documented for 11gR2 but is still usable.

RULE - Now obsolete and unsupported but still allowed. Always uses rule-based optimization. If you are still using this, you need to start using cost-based optimization, as rule-based optimization is no longer supported under Oracle 10g Release 2 and higher.

The default optimizer mode for Oracle 11g Release 2 is ALL_ROWS, and cost-based optimization is used even if the tables are not analyzed. Although RULE/CHOOSE are definitely desupported and obsolete, and people are often scolded for even talking about them, I was able to set the mode to RULE in 11gR2; I only received an error when I set OPTIMIZER_MODE to a mode that doesn't exist (SUPER_FAST).

The ALL_ROWS hint instructs the optimizer to optimize a statement block with a goal of best throughput, which is minimum total resource consumption (see the example query earlier in this section).


http://www.oracle.com/technetwork/database/bi-datawarehousing/pres-what-to-expect-from-optimizer--128499.pdf

With SQL Plan Management:
A SQL statement is parsed for the first time and a plan is generated.
Check the log to see if this is a repeatable SQL statement.
Add the SQL statement signature to the log and execute it.
Plan performance is still verified by execution.

The SQL statement is parsed again and a plan is generated.
Check the log to see if this is a repeatable SQL statement.
Create a plan history and use the current plan as the SQL plan baseline.
Plan performance is verified by execution.

Something changes in the environment:
The SQL statement is parsed again and a new plan is generated.
The new plan is not the same as the baseline, so the new plan is not executed but is marked for verification.
The known plan baseline is executed instead - plan performance is verified by history.

Verifying the new plan:
Non-baseline plans will not be used until verified.
The DBA can verify a plan at any time.

Invoke or schedule verification:
The optimizer checks whether the new plan is as good as or better than the old plan.
Plans which perform as well as or better than the original plan are added to the plan baseline.
Plans which don't perform as well as the original plan stay in the plan history and are marked unaccepted.
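A sketch of invoking verification by hand with DBMS_SPM; the sql_handle is hypothetical - in practice you would look it up in DBA_SQL_PLAN_BASELINES first:

SET SERVEROUTPUT ON
DECLARE
  l_report CLOB;
BEGIN
  -- Verify unaccepted plans for one baseline; plans that pass are accepted
  l_report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
                sql_handle => 'SQL_abc123',   -- hypothetical handle
                verify     => 'YES',
                commit     => 'YES');
  DBMS_OUTPUT.PUT_LINE(l_report);
END;
/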

SQL Plan Management - the details
Controlled by two init.ora parameters:

optimizer_capture_sql_plan_baselines - controls auto-capture of SQL plan baselines for repeatable statements. Set to FALSE by default in 11gR1.

optimizer_use_sql_plan_baselines - controls the use of existing SQL plan baselines by the optimizer. Set to TRUE by default in 11gR1.

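For example, to turn on auto-capture alongside baseline use (a sketch; SCOPE/spfile handling omitted):

ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
ALTER SYSTEM SET optimizer_use_sql_plan_baselines     = TRUE;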

Monitoring SPM:
Dictionary view DBA_SQL_PLAN_BASELINES, or via SQL Plan Control in EM DBControl.
Managing SPM:
PL/SQL package DBMS_SPM, or via SQL Plan Control in EM DBControl. Requires the ADMINISTER SQL MANAGEMENT OBJECT privilege.
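A quick monitoring query (the column list is trimmed to the essentials):

SELECT sql_handle, plan_name, origin, enabled, accepted
FROM   dba_sql_plan_baselines
ORDER  BY sql_handle, plan_name;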

Viewing plans of old SQLs

Previously I wrote about how to view the plan of a SQL. Today I will tell you about a good feature, the DBMS_XPLAN.DISPLAY_AWR function, which comes with Oracle 10g and helps you to view the plan of an old SQL. If you have licenses for the Tuning Pack and Diagnostics Pack, you can get historical information about the old SQLs which ran on your database. For more info about the licensing of these packs, refer to the Oracle Database Licensing Information 10g Release 1 (10.1) manual.

DBMS_XPLAN.DISPLAY_AWR displays the contents of an execution plan stored in the AWR. The syntax is:

DBMS_XPLAN.DISPLAY_AWR(
  sql_id          IN VARCHAR2,
  plan_hash_value IN NUMBER   DEFAULT NULL,
  db_id           IN NUMBER   DEFAULT NULL,
  format          IN VARCHAR2 DEFAULT TYPICAL);

If the db_id parameter is not specified, the function will use the id of the local database. If you don't specify the plan_hash_value parameter, the function will bring back all the stored execution plans for the given sql_id. The format parameter has many options; you can get the list from the manual. (All tests were done with 10.2.0.1 Express Edition.) You can also use the DBA_HIST_SQL_PLAN view for viewing the historic plan info.
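A minimal usage sketch; the sql_id is hypothetical - look yours up first, for example in DBA_HIST_SQLTEXT:

-- Find the sql_id of the statement of interest
SELECT sql_id, sql_text
FROM   dba_hist_sqltext
WHERE  lower(sql_text) LIKE '%sales%';

-- Show every plan AWR has captured for it
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('abc1d2efg3h45'));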

Restoring old Optimizer Statistics for troubleshooting purposes

We deliver an Oracle Database 10g (sic!) New Features course this week in our education center in Düsseldorf. Many customers are still on 10g productively, and some are even just upgrading from 9i to 10g, still in the process of evaluating 11g. Many new features in the 10g version are related to the optimizer. Some are quite well known, like the obsolescence of the rule-based optimizer and the presence of a scheduler job that gathers optimizer statistics every night out of the box. Others lead a quieter life; one of them is the possibility to restore old optimizer statistics (e.g. as a troubleshooting measure), should there be issues with the newly gathered ones. Since 10g, we do not simply overwrite old optimizer statistics by gathering new ones. Instead, the old optimizer statistics are automatically historized and kept for one month by default, so that they can be restored easily if desired. Now, did you know that?
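As a side note, the history retention window can be inspected and changed with DBMS_STATS; a minimal sketch:

-- How long statistics history is kept (31 days by default)
SELECT dbms_stats.get_stats_history_retention FROM dual;

-- Oldest timestamp for which history is still available
SELECT dbms_stats.get_stats_history_availability FROM dual;

-- Change the retention, e.g. to 60 days
exec dbms_stats.alter_stats_history_retention(60)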

Here we go with a short demonstration of restoring historic optimizer statistics:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE    10.2.0.1.0      Production
TNS for Linux: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.

SQL> create table sales as select
rownum as id,
mod(rownum,5) as channel_id,
mod(rownum,1000) as cust_id,
5000 as amount_sold,
sysdate as time_id
from dual connect by level <= 30000000;

Table created.

SQL> create index sales_idx on sales(id) nologging;

Index created.

SQL> select segment_name,bytes/1024/1024 as mb from user_segments;

SEGMENT_NAME                     MB
------------------------ ----------
SALES                           942
SALES_IDX                       566

I just created a demo user, a table, and an index on that table. Notice that the two segments take about 1.5 Gig of space, should you like to reproduce the demo yourself. Right now, there are no optimizer statistics for the table:

SQL> select num_rows,blocks from user_tables;

NUM_ROWS     BLOCKS
---------- ----------
(NULL values here)

I am now going to gather statistics on the table manually - the same would be done automatically by the standard scheduler job during the night:

SQL> exec dbms_stats.gather_table_stats('ADAM','SALES')

PL/SQL procedure successfully completed.

SQL> select num_rows,blocks from user_tables;

NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585

SQL> select count(*) from sales;

COUNT(*)
----------
  30000000

As we can see, the statistics are quite accurate, reflecting well the actual size of the table. The index is used for the following query, as we can tell by the runtime already:

SQL> set timing on
SQL> select amount_sold from sales where id=4711;

AMOUNT_SOLD
-----------
       5000

Elapsed: 00:00:00.00

I am now going to introduce a problem with the optimizer statistics artificially, by simply setting them very inaccurately. A real-world problem caused by new optimizer statistics is a little harder to come up with - probably you will never encounter one during your career.

SQL> exec dbms_stats.set_table_stats('ADAM','SALES',numrows=>100,numblks=>1)

PL/SQL procedure successfully completed.

SQL> select num_rows,blocks from user_tables;

NUM_ROWS     BLOCKS
---------- ----------
       100          1

With the above (completely misleading) statistics, the optimizer will think that a full table scan of the sales table is fairly cheap. Please notice that I query id 4712 and not 4711 again, because the already computed execution plan (index range scan) could still be in the library cache, available for reuse. I could also flush the shared pool here to make sure that a new execution plan has to be generated for id 4711.

SQL> select amount_sold from sales where id=4712;

AMOUNT_SOLD
-----------
       5000

Elapsed: 00:00:01.91

We can tell by the runtime of almost 2 seconds that this was a full table scan. Proof would be to retrieve the execution plan from the library cache; I leave that to your studies. Please be aware that the autotrace feature might be misleading here. For our purposes, it is enough to say that we have an issue caused by the generation of new optimizer statistics. We want our good old statistics back! Therefore, we look at the historic optimizer statistics:

SQL> alter session set NLS_TIMESTAMP_TZ_FORMAT='yyyy-mm-dd:hh24:mi:ss';

Session altered.

SQL> select table_name,stats_update_time from user_tab_stats_history;

TABLE_NAME                     STATS_UPDATE_TIME
------------------------------ --------------------------------------
SALES                          2010-05-18:09:47:16
SALES                          2010-05-18:09:47:38

We see two rows, representing the old statistics of the sales table. The first is from the time when there were NULL entries (before the first gather_table_stats); the second row represents the accurate statistics. I am going to restore them:

SQL> begin
dbms_stats.restore_table_stats('ADAM','SALES',
  to_timestamp('2010-05-18:09:47:38','yyyy-mm-dd:hh24:mi:ss'));
end;
/

PL/SQL procedure successfully completed.

SQL> select num_rows,blocks from user_tables;

NUM_ROWS     BLOCKS
---------- ----------
  29933962     119585

How to get the past execution plan for a particular query?
Suppose I have a query which used to run fast, and after 15 days I find the query execution is very slow. Now I want to compare the present execution plan and the past execution plan. Can someone guide me - how can I extract (from what views/tables) the old execution plan?

As far as I know, if I know the sql_id, we can get it from v$sql. (Also, AWR maintains history for SQL plan execution.) What if I don't know the sql_id?

Re: How to get the past execution plan for a particular query?
Do you have a SQL profile? Check the view DBA_SQL_PROFILES.

Re: How to get the past execution plan for a particular query?
Look at http://psoug.org/reference/dbms_xplan.html

dbms_xplan.display_awr seems to be what you are looking for.

What if I don't know the sql_id?
Have a look at v$sqlstats, dba_hist_sqltext, and the other dba_hist_% views.
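For example (the LIKE pattern is a placeholder for text you remember from the statement):

SELECT sql_id, plan_hash_value, sql_text
FROM   v$sqlstats
WHERE  lower(sql_text) LIKE '%sales%';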

19.1.3.1 Using V$SQL_PLAN Views
In addition to running the EXPLAIN PLAN command and displaying the plan, you can use the V$SQL_PLAN views to display the execution plan of a SQL statement. After the statement has executed, you can display the plan by querying the V$SQL_PLAN view. V$SQL_PLAN contains the execution plan for every statement stored in the cursor cache. Its definition is similar to the PLAN_TABLE. See "PLAN_TABLE Columns".

The advantage of V$SQL_PLAN over EXPLAIN PLAN is that you do not need to know the compilation environment that was used to execute a particular statement. For EXPLAIN PLAN, you would need to set up an identical environment to get the same plan when executing the statement.

The V$SQL_PLAN_STATISTICS view provides the actual execution statistics for every operation in the plan, such as the number of output rows and the elapsed time. All statistics, except the number of output rows, are cumulative. For example, the statistics for a join operation also include the statistics for its two inputs. The statistics in V$SQL_PLAN_STATISTICS are available for cursors that have been compiled with the STATISTICS_LEVEL initialization parameter set to ALL.

The V$SQL_PLAN_STATISTICS_ALL view enables side-by-side comparisons of the optimizer's estimates for the number of rows and elapsed time with the actual execution statistics. This view combines both V$SQL_PLAN and V$SQL_PLAN_STATISTICS information for every cursor.
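A sketch of both approaches for a hypothetical sql_id:

-- Raw plan rows straight from the cursor cache
SELECT id, operation, options, object_name, cardinality
FROM   v$sql_plan
WHERE  sql_id = 'abc1d2efg3h45'
ORDER  BY id;

-- Or formatted, with actual row counts alongside the estimates
-- (requires STATISTICS_LEVEL = ALL when the cursor was compiled)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
  sql_id => 'abc1d2efg3h45', format => 'ALLSTATS LAST'));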