How to identify which indexes can be safely dropped in Oracle

The longer an application has been successfully run, the more likely you are to have indexes that are no longer used or beneficial. Removing these indexes not only saves space but can also improve the performance of any DML operations.

But knowing which indexes can be safely dropped can be tricky.

In Oracle Database 12c, things got a little easier with the introduction of a new view called DBA_INDEX_USAGE.

The DBA_INDEX_USAGE view displays cumulative index usage statistics, but unlike previous releases, it’s not just a binary value (YES or NO). This new view shows how often an index was used, who accessed it, and how effective it was via a histogram based on the number of rows returned. The index access information is captured in memory and flushed to disk every 15 minutes. You can query the LAST_FLUSH_TIME column in V$INDEX_USAGE_INFO to determine when it was last updated.
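For example, a quick sketch of checking when the usage information was last persisted:

```sql
-- Check when index usage statistics were last flushed to disk
SELECT last_flush_time
FROM   v$index_usage_info;
```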

You should also be aware that index usage is sampled by default rather than tracked on every execution. The sampling approach will likely catch all index usage in a steadily running system, where indexes are accessed repeatedly, but it can easily miss indexes used in one-off tests. You can change the tracking approach so that every execution is tracked, but doing so carries a significant performance overhead.

As always, it is easier to understand the benefits of a dictionary view by looking at an example. In my example below, I join the DBA_INDEX_USAGE view to DBA_INDEXES via an outer join to ensure I capture information on indexes that have never been used, because those indexes won’t appear in DBA_INDEX_USAGE at all.

SELECT i.index_name, u.total_access_count tot_access, u.total_exec_count exec_cnt,
       u.bucket_0_access_count b0, u.bucket_1_access_count b1, u.bucket_2_10_access_count b2_10,
       u.bucket_11_100_access_count b11_100, u.bucket_101_1000_access_count b101_1k,
       u.bucket_1000_plus_access_count b1k, u.last_used
FROM   dba_index_usage u
RIGHT JOIN dba_indexes i
ON     i.index_name = u.name
AND    i.owner = u.owner
WHERE  i.owner = 'MARIA'
ORDER BY u.total_access_count;

The output of this query should look something like this.

INDEX_NAME       TOT_ACCESS   EXEC_CNT    B0    B1  B2_10  B11_100  B101_1K   B1K  LAST_USED
---------------- ---------- ---------- ----- ----- ------ -------- -------- ----- ----------
PROD_CUST_SALES           1          1     0     0      0        0        0     1  06-APR-23
INDX_LOC                  2          2     0     1      1        0        0     0  12-APR-23
INDX_DEPTNO              19         19    18     1      0        0        0     0  26-APR-23
PROD_SUP_INDX            27         27    25     0      0        0        2     0  26-APR-23
EMPNO_PK_IND             82         82    48    32      0        2        0     0  26-APR-23
CHAN_SOLD
PROD_SUB_IDX

As you can see in the output above, for each index, we get the total number of accesses (TOT_ACCESS), the execution count (EXEC_CNT), the last date the index was used (LAST_USED), and a histogram of how many rows were returned across several buckets.

Let’s look at the EMPNO_PK_IND index (the third row from the bottom). This index returned 0 rows 48 times out of 82 total accesses and a single row 32 times. So, it looks like this index is commonly used and valuable.

However, if we look at PROD_CUST_SALES (the first row), it was accessed only once and returned over 1,000 rows. Is that a helpful index?

Maybe, maybe not. It may be beneficial if this index access replaces a full table scan of an extremely wide table. If, on the other hand, it’s only marginally cheaper than a full table scan, it may not be worth the overhead of maintaining the index.

But perhaps the most exciting part of the query output above is that we have two indexes (CHAN_SOLD, PROD_SUB_IDX) that have never been accessed. These are good candidates to be dropped.

But dropping an index can be risky. What can you do to ensure you don’t get fired for dropping the wrong index?

Rather than dropping the index immediately, it is far easier to mark the index invisible. The optimizer will no longer consider the index and won’t select it as part of any execution plan. However, the index will continue to be maintained.

ALTER INDEX prod_sub_idx INVISIBLE;

If no one complains about their query performance regressing after some time, you can safely drop the index. If, on the other hand, someone does complain, you can make the index visible again in a matter of seconds.

ALTER INDEX prod_sub_idx VISIBLE;

You can also create a new index as invisible. The optimizer remains unaware of the new index until you verify it improves performance by setting the parameter OPTIMIZER_USE_INVISIBLE_INDEXES within a single session. This allows the optimizer to consider the new index only for SQL statements issued within your session; all other sessions will continue to use the existing execution plans.

-- New indexes can be marked invisible until you have an opportunity to prove they improve performance
CREATE INDEX my_idx ON t(x, object_id) INVISIBLE; 
 
-- Test newly created invisible indexes by setting OPTIMIZER_USE_INVISIBLE_INDEXES to TRUE
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

Better diagnostics for SQL regressions in 19c and beyond #JoelKallmanDay

When diagnosing and correcting a performance regression for a SQL statement, it can often be challenging to find out what is happening during the execution and why your corrective measures are not working.

In today’s blog, I want to share several enhancements introduced in recent Oracle Database releases to help you better understand how to improve a SQL statement and understand why your corrective measures aren’t working as you hoped.

Enhanced Execution Plan

One of the most valuable tools at our disposal is the execution plan. Over the last several years, the Oracle Optimizer team has made a considerable effort to share even more insights and guidance on what might be impacting the performance of a SQL statement via the execution plan.

The Note section under the plan

The note section under the execution plan contains valuable information on what has happened during parse and execution. Over the last several releases, you will find a lot more information appearing in the note section, including details on the following features:

  • Dynamic Sampling – indicates a non-default value for the parameter OPTIMIZER_DYNAMIC_SAMPLING or that one or more objects referenced in the query are missing statistics.
  • Plan Stability Features – indicates that a stored outline, SQL profile, SQL patch, SQL plan directive, or SQL plan baseline influenced the plan chosen.
  • Adaptive Plan – indicates that the plan shown by the Explain Plan command may differ from the plan used during execution, as the Optimizer uses run-time statistics to make the final decision on which join types and aggregation methods to use. It is best to view the actual plan used via V$SQL_PLAN after the initial execution.
  • Statistics Feedback – indicates the plan was re-evaluated based on previous execution statistics, and a new plan was found.

Hint Usage Reporting in Oracle Database 19c

Starting in 19c, you will see a new hint usage report under the execution plan. The new report gives you a better understanding of what happened to each hint in a SQL statement.

Remember, the database doesn’t issue an error if a hint is invalid or not applicable for a SQL statement. The database will treat the hint as a comment, effectively ignoring it. Previously, the only way to find this information was via a 10053 trace file, and even then, it wasn’t very detailed.

The new hint report shows which hints were used and which were ignored and explains why hints weren’t used. The most common reasons for a hint not to be used are:

  • Syntax errors – the hint contains a typo or an invalid argument.
  • Unresolved hints – the hint contains an invalid argument or is not applicable to the SQL statement. For example, you request an index be used, but no index exists on the table.
  • Unused hints – the hint can’t be used in this specific scenario. For example, you requested a Hash Join for a non-equality join condition.
  • Conflicting hints – multiple hints provide conflicting directives.

In the example below, the hint NO_QURY_TRANSFORMATION was reported to contain a syntax error. The word query is misspelled in the hint, so the hint can’t be used.

--------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |       |       |    47 (100)|
|   1 |  HASH GROUP BY                |          |   269 | 37929 |    47   (7)|
|   2 |   HASH JOIN                   |          |   269 | 37929 |    46   (5)|
|   3 |    TABLE ACCESS STORAGE FULL  | SALES    | 10000 | 90000 |     5   (0)|
|*  4 |    TABLE ACCESS STORAGE FULL  | PRODUCTS | 43108 |  4841K|    40   (3)|
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):                                            
---------------------------------------------------                                            
   4 - access("ITEM_1"="P"."PROD_ID")                                                          
   
Hint Report (identified by operation id / Query Block Name / Object Alias):                    
Total hints for statement: 1 (E - Syntax error (1))
--------------------------------------------------------------------------
 0 -  SEL$1                                                                                  
         E -  NO_QURY_TRANSFORMATION 
Note                                                                                           
-----
- dynamic statistics used: dynamic sampling (level=2)                                       
                                                              

In this second example, I provided two hints on how to access the employees table. One hint requested that the primary key index be used, and the other requested that the access leverage parallel execution.

SELECT /*+ index(e empno_pk_ind) parallel(e 8) */ e.empno, ename
FROM   employees e
WHERE  e.empno < 7700;

----------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows | Bytes |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |      |       |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED | EMPLOYEES    |    8 |    80 |
|*  2 |   INDEX RANGE SCAN                   | EMPNO_PK_IND |    8 |       |
----------------------------------------------------------------------------
Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
----------------------------------------------------------------------------
         U -  parallel(e 8)

The parallel hint was not used because I supplied an invalid combination of hints. An index range scan can’t be parallelized unless the index is partitioned, which is not the case in this example. Therefore, the Optimizer cannot honor both hints.

Note: Nigel Bayliss, the Optimizer product manager, has also blogged about Optimizer hint reporting and has shared details on some limitations regarding execution hints such as GATHER_OPTIMIZER_STATISTICS, APPEND, etc.

SQL Analysis Report in Oracle Database 23c

In Oracle Database 23c, the execution plan got another new section: a SQL Analysis Report. This handy addition helps you diagnose common problems that can cause suboptimal execution plans. For example, the new report will point out situations where you:

  • Are missing a join condition
  • Have a WHERE clause predicate that prevents an index from being used
  • Have a datatype mismatch in a WHERE clause predicate
  • Use a UNION instead of a UNION ALL

EXPLAIN PLAN FOR
SELECT * FROM addresses
 WHERE UPPER(state) = 'CA';
 
SELECT * FROM dbms_xplan.display();
 
Plan hash value: 3184888728
 
--------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |   239 | 13384 |    61   (0)| 
|*  1 |  TABLE ACCESS FULL| ADDRESSES |   239 | 13384 |    61   (0)| 
--------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(UPPER("state")='CA')
 
SQL Analysis Report (identified by operation id/Query Block Name/Object Alias):
-----------------------------------------------------------------
 
   1 -  SEL$1 / "ADDRESSES"@"SEL$1" 
-  The following columns have predicates which preclude their use as keys in index range scan. Consider rewriting the predicates. "STATE"

Again, Nigel Bayliss has blogged about this in more detail on the official Optimizer blog.

Note that SQL Monitor active reports also contain SQL Analysis reports. You can find them under the new SQL Analysis tab.

New Optimizer Dictionary Views

Along with enhancements to the execution plan in 23c, we added new data dictionary views to help identify what happens when we parse and execute a SQL statement.

Apps and users frequently change the value of one or more parameters that impact the Optimizer at a session level. As a DBA or performance engineer, you are often unaware of these logon triggers or ALTER SESSION commands. The first clue that the environment may have changed is when you see a plan change.

To check if a change in the optimizer environment caused the plan change, you can check the column OPTIMIZER_ENV_HASH_VALUE in V$SQL, V$SQLAREA, and DBA_HIST_SQLSTAT. However, this column doesn’t tell you what has changed.

Starting in 23c, you can query a new dictionary view, DBA_HIST_OPTIMIZER_ENV_DETAILS, to find out exactly what has changed in the optimizer environment when we parsed and executed a SQL statement.

Using the OPTIMIZER_ENV_HASH_VALUE for the original and new plans, you can query DBA_HIST_OPTIMIZER_ENV_DETAILS and get a list of the parameter settings for each scenario, which you can then compare to find the difference.

SELECT DISTINCT optimizer_env_hash_value
FROM  dba_hist_sqlstat
WHERE sql_id = 'bsvavk15n7cra'
ORDER BY 1;

OPTIMIZER_ENV_HASH_VALUE                                                        
------------------------                                                        
              1309615723                                                        
              2369923737 

SELECT name, value
FROM dba_hist_optimizer_env_details
WHERE optimizer_env_hash_value = 1309615723
ORDER BY 1;

NAME                                               VALUE                        
-------------------------------------------------- -------------------------    
BlockChain_ledger_infrastructure                   0                            
CLI_internal_cursor                                0                            
PMO_altidx_rebuild                                 0                            
_adaptive_window_consolidator_enabled              true                         
_add_stale_mv_to_dependency_list                   true  
   .
   .
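
To list only the parameters that differ between the two environments, you can join the view to itself. This is a sketch, assuming the two hash values returned by the earlier query:

```sql
-- Compare two optimizer environments and list only the differing parameters
SELECT a.name, a.value AS old_value, b.value AS new_value
FROM   dba_hist_optimizer_env_details a
JOIN   dba_hist_optimizer_env_details b
ON     a.name = b.name
WHERE  a.optimizer_env_hash_value = 1309615723
AND    b.optimizer_env_hash_value = 2369923737
AND    a.value <> b.value
ORDER BY a.name;
```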

You can get more information on how to leverage the DBA_HIST_OPTIMIZER_ENV_DETAILS view in MOS Note 2953121.1, Examining the Optimizer Environment within Which a SQL Statement was Parsed in AWR.

How to use DBMS_STATS DIFF_TABLE_STATS functions

In 11g, Oracle introduced the DBMS_STATS.DIFF_TABLE_STATS functions to help you compare two sets of statistics for a table along with all its dependent objects (indexes, columns, partitions).

There are three versions of this function depending on where the statistics being compared are located:

  • DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY (compares statistics for a table from two timestamps in the past)
  • DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING (compares pending statistics and statistics as of a timestamp or statistics from the data dictionary)
  • DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB (compares statistics from a user statistics table and the data dictionary, from two different user statistics tables, or a single user statistics table using two different STATSIDs)

The functions return a report that has three sections:

  1. Basic table statistics
    The report compares the basic table statistics (number of rows, blocks, etc.).
  2. Column statistics
    The second section of the report examines column statistics, including histograms.
  3. Index Statistics
    The final section of the report covers differences in index statistics.

Statistics will only be displayed in the report if the difference between the two sets exceeds a certain threshold (%). The threshold can be specified as an argument to the functions (PCTTHRESHOLD); the default value is 10%. The statistics corresponding to the first source, typically the current table statistics in the data dictionary, are used to compute the differential percentage.

The functions also return the MAXDIFFPCT (a number) along with the report. This is the maximum percentage difference between the statistics. These differences can come from the table, column, or index statistics.

Let’s look at an example.
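Here is a sketch of how you might call the _IN_HISTORY variant to compare the statistics a table had a day ago with the statistics it has now (the MARIA owner and SALES table names are illustrative):

```sql
-- Compare SALES statistics from 24 hours ago with the current statistics,
-- reporting only differences greater than 10% (the default threshold)
SELECT report, maxdiffpct
FROM   table(dbms_stats.diff_table_stats_in_history(
                ownname      => 'MARIA',
                tabname      => 'SALES',
                time1        => systimestamp,
                time2        => systimestamp - INTERVAL '1' DAY,
                pctthreshold => 10));
```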

SQL Tuning: How to fix a SQL statement that isn’t getting partition pruning

Partitioning is one of the most powerful tools at your disposal when managing large volumes of data and improving the performance of queries that would otherwise scan and filter lots and lots of data.

However, it can be tricky to determine why it isn’t helping when you thought it should.

Typically, partitioning improves query performance by ensuring only the partition(s) needed to answer the business user’s query will be scanned rather than the entire table.

But how can you tell if you got partition pruning in an execution plan? Or worse yet, how do you determine why you didn’t get partition pruning when you were expecting it?

In the video below, I share the steps I use to determine if partition pruning has occurred and what to look at and correct if you don’t automatically get the partition pruning you were expecting.

This blog post is part of a series on SQL Tuning. In part one, I shared some simple steps to help you tune a SQL statement using the wrong join type. Part two deals with how to tackle a problem where the optimizer picks the wrong index, and part three shares tips on how to fix statements where the Optimizer chooses a nested loop join instead of a hash join.

Happy Tuning!

 

Explain the Explain Plan: Join Order

 

Continuing my blog series on reading and interpreting Oracle execution plans, this week’s post covers Join Order.

So what is the Join Order?

The join order is the order in which the tables are joined together in a multi-table SQL statement. Ideally, a plan should start with the join that eliminates the most data to minimize the amount of data carried into the subsequent joins.

How is the Join Order Determined?

The join order is determined based on cost, which is strongly influenced by the cardinality estimates and available access paths. However, the Optimizer will also always adhere to some basic rules:

    • The Optimizer will always select a join that produces at most one row as the initial join in the plan – for example, a join between two row sources that each return only one row, such as a primary key lookup or an index unique scan.
    • If a SQL statement uses an outer join, the optimizer must obey the join order implied by the outer join: the row-preserving table must come before the outer-joined table so that all of the additional rows that don’t satisfy the join condition can be added to the result set correctly. With the Oracle syntax for outer joins, the table marked with the outer-join operator (+) must come after the other table in the join order. Thus, in this example, the cities table must come before the countries table.
       WHERE cities.country_id = countries.id(+);
    • For SQL statements that reference a database view, the Optimizer will attempt view merging, where the definition of the view is inserted into the rest of the SQL statement and the entire expanded statement is optimized as a whole. However, there are a few cases where view merging isn’t possible. In those cases, the optimizer joins all of the tables in the view together before the resulting data set is joined to the tables outside the view.
    • When a subquery has been converted into an anti-join (NOT IN subquery) or semi-join (EXISTS subquery), the tables from the subquery must come after the tables in the outer query block to which they were connected or correlated. However, hash anti-joins and semi-joins can override this ordering condition under certain circumstances.

How to determine the Join Order in an execution plan

You can take several approaches to determine the Join Order in a plan, from looking at the indentation of the tables in the operation column to a depth-first search. To clearly explain how to identify the Join Order in an execution plan, I’ve created a short video demonstrating several approaches using real-world examples.

What if I don’t get the Join Order I want?

The leading cause of a wrong join order is typically a cardinality misestimate on one of the tables or joins in the query, or a missing access method.
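If the estimates are correct and you still need to override the Optimizer’s choice, you can request a specific join order with the LEADING hint. This is a sketch; the table and column names are illustrative:

```sql
-- Ask the Optimizer to join departments to employees first,
-- in that order, before joining any remaining tables
SELECT /*+ LEADING(d e) */ e.ename, d.dname, l.city
FROM   departments d, employees e, locations l
WHERE  e.deptno = d.deptno
AND    d.loc_id = l.loc_id;
```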

Don’t forget this post is part of a series of posts on interpreting execution plans, covering how to generate plans, cardinality estimates, access methods, and join methods.

Explain the Explain Plan: Join Methods

Continuing my blog series on reading and interpreting Oracle execution plans, this week’s post covers the different Join Methods and types available to the Optimizer.

What is an Oracle Join Method?

Join Methods are the techniques used by the Oracle Optimizer to join data coming from two data-producing operators in an execution plan. You can find the individual Join Methods chosen by the Optimizer in the Operations column of the execution plan.

How many Join Methods does the Optimizer have to choose from?

 

The Oracle Optimizer supports three join methods: nested loops, hash join, and sort merge join.

Nested Loops Join

Nested loops joins are useful when small subsets of data are being joined and there is an efficient way of accessing the second table (for example, an index lookup).

For every row in the first table (the outer table), Oracle accesses all the rows in the second table (the inner table) looking for a match. You can think of it as a set of embedded FOR loops.

NOTE: In Oracle Database 11g, the internal implementation for nested loop joins changed to reduce overall latency for physical I/O. You may see two NESTED LOOPS operators in the plan’s Operations column where you previously saw only one in earlier versions of Oracle. You can find more details on why there are two operators in the video below.

Hash Joins

Hash joins are used for joining large data sets. The Optimizer uses the smaller of the two tables or data sources to build a hash table in memory, based on the join key. It then scans the larger table, performs the same hashing algorithm on the join column(s), and probes the previously built hash table for each value; if there is a match, it returns the row.

Sort Merge Joins

Sort merge joins are useful when the join condition between two tables is an inequality condition such as <, <=, >, or >=. They can perform better than nested loops joins for large data sets. The join consists of two steps:

  1. Sort join operation: Both the inputs are sorted on the join key.
  2. Merge join operation: The sorted lists are merged together.

A Sort Merge join is more likely to be chosen if there is an index on one of the tables that will eliminate one of the sorts.

When will the Optimizer choose each of these methods, and what can I do to influence that decision?

To clearly explain how each of the Join Methods works and when they will be chosen, I’ve created the short video below.

What if I don’t get the Join Method I want?

The leading cause of getting the wrong join method is typically a cardinality misestimate on the table on the left-hand side of the join. That’s why Oracle introduced Adaptive Plans, and more specifically Adaptive Join Methods, in Oracle Database 12c to automatically correct the plan if the wrong join method is chosen.
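If fixing the cardinality estimate isn’t possible, you can also request a specific join method with a hint. This is a sketch using the USE_HASH hint; the table names are illustrative:

```sql
-- Request a hash join between employees and departments
SELECT /*+ USE_HASH(e d) */ e.ename, d.dname
FROM   employees e, departments d
WHERE  e.deptno = d.deptno;
```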

How Adaptive Joins work

During the initial execution of a plan, if Oracle detects that the Optimizer’s cardinality estimates were wrong, the join method can be changed “on the fly” to a better option.

It’s possible to change an adaptive plan “on the fly” because it consists of a default plan, which is the plan the Optimizer picks based on the current statistics, plus an alternative subplan for various portions of the plan. For example, the default plan could use a nested loops join, and the alternative subplan a hash join.

A new operator called a statistics collector is inserted into the plan, right above the table on the left-hand side of the join, which buffers the rows coming out of the table until we can get a sense of how many rows will be returned. Once the number of rows returned is known, or the number exceeds a certain threshold, the Optimizer chooses the final join method. After the initial execution, the statistics collector and the subplan components that weren’t chosen become no-ops, and the impact on execution plan performance is nil.

Don’t forget this post is part of a series of posts on interpreting execution plans, which also covers how to generate plans, cardinality estimates, and access methods.

The next instalment will be all about join orders.

 

Explain the Explain Plan: Access Methods

At the end of last year, I began a blog series on reading and interpreting Oracle execution plans. In this week’s post, I will tackle the aspect of execution plans that I get the most questions about, Access Methods.

What are Oracle Access Paths or Methods?

Access Methods, or Access Paths, are the techniques used by Oracle to access the data needed to answer a query. Access Methods can be divided into two categories: data accessed via a table scan or via index access. You can find the individual Access Methods chosen by the Optimizer in the Operations column of the execution plan.

How Many Access Paths are available to the Optimizer?

Oracle supports nine different Access Methods today, as shown in the image below.

When will the Optimizer choose each of these methods, and what can I do to influence that decision?

To clearly explain how each of the Access Methods works and when it will be chosen, I’ve created a short video.

What if I don’t get the Access Method I want?

If the Access Method you see in an execution plan is not what you expect, check that the cardinality estimates for that object are correct and that the join order allows the access method you desire. Remember, Optimizer transformations (the rewriting of your query to open up additional access methods) can also greatly impact the Access Method.
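Should you still need to steer the Optimizer toward a particular access path, the INDEX and FULL hints are the usual levers. This is a sketch; the index and table names are illustrative:

```sql
-- Request an index scan on employees via EMP_DEPT_IDX,
-- and a full table scan on departments
SELECT /*+ INDEX(e emp_dept_idx) FULL(d) */ e.ename, d.dname
FROM   employees e, departments d
WHERE  e.deptno = d.deptno
AND    e.deptno = 30;
```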

SQL Tuning Tips You Can’t Do Without

Last week, I enjoyed presenting at the aioug Sangam 20 on one of my favorite topics, SQL Tuning.

Often, we are led to believe you need a degree in wizardry to tune a sub-optimal SQL statement, but in reality, you usually just need to know where to look.

In the session, I look at four different SQL statements with sub-optimal plans and share details on where I look for information to help me understand why. Once I know the root cause of a problem, it’s easy to apply the appropriate solution.

Those who couldn’t make the session in person can download the presentation file from here.

Explain the Explain Plan: Cardinality Estimates

In last week’s post, I began a series on how to read and interpret Oracle execution plans by explaining what an execution plan is and how to generate one. This week I’m going to tackle the most important piece of information the Optimizer shares with you via the execution plan: its cardinality estimates.

What is a Cardinality Estimate?

A cardinality estimate is the estimated number of rows the optimizer believes will be returned by a specific operation in the execution plan. The Optimizer determines the cardinality for each operation based on a complex set of formulas that use table and column level statistics as input (or the statistics derived by dynamic sampling). It’s considered the most important aspect of an execution plan because it strongly influences all of the other decisions the optimizer makes.

In part 4 of our series, I share some of the formulas used by the optimizer to estimate cardinalities, as well as showing you how to identify cardinalities in a plan. I also demonstrate multiple ways to determine if the cardinality estimates are accurate.

What can cause a Cardinality Misestimate and how do I fix it?

Several factors can lead to incorrect cardinality estimates even when the basic table and column statistics are up to date. In part 5 of our series, I explain the leading causes of cardinality misestimates and how you can address them.

Next week’s instalment will be all about the different access methods available to the Optimizer and what you can do to encourage the optimizer to select the access method you want!

Don’t forget this series also covers how to read an explain plan as well as the different join methods and join orders.

Don’t forget more information on the Oracle Optimizer can always be found on the Optimizer blog.

Explaining the Explain Plan – How to Read and Interpret Execution Plans

 

Examining the different aspects of an execution plan, from cardinality estimates to parallel execution, and understanding what information you should glean from it can be overwhelming even for the most experienced DBA.

That’s why I’ve put together a series of short videos that will walk you through each aspect of the plan and explain what information you can find there and what to do if the plan isn’t what you were expecting.

What is an Execution Plan?

The series starts at the very beginning with a comprehensive overview of what an execution plan is and what information is displayed in each section. After all, you can’t learn to interpret what is happening in a plan, until you know what a plan actually is.

How to Generate an Execution Plan?

Although multiple different tools will display an Oracle execution plan for you, there really are only two ways to generate the plan. You can use the Explain Plan command, or you can view the execution plan of a SQL statement currently in the cursor cache using the dictionary view V$SQL_PLAN. This session covers both techniques and provides insights into what additional information you can get the Optimizer to share with you when you generate a plan. It also explains why you don’t always get the same plan with each approach, as I discussed in an earlier post.
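The two techniques look like this in practice (a sketch; the SQL_ID and query are illustrative):

```sql
-- Technique 1: EXPLAIN PLAN shows the plan the Optimizer expects to use
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE deptno = 30;

SELECT * FROM table(dbms_xplan.display());

-- Technique 2: display the plan actually used by a cursor in the cursor cache
SELECT * FROM table(dbms_xplan.display_cursor(sql_id => 'bsvavk15n7cra'));
```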

How to use DBMS_XPLAN to FORMAT an Execution Plan

The FORMAT parameter of the DBMS_XPLAN.DISPLAY_CURSOR function is the best tool to show you detailed information about what happened during an execution, including the bind variable values used, the actual number of rows returned by each step, and how much time was spent on each step. I’ve also covered a lot of this content in a previous post.
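For example, after running the statement with the GATHER_PLAN_STATISTICS hint (or with STATISTICS_LEVEL set to ALL), you can request the runtime row counts for the last execution (a sketch):

```sql
-- Run the statement with runtime statistics collection enabled
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM employees WHERE deptno = 30;

-- Show estimated vs. actual rows for the last execution of the cursor
SELECT * FROM table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```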

Part 2 of the series covers Cardinality Estimates and what you can do to improve them!

Part 3 of the series covers Access Methods and what you can do if you don’t get the access method you were expecting.

Part 4 of the series covers Join Methods and when you can expect each one and what to do if you don’t get the join method you were expecting.

Remember you can always get more information on the Oracle Optimizer on the Optimizer team’s blog.