In 11g, Oracle introduced the DBMS_STATS.DIFF_TABLE_STATS functions to help you compare two sets of statistics for a table, along with all of its dependent objects (indexes, columns, partitions).
There are three versions of this function depending on where the statistics being compared are located:
- DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY (compares statistics for a table from two timestamps in the past)
- DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING (compares pending statistics with statistics as of a timestamp or with statistics from the data dictionary)
- DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB (compares statistics from a user statistics table with the data dictionary, from two different user statistics tables, or from a single user statistics table using two different STATSIDs)
The functions return a report that has three sections:
- Basic table statistics
The first section compares the basic table statistics (number of rows, blocks, etc.).
- Column statistics
The second section of the report examines column statistics, including histograms.
- Index statistics
The final section of the report covers differences in index statistics.
Statistics will only be displayed in the report if the difference between the two sets exceeds a certain threshold, expressed as a percentage. The threshold can be specified as an argument to the functions (PCTTHRESHOLD); the default value is 10%. The statistics corresponding to the first source, typically the current table statistics in the data dictionary, are used as the basis for computing the percentage difference.
Along with the report, the functions also return MAXDIFFPCT, a number giving the maximum percentage difference found between the two sets of statistics. This difference can come from the table, column, or index statistics.
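As a minimal sketch of how these functions are typically called, the query below uses DIFF_TABLE_STATS_IN_HISTORY to compare the current statistics for a table with those from one day ago. The SCOTT schema and EMP table are placeholder names, and the 10% threshold simply makes the default explicit.

```sql
-- Compare current statistics with those in effect 24 hours ago.
-- The function is pipelined, returning REPORT (a CLOB) and MAXDIFFPCT.
SELECT report, maxdiffpct
FROM   TABLE(DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY(
         ownname      => 'SCOTT',
         tabname      => 'EMP',
         time1        => SYSTIMESTAMP,                     -- current statistics
         time2        => SYSTIMESTAMP - INTERVAL '1' DAY,  -- statistics from yesterday
         pctthreshold => 10));                             -- only show diffs > 10%
```

Note that historical comparisons are limited by the statistics retention period (31 days by default), controlled via DBMS_STATS.ALTER_STATS_HISTORY_RETENTION.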
Let’s look at an example.
Continue reading “How to use DBMS_STATS DIFF_TABLE_STATS functions”
Last week, I enjoyed presenting at the aioug Sangam 20 on one of my favorite topics, SQL Tuning.
Often, we are led to believe you need a degree in wizardry to tune a sub-optimal SQL statement, but in reality, you usually just need to know where to look.
In the session, I look at four different SQL statements with sub-optimal plans and share details on where I look for information to help me understand why. Once I know the root cause of a problem, it’s easy to apply the appropriate solution.
Those who couldn’t make the session in person can download the presentation file from here.
In last week’s post, I began a series on how to read and interpret Oracle execution plans by explaining what an execution plan is and how to generate one. This week I’m going to tackle the most important piece of information the Optimizer shares with you via the execution plan: its cardinality estimates.
What is a Cardinality Estimate?
A cardinality estimate is the estimated number of rows the Optimizer believes will be returned by a specific operation in the execution plan. The Optimizer determines the cardinality for each operation based on a complex set of formulas that use table- and column-level statistics as input (or statistics derived by dynamic sampling). It’s considered the most important aspect of an execution plan because it strongly influences all of the other decisions the Optimizer makes.
In part 4 of our series, I share some of the formulas used by the Optimizer to estimate cardinalities, as well as show you how to identify cardinalities in a plan. I also demonstrate multiple ways to determine whether the cardinality estimates are accurate.
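One common way to check estimate accuracy is to capture runtime row counts alongside the estimates. The sketch below assumes a hypothetical SALES table and AMOUNT_SOLD column; the GATHER_PLAN_STATISTICS hint collects actual row counts so that DBMS_XPLAN.DISPLAY_CURSOR can show estimated (E-Rows) next to actual (A-Rows) for each plan operation.

```sql
-- Run the statement with runtime statistics collection enabled.
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
FROM   sales
WHERE  amount_sold > 100;

-- Display the plan for the last statement executed in this session;
-- 'ALLSTATS LAST' adds the E-Rows and A-Rows columns for comparison.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));
```

A large gap between E-Rows and A-Rows at any step is a strong hint that the Optimizer based its plan choice on a misestimate at that point.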
What can cause a Cardinality Misestimate and how do I fix it?
Several factors can lead to incorrect cardinality estimates even when the basic table and column statistics are up to date. In part 5 of our series, I explain the leading causes of cardinality misestimates and how you can address them.
Next week’s instalment will be all about the different access methods available to the Optimizer and what you can do to encourage the Optimizer to select the access method you want!
Don’t forget this series also covers how to read an explain plan, as well as the different join methods and join orders.
Don’t forget more information on the Oracle Optimizer can always be found on the Optimizer blog.
In my previous life as the Optimizer Lady, I wrote a blog on the importance of gathering fixed object statistics since they were not collected initially as part of the automatic statistics gathering task.
Starting with Oracle Database 12c Release 1, Oracle will automatically gather fixed object statistics as part of the automated statistics gathering task if they have not been previously collected. Does that mean we are off the hook then?
The answer (as always) is it depends!
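For reference, fixed object statistics can still be gathered manually when the automatic collection doesn’t fit your situation. This is a minimal sketch using the documented DBMS_STATS procedure; the key point is to run it while the database is under a representative workload, so the dynamic X$ structures reflect typical activity.

```sql
-- Gather statistics on fixed objects (X$ tables and their indexes).
-- Best run during a representative workload, not on an idle system.
BEGIN
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/
```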
Let me begin by explaining what I mean by the term “fixed objects”.
Continue reading “Automatic Collection of Fixed Objects Statistics in 12c”
While at the HotSOS Symposium last month, I caused quite a stir when I recommended that folks should never gather system statistics.
Why such a stir?
It turns out this goes against what we recommend in the Oracle SQL Tuning Guide, which says “Oracle recommends that you gather system statistics when a physical change occurs in the environment”.
So, who’s right?
Well, in order to figure that out, I spoke with Mohamed Zait, the head of the Optimizer development team, and Nigel Bayliss, the product manager for the Optimizer, upon my return to the office.
After our discussions, Nigel very kindly agreed to write a detailed blog post that explains exactly what system statistics are, how they influence the Optimizer, and provides clear guidance on when, if ever, you should gather system statistics!
What did I learn from all this?
Don’t gather system statistics unless you are in a pure data warehouse environment with a good IO subsystem (e.g. Exadata) and you want to encourage the Optimizer to pick more full table scans. And never say never!
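In concrete terms, this is a sketch of the two DBMS_STATS calls involved: deleting previously gathered system statistics to fall back to the defaults, and, for the data-warehouse-on-Exadata case described above, gathering them in 'EXADATA' mode. Both procedures are documented; whether either is appropriate for your system is exactly the judgment call discussed in Nigel’s post.

```sql
-- Revert to default (noworkload) system statistics by deleting
-- any previously gathered values.
BEGIN
  DBMS_STATS.DELETE_SYSTEM_STATS;
END;
/

-- In a pure data warehouse with a strong IO subsystem, 'EXADATA' mode
-- adjusts system statistics to favor full table scans.
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'EXADATA');
END;
/
```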
At the recent OUG Ireland conference I had the privilege of participating in a panel discussion on the Oracle Database. During the course of the session the topic of Optimizer histograms came up. As always, a heated discussion ensued among the members of the panel, as we each had very different views on the subject.
Why so many different opinions when it comes to histograms?
The problem arises from the fact that some folks have been burnt by histograms in the past. In Oracle Database 9i and 10g, histograms in combination with bind peeking led to some unpredictable performance problems, which is explained in detail in this post on the Optimizer blog. This has resulted in a number of folks becoming histogram-shy. In fact, I reckon if you were to put 3 Oracle experts on a panel, you would get at least 5 different opinions on when and how you should gather histograms!
So I thought it would be a good idea to explain some of the common misconceptions that surround histograms and the impact of adopting them.
This is a long post, so you might want to grab a coffee before you get into it!
Continue reading “Optimizer Histograms”
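To make the “when and how” concrete, here is a minimal sketch of controlling histogram creation through the METHOD_OPT parameter of DBMS_STATS.GATHER_TABLE_STATS. The SH schema, SALES table, and CUST_ID column are placeholder names; the syntax itself is the documented form.

```sql
-- Gather table statistics with no histograms on any column (SIZE 1),
-- except for a histogram (up to 254 buckets) on one skewed column.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SH',
    tabname    => 'SALES',
    method_opt => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 cust_id');
END;
/
```

The default, 'FOR ALL COLUMNS SIZE AUTO', lets Oracle decide which columns get histograms based on column usage and data skew.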