How to use DBMS_STATS DIFF_TABLE_STATS functions

In Oracle Database 11g, Oracle introduced the DBMS_STATS.DIFF_TABLE_STATS functions to help you compare two sets of statistics for a table, along with all of its dependent objects (indexes, columns, partitions).

There are three versions of this function depending on where the statistics being compared are located:

  • DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY (compares statistics for a table from two timestamps in the past)
  • DBMS_STATS.DIFF_TABLE_STATS_IN_PENDING (compares pending statistics with statistics as of a timestamp or with the current statistics in the data dictionary)
  • DBMS_STATS.DIFF_TABLE_STATS_IN_STATTAB (compares statistics from a user statistics table with the data dictionary, from two different user statistics tables, or from a single user statistics table using two different STATSIDs)

The functions return a report that has three sections:

  1. Basic table statistics
    The report compares the basic table statistics (number of rows, blocks, etc.).
  2. Column statistics
    The second section of the report examines column statistics, including histograms.
  3. Index statistics
    The final section of the report covers differences in index statistics.

Statistics are only displayed in the report if the difference between the two sources exceeds a certain percentage threshold. The threshold can be specified as an argument to the functions (PCTTHRESHOLD); the default value is 10%. The statistics corresponding to the first source, typically the current table statistics in the data dictionary, are used to compute the differential percentage.

The functions also return MAXDIFFPCT (a number) along with the report. This is the maximum percentage difference between the statistics; the differences can come from the table, column, or index statistics.
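To make the call shape concrete, here is a minimal sketch of comparing the current dictionary statistics with those from a week ago; the SH schema, SALES table, and seven-day window are just placeholders.

  SELECT report, maxdiffpct
  FROM   TABLE(DBMS_STATS.DIFF_TABLE_STATS_IN_HISTORY(
                 ownname      => 'SH',
                 tabname      => 'SALES',
                 time1        => SYSTIMESTAMP - INTERVAL '7' DAY,
                 time2        => NULL,          -- NULL means the current statistics
                 pctthreshold => 10));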

Let’s look at an example.
Continue reading “How to use DBMS_STATS DIFF_TABLE_STATS functions”

Explain the Explain Plan: Cardinality Estimates

In last week's post, I began a series on how to read and interpret Oracle execution plans by explaining what an execution plan is and how to generate one. This week I'm going to tackle the most important piece of information the Optimizer shares with you via the execution plan: its cardinality estimates.

What is a Cardinality Estimate?

A cardinality estimate is the estimated number of rows the optimizer believes will be returned by a specific operation in the execution plan. The Optimizer determines the cardinality for each operation based on a complex set of formulas that use table and column-level statistics as input (or statistics derived via dynamic sampling). It's considered the most important aspect of an execution plan because it strongly influences all of the other decisions the optimizer makes.

In part 4 of our series, I share some of the formulas used by the optimizer to estimate cardinalities, as well as show you how to identify cardinalities in a plan. I also demonstrate multiple ways to determine whether the cardinality estimates are accurate.
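To give a flavor of the arithmetic involved: for a simple equality predicate on a column with no histogram, the basic estimate is num_rows / num_distinct. The sketch below pulls those two inputs from the dictionary for a hypothetical SALES table and CUST_ID column; it illustrates the general idea rather than the exact formulas covered in the video.

  -- Basic single-column equality estimate (no histogram):
  --   estimated cardinality = num_rows * (1 / num_distinct)
  -- e.g. 1,000,000 rows with 50 distinct values gives an estimate of 20,000 rows
  SELECT t.num_rows,
         c.num_distinct,
         ROUND(t.num_rows / c.num_distinct) AS estimated_rows
  FROM   user_tables t
  JOIN   user_tab_col_statistics c ON c.table_name = t.table_name
  WHERE  t.table_name  = 'SALES'      -- hypothetical table
  AND    c.column_name = 'CUST_ID';   -- hypothetical column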

What can cause a Cardinality Misestimate and how do I fix it?

Several factors can lead to incorrect cardinality estimates even when the basic table and column statistics are up to date. In part 5 of our series, I explain the leading causes of cardinality misestimates and how you can address them.

Next week's instalment will be all about the different access methods available to the Optimizer and what you can do to encourage the optimizer to select the access method you want!

Don't forget this series also covers how to read an explain plan, as well as the different join methods and join orders.

Don’t forget more information on the Oracle Optimizer can always be found on the Optimizer blog.

Explaining the Explain Plan – How to Read and Interpret Execution Plans


Examining the different aspects of an execution plan, from cardinality estimates to parallel execution, and understanding what information you should glean from it can be overwhelming even for the most experienced DBA.

That’s why I’ve put together a series of short videos that will walk you through each aspect of the plan and explain what information you can find there and what to do if the plan isn’t what you were expecting.

What is an Execution Plan?

The series starts at the very beginning with a comprehensive overview of what an execution plan is and what information is displayed in each section. After all, you can't learn to interpret what is happening in a plan until you know what a plan actually is.

How to Generate an Execution Plan?

Although multiple different tools will display an Oracle execution plan for you, there really are only two ways to generate one. You can use the EXPLAIN PLAN command, or you can view the execution plan of a SQL statement currently in the cursor cache using the dictionary view V$SQL_PLAN. This session covers both techniques and provides insights into the additional information you can get the Optimizer to share with you when you generate a plan. It also explains why you don't always get the same plan with each approach, as I discussed in an earlier post.
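As a minimal sketch of the two approaches (the query and the EMPLOYEES table are just placeholders):

  -- Approach 1: EXPLAIN PLAN writes the expected plan to the plan table
  EXPLAIN PLAN FOR
    SELECT * FROM employees WHERE department_id = 50;

  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

  -- Approach 2: run the statement, then pull the plan actually used from the cursor cache
  SELECT * FROM employees WHERE department_id = 50;

  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);   -- defaults to the last statement run in this session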

How to use DBMS_XPLAN to FORMAT an Execution Plan

The FORMAT parameter within the DBMS_XPLAN.DISPLAY_CURSOR function is the best tool to show you detailed information about what happened during an execution, including the bind variable values used, the actual number of rows returned by each step, and how much time was spent on each step. I've also covered a lot of the content in this video in a previous post.

Part 2 of the series covers Cardinality Estimates and what you can do to improve them!

Part 3 of the series covers Access Methods and what you can do if you don't get the access method you were expecting.

Part 4 of the series covers Join Methods and when you can expect each one and what to do if you don’t get the join method you were expecting.

Remember you can always get more information on the Oracle Optimizer on the Optimizer team’s blog.

How to Provision a VM in the Oracle Cloud?

This blog post outlines the 10 simple steps necessary to provision, and connect to, a VM on Oracle Cloud Infrastructure (OCI).

          1. Confirm you have an SSH Public Key on your laptop or localhost.
            $ cd ~/.ssh
            # check if you already have a key
            $ ls
            id_rsa    id_rsa.pub    known_hosts

            You're looking for a file named either id_dsa or id_rsa and a matching file with a .pub extension. The .pub file is your public key, and the other file is the corresponding private key. If you don't have these files, or you don't remember your passphrase, you will need to complete the steps outlined here. NOTE: You can't move on unless you have your SSH public key.
          2. Connect to the OCI console to begin the provisioning process.
            From the hamburger menu in the upper left-hand corner, select the Compute menu item followed by the Instances option.
          3. On the Instances page, click the Create Instance button.
          4. Specify a unique name for your instance and accept the default Oracle Linux image.
          5. Scroll down and click on the Change Shape button.
            To run my demos I typically use Swingbench, which needs a minimum of 2 OCPUs (4 OCPUs if you use the JSON workload). So, I select a Virtual Machine with Intel Skylake processors and 2 OCPUs, then click the Select Shape button.
          7. I use the automatic defaults in the Configure Networking and Boot Volume sections and move on to the SSH Key section. Here I select Paste SSH Keys and cut and paste my public key from my .ssh directory into the window provided.
          8. Finally, click the Create button at the end of the page.
          9. Instantly, you will see a new VM being provisioned for you. Once available, you can connect to the machine using the public IP address and the OPC user. You will find the IP address on the main Instance console page.
          10. Simply ssh into the machine from your laptop using the supplied OPC user.

            $ ssh opc@XXX.XXX.XX.XXX
            Enter passphrase for key '/Users/sqlmaria/.ssh/id_rsa':
            Last login: Tue Jul 28 16:39:35 2020 from XXXXX

Generating an AWR report for Autonomous Databases

Oracle Autonomous Database automates the lifecycle management of a database (everything from provisioning and scaling to backups and patching), but what it doesn't do yet is fully tune your application.

You are still on the hook to make sure your app doesn’t have any concurrency bottlenecks or poorly written SQL that Auto Indexing can’t address.

So, what can you do to monitor your app while it’s running on an Autonomous Database?

The first place you can start is the Performance Hub tab on the cloud console. Here you’ll find both real-time and historical performance data in the form of Active Session History (ASH) information for the last seven days, and SQL Monitor reports for the high-load SQL. You can aggregate the ASH data several different ways including by wait class, database service, resource group or SQL ID. The same information is also available in Oracle Management Cloud (OMC), and SQL Developer Web.

But if you are trying to get a holistic view of how your app is behaving on a database, nothing beats an Automatic Workload Repository report or AWR report.
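If you prefer to pull the report with SQL rather than from the console, a minimal sketch using the supplied DBMS_WORKLOAD_REPOSITORY package might look like the one below; the DBID, instance number, and snapshot IDs are placeholders you would look up first.

  -- Look up the DBID and the snapshots that bracket the period of interest
  SELECT dbid FROM v$database;

  SELECT snap_id, begin_interval_time, end_interval_time
  FROM   dba_hist_snapshot
  ORDER  BY snap_id;

  -- Generate the HTML report (the four values below are placeholders)
  SELECT output
  FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(
                 l_dbid     => 1234567890,
                 l_inst_num => 1,
                 l_bid      => 100,
                 l_eid      => 101));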

Continue reading “Generating an AWR report for Autonomous Databases”

JEFF Talks From Kscope18

The first day of the ODTUG Kscope conference is always symposium Sunday. This year's Database symposium, organized by @ThatJeffSmith, consisted of multiple short, rapid sessions covering a wide variety of database and database tool topics, similar to TED Talks, but we called them JEFF Talks!

I was lucky enough to present 3 of this year's JEFF Talks, which I thought I would share on my blog since there wasn't a way to upload them to the conference site.

In the first session I covered 5 useful tips for getting the most out of your indexes, including topics like reverse key indexes, partial indexes, and invisible indexes.
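For reference, here is a hedged sketch of what those three index types look like in SQL; the ORDERS table, column, and index names are invented.

  -- Reverse key index: spreads sequential key values across leaf blocks
  CREATE INDEX orders_id_rev_ix ON orders (order_id) REVERSE;

  -- Invisible index: maintained but ignored by the optimizer until made visible
  CREATE INDEX orders_status_ix ON orders (status) INVISIBLE;

  -- Partial index (12c+): assumes ORDERS is partitioned with INDEXING ON/OFF set per partition
  CREATE INDEX orders_date_ix ON orders (order_date) INDEXING PARTIAL;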

Next up was my session on JSON and the Oracle Database. In this session, I covered topics like which data type you should use to store JSON documents (VARCHAR2, CLOB, or BLOB), the pros and cons of using an IS JSON check constraint, and how to load, index, and query JSON documents.
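As a small, hedged illustration of the kind of thing covered (the table and document contents are made up):

  -- Store the documents in a CLOB and enforce well-formed JSON
  CREATE TABLE purchase_orders (
    id     NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    po_doc CLOB,
    CONSTRAINT po_is_json CHECK (po_doc IS JSON)
  );

  INSERT INTO purchase_orders (po_doc)
  VALUES ('{"PONumber":1600,"Requestor":"Alexis Bull"}');

  -- Query a scalar value out of the document
  SELECT JSON_VALUE(po_doc, '$.Requestor') AS requestor
  FROM   purchase_orders
  WHERE  JSON_VALUE(po_doc, '$.PONumber' RETURNING NUMBER) = 1600;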

In my final JEFF Talk I covered some of the useful PL/SQL packages that are automatically supplied with the Oracle Database. Since the talk was only 15 minutes, I only touched on 4 of the 300 supplied packages you get with Oracle Database 18c, but hopefully it will give you enough of a taste to get you interested in investigating some of the others!
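To give one concrete example of the sort of supplied package I mean (not necessarily one of the four from the talk), DBMS_APPLICATION_INFO lets you tag a session so it is easy to spot in monitoring views:

  BEGIN
    -- Tag the current session; the module and action names are just examples
    DBMS_APPLICATION_INFO.SET_MODULE(
      module_name => 'nightly_load',
      action_name => 'load_sales');
  END;
  /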


Avoiding reparsing SQL queries due to partition level DDLs – Part 2

In my previous post, I promised to provide an alternative solution for avoiding reparsing SQL queries due to partition level DDLs.

So, what is it?

In Oracle Database 12c Release 2 we implemented a fine-grained cursor invalidation mechanism, so that cursors can remain valid if they access partitions that are not the target of an EXCHANGE PARTITION, TRUNCATE PARTITION, or MOVE PARTITION command.

As I said in my previous post, this enhancement can't help in the case of a DROP PARTITION command, due to the partition numbers changing, but hopefully you can change the DROP to either an EXCHANGE PARTITION or a TRUNCATE PARTITION command to avoid the hard parse, as I have done in the example below.

If you recall, we have a METER_READINGS table that is partitioned by time, with each hour being stored in a separate partition. Once an hour we will now TRUNCATE the oldest partition in the table as a new partition is added. We also had two versions of the same SQL statement, one that explicitly specifies the partition name in the FROM clause and one that uses a selective WHERE clause predicate to prune the data set down to just 1 partition.
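As a rough sketch of what that change and the two statements might look like (the partition names and the READING_TIME column are invented for illustration):

  -- Roll the window with TRUNCATE rather than DROP, so cursors that do not
  -- touch this partition can stay valid under 12.2 fine-grained invalidation
  ALTER TABLE meter_readings TRUNCATE PARTITION p_2020_07_28_09 UPDATE INDEXES;

  -- Version 1: partition-extended name in the FROM clause
  SELECT COUNT(*) FROM meter_readings PARTITION (p_2020_07_28_10);

  -- Version 2: selective predicate that prunes down to a single partition
  SELECT COUNT(*)
  FROM   meter_readings
  WHERE  reading_time >= TIMESTAMP '2020-07-28 10:00:00'
  AND    reading_time <  TIMESTAMP '2020-07-28 11:00:00';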

Let’s begin by executing both queries and checking their execution plans.
Continue reading “Avoiding reparsing SQL queries due to partition level DDLs – Part 2”

Avoiding reparsing SQL queries due to partition level DDLs – Part 1

A couple of weeks ago, I published a blog post that said specifying a partition name in the FROM clause of a query would prevent an existing cursor from being hard parsed when a partition is dropped from the same table. This was not correct.

It's actually impossible for us not to re-parse the existing queries executing against the partitioned table when you drop a partition, because all of the partition numbers change during a drop operation. Since we display the partition numbers in the execution plan, we need to re-parse each statement to generate a new version of the plan with the right partition information.

What actually happened in my example was that the SQL statement with the partition name specified in the FROM clause reused child cursor 0 when it was hard parsed after the partition drop, while the SQL statement that just specified the table name in the FROM clause got a new child cursor 0.
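A simple way to see this for yourself is to track the child cursors in V$SQL before and after the DDL; the sketch below is only illustrative, and the SQL text filter would need to match your own statements.

  SELECT sql_id, child_number, executions, loads, invalidations
  FROM   v$sql
  WHERE  sql_text LIKE '%meter_readings%'
  ORDER  BY sql_id, child_number;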

But it’s not all bad news. I do have a solution that will reduce hard parses when executing DDL operations on partitioned tables that you can check out in part 2 of this blog post. But before you click over to read the alternative solution, let me explain in detail what was really happening in the original example I posted.

If you recall, we have a METER_READINGS table that is partitioned by time, with each hour being stored in a separate partition. Once an hour we drop the oldest partition in the table as a new partition is added. We also had two versions of the same SQL statement, one that explicitly specifies the partition name in the FROM clause and one that uses a selective WHERE clause predicate to prune the data set down to just 1 partition.

Continue reading “Avoiding reparsing SQL queries due to partition level DDLs – Part 1”

Harnessing the Power of Optimizer Hints

Last week I was lucky enough to participate in the Trivadis Performance Days 2017 conference, and several people have asked if I would share the slides from one of my sessions. I've done one better by recording the session, so you can see the slides and listen to the explanation that goes along with them.

The session in question was “Harnessing the power of optimizer hints.” Although I am not a strong supporter of adding hints to SQL statements, for a whole host of reasons, from time to time it may become necessary to influence the plan the Optimizer chooses.

The most powerful way to alter the plan chosen is via Optimizer hints. But knowing when and how to use Optimizer hints correctly is somewhat of a dark art.

In this session, I explained how Optimizer hints are interpreted, when and where they should be used, and why they sometimes appear to be ignored.
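For illustration only (the tables, index, and aliases are invented, and these are not necessarily the examples from the session), here are a couple of common hint shapes:

  -- Ask for a specific access method on one table
  SELECT /*+ INDEX(e emp_dept_ix) */ *
  FROM   employees e
  WHERE  e.department_id = 50;

  -- Fix the join order and join method
  SELECT /*+ LEADING(d e) USE_NL(e) */ *
  FROM   departments d
  JOIN   employees   e ON e.department_id = d.department_id
  WHERE  d.location_id = 1700;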

Be warned this session won’t make you a hinting master overnight, and I’m not advocating you should try and hint every problematic SQL statement you encounter!

Using DBMS_XPLAN.DISPLAY_CURSOR to examine Execution Plans

In last week's post I described how to use SQL Monitor to determine what is happening during the execution of long-running SQL statements. Shortly after the post went up, I got some requests on both social media and via the blog comments asking if it is possible to get the same information from a traditional text-based execution plan, since not everyone has access to SQL Monitor.

The answer is yes, it is possible to see a lot of the information shown in SQL Monitor by viewing the execution plan via the DBMS_XPLAN.DISPLAY_CURSOR function. In order to call this function, you will need SELECT or READ privilege on the fixed views V$SQL_PLAN_STATISTICS_ALL, V$SQL, and V$SQL_PLAN; otherwise, you'll get an error message.

The DBMS_XPLAN.DISPLAY_CURSOR function takes three parameters:

  1. SQL_ID – defaults to NULL, which means the last SQL statement executed in this session
  2. CURSOR_CHILD_NO – defaults to 0
  3. FORMAT – controls the level of detail displayed in the execution plan; the default is TYPICAL

The video below demonstrates how you can use the FORMAT parameter within the DBMS_XPLAN.DISPLAY_CURSOR function to show you information about what happened during an execution, including the bind variable values used, the actual number of rows returned by each step, and how much time was spent on each step.

Under the video you will find all of the commands used, so you can cut and paste them easily.
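As a rough sketch of the pattern (not the exact commands from the video; the query is a stand-in): run the statement with row-source statistics enabled, then ask DISPLAY_CURSOR for the ALLSTATS LAST format.

  -- Collect actual row counts and timings for this execution
  SELECT /*+ gather_plan_statistics */ COUNT(*)
  FROM   employees
  WHERE  department_id = 50;

  -- Compare estimated vs actual rows (E-Rows vs A-Rows) and the time per step
  SELECT *
  FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));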

How do I see the actual number of rows and elapsed time for each step in the plan?

Continue reading “Using DBMS_XPLAN.DISPLAY_CURSOR to examine Execution Plans”
