Generating an AWR report for Autonomous Databases

Oracle Autonomous Database automates the lifecycle management of a database, covering everything from provisioning and scaling to backups and patching, but what it doesn’t do yet is fully tune your application.

You are still on the hook to make sure your app doesn’t have any concurrency bottlenecks or poorly written SQL that Auto Indexing can’t address.

So, what can you do to monitor your app while it’s running on an Autonomous Database?

The first place you can start is the Performance Hub tab on the cloud console. Here you’ll find both real-time and historical performance data, in the form of Active Session History (ASH) information for the last seven days and SQL Monitor reports for high-load SQL. You can aggregate the ASH data in several different ways, including by wait class, database service, resource group or SQL ID. The same information is also available in Oracle Management Cloud (OMC) and SQL Developer Web.

But if you are trying to get a holistic view of how your app is behaving on a database, nothing beats an Automatic Workload Repository (AWR) report.
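One way to generate the report yourself from SQL is via the DBMS_WORKLOAD_REPOSITORY package. Here is a minimal sketch; the DBID and the begin/end snapshot IDs are placeholders you would look up first:

-- Identify the DBID and the snapshot range of interest
SELECT dbid FROM v$database;

SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

-- Generate the AWR report in HTML; 1234567890, 101 and 102 are
-- placeholders for the DBID and the begin/end snapshot IDs
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(
               l_dbid     => 1234567890,
               l_inst_num => 1,
               l_bid      => 101,
               l_eid      => 102));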

Continue reading “Generating an AWR report for Autonomous Databases”

What are Query Block Names and how to find them

After my recent post about using SQL Patches to influence execution plans, I got a lot of follow-up questions about what Query Block names are and how to find them. Rather than burying my responses in the comment section under that post, I thought it would be more useful to do a quick post on it.

What are query blocks?

A query block is a basic unit of SQL. For example, any inline view or subquery of a SQL statement is considered a separate query block from the outer query.

The simple query below has just one sub-query, but it has two Query Blocks. One for the outer SELECT and one for the subquery SELECT.
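SELECT e.ename, e.deptno
FROM   emp e
WHERE  e.deptno IN (SELECT d.deptno
                    FROM   dept d
                    WHERE  d.loc = 'DALLAS');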

Oracle automatically names each query block in a SQL statement based on the statement keyword, using names of the form sel$1, ins$2, upd$3, del$4, cri$5, mrg$6, set$7, misc$8, etc.

Given there are two SELECT statements in our query, the query block names will begin with SEL. The outer query will be SEL$1 and the inner query SEL$2.

How do I find the name of a query block?

To find the Query Block name, you can set the FORMAT parameter to ‘+alias’ in the DBMS_XPLAN.DISPLAY_CURSOR command. This will display the contents of the OBJECT_ALIAS column from the PLAN_TABLE as a new section under the execution plan.

The new section will list the Query Block name for each of the lines in the plan.

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format=>'+alias'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
SQL_ID  4c8bfsduxhyht, child number 0
-------------------------------------
SELECT e.ename, e.deptno FROM emp e WHERE e.deptno IN (SELECT d.deptno
FROM dept d WHERE d.loc='DALLAS')

Plan hash value: 2484013818

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |     5 (100)|          |
|*  1 |  HASH JOIN SEMI    |      |     5 |   205 |     5  (20)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| EMP  |    14 |   280 |     2   (0)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| DEPT |     1 |    21 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$5DA710D3
   2 - SEL$5DA710D3 / E@SEL$1
   3 - SEL$5DA710D3 / D@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("E"."DEPTNO"="D"."DEPTNO")
   3 - filter("D"."LOC"='DALLAS')

As you can see, @SEL$1 is the Query Block name for the outer query, where the EMP table is used, and @SEL$2 is the Query Block name for the sub-query, where the DEPT table is used.

Continue reading “What are Query Block Names and how to find them”

How to use a SQL Plan Baseline or a SQL Patch to add Optimizer hints

In a recent chat with Connor McDonald, we discussed whether it is realistic to have applications that don’t contain Optimizer hints. Ideally, the answer to this question is “yes”: you don’t need hints if you have a well-written application and you have supplied a representative set of statistics and all the possible constraint information (NOT NULL, primary keys, foreign keys, etc.) to the Optimizer.

But in reality, even with all of this in place, there can be cases where something goes wrong with the execution plan for a critical SQL statement and you get called in to fix it.

During our chat, Connor used a very apt analogy to describe this situation. He said it was like having a patient arrive in the emergency room, who is bleeding profusely. Your first priority is to stop the patient from bleeding by slapping a band-aid on the wound.

The same is true for our poorly performing SQL statement. Our initial response is to add an optimizer hint to get the SQL statement’s execution plan back to one that delivers acceptable performance.

But once a patient has been stabilized in the emergency room, medical professionals normally take that patient into surgery to make a permanent fix or at the very least stitch up the wound properly.

We need to make sure we do the same thing for our SQL statements.

Rather than leaving a band-aid in the application code in the form of an optimizer hint, we should either fix the root cause or at the very least, make a permanent fix that can be easily traced and ideally can evolve over time.

That’s why you often hear me say, “if you can hint it, you can baseline or patch it”.

What do I mean by that?

I mean we should capture the hinted plan as a SQL plan baseline or at the very least insert the hints via a SQL Patch so that we know that this statement is patched (the use of a SQL patch is visible in the note section of the plan).
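For example, here is a minimal sketch of creating a SQL Patch with the DBMS_SQLDIAG.CREATE_SQL_PATCH function (a public API from Oracle Database 12c Release 2 onward); the SQL ID, hint text and patch name below are all hypothetical:

DECLARE
  l_patch_name VARCHAR2(128);
BEGIN
  l_patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
                    sql_id    => '4c8bfsduxhyht',         -- hypothetical SQL ID
                    hint_text => 'FULL(@SEL$1 E@SEL$1)',  -- hypothetical hint, targeted by query block name
                    name      => 'my_band_aid_patch');    -- hypothetical patch name
END;
/

Alternatively, once a good plan for the statement is in the cursor cache, DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE can capture it as a SQL plan baseline.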

Continue reading “How to use a SQL Plan Baseline or a SQL Patch to add Optimizer hints”

SQL Tuning Workshop

Last week I had the pleasure of delivering a five-part SQL Tuning Workshop for my local Oracle User Group, the Northern California Oracle User Group. The workshop explains the fundamentals of the cost-based optimizer, the statistics that feed it, the hints that influence it, and the key tools you need to examine execution plans.

The workshop also provides a methodology for diagnosing and resolving the most common SQL execution performance problems. Given the volume of interest in this content, I want to share all of the material from the workshop here and give you links to additional material on each of the 5 topics.

Part 1 Understanding the Optimizer
The first part of the workshop covers the history of the Oracle Optimizer and explains the first thing the Optimizer does when it begins to optimize a query – query transformation.

Query transformations, the rewriting of a SQL statement into a semantically equivalent statement, allow the Optimizer to consider alternative methods of processing or executing that query, which are often more efficient than the original SQL statement would allow. The majority of Oracle’s query transformations are now cost based, which means the Optimizer will cost the plan with and without the query transformation and pick the plan with the lowest cost. With the help of the Optimizer development team, I’ve already blogged about a number of these transformations.
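To give a flavor of what a transformation looks like, here is a sketch of subquery unnesting using a simple EMP/DEPT example; because DEPT.DEPTNO is the primary key of DEPT, the two forms are semantically equivalent:

-- Original statement: an IN subquery
SELECT e.ename, e.deptno
FROM   emp e
WHERE  e.deptno IN (SELECT d.deptno FROM dept d WHERE d.loc = 'DALLAS');

-- After unnesting, the Optimizer can treat the subquery as a join
-- and execute it as a semi-join
SELECT e.ename, e.deptno
FROM   emp e, dept d
WHERE  e.deptno = d.deptno
AND    d.loc    = 'DALLAS';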

Part 2 Best Practices for Managing Optimizer Statistics
Part 2 of the workshop focuses on Optimizer Statistics and the best practices for managing them, covering when and how to gather statistics, including fixed object statistics.
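As a taste of the recommended approach (let the DBMS_STATS defaults do the work), here is a minimal sketch; the schema and table names are placeholders:

-- Gather table statistics using the default parameter settings
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
END;
/

-- Gather fixed object (X$ table) statistics while a representative
-- workload is running on the system
BEGIN
  DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
END;
/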
Continue reading “SQL Tuning Workshop”

How do I scale an Autonomous Database?

Traditionally, database deployments have been designed and provisioned for the peak possible workload. And in reality, a substantial margin of safety was also provisioned on top of that to make sure the system could cope with any unforeseen demands.
But peak workloads tend to occur infrequently, leaving most of this costly capacity idle the majority of the time.

In order to enable customers to pay for only the resources they need, Oracle Autonomous Database allows customers to elastically adjust their compute and storage resources when necessary.

An Autonomous Database can be scaled through the UI, as shown in the video below, or via our cloud APIs or CLI commands. In the video you can see how I scale a 2 CPU configuration to an 8 CPU configuration in under a minute to accommodate 48 concurrent users running a JSON workload.
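For the CLI route, here is a sketch of the equivalent scale-up using the OCI CLI; the OCID is a placeholder for your database’s identifier:

# Scale the Autonomous Database to 8 CPU cores (placeholder OCID)
oci db autonomous-database update \
  --autonomous-database-id ocid1.autonomousdatabase.oc1..example \
  --cpu-core-count 8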

Continue reading “How do I scale an Autonomous Database?”

Getting started with Oracle Autonomous Transaction Processing

Getting started with Oracle Autonomous Transaction Processing is actually much easier than you might think. In fact, with Oracle’s $300 in free cloud credits, you can probably get your first 30 days on the service for free. Please note, you will need an active email address and credit card in order to sign up for a trial account. Of course, if you have existing cloud credits you can skip this step.

Once you sign up for a trial account, you’ll get an email with your tenancy, username and password. Armed with this information, head on over to https://cloud.oracle.com to sign in. The video below explains in detail the simple steps needed to provision a new Autonomous Transaction Processing database. I’ve also listed these steps below the video for easy reference.

Continue reading “Getting started with Oracle Autonomous Transaction Processing”

JEFF Talks From Kscope18

The first day of the ODTUG Kscope conference is always symposium Sunday. This year’s Database symposium, organized by @ThatJeffSmith, consisted of multiple short, rapid sessions covering a wide variety of database and database tool topics, similar to TED Talks, but we called them JEFF Talks!

I was lucky enough to present 3 of this year’s JEFF Talks, which I thought I would share on my blog since there wasn’t a way to upload them to the conference site.

In the first session I covered 5 useful tips for getting the most out of your indexes, including topics like reverse key indexes, partial indexes, and invisible indexes.
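To give a flavor of those options, here is a sketch of each index type; the ORDERS table and index names are hypothetical, and the partial index assumes ORDERS is partitioned:

-- Reverse key index: reverses the key bytes to spread inserts on a
-- sequence-populated column across many leaf blocks
CREATE INDEX orders_id_rix ON orders (order_id) REVERSE;

-- Invisible index: maintained by DML but ignored by the Optimizer,
-- handy for testing an index before making it generally available
CREATE INDEX orders_cust_ix ON orders (customer_id) INVISIBLE;
ALTER INDEX orders_cust_ix VISIBLE;

-- Partial index: only partitions with INDEXING ON are indexed
ALTER TABLE orders MODIFY PARTITION orders_2017 INDEXING OFF;
CREATE INDEX orders_date_ix ON orders (order_date) LOCAL INDEXING PARTIAL;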

Next up was my session on JSON and the Oracle Database. In this session I covered topics like which data type you should use to store JSON documents (varchar2, clob or blob), the pros and cons of using an IS JSON check constraint, and how to load, index, and query JSON documents.
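As a quick sketch of the IS JSON check constraint and a simple query (the table and document are hypothetical):

CREATE TABLE purchase_orders (
  id  NUMBER PRIMARY KEY,
  doc CLOB CONSTRAINT po_doc_is_json CHECK (doc IS JSON)
);

INSERT INTO purchase_orders
VALUES (1, '{"customer":"SCOTT","items":[{"name":"pen","qty":2}]}');

-- Pull a scalar value out of the stored document
SELECT json_value(doc, '$.customer') AS customer
FROM   purchase_orders;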

In my final JEFF Talk I covered some of the useful PL/SQL packages that are automatically supplied with the Oracle Database. Since the talk was only 15 minutes, I only touched on 4 of the 300 supplied packages you get with Oracle Database 18c, but hopefully it will give you enough of a taste to get you interested in investigating some of the others!
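As one illustration of the kind of supplied package I mean (not necessarily one of the four from the talk), here is a sketch using DBMS_APPLICATION_INFO to instrument a session; the module and action names are placeholders:

-- Tag the current session so its work shows up under these names
-- in V$SESSION and in Active Session History
BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(
    module_name => 'nightly_load',  -- placeholder module name
    action_name => 'stage_rows');   -- placeholder action name
END;
/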

Avoiding reparsing SQL queries due to partition level DDLs – Part 2

In my previous post, I promised to provide an alternative solution for avoiding reparsing SQL queries due to partition level DDLs.

So, what is it?

In Oracle Database 12c Release 2 we implemented a fine-grained cursor invalidation mechanism, so that cursors can remain valid if they access partitions that are not the target of an EXCHANGE PARTITION, TRUNCATE PARTITION or MOVE PARTITION command.

As I said in my previous post, this enhancement can’t help in the case of a DROP PARTITION command, because the partition numbers change, but hopefully you can change the DROP to either an EXCHANGE PARTITION or a TRUNCATE PARTITION command to avoid the hard parse, as I have done in the example below.

If you recall, we have a METER_READINGS table that is partitioned by time, with each hour stored in a separate partition. Once an hour, we now TRUNCATE the oldest partition in the table as a new partition is added. We also had two versions of the same SQL statement: one that explicitly specifies the partition name in the FROM clause and one that uses a selective WHERE clause predicate to prune the data set down to just one partition.
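Here is a sketch of the two query shapes and the hourly maintenance; the partition and column names are hypothetical:

-- Version 1: partition named explicitly in the FROM clause
SELECT MAX(reading_value)
FROM   meter_readings PARTITION (readings_hour_08);

-- Version 2: a selective WHERE clause predicate prunes to the same partition
SELECT MAX(reading_value)
FROM   meter_readings
WHERE  reading_time >= TIMESTAMP '2019-01-01 08:00:00'
AND    reading_time <  TIMESTAMP '2019-01-01 09:00:00';

-- Hourly maintenance: TRUNCATE rather than DROP, so the partition
-- numbers are preserved and unaffected cursors can remain valid
ALTER TABLE meter_readings TRUNCATE PARTITION readings_hour_08 UPDATE INDEXES;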

Let’s begin by executing both queries and checking their execution plans.
Continue reading “Avoiding reparsing SQL queries due to partition level DDLs – Part 2”

Avoiding reparsing SQL queries due to partition level DDLs – Part 1

A couple of weeks ago, I published a blog post that said specifying a partition name in the FROM clause of a query would prevent an existing cursor from being hard parsed when a partition is dropped from the same table. This was not correct.

It’s actually impossible for us not to re-parse the existing queries executing against the partitioned table when you drop a partition, because all of the partition numbers change during a drop operation. Since we display the partition numbers in the execution plan, we need to re-parse each statement to generate a new version of the plan with the right partition information.

What actually happened in my example was that the SQL statement with the partition name specified in the FROM clause reused child cursor 0 when it was hard parsed after the partition drop, while the SQL statement that just specified the table name in the FROM clause got a new child cursor 0.
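You can observe this for yourself by checking the child cursors in V$SQL after the drop; a quick sketch (the SQL text filter is a placeholder):

-- One row per child cursor; a hard parse after the DDL shows up as a
-- fresh child cursor with its own plan
SELECT sql_id, child_number, plan_hash_value, parse_calls, executions
FROM   v$sql
WHERE  sql_text LIKE '%meter_readings%'
ORDER  BY sql_id, child_number;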

But it’s not all bad news. I do have a solution that will reduce hard parses when executing DDL operations on partitioned tables that you can check out in part 2 of this blog post. But before you click over to read the alternative solution, let me explain in detail what was really happening in the original example I posted.

If you recall, we have a METER_READINGS table that is partitioned by time, with each hour being stored in a separate partition. Once an hour we drop the oldest partition in the table as a new partition is added. We also had two versions of the same SQL statement, one that explicitly specifies the partition name in the FROM clause and one that uses a selective WHERE clause predicate to prune the data set down to just 1 partition.

Continue reading “Avoiding reparsing SQL queries due to partition level DDLs – Part 1”