Day 1 of Oracle OpenWorld 2018

The biggest day of the Oracle calendar has arrived.

Oracle OpenWorld 2018 officially starts today!

The show kicks off with tons of great technical sessions, including several on Oracle Autonomous Database.

  • 9:00 AM – 9:45 AM Oracle Exadata: Strategy and Roadmap for New Technologies, Cloud, and On-Premises – session PRM4114 in Moscone West room 3008.
    Kick off your OpenWorld with Juan Loaiza as he provides an overview of current and future Exadata capabilities. Exadata is the underlying infrastructure for our Autonomous Database, both in the public cloud and with Oracle Cloud at Customer.
  • 11:00 AM – 12:15 PM Oracle Autonomous Database – session PKN3947 in the Arena in Moscone South.
    In this general session, Andy Mendelsohn will share the latest updates from the Database Development team, along with customer reaction to Oracle Autonomous Database.
  • 1:45 PM – 3:00 PM Larry’s keynote in Moscone North – Hall D. A must-see, as he is going to share details on the Gen 2 Cloud and the Oracle Autonomous Database!
  • 4:45 PM – 5:30 PM Oracle Autonomous Transaction Processing Overview and Roadmap – session PRO3978 in Moscone West room 3003. Finish your afternoon with Juan Loaiza and me as we explain how our unique Autonomous Database works and illustrate how it can simplify your approach to data management and accelerate your transition to the cloud.

Hope to see ya there!

Time to start planning for Oracle OpenWorld 2018

With just a week to go until Oracle OpenWorld, the largest gathering of Oracle customers, partners, developers, and technology enthusiasts, I thought I would share with you what you can expect.

Of course, the Autonomous Database team will be there, and you will have multiple opportunities to meet up with us: in one of our technical sessions, at our hands-on labs, or at the Oracle demogrounds.

This year the Autonomous Database team has 5 technical sessions:

  • Monday, October 22nd at 4:45pm What to expect from Oracle Autonomous Transaction Processing
    Session PRO3978 at Moscone West – room 3003
    Juan Loaiza reveals what motivated Oracle to develop the Autonomous Database and provides a clear understanding of how this unique cloud service works and how you can expect it to change the way you interact with the database.
  • Tuesday, October 23rd at 11:15am An Insider’s Guide to Oracle Autonomous Transaction Processing
    Session TRN3979 at Moscone West – room 3003
    Robert Greene and I will provide an insider’s view of the technology underlying Oracle Autonomous Database and give you a guided tour of getting started with Oracle Autonomous Transaction Processing.
  • Tuesday, October 23rd at 4:45pm Test Drive Automatic Index Creation in Oracle Autonomous Database Cloud
    Session TRN3980 at Moscone West – room 3003
    After hearing all about Autonomous Transaction Processing in the morning session, we are sure you will be ready for a deeper dive into the Automatic Indexing capabilities. Mohamed Zait (lead architect on the Optimizer team) and I will explain exactly how Automatic Indexing works and share with you the results of this awesome capability in action.
  • Wednesday, October 24th at 11:15am The Changing Role of the DBA
    Session TIP5526 at Moscone West – room 3003
    As we move into a new era of Autonomous Database, there is naturally some concern regarding how it will impact the role of the DBA.  In this session Penny Avril and I will describe the evolution of the role of the DBA with the introduction of the Autonomous Database and what you can do to prepare.
  • Thursday, October 25th at 9:00am Autonomous Transaction Processing and the Business-Critical Application
    Session CAS5200 at Moscone West – room 3003
    In our fifth and final session, Robert Greene and Binoy Sukumaran explain how Oracle Autonomous Database addresses the reality of the business-critical application.

You will also have an opportunity to try out Oracle Autonomous Database for yourself in our hands-on lab, Oracle Autonomous Database Cloud Driving School, which will run every day in the Marriott Marquis (Yerba Buena Level) – Salon 9B.


If you have burning questions related to Oracle Autonomous Database, you can ask them at the Autonomous Database demo booths in the Database area of the demogrounds in Moscone South. Members of the Autonomous Database development team, as well as the Product Managers, will be there Monday to Wednesday from 9:45am until 5:30pm. The demo booth is also the place to get an Autonomous Database laptop sticker!

The Autonomous Database team aren’t the only ones talking about Oracle Autonomous Database at this year’s conference. There are also a number of great sessions being delivered by Oracle ACEs, including Julian Dontcheff’s session on Migrating to Oracle Autonomous Database and Jim Czuprynski’s case study. Check out the full searchable OOW catalog online to start planning your trip today!

Getting started with Oracle Autonomous Transaction Processing

Getting started with Oracle Autonomous Transaction Processing is actually much easier than you might think. In fact, with Oracle’s $300 in free cloud credits you can probably get your first 30 days on the service for free. Please note, you will require an active email address and credit card in order to sign up for a trial account. Of course, if you have existing cloud credits you can skip this step.

Once you sign up for a trial account you’ll get an email with your tenancy, username, and password. Armed with this information, head on over to sign in. The video below explains in detail the simple steps needed to provision a new Autonomous Transaction Processing database. I’ve also listed these steps below the video for easy reference.

Continue reading “Getting started with Oracle Autonomous Transaction Processing”

What you can expect from Oracle Autonomous Transaction Processing

Today Larry Ellison announced the general availability of Oracle Autonomous Transaction Processing (ATP), the newest member of the Oracle Autonomous Database family, combining the flexibility of cloud with the power of machine learning to deliver data management as a service.

Traditionally, creating a database management system required a team of experts to custom build and manually maintain a complex hardware and software stack. With each system being unique, this approach led to poor economies of scale and a lack of the agility typically needed to give the business a competitive edge.

ATP enables businesses to safely run a complex mix of high-performance transactions, reporting, and batch processing using the most secure, available, performant, and proven platform – Oracle Database on Exadata in the cloud. Unlike manually managed transaction processing databases, ATP provides instant, elastic compute and storage, so only the required resources are provisioned at any given time, decreasing runtime costs.

But what does the Autonomous in Autonomous Transaction Processing really mean?


ATP is a self-driving database, meaning it eliminates the human labor needed to provision, secure, update, monitor, back up, and troubleshoot a database. This reduction in database maintenance tasks cuts costs and frees scarce administrator resources to work on higher-value tasks.

Continue reading “What you can expect from Oracle Autonomous Transaction Processing”

How does Autonomous Transaction Processing differ from the Autonomous Data Warehouse?

In my previous post, I explained that Oracle Autonomous Transaction Processing has three main attributes: Self-Driving, Self-Securing and Self-Repairing. All of the functionality I described in that post is shared between both the Autonomous Data Warehouse (ADW) and ATP.

Where the two services differ is actually inside the database itself. Although both services use Oracle Database 18c, they have been optimized differently to support two very different but complementary workloads. The primary goal of ADW is to achieve fast complex analytics, while ATP has been designed to efficiently execute a high volume of simple transactions.


The differences in the two services begin with how we configure them. Continue reading “How does Autonomous Transaction Processing differ from the Autonomous Data Warehouse?”

Oracle Launches TimesTen Scaleout!

What a week for announcements! First we had Oracle Database 18c available on-premises and now Oracle TimesTen Scaleout.

If you are not familiar with TimesTen, let me remind you that it’s Oracle’s In-Memory Database. In fact, it’s been a leading in-memory database for mission-critical applications for over 20 years, and today Oracle launched an exciting new scalability feature called TimesTen Scaleout.

So, what is Scaleout?

TimesTen Scaleout is a distributed database with a shared-nothing architecture, specifically designed to address real-time transaction processing workloads. Think IoT, real-time trading, telecommunications billing, or real-time fraud detection.

Continue reading “Oracle Launches TimesTen Scaleout!”

Oracle Database 18c is now available for Download!

Today Oracle officially released Oracle Database 18c for download on Linux x86-64.

As you may recall, we originally released 18c on the Oracle Public Cloud and Oracle Engineered Systems back in February.

So, when will you be able to get your hands on 18c on-premises for other platforms?

You can check the Oracle Support document 742060.1 for more details!

18c is the first version of the database to follow the new yearly release model and you can find more details on the release model change in the Oracle Support Document 2285040.1.

Before you freak out because you haven’t even upgraded to 12.2 and wonder how on earth you are ever going to get to 18c – don’t panic!

Oracle Database 18c is in fact “Oracle Database 12c Release 2”; the name has simply been changed to reflect the year in which the product is released.

So, what can you expect?

As you’d imagine, a patchset doesn’t contain any seismic changes in functionality, but there are lots of small but extremely useful incremental improvements, most of which focus on the three key marquee features in Oracle Database 12c Release 2:

More details on what has changed in each of these areas and other improvements can be found in the Oracle Database blog post published by Dominic Giles or in the video below with Penny Avril, VP of Database Product Management.

You can also read all about the new features in the 18c documentation and you can try out Oracle Database 18c on LiveSQL.


JEFF Talks From Kscope18

The first day of the ODTUG Kscope conference is always symposium Sunday. This year’s Database symposium, organized by @ThatJeffSmith, consisted of multiple short, rapid sessions covering a wide variety of database and database tool topics – similar to TED Talks, but we called them JEFF Talks!

I was lucky enough to present 3 of this year’s JEFF Talks, which I thought I would share on my blog since there wasn’t a way to upload them to the conference site.

In the first session I covered 5 useful tips for getting the most out of your indexes, including topics like reverse key indexes, partial indexes, and invisible indexes.
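As a quick taste of one of those tips, here is roughly what an invisible index looks like in practice (the table and index names below are made up for this sketch):

```sql
-- Hypothetical table and index names, for illustration only.
-- An invisible index is maintained by the database but ignored
-- by the optimizer until you make it visible.
CREATE INDEX sales_cust_idx ON sales (customer_id) INVISIBLE;

-- Test-drive the index in just your own session...
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- ...and once you are happy with the plans it produces,
-- make it visible to everyone.
ALTER INDEX sales_cust_idx VISIBLE;
```

This lets you validate a new index’s impact before exposing it to the optimizer system-wide.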

Next up was my session on JSON and the Oracle Database. In this session, I covered topics like which data type you should use to store JSON documents (VARCHAR2, CLOB, or BLOB), the pros and cons of using an IS JSON check constraint, and how to load, index, and query JSON documents.
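For a flavor of what that looks like, here is a minimal sketch (the table and column names are invented for the example):

```sql
-- Hypothetical schema: store JSON in a VARCHAR2 column guarded
-- by an IS JSON check constraint so only valid JSON is accepted.
CREATE TABLE purchase_orders (
  id  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  doc VARCHAR2(4000) CONSTRAINT po_doc_is_json CHECK (doc IS JSON)
);

INSERT INTO purchase_orders (doc)
VALUES ('{"customer":"Acme","total":125.50}');

-- With the IS JSON constraint in place, simple dot notation
-- (table alias required) can be used to query the documents.
SELECT po.doc.customer, po.doc.total
FROM   purchase_orders po;
```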

In my final JEFF Talk I covered some of the useful PL/SQL packages that are automatically supplied with the Oracle Database. Since the talk was only 15 minutes, I only touched on 4 of the 300 supplied packages you get with Oracle Database 18c, but hopefully it will give you enough of a taste to get you interested in investigating some of the others!



Avoiding reparsing SQL queries due to partition level DDLs – Part 2

In my previous post, I promised to provide an alternative solution for avoiding reparsing SQL queries due to partition-level DDLs.

So, what is it?

In Oracle Database 12c Release 2 we implemented a fine-grained cursor invalidation mechanism, so that cursors can remain valid if they access partitions that are not the target of an EXCHANGE PARTITION, TRUNCATE PARTITION, or MOVE PARTITION command.

As I said in my previous post, this enhancement can’t help in the case of a DROP PARTITION command, because the partition numbers change. Hopefully, though, you can change the DROP to either an EXCHANGE PARTITION or a TRUNCATE PARTITION command to avoid the hard parse, as I have done in the example below.

If you recall, we have a METER_READINGS table that is partitioned by time, with each hour being stored in a separate partition. Once an hour we will now TRUNCATE the oldest partition in the table as a new partition is added. We also had two versions of the same SQL statement, one that explicitly specifies the partition name in the FROM clause and one that uses a selective WHERE clause predicate to prune the data set down to just 1 partition.
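The hourly maintenance step now looks roughly like this (the partition and column names are illustrative, not the exact ones from the original example):

```sql
-- Add the partition for the next hour...
ALTER TABLE meter_readings
  ADD PARTITION readings_sep01_11
  VALUES LESS THAN (TO_DATE('2018-09-01 12:00', 'YYYY-MM-DD HH24:MI'));

-- ...and TRUNCATE (rather than DROP) the oldest one, so the
-- partition numbers are preserved and cursors that touch the
-- other partitions can remain valid.
ALTER TABLE meter_readings TRUNCATE PARTITION readings_aug31_11;
```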

Let’s begin by executing both queries and checking their execution plans.
Continue reading “Avoiding reparsing SQL queries due to partition level DDLs – Part 2”

Avoiding reparsing SQL queries due to partition level DDLs – Part 1

A couple of weeks ago, I published a blog post that said specifying a partition name in the FROM clause of a query would prevent an existing cursor from being hard parsed when a partition is dropped from the same table. This was not correct.

It’s actually impossible for us not to re-parse the existing queries executing against the partitioned table when you drop a partition, because all of the partition numbers change during a drop operation. Since we display the partition numbers in the execution plan, we need to re-parse each statement to generate a new version of the plan with the right partition information.

What actually happened in my example was that the SQL statement with the partition name specified in the FROM clause reused child cursor 0 when it was hard parsed after the partition drop, while the SQL statement that just specified the table name in the FROM clause got a new child cursor 0.
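If you want to watch this happen on your own system, one way to observe the child cursors is to query V$SQL (the LIKE filter below is just an illustration – match on whatever your statement’s text or SQL_ID is):

```sql
-- Each hard parse of the same SQL text shows up as a child cursor.
SELECT sql_id, child_number, executions, parse_calls
FROM   v$sql
WHERE  sql_text LIKE 'SELECT%METER_READINGS%';
```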

But it’s not all bad news. I do have a solution that will reduce hard parses when executing DDL operations on partitioned tables that you can check out in part 2 of this blog post. But before you click over to read the alternative solution, let me explain in detail what was really happening in the original example I posted.

If you recall, we have a METER_READINGS table that is partitioned by time, with each hour being stored in a separate partition. Once an hour we drop the oldest partition in the table as a new partition is added. We also had two versions of the same SQL statement, one that explicitly specifies the partition name in the FROM clause and one that uses a selective WHERE clause predicate to prune the data set down to just 1 partition.
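In sketch form, the two statements looked something like this (the column names, partition name, and date ranges are illustrative):

```sql
-- 1) Partition named explicitly in the FROM clause
SELECT MAX(reading_value)
FROM   meter_readings PARTITION (readings_aug31_10);

-- 2) Selective WHERE clause predicate that prunes to one partition
SELECT MAX(reading_value)
FROM   meter_readings
WHERE  reading_time >= TO_DATE('2018-08-31 10:00', 'YYYY-MM-DD HH24:MI')
AND    reading_time <  TO_DATE('2018-08-31 11:00', 'YYYY-MM-DD HH24:MI');
```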

Continue reading “Avoiding reparsing SQL queries due to partition level DDLs – Part 1”