I’m often asked what the best tool for viewing execution plans is, and for me the answer is always SQL Monitor (included in the Oracle Tuning Pack). It is an invaluable tool when you need to determine what is happening during the execution of a long-running SQL statement.
To help you get the very most out of SQL Monitor, I wanted to share some of the tips and tricks I’ve learnt over the years from the original Database Manageability team, especially Cecilia Grant!
So, why do I love it so much?
There are so many reasons to love SQL Monitor; it’s hard for me to know where to begin. So, instead of giving you an exhaustive list I’ve put together a short video to demonstrate how I use SQL Monitor (be sure to set your resolution to 720p).
How do I control what statements are monitored?
By default, a SQL statement that either runs in parallel or has consumed at least 5 seconds of combined CPU and I/O time in a single execution will be monitored.
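You can check which statements are currently (or were recently) monitored by querying the V$SQL_MONITOR view. A minimal sketch (the column list is trimmed for readability):

```sql
-- List recently monitored statements, most recent execution first
SELECT sql_id,
       sql_exec_id,
       status,
       elapsed_time / 1e6 AS elapsed_secs
FROM   v$sql_monitor
ORDER  BY sql_exec_start DESC;
```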
It is also possible to force monitoring to occur for any SQL statement by simply adding the MONITOR hint to the statement.
SELECT /*+ MONITOR */ col1, col2, col3 FROM t1 WHERE col1=5;
If, however, you can’t modify the SQL statement because it comes from a third-party application, you can still force monitoring to occur by setting the event “sql_monitor” at the system level with a list of SQL_IDs for the statements you want to monitor.
ALTER SYSTEM SET EVENTS 'sql_monitor [sql: 5hc07qvt8v737|sql: 9ht3ba3arrzt3] force=true';
However, there are a couple of additional limits you should also be aware of if you have a very large workload or complex execution plans.
By default, Oracle limits the number of SQL statements that will be monitored to 20 x CPU_COUNT. You can increase this limit by setting the underscore parameter _sqlmon_max_plan, but be aware this will increase the amount of memory used by SQL Monitor in the shared pool and may result in SQL Monitor information being aged out of memory faster.
SQL Monitor will only monitor a SQL statement if the execution plan has fewer than 300 lines. If you know your execution plans are much larger than that, you can set the underscore parameter _sqlmon_max_planlines to increase this limit. Again, this will increase the amount of memory used by SQL Monitor in the shared pool.
Finally, you can lower or raise the default threshold of 5 seconds by setting the underscore parameter _sqlmon_threshold. However, you should be aware that lowering it will likely increase the number of statements monitored, which may cause monitored executions to age out of the SQL Monitor buffer faster.
Due to the less than desirable side effects of these underscore parameters, I strongly discourage you from setting them in general. If you do need to use them, please only do so at a session-level.
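If you do decide to experiment, a session-level change is the safer approach. The sketch below assumes your session can modify these parameters; the values are purely illustrative, not recommendations:

```sql
-- Session-level only: allow monitoring of plans up to 800 lines
-- and lower the monitoring threshold to 1 second (illustrative values)
ALTER SESSION SET "_sqlmon_max_planlines" = 800;
ALTER SESSION SET "_sqlmon_threshold" = 1;
```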
How long is a SQL Monitor report persisted in the Oracle Database?
When SQL Monitor was originally introduced in Oracle Database 11g, the information required to generate the reports was only available in the dynamic performance views V$SQL_MONITOR and V$SQL_PLAN_MONITOR (a size-constrained in-memory buffer) and was not persisted to disk. So, there were no guarantees that the information would be retained beyond one minute after the statement completed. But in reality it’s often there a lot longer.
If you are on 11g, I strongly recommend you manually save any SQL Monitor reports you are interested in (see details on how below).
In Oracle Database 12c SQL Monitor reports are persisted in the data dictionary table DBA_HIST_REPORTS. By default, Oracle will retain SQL Monitor reports for 8 days.
That retention period should sound familiar, as it’s the AWR retention policy. That’s right, the SQL Monitor retention policy is controlled by the AWR policy. In fact, each of the SQL Monitor reports stored in the DBA_HIST_REPORTS table is associated with an AWR SNAP_ID. You can change the retention policy using the DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS procedure.
NOTE: Although the retention of SQL Monitor reports is controlled by the AWR retention policies and each report has a SNAP_ID associated with it, SQL Monitor reports are not exported or imported when you export or import the corresponding AWR data.
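For example, extending the AWR (and therefore SQL Monitor) retention from the 8-day default to 30 days might look like this:

```sql
-- Retention is specified in minutes: 30 days = 43200 minutes
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention => 43200);
END;
/
```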
How can I generate a real-time SQL Monitor report?
You can generate a real-time SQL Monitor report either from Enterprise Manager (EM), EM Database Express, SQL Developer or via the command line.
In EM Database Express click on the Performance Hub in the Performance drop-down menu. Then click on the Monitored SQL tab under the timeline graph.
This will open a window with a list of the currently available monitored SQL Statements.
Click on the SQL_ID of the statement you are interested in and the SQL Monitor report will be automatically opened for that statement. You can then save the report, so you can review it later or send it to others by clicking on the save icon on the upper right-hand side of the screen.
If you wish to save the current content of the entire Performance Hub (including all of the monitored SQL statements) you can do so by clicking on the PerfHub Report icon on the upper right-hand side of the screen.
To generate a SQL Monitor report in SQL Developer, go to the tools menu and click on Real-Time SQL Monitor.
This will open a window with a list of the currently available monitored SQL Statements. Click on the entry you are interested in and the SQL Monitor report will be automatically opened in the lower part of the window. You can save the SQL Monitor report by clicking the save button in the top left corner of the screen.
NOTE: SQL Monitor reports saved from SQL Developer are not active reports but only images.
You can generate a real-time SQL Monitor report on the command line by calling the PL/SQL function DBMS_SQLTUNE.REPORT_SQL_MONITOR (or, starting in 19c, DBMS_SQL_MONITOR.REPORT_SQL_MONITOR).
The example below shows how to use DBMS_SQLTUNE.REPORT_SQL_MONITOR to generate an active report by setting “active” as the report type.
SET trimspool ON
SET TRIM ON
SET pages 0
SET linesize 32767
SET LONG 1000000
SET longchunksize 1000000
spool sqlmon_active.html
SELECT dbms_sqltune.Report_sql_monitor(SQL_ID=>'&sql_id', TYPE=>'active') FROM dual;
spool OFF
Just remember to edit the resulting sqlmon_active.html file to remove the first line and last line in the file (the spool off). The resulting HTML file can then be viewed in any browser. The browser must have connectivity to OTN to load the active report code.
How do I retrieve a historical SQL Monitor report in Oracle Database 12c?
You can review a historical or archived SQL Monitor report either from Enterprise Manager (EM), EM Database Express, or via the command line.
In EM Database Express, on the Monitored SQL tab of the Performance Hub page, click on the Select Time Period button above the time graph on the upper left-hand side of the screen. A popup window will appear, where you can select which time period you want to see SQL Monitor reports from.
Select the time period you desire and click OK. This will open a window with a list of monitored SQL Statements for that period.
You can then view the individual SQL monitor report by clicking on a SQL_ID. And just as before you can click SAVE if you want to save a particular report for offline viewing (similar to a real-time report).
It’s also possible to save the entire content of the PerfHub itself, including the individual SQL monitor reports, by clicking the PerfHub Report button on the main PerfHub page.
To manually generate a persisted SQL Monitor report for a single SQL statement, you will first need to find its REPORT_ID and then use the PL/SQL function DBMS_AUTO_REPORT.REPORT_REPOSITORY_DETAIL to extract the report.
The easiest way to find the REPORT_ID is to query DBA_HIST_REPORTS and supply as much information as you can about the SQL statement and when it was executed.
In DBA_HIST_REPORTS most of the column names are self-explanatory. However, there are two columns KEY1 and KEY2 that warrant some explanation, as you are going to need to use at least one of them in order to find the correct REPORT_ID.
KEY1 is the SQL_ID for the statement
KEY2 is the SQL execution ID (SQL_EXEC_ID) for the statement
Here’s an example of the query I used:
SELECT report_id
FROM   dba_hist_reports
WHERE  dbid = 1954845848
AND    component_name = 'sqlmonitor'
AND    report_name = 'main'
AND    period_start_time BETWEEN To_date('27/07/2017 11:00:00','DD/MM/YYYY HH:MI:SS')
                             AND To_date('27/07/2017 11:15:00','DD/MM/YYYY HH:MI:SS')
AND    key1 = 'cvn84bcx7xgp3';

REPORT_ID
=========
       42
Once you have the REPORT_ID, you can use the PL/SQL function DBMS_AUTO_REPORT.REPORT_REPOSITORY_DETAIL to generate the SQL Monitor report, as shown below.
SET echo ON
SET trimspool ON
SET TRIM ON
SET pages 0
SET linesize 32767
SET LONG 10000000
SET longchunksize 1000000
spool old_sqlmon.html
SELECT dbms_auto_report.Report_repository_detail(rid=>42, TYPE=>'active') FROM dual;
spool OFF
Just remember to edit the resulting old_sqlmon.html file to remove the first line and last line in the file (the spool off). The resulting HTML file can then be viewed in any browser. The browser must have connectivity to OTN to load the active report.
Is it possible to generate SQL Monitor reports for all of the SQL statements monitored during a given period of time?
Starting in Oracle Database 12c, you can use the perfhubrpt.sql script, in the $ORACLE_HOME/rdbms/admin directory, to generate a PerfHub for a given time period, which will include SQL Monitor reports for all of the monitored SQL statements during that period.
The script will prompt you for the report level (default is typical but I would use all), the database id (default is the database you are on), instance number (default is the instance you are on) and the time period you are interested in.
The output of perfhubrpt.sql is an HTML file that provides a historical view of the EM Performance Hub for the specified time period.
Clicking on the Monitored SQL tab under the timeline graph will open a window with a list of monitored SQL statements during the period you requested.
You can then drill down on the individual reports by clicking on any of the SQL IDs.
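Assuming you are connected as a suitably privileged user, running the script from SQL*Plus is just (the "?" is SQL*Plus shorthand for $ORACLE_HOME):

```sql
-- Prompts for report level, DBID, instance number, and time period
@?/rdbms/admin/perfhubrpt.sql
```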
Let me know via the comments section below if you have any other questions on SQL Monitor!
18 thoughts on “Getting the most out of Oracle SQL Monitor”
Thank you for sharing this post.
What if I don’t have the diag and tuning packs? Is it possible to get the same level of information from a standard explain plan?
Yes, it is possible to get all of the information you see in SQL Monitor via the DBMS_XPLAN.DISPLAY_CURSOR function. I’ll put together a post and a video next week to show you how you can do it.
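For reference, a minimal sketch of that approach (t1 and its columns are placeholder names): run the statement with the GATHER_PLAN_STATISTICS hint, then display the actual row-source statistics.

```sql
SELECT /*+ GATHER_PLAN_STATISTICS */ col1, col2
FROM   t1
WHERE  col1 = 5;

-- ALLSTATS LAST shows actual rows, buffers, and timings for the last execution
SELECT * FROM TABLE(DBMS_XPLAN.display_cursor(format => 'ALLSTATS LAST'));
```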
To save the HTML report from SQL*Plus without needing to edit it afterwards, you could use this script:
SET trimspool ON
SET TRIM ON
SET pages 0
SET linesize 32767
SET LONG 1000000
SET longchunksize 1000000
set feedback off
set echo off
set verify off
spool sqlmon_active.html
SELECT dbms_sqltune.Report_sql_monitor(SQL_ID=>'&&sql_id', TYPE=>'active') FROM dual;
spool off
when I run an SQL Monitor report from the v$ (non-historical) repository I can use report_level=>’ALL’ parameter setting to display all details for an SQL execution. This is what I mean:
SELECT dbms_sqltune.Report_sql_monitor(SQL_ID=>'&sql_id', TYPE=>'text', report_level=>'ALL')
However when I grab a report from the AWR repository
using the following select
SELECT dbms_auto_report.Report_repository_detail(rid=>12989, TYPE=>'TEXT')
Useful information is omitted such as the wait events per execution line.
If I’m not mistaken these reports had already been generated without the additional information.
Is there any way to have Oracle save all the details for historical reports (AWR) as well?
I’m looking for a setting that would result in report_level=>’ALL’ setting for the historical reports.
I did not find the answer to this in the documentation.
To read active reports the browser must have connectivity to OTN; however, http://download.oracle.com/otn_software/ has been broken for more than a year now and we cannot read Active Reports. Is Oracle going to fix this?
One question about SQL Monitor compared to the execution plan (produced with the gather_plan_statistics hint in place): wouldn’t it be good if SQL Monitor reported the logical I/Os (buffer gets) for each step in the plan, like the “gather_plan_statistics” hint does, rather than as “IO statistics” in the top right corner? That way we could easily focus on where we spend most of the I/Os per execution.
Good Day !
I have a question regarding the SQL Monitor report which is generated at the database level in HTML format and then opened in a browser.
After the Adobe Flash end of life, does this SQL Monitor report package still work at the database level?
“perfhubrpt.sql” is giving “No Data Found”. Are there any prerequisites that need to be followed to get the data?
Thank you and Regards,
The Performance Hub report allows you to view all performance data available for a specified time period. The perfhubrpt.sql script asks you to define a start and end time when you call it. If you are getting a No Data Found message when you call the script, it’s likely to mean your database didn’t exist or wasn’t open during the period of time you specified, or someone has disabled performance stats gathering. You can check whether the database has data for your specified time period by looking in the V$ACTIVE_SESSION_HISTORY view and, for historical data, the DBA_HIST_ACTIVE_SESS_HISTORY view.
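For instance, a quick sanity check on the range of historical ASH data available might look like:

```sql
-- Check the range of historical ASH samples retained in AWR
SELECT MIN(sample_time) AS oldest_sample,
       MAX(sample_time) AS newest_sample
FROM   dba_hist_active_sess_history;
```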
Is there a way to exclude certain SQL ID’s from SQL Monitoring?
Specifically, we have way too many “parallel” SQLs showing up in our PDBs, like “select * from user_procedures”, which are parallel only because they are accessing an extended data link etc.
I’m not aware of any event that will prevent a SQL_ID from being monitored if it qualifies for monitoring (executes in parallel or has an execution time greater than 5 seconds). However, you can add a /*+ NO_MONITOR */ hint to the SQL statement, which would prevent it from being monitored.
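For example (a sketch only; this assumes you are able to modify the statement, e.g. via a SQL patch):

```sql
SELECT /*+ NO_MONITOR */ * FROM user_procedures;
```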
I have a script that works well to generate the SQL Monitor reports for the SQL_ID and particular SQL_EXEC_ID I provide as parameters.
But, for some unknown reason, for some SQL_IDs it simply does not work when I choose the ACTIVE option. It generates HTML output that looks just like the format of the ones that work, but when I open it in the browser, it only shows the Oracle logo, as if something was broken.
It seems it happens to more complex queries, but I don’t know why.
My script does:
set lines 32767 trimout on trimspool on head off feed off echo off timing off termout off long 1000000 longchunksize 1000000 ver off pages 0
select dbms_sqltune.report_sql_monitor(
         sql_id         => a.sql_id,
         sql_exec_start => a.sql_exec_start,
         sql_exec_id    => a.sql_exec_id,
         type           => 'ACTIVE')
from gv$sql_monitor a
I have uploaded a sample output report that I could not open:
Do you know if I am hitting some limit or maybe a bug? The database version is 18c.
I believe you may be hitting a bug (28649388) where the compression used on very large SQL Monitor reports messes up the headers, which results in the report not displaying.
This bug is fixed in 18C RU 18.4.
To confirm this is the problem you are encountering, you can set an event to disable compression when you generate the report using the following statement
alter session set events = 'emx_control compress_xml=none';
Great, Maria, you nailed it as usual. After setting the event at the session level, the report was generated successfully.
Thanks very much,
thank you for this post, very useful.
If I understand correctly, it’s not possible to have historical SQL monitoring on an 11g database, because the view DBA_HIST_REPORTS does not exist in 11g, and OEM cannot display the info from the V$SQL_MONITOR view. True?
You are correct. Historical SQL Monitor is only available from Oracle Database 12c onwards.