Mauro Pagano's Blog



SQLTXPLAIN vs SQLd360, differences and similarities

When talking about tools to help with SQL Tuning, the question of SQLTXPLAIN vs SQLd360 comes up very often. "What's the difference?", "Which one should I use?", "Why another tool?" are probably the top ones 🙂

I'll try to make a fair and (as much as possible) unbiased comparison between the two on some general aspects. If you want to know about any specific area just let me know in the comments.

Installation
  SQLTXPLAIN: Requires installation. Creates two schemas that serve as metadata and code repository; they need to be present every time the tool is executed.
  SQLd360: No installation; at the end of the execution no evidence is left behind.

Source code
  SQLTXPLAIN: Mainly PL/SQL, unwrapped (clear text). Code is Oracle property.
  SQLd360: SQL scripts, clear text, true open source released under GPL. Code available on GitHub.

Main goal
  SQLTXPLAIN: Automate every aspect of diagnostic collection for SQL tuning.
  SQLd360: Same as SQLT, but less "obsessive" 🙂

Main focus
  SQLTXPLAIN: Execution plan generation, why the CBO generated plan X, Y, Z (includes custom testcase, very flexible).
  SQLd360: Execution plan generation (but a little less than SQLT) and how plans execute at runtime.

Main advantage
  SQLTXPLAIN: Includes every bit of info needed to drill into CBO decisions, so it's unlikely you'll need to go back and collect anything else.
  SQLd360: Makes diagnostic consumption faster and easier thanks to data visualization; lots of focus on runtime.

Main disadvantage
  SQLTXPLAIN: Requires installation. Not much focus on execution time ("same plan, different performance" cases are harder to deal with).
  SQLd360: Might miss some information under rare circumstances.

Outside the comparison table, what we use today: SQLd360
Reason(s):

  1. No installation.
  2. Root Cause Analysis of CBO decisions belongs (for the most part) to Oracle Support; SQL Tuning in the real world focuses on easier aspects of plan generation, so there is less need for a strong focus on it.
  3. Performance data available in Oracle grows with every release; charting allows quick and much more effective consumption of large amounts of such data.



SQL Monitoring, Flamegraph and Execution Plan Temperature 2.0

Two of the things that I like the most about SQL Monitoring reports are the ability to quickly spot where in the execution plan the time is spent (Activity% column, thank you ASH) and the fact you can collapse part of the plan. Too bad the two don't "work" together, meaning if you collapse a part of the plan the Activity% is not rolled up at the collapsed level. I understand why it works that way (it might be confusing otherwise) but I'd still like to be able to collapse a node and get a "subtree Activity%" so I know whether that subtree is something I should be worried about or not (kind of…).

Pause that for a second. Flamegraph, on the other hand, is really good at showing how much (in %) a "subtree" (function + its callees) accounts for overall; Luca Canali did an (as usual) awesome job here with something similar, but a bit more sophisticated than just execution plans. Unfortunately, the fact that the elements are not sorted on time (I think most "Oracle people" tend to expect time on the X-axis), combined with the kind of random color used for each bar, makes this representation a little less intuitive than I'd like it to be for execution plans (don't get me wrong, I still love this representation a lot!).

SQLd360 has represented execution plans as a tree since one of its first releases, and for over a year the color of the nodes has been a heatmap of where time was spent (same as Activity% in SQL Monitoring). Starting a couple of releases ago there are a couple of improvements that I hope (I'd like to hear your feedback on this) fix the shortcomings of both SQL Monitoring and Flamegraph when it comes to digesting execution plans.
Each step in the execution plan can be collapsed (this has been possible since day 1) and now, on collapse, its color changes to the color the node would have had if all the time the SQL spent on the node plus its child steps had been spent on the node itself; on expand the colors go back to the original ones.
Also, when hovering over a node, a new piece of info shows up in the popup: a "subtree impact" that represents exactly what % of time the node plus its children account for.
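Under the hood the math is straightforward. Here is a hedged sketch (my own illustration, not the SQLd360 source) of how such a "subtree impact" could be computed from ASH plus the plan tree; the bind names are made up and child_number = 0 is just an assumption to pick one child cursor:

-- % of ASH samples accounted for by plan step :step_id plus all its descendants
with plan_lines as (
  select id, parent_id
    from v$sql_plan
   where sql_id = :sql_id
     and plan_hash_value = :phv
     and child_number = 0
),
subtree as (
  select id
    from plan_lines
   start with id = :step_id
 connect by prior id = parent_id
)
select round(100 * count(case when ash.sql_plan_line_id in (select id from subtree) then 1 end)
             / nullif(count(*), 0), 2) as subtree_impact_pct
  from v$active_session_history ash
 where ash.sql_id = :sql_id
   and ash.sql_plan_hash_value = :phv;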

[Figure: node.png]

For example, from the picture above step 16 accounted for 15.52% of the time, while the whole subtree under the node (including the node itself) accounted for 39.6% of the time. Expanding the node, its color goes back to a lighter shade of orange.

I think one of the main advantages of this approach is the ability to consume large execution plans quickly, collapsing large parts of them and focusing on just what matters, how those parts of the plan interact with each other, etc.

As usual, feedback, corrections and ideas are MORE than welcome! 🙂

NB: I rewrote the SQL that is under the tree representation to make it more readable, but just to make sure I didn't break the functionality, as of now there are two plan trees per plan (and per Top-N exec); the one "with subtree" implements what was just described above. The old representation will go away down the road, when I'm sure the new SQL is solid enough.



Little help in creating SQL Plan Baselines

One of the main design goals behind SQLd360 is to have no installation nor “evidence” left in the database, i.e. there is no SQLd360 repository in the database while there is a SQLTXPLAIN one (this isn’t necessarily bad, it’s just a different approach).

As a consequence, several little things SQLT provided are gone with SQLd360. For example, a few years ago (it's been disabled by default for a while) SQLT generated a script to create a "custom" SQL Profile for the best performing plan it identified; something similar happened for SQL Plan Baselines stored in a SQL Tuning Set, for quick implementation of baselines down the road.

There is a new standalone script shipped with SQLd360, sql/sqld360_create_sql_baseline.sql, that aims at providing the same SPM+STS functionality SQLT provided, following the idea of "no evidence left by default".
The script is NOT executed as part of SQLd360; it can (and needs to) be manually executed at any time, which in turn means there is no need to run SQLd360 for this script to work.
The script requires three parameters:

  • SQL ID for the SQL of interest
  • Oracle Pack available in this database ([N]one, [D]iagnostic or [T]uning)
  • Plan Hash Value you wish to “freeze”

You can provide the parameters directly inline to SQL*Plus (assuming you know the PHV you want to enforce) or just enter them one by one when prompted; the script will list all the PHVs it can find around (memory + history) with their performance, helping you in case you don't recall which PHV you want.
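For example, a purely hypothetical inline invocation (SQL ID and PHV below are made up, "T" means Tuning Pack, and the parameters are assumed to be passed in the order listed above) could look like this:

SQL> @sql/sqld360_create_sql_baseline.sql 5k8mnfnbfy3fz T 1837274416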

The script does just two things:

  1. It creates a SQL Tuning Set (named s_<<SQLID>>_<<PHV>>) and loads the plan details into it; the goal is to "freeze" the plan in time so the info doesn't get lost if the plan is aged out of memory, purged from AWR, etc.
  2. It provides copy&paste instructions to create a Baseline based on the plan in the STS, either in the current system or in a remote one (see the sketch below). The script DOES NOT execute such steps, it only prints them on screen. This way you can read, understand, digest and validate them before YOU execute them.
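For reference, the printed instructions presumably boil down to a DBMS_SPM call along these lines; this is just a hedged sketch of the idea (the STS name follows the s_<<SQLID>>_<<PHV>> convention above, the values are made up, and the actual script output may well differ):

declare
  l_plans pls_integer;
begin
  -- load the plan(s) stored in the STS into a SQL Plan Baseline
  l_plans := dbms_spm.load_plans_from_sqlset(
               sqlset_name  => 's_5k8mnfnbfy3fz_1837274416',
               basic_filter => 'plan_hash_value = 1837274416',
               fixed        => 'NO',
               enabled      => 'YES');
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/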

Hopefully it will make it a little bit easier to play with SPM.

Feedback, corrections and recommendations are welcome as usual!



Presentations on Slideshare

Every once in a while I get asked if I can email the PPT for a session that I delivered. I always say YES (of course), so I figured why not be proactive and upload the material for the presentations I delivered over the last several months. Under the "Pages" section on the right side of the page there is a new link, "Presentations", that takes you to Slideshare.

It’s my first experience with Slideshare and I’m pretty sure I made mistakes along the way so if you see something wrong just let me know (and let me know how to fix it PLEASE 🙂 )

The list of presentations is probably incomplete, so if you attended one and see that I forgot to upload it just let me know and I'll fix that. Also, every session comes with trace files / dumps / testcases built to support the investigations, but I found no easy way to upload them so I'll still rely on the old "provided upon request, via email" approach for them.

 



Which Observations would you like to see in SQLd360?

SQLd360 v1617 finally includes a new "Observations" section (section 1F) that I pushed back on for a long, long time 🙂

If you are not familiar with SQLTXPLAIN, then consider the reports in this section like "yellow flags": specific items that are not necessarily causing direct trouble to your SQL but are still questionable and need further investigation / verification / blessing for such items to be kept in place.

There are many reasons why I pushed back for a long time, the first one being that the results can easily be misleading and make you believe the underlying cause is A while maybe it's Z. Another reason is that most of the observations ended up just being informative, with no action taken against them, yet you still had to scroll through hundreds of them.
Because of what I just said, combined with the need to keep the checks "useful", the current list of observations is intentionally short (and incomplete as of now); it includes only observations for:

  • Some system-wide settings, e.g. CBO parameters, OFE version, etc.
  • A few plan- and cursor-specific items, e.g. an index referenced in some plan is now missing
  • Table statistics, e.g. partitions that are empty according to stats

The list won’t grow much based on my ideas for the same reason it’s short now, I don’t want to implement checks I believe are important when 99% of the people don’t care about them.

That’s why this blog post, I need your feedback and ideas to implement what you care about 🙂
Let me know what you would like to have in the observation section and I’ll work on it!
Just keep in mind the goal is to keep that section relatively fast so super-complex checks that take 10 mins to execute are off the list.

Note: v1617 also turns off a couple of less-used features like TCB and AWR reports by default (they can easily be enabled back via the config file), so don't be surprised if they don't show up in column 5.



eAdam, SQLd360 hidden gem

The title is actually VERY wrong! eAdam isn’t a gem hidden inside SQLd360, it’s a standalone tool developed by Carlos Sierra and it’s been around for way longer than SQLd360. You can read more about eAdam here but in short its goal is to export AWR data in a portable way that can be restored in another database, something like a raw version of AWR Warehouse (kind of).

Every time you run SQLd360, the tool collects a reduced version of eAdam just for ASH data (both GV$ and DBA_HIST) for the SQL of interest, packs the result into the zip file and links it into the main page under column 5, "eAdam ASH". The reason for doing so is that SQLd360 has tons of reports built on top of the most important columns of ASH, but what if you want to query another column that is not present in any report? With eAdam you basically have the whole ASH for this SQL ID to do all the data mining you want!

I’m writing this post because I realized I never advertised this functionality much and every time I talk about it with somebody, he/she looks at me like “what are you talking about? I’ve never seen it”.

So let me show you how easy it is to load eAdam data coming from SQLd360 into a target database! I’m assuming you already have eAdam installed (if not then just follow the first two steps in “Instructions – Staging Database” from this link)

  1. Grab file NNNN_sqld360_<<hostname_hash>>_<<sql_id>>_5_eadam_ash.tar from inside SQLd360, it will be one of the last files.
  2. Place it into <<path to eAdam>>/stage_system and just run eadam_load.sql as the eAdam user you created during the installation.

Done!
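In practice the two steps boil down to something like this (the placeholders for the file name, the eAdam path and the eAdam credentials are just examples, adjust them to your environment):

$ cp NNNN_sqld360_<<hostname_hash>>_<<sql_id>>_5_eadam_ash.tar <<path to eAdam>>/stage_system/
$ cd <<path to eAdam>>/stage_system
$ sqlplus <<eadam_user>>/<<eadam_password>>
SQL> @eadam_load.sql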

You now have two tables in your eAdam schema called GV_ACTIVE_SESSION_HISTORY_S and DBA_HIST_ACTIVE_SESS_HIST_S with all the ASH data for your SQL ID!
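From there it's plain SQL. For example, a quick breakdown of where the time went, assuming the staged tables keep the same column names as their GV$/DBA_HIST counterparts:

select sql_plan_hash_value, sql_plan_line_id, nvl(event, 'ON CPU') as event, count(*) as samples
  from gv_active_session_history_s
 group by sql_plan_hash_value, sql_plan_line_id, nvl(event, 'ON CPU')
 order by samples desc;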

 



Top Executions SQL Monitoring style reports in SQLd360

I think SQL Monitoring is an amazing tool when it comes to SQL Tuning, but I often find that for one reason or another the report is almost never around for post-mortem investigation.
Historical SQL Monitoring reports were introduced in 12c, but the decision to collect or not the report for the SQL ID we are interested in still depends on several factors we have no control over after the issue has happened 😦

SQLd360 has tried to alleviate this "issue" by including ASH-based charts that provide similar information; those have been available for a long time in the Plan Details page, organized by Plan Hash Value (PHV).
The main difference between SQL Monitoring (SM) and SQLd360 is the scope: SM provides info for a single execution while SQLd360 aggregates info from all the executions active at a specific point in time. Info for recent executions (V$ACTIVE_SESSION_HISTORY) is aggregated by minute while info for historical executions (DBA_HIST_ACTIVE_SESS_HISTORY) is aggregated by hour.
That section of the SQLd360 report looks like this:

[Figure: Screen Shot 2016-01-14 at 9.46.54 AM.png]

 

Starting with SQLd360 v1601, a new set of reports is provided for the Top-N executions per PHV, where a "top execution" is one of those with the highest number of samples.
The goal is to replicate, as closely as possible, the SQL Monitoring functionality using just the ASH data, which tends to be around for much longer than a SQL Monitoring report 🙂

The data is not aggregated, so the granularity is 1s for samples from memory (V$) and 10s for samples from AWR (DBA_HIST).
With this level of granularity, combined with the non-aggregation, other types of data visualization make sense, e.g. timelines to identify when a specific section of the execution plan was active (which section is the "bad guy" can be answered from the tree chart); that's the "Plan Step IDs timeline" in the figure below, which will need its own blog post 😀 .
This new section of the report looks like this:

[Figure: Screen Shot 2016-01-14 at 9.58.46 AM.png]

So, for example, for each execution we can see the active sessions (plural in case of PX, and not "Average Active Sessions" since there is no timeframe aggregation) with the associated CPU/wait events over time, just like in SQL Monitoring (to be fair, SQL Monitoring is able to provide sub-second details, which are not available in ASH).

[Figure: Screen Shot 2016-01-14 at 10.04.50 AM.png]

Hopefully you’ll find this new section useful, specially when ASH is all you got 😀

Final note: the number of Top-N executions is configurable in file sql/sqld360_00_config.sql by altering the value of sqld360_conf_num_top_execs (the default is 3).
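If I remember the config file correctly the entry is a plain SQL*Plus DEFINE, so bumping it up should be as simple as the following (the exact syntax may differ between versions, check your copy of the file):

DEF sqld360_conf_num_top_execs = '5';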

As usual feedback, comments, suggestions are all more than welcome!

 



Execution plan tree temperature

 

During the Xmas holidays I made several additions to SQLd360 that had been on my TODO list for quite a while; I'll try to blog about the most important ones in the coming days.

Something I wanted to do for a long time was to make understanding execution plans easier, and I hope the tree representation introduced here achieved that goal.

SQLd360 v1601 takes this chart a step further, marking nodes with different colors depending on how often each execution plan step shows up in ASH. Basically, depending on "how hot" (the temperature) each step is, a color between yellow and red is used for the node, making it easier to determine on which section of the plan you should focus your attention.
All those steps that never show up in ASH are represented in white.

So for example, a silly SQL like the following takes 21 secs in my database:

select count(*) 
  from (select rownum n1 from dual connect by rownum <= 10000),   
       (select rownum n1 from dual connect by rownum <= 100000)

------------------------------------------------------------------------
| Id  | Operation                         | Name | E-Rows | Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |      |        |     4 (100)|
|   1 |  SORT AGGREGATE                   |      |      1 |            |
|   2 |   MERGE JOIN CARTESIAN            |      |      1 |     4   (0)|
|   3 |    VIEW                           |      |      1 |     2   (0)|
|   4 |     COUNT                         |      |        |            |
|   5 |      CONNECT BY WITHOUT FILTERING |      |        |            |
|   6 |       FAST DUAL                   |      |      1 |     2   (0)|
|   7 |    BUFFER SORT                    |      |      1 |     4   (0)|
|   8 |     VIEW                          |      |      1 |     2   (0)|
|   9 |      COUNT                        |      |        |            |
|  10 |       CONNECT BY WITHOUT FILTERING|      |        |            |
|  11 |        FAST DUAL                  |      |      1 |     2   (0)|
------------------------------------------------------------------------

18 ASH samples captured the execution on the BUFFER SORT step and 3 samples captured the SORT AGGREGATE.

The execution plan tree temperature looks like this

[Figure: Screen Shot 2016-01-06 at 5.56.28 PM]

Hopefully this will make it easier for people who don't look into execution plans all day to quickly spot where they should focus their attention 😀



Histograms come histograms go

Actually the title is likely incorrect, in the sense that I think histograms don’t “go” once they come. Or at least I’ve never seen a case where consistently gathering statistics using method_opt “SIZE AUTO” makes a histogram go away sometime after the same syntax made it show up (while it’s common for a histogram to just show up sometime in the future because of column usage).

Recently I’ve seen several systems where custom statistics gathering procedures were coupled with the automatic stats gathering job (voluntarily or not is another story… 😀 ) triggering all sort of weirdness in the statistics. Reason was a syntax mismatch between the custom job (method_opt “SIZE 1”) and the automatic job (default is method_opt “SIZE AUTO”).

A reliable way to figure out if histograms popped up / disappeared is to look at WRI$_OPTSTAT_HISTGRM_HISTORY; too bad this table is usually large and expensive to access (and not partitioned).
Another table is available though, WRI$_OPTSTAT_HISTHEAD_HISTORY, which is the historical version of the dictionary table HIST_HEAD$ (1 row per histogram). But while the dictionary table has BUCKET_CNT and ROW_CNT columns (so it's easy to spot a histogram), such columns are gone once the info is stored in WRI$. Wouldn't it be nicer to just have a column in WRI$_OPTSTAT_HISTHEAD_HISTORY saying "a histogram was in place"?

Let’s see if we can figure out a (approximate) way that can save us a trip to WRI$_OPTSTAT_HISTGRM_HISTORY.

Table WRI$_OPTSTAT_HISTHEAD_HISTORY has a FLAGS column that in 11.2.0.4 is populated using the following expression (extracted from a 10046 trace of DBMS_STATS.GATHER_TABLE_STATS):


bitand(h.spare2,7) + 8 + decode(h.cache_cnt,0,0,64)

while it’s tricky and risky to guess what SPARE2 might store it should be pretty easy to play with CACHE_CNT, we just need to gather stats with and without histograms to see how the value changes.

SQL> create table test_flag (n1 number);
SQL> insert into test_flag select mod(rownum,200) n1 from dual connect by rownum <= 10000;
SQL> insert into test_flag select * from test_flag;
--repeat it a few times to make the data grow
SQL> commit;
SQL> exec dbms_stats.gather_table_stats('MPAGANO','TEST_FLAG',method_opt=>'FOR ALL COLUMNS SIZE 254');
SQL> select obj#, col#, bucket_cnt, row_cnt, cache_cnt, null_cnt, sample_size from sys.hist_head$ where obj# in (select object_id from dba_objects where object_name = 'TEST_FLAG');

      OBJ#       COL# BUCKET_CNT    ROW_CNT  CACHE_CNT   NULL_CNT SAMPLE_SIZE
---------- ---------- ---------- ---------- ---------- ---------- -----------
     24897          1       5487        200         10          0        5487

SQL> exec dbms_stats.gather_table_stats('MPAGANO','TEST_FLAG',method_opt=>'FOR ALL COLUMNS SIZE 1');
SQL> select obj#, col#, bucket_cnt, row_cnt, cache_cnt, null_cnt, sample_size from sys.hist_head$ where obj# in (select object_id from dba_objects where object_name = 'TEST_FLAG');

      OBJ#       COL# BUCKET_CNT    ROW_CNT  CACHE_CNT   NULL_CNT SAMPLE_SIZE
---------- ---------- ---------- ---------- ---------- ---------- -----------
     24897          1          1          0          0          0     2560000

In the first run we gathered stats with method_opt "SIZE 254" and a 200-bucket histogram was created; the sample size of ~5,500 is not surprising, it matches the AUTO_SAMPLE_SIZE being used. In this case CACHE_CNT was 10.

In the second run we gathered stats with “SIZE 1” and no histogram was created. In this case CACHE_CNT was 0.

I couldn’t find with 100% accuracy what CACHE_CNT represents (actually if any reader knows it please comment this post) but my educated guess is that’s number of entries to be cached in the row cache; unfortunately this is really just a guess since I didn’t even find a parameter to change this value (again, whoever knows better please let me know).

I ran multiple tests and it seems like CACHE_CNT has a value larger than 0 only when a histogram is present (assuming my guess of what it means is accurate), thus the value of WRI$_OPTSTAT_HISTHEAD_HISTORY.FLAGS is bumped by 64 only when a histogram is present.
With this info in mind it's now pretty easy to spot whether a histogram was present in the past just by looking at WRI$_OPTSTAT_HISTHEAD_HISTORY.FLAGS, without having to access the larger WRI$_OPTSTAT_HISTGRM_HISTORY.
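For example, a hedged sketch of the kind of query this enables (my own illustration based on the guess above, using the OBJ# from the test case; treat the 64 bit as an approximation, not a documented flag):

select obj#, intcol#, savtime,
       case when bitand(flags, 64) = 64 then 'YES' else 'NO' end as had_histogram
  from sys.wri$_optstat_histhead_history
 where obj# = 24897
 order by intcol#, savtime;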

SQLd360 v1526 has a new column HAD_HISTOGRAM added to the Column Statistics Version report built exactly on the assumptions made here.

Feedback welcome as usual, especially if it can prove wrong what I just said 😀



Introducing Pathfinder, is there a better plan for my SQL?

Pathfinder is a new free tool that provides an easy way to execute a SQL statement under multiple optimizer environments in order to generate different execution plans, potentially discovering better plans. The tool can also be used to quickly identify workarounds for wrong result bugs as well as slow parse issues.

Pathfinder uses the same brute-force approach as SQLT XPLORE, executing the SQL for every single CBO parameter and fix_control present in the database, with no installation required. This makes Pathfinder easy to run in any environment, including a production one (assuming you understand *WELL* what the tool does, how it works and what it means for your database).

Each test adds approximately 1 second of overhead to the time the SQL takes to complete, and the number of tests considered is pretty high: around 1,100 in 11.2.0.4 and around 1,500 in 12.1.0.2. Thus I suggest using Pathfinder on SQLs that take at most a few seconds to run (or just be ready to leave Pathfinder running for a loooong time).
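As a back-of-the-envelope example, a SQL that completes in 2 seconds would need roughly 1,100 x (2s + 1s) ≈ 55 minutes of Pathfinder runtime on 11.2.0.4, and proportionally more on 12.1.0.2.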

The tool executes the SQL statement present in file script.sql (provided); just modify the script and replace the seeded SQL with the one you want to execute. In the same script you can also add ALTER SESSION commands that will be executed before the desired SQL; this is helpful in case you want to influence the analysis by providing a different starting point.
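For example, a (hypothetical) script.sql could look like the following; the table and the ALTER SESSION setting are made up, only the /* ^^pathfinder_testid */ comment is mandatory:

-- optional ALTER SESSION commands, executed before the SQL to change the starting point
alter session set optimizer_index_cost_adj = 10;

-- the SQL to analyze, with the mandatory marker comment
select /* ^^pathfinder_testid */ count(*)
  from sales
 where channel_id = 3;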

To execute the tool, just download it from the Download section on the right side of this page (or from here; the tool will also be released as a standalone script in the same zip file as SQLd360) and follow these steps:

  1. Unzip pathfinder-master.zip, navigate to the root pathfinder directory, and connect as SYS to the database.
    $ unzip pathfinder-master.zip
    $ cd pathfinder-master
    $ sqlplus / as sysdba
  2. Open file script.sql and add your SQL in there. Make sure to add the mandatory comment /* ^^pathfinder_testid */. The file name must be script.sql, if you wish to change the name then just ping me.
  3. Execute pathfinder.sql and provide the connect string to connect as the user that is supposed to run the script.
  4. Unzip output file pathfinder_<dbname>_<date>.zip into a directory on your PC and review the results starting from file 00001_pathfinder_<dbname>_<date>_index.html

SQL> @pathfinder

Parameter 1:
Full connect string of the database to run the SQL into
If the database is remote or a PDB then you must include
the TNS alias i.e. scott/tiger@orcl

Enter value for 1: mpagano/mpagano@orcl
mpagano/mpagano@orcl
Building Pathfinder driver scripts
Connected.

1) "pathfinder_{ 20151026_1906 (00001)" 19:06:40 BASELINE

2) "pathfinder_{ 20151026_1906 (00002)" 19:06:42 "_add_stale_mv_to_dependency_list" = FALSE

.....
File pathfinder_orcl_20151023_1256.zip created.

For each test Pathfinder will show the setting considered as well as some basic statistics like Plan Hash Value, Elapsed Time, CPU Time, Buffer Gets and Rows processed. Also two links are present, one points to the details of the execution plan generated while the other points to the details of V$SQL.

The main page will look something like this:

[Figure: Screen Shot 2015-10-26 at 9.18.44 PM]

Pathfinder also considers the effect of Cardinality Feedback by executing the SQL multiple times until the plan stops changing (or CFB gives up after the fifth parse), for all those settings that lead to a first execution plan different from the baseline.
This is why for some settings you will see a Test# with an additional digit, the "reparse" number:

[Figure: Screen Shot 2015-10-26 at 9.22.23 PM]

For example, for Test# 117 above the CBO generated a different plan (PHV 1837274416) than the baseline (PHV 1838229974), and Cardinality Feedback kicked in 3 times, generating a different plan each time, until the 3rd parse when the CBO ended up with the same plan as the first execution (and no different plan was generated afterwards).

This is the first release of the tool so I expect it to be far from perfect, but I've already used it several times with success. Hopefully with time (and your feedback :D) the tool will get better and better.

I hope you enjoy it and please don't hesitate to get in touch with me with feedback, suggestions and bugs!!!