Mauro Pagano's Blog



Which Observations would you like to see in SQLd360?

SQLd360 v1617 finally includes a new “Observations” section (section 1F) that I pushed back on for a long, very long time 🙂

If you are not familiar with SQLTXPLAIN, think of the reports in this section as “yellow flags”: specific items that are not necessarily causing direct trouble to your SQL but are still questionable and need further investigation / verification / blessing before being kept in place.

There are many reasons why I pushed back for so long, the first being that the results can easily be misleading and make you believe the underlying cause is A when maybe it's Z. Another reason is that most of the observations ended up just being informative, with no action taken against them, yet you still had to scroll through hundreds of them.
Because of this, combined with the need to keep the checks “useful”, the current list of observations is intentionally short (and incomplete as of now); it includes only observations for:

  • Some system-wide settings, e.g. CBO parameters, OFE version, etc.
  • A few pieces of plan- and cursor-specific information, e.g. an index referenced in some plan that is now missing
  • Table statistics, e.g. partitions that are empty according to stats

The list won't grow much based on my own ideas, for the same reason it's short now: I don't want to implement checks I believe are important when 99% of people don't care about them.

That's the reason for this blog post: I need your feedback and ideas so I can implement what you care about 🙂
Let me know what you would like to have in the Observations section and I'll work on it!
Just keep in mind the goal is to keep that section relatively fast, so super-complex checks that take 10 minutes to execute are off the list.

Note: v1617 also turns off a couple of less-used features like TCB and AWR reports by default (they can easily be enabled again via the config file), so don't be surprised if they don't show up in column 5.



How to find file and block# to dump in Exadata

Probably because of my past with Oracle Support, I find myself collecting traces and dumps on a regular basis. For example, every time I see a row source operation that I would expect to perform only multiblock reads (e.g. a Full Table Scan) doing single-block ones, I collect some block dumps to understand why that is happening. It's just a matter of looking at the raw SQL trace file (or P1/P2 from ASH, for that matter) for the unexpected single-block reads, grabbing file# and block#, and dumping the block.
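Outside of Exadata that last step is a one-liner; a minimal sketch, using the file# 23 / block# 3946823 values that show up later in this post:

-- file# and block# grabbed from the unexpected single block read
alter system dump datafile 23 block 3946823;

The dump ends up in a trace file in the session trace directory.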

Single block reads in Exadata become “cell single block physical read” and the P1/P2 for the wait event don't show the file#/block# but rather the cellhash# and diskhash# where the data came from. This is a little annoying to me because I can't just grab cellhash#/diskhash# (plus bytes, which matches the block size, it being a single block read) and dump that, AS FAR AS I KNOW (corrections are very welcome here).

The way I usually work around it is by enabling KFKIO tracing (trace[KFKIO], using the new 11g syntax) before running the “guilty” SQL, so that each read now looks like this

...
2016-05-05 19:05:41.351021 : KFK_IODONE_OSS:iop:0x7f1841034000 bufp:0x1124ae000 op:1 au:120555 dsk:0x2 err:1 osderr:0 ossp:0x7f1841166fc8 ossp->errcode_oss_result:(nil) io elapsed time: 0 usec time waited on io: 0 usec
2016-05-05 19:05:41.351052 : kfk_iodone_oss_retry::in iop:0x7f1841034000
2016-05-05 19:05:41.351062 : kfk_iodone_oss_retry::out retry:0
2016-05-05 19:05:41.351071 : KFK:IODONE:iop:0x7f1841034000 wt:1 sstype:1 mastp:(nil) result:1 path:o/192.168.12.5/DATA_CD_01_enkx3cel03 au_offset:2678784 operation:1 reason:3347 iosize:8192 bufp:0x1124ae000
WAIT #139742149463208: nam='cell single block physical read' ela= 346 cellhash#=379339958 diskhash#=3782402265 bytes=8192 obj#=115376 tim=1462493141351085
...
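For reference, turning that tracing on and off looks something like this (a sketch of the unified tracing syntax; KFKIO is very verbose, so keep the window as small as possible):

alter session set events 'trace[KFKIO] disk=high';
-- run the guilty SQL, then
alter session set events 'trace[KFKIO] off';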

What I care about is:

  • path:o/192.168.12.5/DATA_CD_01_enkx3cel03 -> where the data is stored
  • au:120555 -> the ASM allocation unit where my block is stored
  • au_offset:2678784 -> how far into the allocation unit my block is

Using this info I can go to the ASM instance (the reason it has to be the ASM instance is that X$KFFXP isn't exposed in the RDBMS instance) and, using the following SQL, extract the file name and block number

SELECT a.name file_name, 
       xnum_kffxp * (&&au_size./&&db_block_size.) + &&au_offset./&&db_block_size. block_num
  FROM x$kffxp x, 
       v$asm_alias a
 WHERE (x.disk_kffxp, x.group_kffxp) IN (SELECT number_kfdsk, grpnum_kfdsk 
                                           FROM x$kfdsk 
                                          WHERE path_kfdsk = '&&path.')
   AND x.au_kffxp = &&au.
   AND x.number_kffxp = a.file_number
   AND x.incarn_kffxp = a.file_incarnation;

The SQL will ask for 5 parameters, 3 of which are already available from the trace, while the other two (both one query away, see below) are:

  • au_size -> Allocation Unit size -> V$ASM_DISKGROUP.ALLOCATION_UNIT_SIZE
  • db_block_size -> the DB_BLOCK_SIZE
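Both are one query away; a sketch (the first against the ASM instance, the second against the RDBMS one):

-- AU size per diskgroup
select name, allocation_unit_size from v$asm_diskgroup;
-- DB block size
select value from v$parameter where name = 'db_block_size';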

So, for example, using the values from above plus a 4194304-byte au_size and an 8192-byte block size, the result will be

FILE_NAME                                            BLOCK_NUM
-------------------------------------------------- -----------
USERS.1155.861540195                                   3946823
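To sanity-check the arithmetic behind BLOCK_NUM: 4194304 / 8192 = 512 blocks per AU and 2678784 / 8192 = 327 blocks into the AU, so block_num = xnum_kffxp * 512 + 327; the returned 3946823 therefore corresponds to extent number 7708 (7708 * 512 + 327 = 3946823).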

Disclaimer: the SQL could take a little while since it needs to scan the list of extents.

I know it’s a little convoluted but as of now I haven’t found a more efficient way to identify file/block to dump starting from cellhash#/diskhash#.

It goes without saying that I'd love to hear smarter ways to get it done 🙂

UPDATE: Stefan Koehler (who is awesome on top of being a friend) made a very valid point on Twitter: event 10200 can lead to the same info without having to go to the ASM instance. I discarded the event when I started looking at this because it was a little too verbose, but then I ended up with KFKIO, which is even more verbose! :-(
Stupid me, and thanks to Stefan!
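For completeness, a sketch of setting the event using the classic event syntax (level 1 shown, assuming it is enough to get the ktrget2() lines below):

alter session set events '10200 trace name context forever, level 1';
-- run the SQL, then turn it off
alter session set events '10200 trace name context off';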

An output of event 10200 looks like this

ktrget2(): started for block  0x0005 : 0x05fc3947 objd: 0x0001c2b0
...
WAIT #139650546118344: nam='cell single block physical read' ela= 380 cellhash#=379339958 diskhash#=3782402265 bytes=8192 obj#=115376 tim=1462544482448025

where 05fc3947 is the DBA (data block address), 100415815 in decimal, and using DBMS_UTILITY we can extract file# and block# from it

SELECT dbms_utility.data_block_address_file(100415815) file_n,
       dbms_utility.data_block_address_block(100415815) block_n
  FROM dual;

    FILE_N    BLOCK_N
---------- ----------
        23    3946823
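For reference, the same split can be done by hand: a data block address packs the relative file number in the top 10 bits and the block number in the low 22 bits, so file_n = trunc(100415815 / 4194304) = 23 and block_n = mod(100415815, 4194304) = 3946823, matching the output above.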



eAdam, SQLd360 hidden gem

The title is actually VERY wrong! eAdam isn't a gem hidden inside SQLd360, it's a standalone tool developed by Carlos Sierra, and it's been around for way longer than SQLd360. You can read more about eAdam here, but in short its goal is to export AWR data in a portable way so that it can be restored in another database, something like a raw version of AWR Warehouse (kind of).

Every time you run SQLd360, the tool collects a reduced version of eAdam just for ASH data (both GV$ and DBA_HIST) for the SQL of interest, packs the result into the zip file, and links it on the main page under column 5, “eAdam ASH”. The reason for doing so is that SQLd360 has tons of reports built on top of the most important columns of ASH, but what if you want to query another column that isn't present in any report? With eAdam you basically have the whole ASH for this SQL ID, to do all the data mining you want!

I'm writing this post because I realized I never advertised this functionality much, and every time I talk about it with somebody, he/she looks at me like “what are you talking about? I've never seen it”.

So let me show you how easy it is to load eAdam data coming from SQLd360 into a target database! I'm assuming you already have eAdam installed (if not, just follow the first two steps in “Instructions – Staging Database” from this link).

  1. Grab file NNNN_sqld360_<<hostname_hash>>_<<sql_id>>_5_eadam_ash.tar from inside the SQLd360 zip; it will be one of the last files.
  2. Place it into <<path to eAdam>>/stage_system and just run eadam_load.sql as the eAdam user you created during the installation.

Done!

You now have two tables in your eAdam schema, called GV_ACTIVE_SESSION_HISTORY_S and DBA_HIST_ACTIVE_SESS_HIST_S, with all the ASH data for your SQL ID!
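From there any ASH column is fair game; a sketch (assuming the _S tables mirror the columns of their GV$/DBA_HIST counterparts) mining temp space usage, a column the canned reports don't chart:

select sample_time, session_id, temp_space_allocated/1024/1024 temp_mb
  from gv_active_session_history_s
 where temp_space_allocated > 0
 order by sample_time;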

 



Introducing TUNAs360 – Tuning with Active Sessions without Diagnostic Pack

 

I'm happy to introduce a new free tool that lets you get initial insights into a database using eDB360/SQLd360-like charts, without Diagnostic Pack!!!

(TUN)ing with (A)ctive (S)essions, aka TUNAs360, requires no installation and no parameters when executed; it observes the workload for a few minutes and then collects a set of reports on that load.

The idea behind the tool is to help you get started in those situations where there is a concern about database performance but the reason is unknown, the classic “the database is slow but I don't know why”.
In case further investigation is necessary, eDB360 or SQLd360 can then be leveraged.

Once executed, TUNAs360 provides several reports for the whole database (cluster- and node-specific) plus additional reports to help drill into the details of the top 5 sessions and top 5 SQLs during the time the load was observed.

The tool belongs to the “360 family”, so it has the same look and many of the charts of its bigger brothers; the main page looks like this:

[Screenshot: TUNAs360 main page]

To execute the tool (see the condensed run sketch after the steps):

  1. Download the tool from the link on the right side of the screen.
  2. Unzip file tunas360-master.zip
  3. Navigate into tunas360-master folder
  4. Connect as a DBA user and run script tunas360.sql
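In practice the whole run boils down to something like this (a sketch; “/ as sysdba” is just an example, any DBA user works):

$ unzip tunas360-master.zip
$ cd tunas360-master
$ sqlplus / as sysdba
SQL> @tunas360.sql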

The tool will run for a few minutes and generate a zip file named tunas360_<dbname>_<host>_YYYYMMDD_HH24MI.zip.

As with every other tool on its first version, I don't expect it to be perfect, so please please please let me know if you run into problems so that I can fix them 😀

Feedback and suggestions are welcome as usual too!!!

 

 



Top Executions SQL Monitoring style reports in SQLd360

I think SQL Monitoring is an amazing tool when it comes to SQL tuning, but I often find that, for one reason or another, the report is almost never around for post-mortem investigation.
Historical SQL Monitoring reports were introduced in 12c, but the decision whether or not to collect the report for the SQL ID we are interested in still depends on several factors we have no control over after the issue has happened 😦

SQLd360 tries to alleviate this “issue” by including ASH-based charts that provide similar information; those have been available for a long time in the Plan Details page, organized by Plan Hash Value (PHV).
The main difference between SQL Monitoring (SM) and SQLd360 is the scope: SM provides info for a single execution, while SQLd360 aggregates info from all the executions active at a specific point in time. Info for recent executions (V$ACTIVE_SESSION_HISTORY) is aggregated by minute, while historical executions (DBA_HIST_ACTIVE_SESS_HISTORY) are aggregated by hour.
That section of SQLd360 looks like this:

[Screenshot: Plan Details page, ASH-based charts organized by PHV]

 

Starting with SQLd360 v1601, a new set of reports is provided for the Top-N executions per PHV, where a “top execution” is one of those with the highest number of ASH samples.
The goal is to replicate, as closely as possible, the SQL Monitoring functionality using just ASH data, which tends to be around for much longer than a SQL Monitoring report 🙂

The data is not aggregated, so the granularity is 1s for samples from memory (V$) and 10s for historical samples (DBA_HIST).
With this level of granularity, combined with the lack of aggregation, other types of data visualization make sense, e.g. timelines to identify when a specific section of the execution plan was active (which section is the “bad guy” can be answered from the tree chart); that's the “Plan Step IDs timeline” in the figure below, which will need its own blog post 😀.
This new section of the report looks like this:

[Screenshot: Top-N executions reports, including the Plan Step IDs timeline]

So, for example, for each execution we can see the active sessions (plural in case of PX, and not “Average Active Sessions”, since there is no timeframe aggregation) with the associated CPU/wait events over time, just like in SQL Monitoring (to be fair, SQL Monitoring is able to provide sub-second details, which are not available in ASH).

[Screenshot: per-execution active sessions over time with CPU/wait events]

Hopefully you'll find this new section useful, especially when ASH is all you've got 😀

Final note: the number of Top-N executions is configurable in file sql/sqld360_00_config.sql by altering the value of sqld360_conf_num_top_execs (default is 3).
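A sketch of what that tweak might look like (hypothetical, check the actual file; the config scripts are plain SQL*Plus, so the value is a DEFINE variable):

-- chart the top 5 executions per PHV instead of the default 3
DEF sqld360_conf_num_top_execs = '5';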

As usual feedback, comments, suggestions are all more than welcome!

 



Execution plan tree temperature

 

During the Xmas holidays I made several additions to SQLd360 that had been on my TODO list for quite a while; I'll try to blog about the most important ones in the coming days.

Something I had wanted to do for a long time was to make understanding execution plans easier; I hope the tree representation introduced here achieved that goal.

SQLd360 v1601 takes this chart a step further, marking nodes with different colors depending on how often each execution plan step shows up in ASH. Basically, depending on “how hot” (the temperature) each step is, a color between yellow and red is used for the node, making it easier to determine which section of the plan deserves your attention.
All the steps that never show up in ASH are represented in white.

So, for example, a silly SQL like the following takes 21 seconds in my database:

select count(*) 
  from (select rownum n1 from dual connect by rownum <= 10000),   
       (select rownum n1 from dual connect by rownum <= 100000)

------------------------------------------------------------------------
| Id  | Operation                         | Name | E-Rows | Cost (%CPU)|
------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |      |        |     4 (100)|
|   1 |  SORT AGGREGATE                   |      |      1 |            |
|   2 |   MERGE JOIN CARTESIAN            |      |      1 |     4   (0)|
|   3 |    VIEW                           |      |      1 |     2   (0)|
|   4 |     COUNT                         |      |        |            |
|   5 |      CONNECT BY WITHOUT FILTERING |      |        |            |
|   6 |       FAST DUAL                   |      |      1 |     2   (0)|
|   7 |    BUFFER SORT                    |      |      1 |     4   (0)|
|   8 |     VIEW                          |      |      1 |     2   (0)|
|   9 |      COUNT                        |      |        |            |
|  10 |       CONNECT BY WITHOUT FILTERING|      |        |            |
|  11 |        FAST DUAL                  |      |      1 |     2   (0)|
------------------------------------------------------------------------

18 ASH samples captured the execution on the BUFFER SORT step and 3 samples captured the SORT AGGREGATE (21 samples in total, matching the 21-second runtime at the 1-second ASH sampling rate).

The execution plan tree temperature looks like this

[Screenshot: execution plan tree temperature]

Hopefully this will make it easier for people who don't look at execution plans all day to quickly spot where they should focus their attention 😀



OR-expansion and Parallel Execution

DISCLAIMER 😀 : probably because of my previous job, I REALLY don't like to write blog posts about potential or confirmed bugs in the Oracle code. This time is no exception; the focus is on the curious plan and not the bugs around it.

I have a SQL with an OR chain that undergoes the OR-expansion transformation and is executed in parallel. In the real case where I noticed this, both decisions (OR-expansion and parallel) were made by the optimizer; here I'll force them both to keep the testcase very simple.

Here is the single table:

drop table t1;
create table t1 as select * from dba_objects;
exec dbms_stats.gather_table_stats(user,'T1');
create index t_idx on t1(status);

and my SQL is

select /*+ USE_CONCAT(@SEL$1) PARALLEL(4) */ owner, count(*) mycount
  from t1 a
 where status between 'A' and 'B' or status = 'VALID'
 group by owner;

The USE_CONCAT hint forces the OR-expansion; it works in this case even though it's VERY hard to write it properly by hand (especially after it was enhanced to be semantically correct, bug fix 8429273).

The execution plan is way more complex than I would expect

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |          |    15 |   195 |    47   (7)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                  |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)            | :TQ20002 |    15 |   195 |            |          |  Q2,02 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                 |          |    15 |   195 |            |          |  Q2,02 | PCWP |            |
|   4 |     PX RECEIVE                   |          |    15 |   195 |            |          |  Q2,02 | PCWP |            |
|   5 |      PX SEND HASH                | :TQ20001 |    15 |   195 |            |          |  Q2,01 | P->P | HASH       |
|   6 |       HASH GROUP BY              |          |    15 |   195 |            |          |  Q2,01 | PCWP |            |
|   7 |        CONCATENATION             |          |       |       |            |          |  Q2,01 | PCWP |            |
|   8 |         PX BLOCK ITERATOR        |          |       |       |            |          |  Q2,01 | PCWC |            |
|*  9 |          TABLE ACCESS FULL       | T1       | 10385 |   131K|    23   (5)| 00:00:01 |  Q2,01 | PCWP |            |
|  10 |         BUFFER SORT              |          |       |       |            |          |  Q2,01 | PCWC |            |
|  11 |          PX RECEIVE              |          |  8072 |   102K|    23   (5)| 00:00:01 |  Q2,01 | PCWP |            |
|  12 |           PX SEND ROUND-ROBIN    | :TQ20000 |  8072 |   102K|    23   (5)| 00:00:01 |        | S->P | RND-ROBIN  |
|  13 |            BUFFER SORT           |          |    15 |   195 |            |          |        |      |            |
|  14 |             PX COORDINATOR       |          |       |       |            |          |        |      |            |
|  15 |              PX SEND QC (RANDOM) | :TQ10000 |  8072 |   102K|    23   (5)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|  16 |               BUFFER SORT        |          |    15 |   195 |            |          |  Q1,00 | PCWP |            |
|  17 |                PX BLOCK ITERATOR |          |  8072 |   102K|    23   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|* 18 |                 TABLE ACCESS FULL| T1       |  8072 |   102K|    23   (5)| 00:00:01 |  Q1,00 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - filter("STATUS"<='B' AND "STATUS">='A')
  18 - filter("STATUS"='VALID' AND (LNNVL("STATUS"<='B') OR LNNVL("STATUS">='A')))

The most surprising part IMHO is the second DFO tree under step 14.
Basically the first branch of the CONCATENATION is executed under one DFO tree while the second under another DFO tree.
The main side effect of this choice is likely inferior performance, mostly caused by:

  1. The slaves that execute steps 17-18 will have to send the data to the QC at step 14, which in turn will distribute the data again to the other PX processes. Basically the execution will be P->S->P.
  2. Having two DFO trees means that each of them can potentially be downgraded, further slowing down the performance.

This plan seems to be a consequence of the fix for wrong-results bug 9732434 (superseded by 10411726, in turn superseded by 17006727). Strangely enough, the fix for 9732434 has a fix_control to turn it off (wrong-result fixes, as well as ORA-600/7445 ones, usually have no reason to be turned off), so it probably controls just some small part rather than the whole fix.

Turning the fix_control for 9732434 off (OPT_PARAM('_fix_control' '9732434:0')), the plan has a more familiar shape (the full hinted statement is shown after the plan):

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |          |    15 |   195 |    47   (7)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR           |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)     | :TQ10001 |    15 |   195 |            |          |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY          |          |    15 |   195 |            |          |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE            |          |    15 |   195 |            |          |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH         | :TQ10000 |    15 |   195 |            |          |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY       |          |    15 |   195 |            |          |  Q1,00 | PCWP |            |
|   7 |        CONCATENATION      |          |       |       |            |          |  Q1,00 | PCWP |            |
|   8 |         PX BLOCK ITERATOR |          |       |       |            |          |  Q1,00 | PCWC |            |
|*  9 |          TABLE ACCESS FULL| T1       | 10385 |   131K|    23   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|  10 |         PX BLOCK ITERATOR |          |  8072 |   102K|    23   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|* 11 |          TABLE ACCESS FULL| T1       |  8072 |   102K|    23   (5)| 00:00:01 |  Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   9 - filter("STATUS"<='B' AND "STATUS">='A')
  11 - filter("STATUS"='VALID' AND (LNNVL("STATUS"<='B') OR LNNVL("STATUS">='A'))) 
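For completeness, the statement with the fix turned off is just the original SQL with one extra hint (a sketch combining the hints shown above):

select /*+ USE_CONCAT(@SEL$1) PARALLEL(4) OPT_PARAM('_fix_control' '9732434:0') */ owner, count(*) mycount
  from t1 a
 where status between 'A' and 'B' or status = 'VALID'
 group by owner;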

So if you have some SQLs that use OR-expansion and run in parallel, you may want to take a closer look at them.
