Mauro Pagano's Blog



Histograms come histograms go

Actually the title is likely incorrect, in the sense that I think histograms don’t “go” once they come. Or at least I’ve never seen a case where consistently gathering statistics using method_opt “SIZE AUTO” makes a histogram go away sometime after the same syntax made it show up (while it’s common for a histogram to just show up sometime in the future because of column usage).

Recently I’ve seen several systems where custom statistics gathering procedures were coupled with the automatic stats gathering job (voluntarily or not is another story… 😀 ) triggering all sort of weirdness in the statistics. Reason was a syntax mismatch between the custom job (method_opt “SIZE 1”) and the automatic job (default is method_opt “SIZE AUTO”).

A reliable way to figure out if histograms popped up / disappeared is to look at WRI$_OPTSTAT_HISTGRM_HISTORY; too bad this table is usually large and expensive to access (and not partitioned).
Another table is available though, WRI$_OPTSTAT_HISTHEAD_HISTORY, which is the historical version of the dictionary table HIST_HEAD$ (1 row per histogram), but while the dictionary table has BUCKET_CNT and ROW_CNT columns (so it's easy to spot a histogram) those columns are gone once the info is stored in WRI$. Wouldn't it be nicer to just have a column in WRI$_OPTSTAT_HISTHEAD_HISTORY saying "histogram was in place"?

Let’s see if we can figure out a (approximate) way that can save us a trip to WRI$_OPTSTAT_HISTGRM_HISTORY.

Table WRI$_OPTSTAT_HISTHEAD_HISTORY has a column FLAGS that is populated using the following expression in 11.2.0.4 (extracted from a 10046 trace of DBMS_STATS.GATHER_TABLE_STATS):


bitand(h.spare2,7) + 8 + decode(h.cache_cnt,0,0,64)

While it's tricky and risky to guess what SPARE2 might store, it should be pretty easy to play with CACHE_CNT: we just need to gather stats with and without histograms and see how the value changes.

SQL> create table test_flag (n1 number);
SQL> insert into test_flag select mod(rownum,200) n1 from dual connect by rownum <= 10000;
SQL> insert into test_flag select * from test_flag;
--repeat it a few times to make the data grow
SQL> commit;
SQL> exec dbms_stats.gather_table_stats('MPAGANO','TEST_FLAG',method_opt=>'FOR ALL COLUMNS SIZE 254');
SQL> select obj#, col#, bucket_cnt, row_cnt, cache_cnt, null_cnt, sample_size from sys.hist_head$ where obj# in (select object_id from dba_objects where object_name = 'TEST_FLAG');

      OBJ#       COL# BUCKET_CNT    ROW_CNT  CACHE_CNT   NULL_CNT SAMPLE_SIZE
---------- ---------- ---------- ---------- ---------- ---------- -----------
     24897          1       5487        200         10          0        5487

SQL> exec dbms_stats.gather_table_stats('MPAGANO','TEST_FLAG',method_opt=>'FOR ALL COLUMNS SIZE 1');
SQL> select obj#, col#, bucket_cnt, row_cnt, cache_cnt, null_cnt, sample_size from sys.hist_head$ where obj# in (select object_id from dba_objects where object_name = 'TEST_FLAG');

      OBJ#       COL# BUCKET_CNT    ROW_CNT  CACHE_CNT   NULL_CNT SAMPLE_SIZE
---------- ---------- ---------- ---------- ---------- ---------- -----------
     24897          1          1          0          0          0     2560000

In the first run we gathered stats with "SIZE 254" and a 200-bucket histogram was created; a sample size of ~5500 is not surprising and matches the AUTO_SAMPLE_SIZE used. In this case CACHE_CNT was 10.

In the second run we gathered stats with “SIZE 1” and no histogram was created. In this case CACHE_CNT was 0.

I couldn't determine with 100% accuracy what CACHE_CNT represents (if any reader knows, please comment on this post) but my educated guess is that it's the number of entries to be cached in the row cache; unfortunately this is really just a guess since I didn't even find a parameter to change this value (again, whoever knows better please let me know).

I ran multiple tests and it seems like CACHE_CNT has a value larger than 0 only when a histogram is present (assuming my guess about what it means is accurate), thus the value of WRI$_OPTSTAT_HISTHEAD_HISTORY.FLAGS is bumped by 64 only when a histogram is present.
With this info in mind it's now pretty easy to spot if a histogram was present in the past just by looking at WRI$_OPTSTAT_HISTHEAD_HISTORY.FLAGS, without having to access the larger WRI$_OPTSTAT_HISTGRM_HISTORY.
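
For example, a query along these lines can flag historical histograms (just a sketch built on the assumption above; column names like OBJ#, INTCOL# and SAVTIME are as I recall them, so double-check them on your release):

-- Sketch: spot columns that had a histogram in the past, assuming bit 64 of FLAGS
-- is set only when a histogram exists (the educated guess discussed above).
select o.object_name, h.intcol#, h.savtime,
       case when bitand(h.flags, 64) = 64 then 'YES' else 'NO' end had_histogram
  from sys.wri$_optstat_histhead_history h, dba_objects o
 where h.obj# = o.object_id
   and o.object_name = 'TEST_FLAG'
 order by h.intcol#, h.savtime;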

SQLd360 v1526 has a new column HAD_HISTOGRAM added to the Column Statistics Version report built exactly on the assumptions made here.

Feedback welcome as usual, especially if it can prove wrong what I just said 😀



Introducing Pathfinder, is there a better plan for my SQL?

Pathfinder is a new free tool that provides an easy way to execute a SQL statement under multiple optimizer environments in order to generate different execution plans, potentially discovering better plans. The tool can also be used to quickly identify workarounds for wrong result bugs as well as slow parse issues.

Pathfinder uses the same brute-force approach as SQLT XPLORE, executing the SQL for every single CBO parameter and fix_control present in the database, with no installation required. This makes Pathfinder easy to run in any environment, including a production one (assuming you understand *WELL* what the tool does, how it works and what it means for your database).

Each test adds approximately 1 second of overhead to the time the SQL takes to complete and the number of tests is pretty high (around 1100 in 11.2.0.4, around 1500 in 12.1.0.2), so I suggest using Pathfinder on SQLs that take at most a few seconds to run (or just be ready to leave Pathfinder running for a loooong time).

The tool executes the SQL statement present in the provided file script.sql: just modify the script and replace the seeded SQL with the one you want to execute. In the same script you can also add ALTER SESSION commands that will be executed before the desired SQL, which is helpful in case you want to influence the analysis by providing a different starting point.
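
For instance, a customized script.sql could look something like this (just a sketch: the ALTER SESSION and the SQL below are made-up examples, the seeded script shipped in the zip is the reference, and the /* ^^pathfinder_testid */ comment mentioned in the steps below is mandatory):

alter session set optimizer_index_cost_adj = 50;

select /* ^^pathfinder_testid */ count(*)
  from orders
 where order_date >= sysdate - 7;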

To execute the tool just download it from the Download section on the right side of this page (or from here; the tool will also be released as a standalone script in the same zip file as SQLd360) and follow these steps:

  1. Unzip pathfinder-master.zip, navigate to the root pathfinder directory, and connect as SYS to the database.
    $ unzip pathfinder-master.zip
    $ cd pathfinder-master
    $ sqlplus / as sysdba
  2. Open file script.sql and add your SQL in there. Make sure to add the mandatory comment /* ^^pathfinder_testid */. The file name must be script.sql, if you wish to change the name then just ping me.
  3. Execute pathfinder.sql and provide the connect string to connect as the user that is supposed to run the script.
  4. Unzip output file pathfinder_<dbname>_<date>.zip into a directory on your PC and review the results starting from file 00001_pathfinder_<dbname>_<date>_index.html

SQL> @pathfinder

Parameter 1:
Full connect string of the database to run the SQL into
If the database is remote or a PDB then you must include
the TNS alias i.e. scott/tiger@orcl

Enter value for 1: mpagano/mpagano@orcl
mpagano/mpagano@orcl
Building Pathfinder driver scripts
Connected.

1) "pathfinder_{ 20151026_1906 (00001)" 19:06:40 BASELINE

2) "pathfinder_{ 20151026_1906 (00002)" 19:06:42 "_add_stale_mv_to_dependency_list" = FALSE

.....
File pathfinder_orcl_20151023_1256.zip created.

For each test Pathfinder will show the setting considered as well as some basic statistics like Plan Hash Value, Elapsed Time, CPU Time, Buffer Gets and Rows processed. Also two links are present, one points to the details of the execution plan generated while the other points to the details of V$SQL.

The main page will look something like this:

Screen Shot 2015-10-26 at 9.18.44 PM

Pathfinder also considers the effect of Cardinality Feedback, executing the SQL multiple times until the plan stops changing (or CFB gives up after the fifth parse), for all those settings that lead to a first execution plan different from the baseline.
This is why for some settings you will see a Test# with an additional digit, the "reparse" number:

Screen Shot 2015-10-26 at 9.22.23 PM

For example, for Test# 117 above the CBO generated a different plan (PHV 1837274416) than the baseline (PHV 1838229974) and Cardinality Feedback kicked in 3 times, generating a different plan each time, until the 3rd parse when the CBO ended up with the same plan as the first execution (and no different plan was generated after that).

This is the first release of the tool so I expect it to be far from perfect, but I've already used it several times with success. Hopefully with time (and your feedback :D) the tool will get better and better.

I hope you enjoy it and please don't hesitate to get in touch with me for feedback, suggestions and bugs!!!



Scalar Subquery Caching and read consistency

Every time I discuss this topic with somebody I get the “what are you talking about?” look on his/her face so this is just one of those “repetita iuvant” posts to make sure we (including me first) don’t get burned by it 😀

Each SQL executed from PL/SQL can run at a different SCN and thus "see" different data depending on when the SQL is started. It is known, documented and easy to prove.

In session 1:
declare
 l_sal number;
begin
 select sal
   into l_sal
   from emp
  where empno = 7369;
 dbms_output.put_line('First salary: '||l_sal);
 dbms_lock.sleep(10);
 select sal
   into l_sal
   from emp
  where empno = 7369;
 dbms_output.put_line('Second salary: '||l_sal);
end;
/

In session 2 while the previous block is still running:
update emp set sal = 0 where empno = 7369;
commit;

Result will be
First salary: 800
Second salary: 0

The same applies when a SQL calls a PL/SQL program which in turn calls SQLs.
This is documented and easy to prove as well, plus a nice presentation from Bryn Llewellyn (PL/SQL Product Manager, I think) is available here. I'll skip the steps to keep the post short.

So a SQL calling PL/SQL calling SQL is a big NO if the tables accessed by the recursive SQLs aren't static (read only?) or if you are not willing to get weird results like the following, where the salary for the same employee is different within the same resultset (and I've no idea who would be willing 😀 )

create or replace function get_salary(n1 number) return number is
l_sal number;
begin 
  dbms_lock.sleep(1);
  select sal
    into l_sal
    from emp
   where empno = n1;
  return l_sal;
end;
/
select empno, get_salary(empno) sal 
  from (select /*+ NO_MERGE */ * from emp where empno in (7369, 7499, 7521)) a, 
       (select /*+ NO_MERGE */ rownum from dual connect by rownum <5) b;

     EMPNO	  SAL
---------- ----------
      7369	  800
      7499	 1600
      7521	 1250
      7369	    0
      7499	    0
      7521	    0
      7369	    0
      7499	    0
      7521	    0
      7369	    0
      7499	    0
      7521	    0

Elapsed: 00:00:12.01

Scalar Subquery Caching is a caching mechanism to cache the result of a function call, thus potentially avoiding a call to the function for the same input values and so improving performance. The mechanism is explained in Bryn presentation, Tim Hall video on youtube and lots of other places so no need to add much here 😉

Would Scalar Subquery Caching make my recursive SQL read-consistent to its “parent” SQL?
To be honest I don’t see why it should but I thought about it up to the point of starting to make up weird hypothesis with no foundation so better cut it short and test!

In session 1:
select empno, (select get_salary(empno) from dual) sal 
  from (select /*+ NO_MERGE */ * from emp where empno in (7369, 7499, 7521)) a, 
       (select /*+ NO_MERGE */ rownum from dual connect by rownum <5) b;

In session 2 (need to be quick this time) 
update emp set sal = 0 where empno in (7369, 7499, 7521); 
commit;

and the result in session 1:
     EMPNO	  SAL
---------- ----------
      7369	 1000
      7499	 1000
      7521	    0
      7369	 1000
      7499	 1000
      7521	    0
      7369	 1000
      7499	 1000
      7521	    0
      7369	 1000
      7499	 1000
      7521	    0

Elapsed: 00:00:03.00

The answer is “of course NO, what were you (me) thinking?!?”.
The previous output shows how scalar subquery caching indeed allowed get_salary to be called fewer times (once per distinct value of EMPNO, 3 in this case) and the elapsed time to drop from 12 to 3 seconds, but the value cached is just whatever the first execution of the function calculated for that EMPNO.



Little big change in SQLd360

Just a super-quick note on SQLd360: starting with version v1522 (released a couple of days ago), CSV and TEXT formats are disabled by default, so a report that looked like this before
Screen Shot 2015-08-20 at 8.36.47 AM
will now look like this
Screen Shot 2015-08-20 at 8.37.22 AM
Nothing to be alarmed about, it's expected!
In case you want the two formats back you can just edit file sql/sqld360_00_config.sql.

The main advantage of this change should be a significant reduction in the time it takes to collect the report (feedback welcome as usual) 😀
Also the number of rows per report is computed differently, which should save some extra time!

PS: the same change is in eDB360 v1526



UNUSABLE indexes and TRUNCATE

This is one of those “I knew it works this way, why do I keep falling for it?” so hopefully blogging about it will keep my memory fresh 😀

Starting with 11gR2, when you mark an index or an index partition UNUSABLE the segment is dropped (unless you are on 12c and use ONLINE); this is the intended behavior.

SQL> create table t_unusable (n1 number, n2 number);
SQL> create index t_unusable_idx on t_unusable(n1);
SQL> insert into t_unusable values (1,2);  -- this is just to materialize the segment
SQL> select index_name, segment_created from user_indexes where index_name = 'T_UNUSABLE_IDX';

INDEX_NAME		       SEG
------------------------------ ---
T_UNUSABLE_IDX		       YES

SQL> alter index t_unusable_idx unusable;
SQL> select index_name, segment_created from user_indexes where index_name = 'T_UNUSABLE_IDX';

INDEX_NAME		       SEG
------------------------------ ---
T_UNUSABLE_IDX		       NO

SQL> select object_id, data_object_id from user_objects where object_name = 'T_UNUSABLE_IDX';

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
     70989

(*)

This is a nice way to speed up large data loads since it reduces the REDO generated, plus it's faster to rebuild an index than to maintain it online.
What I keep forgetting is that if you truncate the table (or the partition) the segment is recreated, hence your data load process will end up maintaining the index online 😦

SQL> truncate table t_unusable;
SQL> select index_name, segment_created from user_indexes where index_name = 'T_UNUSABLE_IDX';

INDEX_NAME		       SEG
------------------------------ ---
T_UNUSABLE_IDX		       YES

SQL> select object_id, data_object_id from user_objects where object_name = 'T_UNUSABLE_IDX';

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
     70989	    70992

----- begin tree dump
leaf: 0x1024b7b 16927611 (0: nrow: 0 rrow: 0)
----- end tree dump

Conclusion (aka note to self): if you plan to use TRUNCATE + mark indexes UNUSABLE remember to TRUNCATE first and mark UNUSABLE after 😀
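
In SQL terms, the load would look something like this (just a sketch using the objects above, assuming the index gets rebuilt at the end):

SQL> truncate table t_unusable;                -- truncate first (this recreates the index segment)
SQL> alter index t_unusable_idx unusable;      -- mark it unusable after, so the segment is dropped again
SQL> insert /*+ append */ into t_unusable select rownum, rownum from dual connect by rownum <= 100000;
SQL> commit;
SQL> alter index t_unusable_idx rebuild;       -- rebuild once, after the load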

(*) there is no TREEDUMP here since the trace raised ORA-600 [25027]



Concurrent INSERT APPEND into the same table

The main reason I decided to have a blog was to share all the small things I learn every day while experimenting, but I have to admit I fell short of my initial intention; hopefully I'll do better in the coming months, starting with this super-quick post.

I always knew that concurrent (aka multiple sessions at the same time) INSERT /*+ APPEND */ into the same target table was not possible even if the target table was partitioned. Here is an example of a SQL

create table test_target (n1 number, n2 number)
partition by range(n1)
(partition p1 values less than (5),
 partition p2 values less than (10));

insert /*+ append */ into test_target select mod(rownum,5), rownum from dual connect by rownum <= 10;

The reason the concurrency isn't feasible is that the first session acquires a TM enqueue in X mode on the whole table (so every other session will have to wait), even if the inserted data only goes into a specific partition.
In the following extract SID 306 (my session) is holding the TM enqueue in X mode on object id 138542, which is the table, even though only partition P1 has been populated.

SQL> select * from v$lock where sid = 306;

       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
       306 TX     131102      11102          6          0         29          0          0
       306 TM     138542          0          6          0         29          0          3
       306 AE        133          0          4          0         86          0          3

SQL> select object_id, object_name, object_type from user_objects where object_id = 138542;

 OBJECT_ID OBJECT_NAME     OBJECT_TYPE
---------- --------------- -----------------------
    138542 TEST_TARGET     TABLE

If in another session (SID 186) I try to run an INSERT /*+ APPEND */ to populate only P2, then the execution will sit and wait for SID 306 to release the enqueue on object id 138542, as shown in the first row from V$LOCK where 186 wants to grab a TM enqueue in X mode for object id 138542.

SQL> insert /*+ append */ into test_target select mod(rownum,5)+5, rownum from dual connect by rownum <= 10;
-- (this INSERT hangs, waiting on SID 306)

SQL> select * from v$lock where sid in (306,186) order by sid;

       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
       186 TM     138542          0          0          6         94          0          3
       186 AE        133          0          4          0        104          0          3
       306 AE        133          0          4          0        640          0          3
       306 TM     138542          0          6          0        583          1          3
       306 TX     131102      11102          6          0        583          0          0 

Nothing new so far, it confirms what I expected.
What I didn’t know is that if “extended partition syntax” is used to specify the target partition (and if you specify the wrong one you get a correct ORA-14401) then the X lock is held on the partition and not the table (it’s held in SX on the table).
Here is the example

SID 306
insert /*+ append */ into test_target partition (p1) select mod(rownum,5), rownum from dual connect by rownum <= 10;

SID 186
insert /*+ append */ into test_target partition (p2) select mod(rownum,5)+5, rownum from dual connect by rownum <= 10;

SQL> select * from v$lock where sid in (306,186) and type = 'TM' order by sid;

       SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK     CON_ID
---------- -- ---------- ---------- ---------- ---------- ---------- ---------- ----------
       186 TM     138542          0          3          0         11          0          3
       186 TM     138544          0          6          0         11          0          3
       306 TM     138543          0          6          0         35          0          3
       306 TM     138542          0          3          0         35          0          3

SQL> select object_id, object_name, object_type from user_objects where object_id in (138542, 138543, 138544);

 OBJECT_ID OBJECT_NAME    OBJECT_TYPE
---------- -------------- -----------------------
    138544 TEST_TARGET    TABLE PARTITION
    138543 TEST_TARGET    TABLE PARTITION
    138542 TEST_TARGET    TABLE

So in case you know in advance where the data is going (which is generally true, especially for range/interval partitioning) then you can have multiple concurrent loads into the same target table.
I’m not recommending it, just saying it’s possible 😀



How to quickly identify if a SQL running in parallel has been downgraded using SQLd360

This is just a quick note on how to leverage a couple of reports in order to identify if a PX execution has been downgraded. Those reports have been around for quite some time so this isn't about any new feature but rather about how to use what's already there 😉

An execution is downgraded when the number of PX slaves used by the SQL at runtime is lower than the requested number. There are a few reasons why it can happen, the most common one being the lack of available PX slaves at the time the SQL starts. Such a downgrade can cause anything from small to catastrophic performance degradation depending on the severity of the downgrade (measured in %), since the CBO generates an execution plan expecting to have some specific horsepower (aka number of PX slaves) at its disposal but then such horsepower is reduced. It's like planning how long it will take to drive from A to B with a Ferrari and then going to the garage and finding out you only have a Jetta 😀

SQLd360 aims at making it easier (and quicker) to identify if an execution was downgraded, let’s see how.

The requested DoP for a SQL is stored in the OTHER_XML column; for each PHV/source SQLd360 provides in the "SQL Performance Summary" the MIN and MAX DoP requested, which is likely to stay stable over time (exception made for Adaptive Degree Policy or Adaptive Multi User, which aren't that common anyway).
Here is an example of a SQL with a requested DoP of 4:

Screen Shot 2015-08-06 at 8.58.09 AM
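
If you want to pull the requested DoP by hand, something along these lines should work (a sketch: the exact element name for the DoP inside OTHER_XML is my assumption, so verify it on your version):

select p.sql_id, p.plan_hash_value,
       extractvalue(xmltype(p.other_xml), '/*/info[@type="dop"]') requested_dop
  from v$sql_plan p
 where p.sql_id = '&sql_id'
   and p.other_xml is not null;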

Starting from 11gR2 the DoP the SQL executed with can be extracted from column PX_FLAGS in ASH (TRUNC(px_flags / 2097152), as already reported by Randolf here), so for each execution that made it into ASH SQLd360 reports the "execution DoP".
Here is the output from report “Elapsed Time for Recent Execs”:

Screen Shot 2015-08-06 at 9.18.43 AM

Column MAX_PX_DEGREE is the MAX(DoP) this specific execution used (see the note below about why MAX).
Column NUM_PROCESSES is the number of distinct ASH-sampled sessions that collaborated on this execution (up to 2*DoP).

From the report above it’s easy to spot how the execution from July 30 was indeed executed at the requested DoP (4) while the one from August 6 was downgraded (requested DoP 4, execution DoP 2).

Note: ASH reports the DoP by DFO tree, so if the plan had multiple DFO trees with different DoPs (uncommon but possible) just keep in mind the number you see in the report is the MAX.
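
For reference, a minimal ASH query along the same lines (just a sketch, not the SQLd360 report itself) could be:

select sql_exec_id,
       min(sample_time) exec_start,
       max(trunc(px_flags / 2097152)) max_px_degree,   -- runtime DoP (MAX across DFO trees)
       count(distinct inst_id||'.'||session_id||'.'||session_serial#) num_processes
  from gv$active_session_history
 where sql_id = '&sql_id'
   and px_flags is not null
 group by sql_exec_id
 order by 2;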



AutoDOP in 12c, what’s new and what’s old

The New Features guide for 12c reports enhancements to AutoDOP but unfortunately there isn't much detail, so let's try to dig a little to find out which are (some of) those enhancements.

The focus of this blog post is on how the Degree of Parallelism (DoP) is computed for three SQLs in 11.2.0.4 vs 12.1.0.2 (so not considering ADAPTIVE degree policy) reading a single table; joins will come in a separate post.

The first requirement to play with AutoDOP is to have IO Calibration statistics in place (UPDATE: the requirement applies to 11.2.0.x; in 12c default values are used when IO Calibration stats are missing, 200MB/s being the default for MAX_PMBPS).
Since all my tests are on VMs on my personal machine the IO response time is very unstable, so for the sake of consistent results I'll manually set the IO Calibration stats (AFAIK this isn't recommended; also the INSERT requires an instance restart to take effect).
We'll use 10 as the value for PMBPS (Max MBPS during parallel scan).

SQL> desc dba_rsrc_io_calibrate
 Name                  Null?    Type
 --------------------- -------- --------------
 START_TIME                     TIMESTAMP(6)
 END_TIME                       TIMESTAMP(6)
 MAX_IOPS                       NUMBER
 MAX_MBPS                       NUMBER
 MAX_PMBPS                      NUMBER
 LATENCY                        NUMBER
 NUM_PHYSICAL_DISKS             NUMBER

SQL> insert into dba_rsrc_io_calibrate values (systimestamp, systimestamp, 10000, 5, 10, 5, 10);
SQL> commit;

And here are the three tables, with 10k, 50k and 16M rows respectively, used in the tests.
Each row is created to account for pretty much 1 block (in an 8k block_size database) in the first two tables, so those tables grow large in size even with just a few rows, while the third table is designed to be very small.

SQL> drop table t10k purge;
SQL> drop table t50k purge;
SQL> drop table t16m purge;
SQL> create table t10k (n1 number, c1 char(2000), c2 char(2000), c3 char(2000));
SQL> create table t50k (n1 number, c1 char(2000), c2 char(2000), c3 char(2000));
SQL> create table t16m (n1 number, c1 char(1), c2 char(1), c3 char(1));
SQL> insert into t10k select rownum, 'a','b','c' from dual connect by rownum <= 10000;
SQL> insert into t50k select rownum, 'a','b','c' from dual connect by rownum <= 50000;
SQL> insert into t16m select rownum, 'a', 'b', 'c' from dual connect by rownum <= 250000; 
SQL> insert into t16m select * from t16m;  
SQL> /  
SQL> /  
SQL> /
SQL> /
SQL> /  
SQL> exec dbms_stats.gather_table_stats(user,'T10K');
SQL> exec dbms_stats.gather_table_stats(user,'T50K');
SQL>  exec dbms_stats.gather_table_stats(user,'T16M');
SQL> select table_name, num_rows, blocks from user_tables where table_name like 'T%' order by 2;
TABLE_NAME                       NUM_ROWS     BLOCKS
------------------------------ ---------- ----------
T10K                                10000      10097
T50K                                50000      50897
T16M				 16000000      35472

We’ll use three SQLs to check different scenarios and how they show up in the 10053 trace.
EXPLAIN PLAN has been used just to force a hard parse each time (it won’t have any “negative” side-effect here).

explain plan for select /* Q1 */ * from t10k;   
explain plan for select /* Q2 */ * from t50k;   
explain plan for select /* Q3 */ count(distinct n1) from t16m;

As we already know (nice post here), in 11.2.0.x the DoP under AutoDOP is based on the "IO cost" for the operation and the calculation is performed for each table individually at the time the CBO calculates cardinality and best access method for the table (aka you'll find it under the SINGLE TABLE ACCESS PATH section of the 10053).

##### Q1 #####

The CBO decides to execute Q1 in serial in both versions, let’s look into the details.
From the 11.2.0.4 trace: the four kkeCostToTime lines are the input values used to perform the math, the two caching-related lines after them ("AutoDOP: Consider caching") are about In-Memory Parallel Execution (nothing to do with the In-Memory Option) and the last three AutoDOP lines are the most interesting ones.

SINGLE TABLE ACCESS PATH 
  Single Table Cardinality Estimation for T10K[T10K] 
  Table: T10K  Alias: T10K
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.00  Non Adjusted: 10000.00
  Access Path: TableScan
    Cost:  2737.52  Resp: 2737.52  Degree: 0
      Cost_io: 2736.00  Cost_cpu: 74005180
      Resp_io: 2736.00  Resp_cpu: 74005180
kkeCostToTime: using io calibrate stats 
 maxmbps=5(MB/s) maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=79(MB) time=7897(ms)
AutoDOP: Consider caching for T10K[T10K](obj#70600) 
cost:2737.52 blkSize:8192 objSize:10097.00 marObjSize:9592.15 bufSize:37240.00 affPercent:80 smallTab:NO affinitized:YES
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to NO
AutoDOP: Table/Index(#70600) access (scan) cost=2737.52 estTime=7896.70 unit=10000.00 dop=2 -> maxdop=2
  Best:: AccessPath: TableScan
         Cost: 2737.52  Degree: 1  Resp: 2737.52  Card: 10000.00  Bytes: 0

The following is based on my understanding and tests so it could easily be wrong.

The first "AutoDOP: parallel operation is set to NO" refers to the lack of a PARALLEL hint in the SQL; if you add the hint then the message becomes YES, prefixed by another one with the degree specified in the hint (if any).

The second "AutoDOP: parallel operation is set to NO" is based on the fact that this SQL is not "expensive enough" to qualify for parallel execution; basically the E-Time is lower than PARALLEL_MIN_TIME_THRESHOLD.

The last line shows the details of the math. In 11.2.0.4 the DoP is indeed computed based on how large the segment is (tot_io_size=79MB) and how fast we can read from disk (maxpmbps=10MB/s), so 79 (MB) / 10 (MB/s) = (time=) 7897ms. The value is then divided by 10k (unit=, not sure where that value comes from since it doesn't seem related to PARALLEL_MIN_TIME_THRESHOLD even though it does represent 10s) and the result (0.7) is considered as the DoP.
A DoP of 0.7 means this execution will be serial.
The value of dop=2 -> maxdop=2 is a little misleading here (maybe it's a default in the code); PX is rejected and the SQL goes serial.
The trace confirms it just a few lines below

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  2736.00  Cost_cpu: 74005179.68
  Card:     10000.00  Bytes:    60070000.00
  Cost:     2737.52  Est_time:  7897ms
kkopqSetDopReason: Reason why we chose this DOP is: no expensive parallel operation.
No parallel scan operations
kkopqCombineDop: Dop:1 Hint:no 
Query: compute:yes forced:no  computedDop:2 forceDop:0
kkopqSetDopReason: Reason why we chose this DOP is: parallel threshold.
Serial phase is inexpensive (2738), stick to it.
Costing completed. DOP chosen: 1.

Here we can see the DoP chosen was 1 because no operation was expensive enough to warrant PX and because the E-Time for the SQL (~7.9s) is below the PARALLEL_MIN_TIME_THRESHOLD (10s).
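
Spelling out the arithmetic from the trace with a quick query (my reading of the formula, not an official one):

select round(79/10*1000) est_time_ms,            -- tot_io_size(MB) / maxpmbps(MB/s), trace shows 7897ms
       round(79/10*1000/10000, 1) computed_dop   -- est_time / unit(10000) = 0.7 -> serial
  from dual;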

Nothing new so far; let's look into the 10053 from 12.1.0.2 for the same SQL, Q1, and see how it differs.

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T10K[T10K]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: T10K  Alias: T10K
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.000000  Non Adjusted: 10000.000000
  Scan IO  Cost (Disk) =   2736.000000
  Scan CPU Cost (Disk) =   74005179.680000
  Total Scan IO  Cost  =   2736.000000 (scan (Disk))
                       =   2736.000000
  Total Scan CPU  Cost =   74005179.680000 (scan (Disk))
                       =   74005179.680000
  Access Path: TableScan
    Cost:  2737.397801  Resp: 2737.397801  Degree: 0
      Cost_io: 2736.000000  Cost_cpu: 74005180
      Resp_io: 2736.000000  Resp_cpu: 74005180
  Best:: AccessPath: TableScan
         Cost: 2737.397801  Degree: 1  Resp: 2737.397801  Card: 10000.000000  Bytes: 0.000000

The first difference is the AutoDOP computation does not happen during the access path decision, that’s why no AutoDOP tag is present in the previous extract.

The calculations are performed at the end of the join selection for the query block (no join in this case and just one query block) and here is where the major changes show up:

kkecComputeAPDopCS: CPUtabBytes: 82714624.000000 CPUtabRows: 10000.000000  rowSize: 6007
kkecComputeAPDopCS: Basic Stats Level
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 82714624.000000 - Time: 78.882812
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 10000.000000 - Time: 10.000000
kkecComputeAPDopCS: Before Filter Preds - CPU Dop: 0.007888
kkecComputeAPDopCS: Final Filter Predicate Bytes: 0.000000 Rows: 0.000000
kkecComputeAPDop: Final -  CPU Dop: 0.007888 Rate: 1000.000000
AutoDOP: Consider caching for T10K[T10K](obj#138272)
cost:2737.397801 blkSize:8192 objSize:10097.00 marObjSize:9592.15 bufSize:41160.00 affPercent:80 smallTab:NO affinitized:YES
kkeCostToTime: using io calibrate stats maxpmbps=10(MB/s)
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes)
 tot_io_size=79(MB) time=7896(ms)
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to NO
AutoDOP: Table/Index(#138272) access (scan) cost=2737.397801 estTime=7896.34 unit=10000.00 dop=2.000 -> maxdop=2
kkecComputeAPDop: IO Dop: 2.000000 - CPU Dop: 0.007888

The first big difference is that AutoDOP in 12c also includes a CPU DoP for the operations in the query block, covering not only the operations that have an IO cost (e.g. a FTS) but also those that have just a CPU cost (e.g. a join).
The way the operations are normalized is via "processing rates"; in 12c there is a new view that provides the CBO with such numbers, V$OPTIMIZER_PROCESSING_RATE.

SQL> select operation_name, to_number(manual_value) manual, to_number(calibration_value) calibration, to_number(default_value) default_ from v$optimizer_processing_rate order by 1;

OPERATION_NAME                MANUAL CALIBRATION   DEFAULT_
------------------------- ---------- ----------- ----------
AGGR                             560                   1000
ALL                                                     200
CPU                                                     200
CPU_ACCESS                                              200
CPU_AGGR                                                200
CPU_BYTES_PER_SEC                560                   1000
CPU_FILTER                                              200
CPU_GBY                                                 200
CPU_HASH_JOIN                                           200
CPU_IMC_BYTES_PER_SEC                                  2000
CPU_IMC_ROWS_PER_SEC                                2000000
CPU_JOIN                                                200
CPU_NL_JOIN                                             200
CPU_RANDOM_ACCESS                                       200
CPU_ROWS_PER_SEC                                    1000000
CPU_SEQUENTIAL_ACCESS                                   200
CPU_SM_JOIN                                             200
CPU_SORT                                                200
HASH                                                    200
IO                               560                     10
IO_ACCESS                        560                     10
IO_BYTES_PER_SEC                                         10
IO_IMC_ACCESS                                          1000
IO_RANDOM_ACCESS                3000                     10
IO_ROWS_PER_SEC                                     1000000
IO_SEQUENTIAL_ACCESS             560                     10
MEMCMP                                                  500
MEMCPY                           560                   1000

Values can be manually set using DBMS_STATS.SET_PROCESSING_RATE but they are silently ignored unless the hidden parameter _OPTIMIZER_PROC_RATE_SOURCE is set to MANUAL. They can also be gathered using DBMS_STATS.GATHER_PROCESSING_RATE.
The value for an operation is inherited from its parent in case it is missing (the hierarchy is visible in X$OPTIM_CALIB_STATS, where each STATID_KKECSTATS has its parent in PSTATID_KKECSTATS).
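
Something like this should do it (a sketch: the exact SET_PROCESSING_RATE signature and the session-level settability of the underscore parameter are my assumptions, check the documentation on your release first):

SQL> exec dbms_stats.set_processing_rate('CPU_ROWS_PER_SEC', '2000000')   -- argument names/types assumed
SQL> alter session set "_optimizer_proc_rate_source" = 'MANUAL';          -- hidden parameter, testing only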

The first kkecComputeAPDopCS line (two sections above) provides info about the table we want to scan: size in bytes, number of rows and average row length.
The kkecComputeDOP lines are where the CPU DoP is computed; my understanding is that here the CPU DoP is the greatest value (expressed in ms) between "size in bytes / CPU_BYTES_PER_SEC (expressed in MB)" and "number of rows / CPU_ROWS_PER_SEC", so in this case it's 10 vs 78.8 = 78.8, which is then divided by 10k (unit=) giving a CPU DoP of 0.0078. The values used for the computation are also reported in the OTHER_XML column.
The kkeCostToTime / AutoDOP lines are the same as in 11.2.0.4, where the IO DoP is computed (again with the misleading value of 2).
The final kkecComputeAPDop line is where the IO DoP (misleading here) and the CPU DoP are combined together.
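
The same CPU DoP math, spelled out with the numbers from the trace (again, my reading of it):

select round(greatest(82714624/1048576000, 10000/1000000)*1000, 1) cpu_time_ms,   -- 78.9ms vs 10ms
       round(greatest(82714624/1048576000, 10000/1000000)*1000/10000, 4) cpu_dop  -- ~0.0079
  from dual;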

As in the 11.2.0.4 case, this SQL doesn't qualify for parallel execution because it's too fast (the second "parallel operation is set to NO").

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  2736.000000  Cost_cpu: 74005180
  Card:     10000.000000  Bytes:    60070000.000000
  Cost:     2737.397801  Est_time:  7896ms
kkopqSetDopReason: Reason why we chose this DOP is: no expensive parallel operation.
No parallel scan operations
kkopqCombineDop: Dop:1 Hint:no
Query: compute:yes forced:no  scanDop:2 cpuDop:1 forceDop:0
kkopqSetDopReason: Reason why we chose this DOP is: parallel threshold.
Serial phase is inexpensive (2737.397801), stick to it.
Costing completed. DOP chosen: 1.

##### Q2 #####

Let’s now focus on Q2, which runs on the larger table T50K.
From 11.2.0.4

SINGLE TABLE ACCESS PATH 
  Single Table Cardinality Estimation for T50K[T50K] 
  Table: T50K  Alias: T50K
    Card: Original: 50000.000000  Rounded: 50000  Computed: 50000.00  Non Adjusted: 50000.00
  Access Path: TableScan
    Cost:  13793.68  Resp: 13793.68  Degree: 0
      Cost_io: 13786.00  Cost_cpu: 372959932
      Resp_io: 13786.00  Resp_cpu: 372959932
kkeCostToTime: using io calibrate stats 
 maxmbps=5(MB/s) maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=398(MB) time=39789(ms)
AutoDOP: Consider caching for T50K[T50K](obj#70601) 
cost:13793.68 blkSize:8192 objSize:50897.00 marObjSize:48352.15 bufSize:37240.00 affPercent:80 smallTab:NO affinitized:NO
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#70601) access (scan) cost=13793.68 estTime=39789.45 unit=10000.00 dop=3 -> maxdop=3
  Best:: AccessPath: TableScan
         Cost: 13793.68  Degree: 1  Resp: 13793.68  Card: 50000.00  Bytes: 0

The table this time is around 398MB, so 398 (MB) / 10 (MB/s) = 39789ms, which is larger than 10 seconds (PARALLEL_MIN_TIME_THRESHOLD), hence the SQL is a candidate to run using PX, as reported in the second "AutoDOP: parallel operation is set to YES" (again this is a guess). The DoP is computed as usual as time/unit, which is 3.9, truncated to 3.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  13786.00  Cost_cpu: 372959931.68
  Card:     50000.00  Bytes:    300400000.00
  Cost:     13793.68  Est_time:  39789ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  computedDop:3 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (13794)
Signal reparse with DOP 3.

The SQL is then re-parsed and the plan with DoP 3 is found to be cheaper than the serial one, hence accepted.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 3.
  Cost_io:  5105.93  Cost_cpu: 3888888.89
  Card:     50000.00  Bytes:    300400000.00
  Cost:     5106.01  Est_time:  14729ms
Comparing plan with dop=3 and plan with dop=1:
  dop=3 io=5105.93 cpu=3888889 cost=5106.01 card=50000 bytes=300400000 -> est=14729ms, scaled cost=7659.01
  dop=1 io=13786.00 cpu=372959932 cost=13793.68 card=50000 bytes=300400000 -> est=39789ms, scaled cost=13793.68
Plan with dop 3 is better. Scalability: 50
Current plan with dop=3 is better than best plan with dop=1
Costing completed. DOP chosen: 3.

Again, nothing new so far. Let’s now take a look at 12.1.0.2 for Q2:

kkecComputeAPDopCS: CPUtabBytes: 416948224.000000 CPUtabRows: 50000.000000  rowSize: 6008
kkecComputeAPDopCS: Basic Stats Level
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 416948224.000000 - Time: 397.632812
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 50000.000000 - Time: 50.000000
kkecComputeAPDopCS: Before Filter Preds - CPU Dop: 0.039763
kkecComputeAPDopCS: Final Filter Predicate Bytes: 0.000000 Rows: 0.000000
kkecComputeAPDop: Final -  CPU Dop: 0.039763 Rate: 1000.000000
AutoDOP: Consider caching for T50K[T50K](obj#138273) 
cost:13793.044423 blkSize:8192 objSize:50897.00 marObjSize:48352.15 bufSize:41160.00 affPercent:80 smallTab:NO affinitized:NO
kkeCostToTime: using io calibrate stats maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=398(MB) time=39788(ms)
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#138273) access (scan) cost=13793.044423 estTime=39787.63 unit=10000.00 dop=3.000 -> maxdop=3
kkecComputeAPDop: IO Dop: 3.000000 - CPU Dop: 0.039763

A little higher CPU DoP but still extremely low (0.039), and the same IO DoP as in 11.2.0.4, hence the same overall DoP is computed here as well.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  13786.000000  Cost_cpu: 372959932
  Card:     50000.000000  Bytes:    300400000.000000
  Cost:     13793.044423  Est_time:  39788ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:3 cpuDop:1 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (13793)
Signal reparse with DOP 3.
.....
AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 3.
  Cost_io:  5105.925926  Cost_cpu: 3888889
  Card:     50000.000000  Bytes:    300400000.000000
  Cost:     5105.999379  Est_time:  14729ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:3 cpuDop:1 forceDop:0
Adopt new plan with dop=3
Costing completed. DOP chosen: 3.

So the new algorithm led to the same DoP (which is good, no surprises) because the CPU DoP was too small to make a dent.

##### Q3 #####

Let's now look into Q3, which is where things start to change and get a little more tricky (to make things easier to report I turned off _optimizer_distinct_agg_transform).

From 11.2.0.4

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T16M[T16M]
  Table: T16M  Alias: T16M
    Card: Original: 16000000.000000  Rounded: 16000000  Computed: 16000000.00  Non Adjusted: 16000000.00
  Access Path: TableScan
    Cost:  9697.62  Resp: 9697.62  Degree: 0
      Cost_io: 9643.00  Cost_cpu: 2653501900
      Resp_io: 9643.00  Resp_cpu: 2653501900
kkeCostToTime: using io calibrate stats
 maxmbps=5(MB/s) maxpmbps=10(MB/s)
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes)
 tot_io_size=280(MB) time=27974(ms)
AutoDOP: Consider caching for T16M[T16M](obj#70603)
cost:9697.62 blkSize:8192 objSize:35597.00 marObjSize:33817.15 bufSize:37240.00 affPercent:80 smallTab:NO affinitized:NO
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#70603) access (scan) cost=9697.62 estTime=27973.89 unit=10000.00 dop=2 -> maxdop=2
  Best:: AccessPath: TableScan
         Cost: 9697.62  Degree: 1  Resp: 9697.62  Card: 16000000.00  Bytes: 0

so the IO DoP is 2 and the E-Time is larger than 10 secs, so we decide to go PX

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  9643.00  Cost_cpu: 2653501899.68
  Card:     1.00  Bytes:    5.00
  Cost:     9697.62  Est_time:  27974ms
kkopqCombineDop: Dop:2 Hint:no
Query: compute:yes forced:no  computedDop:2 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (9698)
Signal reparse with DOP 2.
....
AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 2.
  Cost_io:  5357.22  Cost_cpu: 1333333333.33
  Card:     1.00  Bytes:    5.00
  Cost:     5384.67  Est_time:  15533ms
Comparing plan with dop=2 and plan with dop=1:
  dop=2 io=5357.22 cpu=1333333333 cost=5384.67 card=1 bytes=5 -> est=15533ms, scaled cost=5384.67
  dop=1 io=9643.00 cpu=2653501900 cost=9697.62 card=1 bytes=5 -> est=27974ms, scaled cost=9697.62
Plan with dop 2 is better. Scalability: 50
Current plan with dop=2 is better than best plan with dop=1
Costing completed. DOP chosen: 2.

so 11.2.0.4 re-parsed the SQL and decided to go with DoP = 2 because of the IO DoP.

In 12.1.0.2 this time the CPU DoP makes a difference; from the 10053:

kkecComputeAPDopCS: CPUtabBytes: 290586624.000000 CPUtabRows: 16000000.000000  rowSize: 5
kkecComputeAPDopCS: Basic Stats Level
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 290586624.000000 - Time: 277.125000
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 16000000.000000 - Time: 16000.000000
kkecComputeAPDopCS: Before Filter Preds - CPU Dop: 1.600000
kkecComputeAPDopCS: Final Filter Predicate Bytes: 0.000000 Rows: 0.000000
kkecComputeAPDop: Final -  CPU Dop: 1.600000 Rate: 1000.000000
AutoDOP: Consider caching for T16M[T16M](obj#138277) 
cost:9658.102216 blkSize:8192 objSize:35472.00 marObjSize:33698.40 bufSize:41160.00 affPercent:80 smallTab:NO affinitized:NO
kkeCostToTime: using io calibrate stats maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=279(MB) time=27860(ms)
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#138277) access (scan) cost=9658.102216 estTime=27859.91 unit=10000.00 dop=2.000 -> maxdop=2
kkecComputeAPDop: IO Dop: 2.000000 - CPU Dop: 1.600000
Transfer optimizer annotations for T16M[T16M]
kkecComputeGbyObyAggrDOP: rowSize: 5.000000
kkecComputeGbyObyAggrDOP: gbyRows: 16000000.000000 gbyBytes: 80000000.000000
kkecComputeGbyObyAggrDOP: obyRows: 16000000.000000 obyBytes: 80000000.000000
kkecComputeGbyObyAggrDOP: before aggr Gby - IO Dop: 0.000000  CPU Dop: 0.000000
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 80000000.000000 - Time: 76.293945
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 16000000.000000 - Time: 16000.000000
kkecComputeGbyObyAggrDOP: Basic level Aggr DOP 1.600000
kkecComputeGbyObyAggrDOP: Gby - IO Dop: 0.000000 CPU Dop: 1.600000 GbyRate: 1000.000000
kkecComputeGbyObyAggrDOP: Oby - IO Dop: 0.000000 CPU Dop: 0.000000  ObyRate: 0.000000

The important parts from the extract above are a CPU DoP of 1.6 and an IO DoP of 2 because of the access to table T16M, plus an Aggr DoP of 1.6 because of the count(distinct).
Since the E-Time is > 10 secs we decide to go PX and trigger a reparse

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  9608.000000  Cost_cpu: 2652611720
  Card:     1.000000  Bytes:    5.000000
  Cost:     9658.102216  Est_time:  27860ms
kkopqCombineDop: Dop:2 Hint:no 
Query: compute:yes forced:no  scanDop:2 cpuDop:1 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (9658)
Signal reparse with DOP 2.

Now this is where things get a little tricky and my understanding could easily be even more wrong than before 😀

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 2.
  Cost_io:  5337.777778  Cost_cpu: 1333333333
  Card:     1.000000  Bytes:    5.000000
  Cost:     5362.961620  Est_time:  15470ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:2 cpuDop:3 forceDop:0
Adopt new plan with dop=2
Signal reparse with DOP 3.
....
AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 3.
  Cost_io:  3558.518519  Cost_cpu: 888888889
  Card:     1.000000  Bytes:    5.000000
  Cost:     3575.307747  Est_time:  10313ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:2 cpuDop:3 forceDop:0
Adopt new plan with dop=3
Costing completed. DOP chosen: 3.

I omitted some details of the DoP computation, trying to keep this extremely long post just a little shorter.
The three "Query: compute ... scanDop ... cpuDop" lines in the trace extracts above show the IO DoP and the CPU DoP: the IO DoP is consistently 2 but the CPU DoP starts at 1 and then becomes 3.
Starting from the first re-parse the IO DoP is not computed anymore, as it would provide no benefit since table T16M is still the same size; instead the CBO carries forward annotations (some details of what an annotation is are explained here, though nothing to do with that bug) with the pre-computed DoPs

Transfer optimizer annotations for T16M[T16M]
AutoDOP: dop(io):2.000
AutoDOP: rate(cpu):1000.000000 dop(cpu):1.600

and I think at this point the Aggr DoP (1.6) is added to the CPU DoP from T16M (1.6 + 1.6, rounded) to come up with a CPU DoP of 3.
I have to admit I'm not sure why this isn't done at the first iteration, maybe just to save time since a large IO DoP is enough "to get started" with a reparse.

This is where the new AutoDOP starts to behave differently and return a different DoP for a SQL (3 instead of 2).

I haven't seen enough real-life cases of AutoDOP in 12c to say whether it works better than before, but it certainly seems more aware of the whole plan and it factors in additional steps (e.g. the CPU-only ones) trying to come up with a more accurate DoP.
I suspect most SQLs won't get a different DoP, just because most large parallel executions are constrained by IO scans rather than by large CPU operations, but it's exactly in those environments with large sorts or aggregations that the new AutoDOP can surprise you, for good or for bad 😉



How to get a SQL Testcase in a single step

Regardless of the reason why you are playing with a specific SQL (tune the SQL, avoid an ORA error, work around a wrong result, etc) it’s always helpful to be able to reproduce the same behavior in some other environment so that you can break everything with no impact on the application or the users.

Creating a SQL Testcase requires putting together quite a lot of info and it can be very time consuming; SQLTXPLAIN (SQLT) does an amazing job at that (details here) on top of a lot of other things. Anyway, SQLT isn't always around, plus it's not just a TC generator: SQLT comes with a lot of other info and it might take some time to extract it all.

Starting with 11gR1, Oracle introduced the Testcase Builder (TCB) as part of the Oracle Database Fault Diagnosability Infrastructure (ADRCI, DBMS_HM and DBMS_SQLDIAG, just to keep it simple). Basically it's a set of APIs to generate a testcase starting from either a SQL ID or a SQL text.

The easiest way to extract a TC is to use EXPORT_SQL_TESTCASE passing a SQL_ID.

SQL> select sql_id, sql_text from v$sql where sql_text = 'select * from test_tcb';

SQL_ID        SQL_TEXT
------------- -------------------------------
czq6v85ut47uq select * from test_tcb

SQL> var c1 clob

SQL> begin
 dbms_sqldiag.export_sql_testcase(directory => 'TCB_DIR',
                                  sql_id    => 'czq6v85ut47uq', 
                                  testcase  => :c1);
end;
/

All the files will have a system-generated common prefix (since we didn't specify any value for the parameter testcase_name). The number of files changes depending on the parameter values; we used all the defaults here so the list will look more or less like this:

oratcb1_000657400001dpexp.dmp	oratcb1_000657400001sql.xml
oratcb1_000657400001dpexp.log	oratcb1_000657400001ssimp.sql
oratcb1_000657400001dpexp.sql	oratcb1_000657400001.trc
oratcb1_000657400001dpimp.sql	oratcb1_000657400001ts.xml
oratcb1_000657400001main.xml	oratcb1_000657400001xplf.sql
oratcb1_000657400001ol.xml	oratcb1_000657400001xplo.sql
oratcb1_000657400001prmimp.sql	oratcb1_000657400001xpls.sql
oratcb1_000657400001README.txt	oratcb1_000657400001xpl.txt
oratcb1_000657400001smrpt.html

The most important ones are

  1. the README – it includes instructions on how to import the TC
  2. the main.xml – it has the reference/map to every other file
  3. the dpexp.dmp – this is the expdp of our objects (and eventually data)
  4. the xpl*.sql – those scripts allow you to replay (explain or run) the SQL with/without the outline

Now it's just a matter of moving the files to another environment and importing the TC back.

SQL> begin
 dbms_sqldiag.import_sql_testcase(directory => 'TCB_DIR',
                                  filename  =>'oratcb1_000657400001main.xml');
end;
/

SQL> select * from test_tcb;  -- or any of the xpl*.sql files

no rows selected

TCB also has the ability to provide some other useful info, in details:

  • it can export data (also just a sample of it) and PL/SQL package bodies if needed (they are usually not); both are not exported by default (see the sketch after this list)
  • it can extract runtime info like Dynamic Sampling, Dynamic Plan, list of binds, etc
  • it can extract AWR reports and SQL Monitors for the SQL
  • it can export the history of the statistics since day X
  • it can run the SQL N times and capture runtime info at the end of each exec
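
For example, exporting a 10% sample of the data could look like this (a sketch: the exportData and samplingPercent parameter names are as I recall them from the DBMS_SQLDIAG spec, so double-check them on your release):

SQL> begin
 dbms_sqldiag.export_sql_testcase(directory       => 'TCB_DIR',
                                  sql_id          => 'czq6v85ut47uq',
                                  exportData      => TRUE,
                                  samplingPercent => 10,
                                  testcase        => :c1);
end;
/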

Cherry on the cake: since version v1501 SQLd360 automatically exports the TC using TCB, so there is no additional step to perform, you just need to execute SQLd360 😀

I've been using TCB every time I needed a TC for a while now and so far it has always worked well for me; just a couple of "problems" worth mentioning:

  • the ctrlOptions parameter allows a lot of flexibility but it doesn't raise any error if the syntax is wrong (this is expected), so you don't realize you messed up the syntax until the TC is generated and the info you wanted is missing
  • even though there is an ignoreStorage parameter in IMPORT_SQL_TESTCASE, the storage clauses are not ignored if the main.xml references the list of tablespaces (and it always does). The workaround is to edit the main.xml and remove the reference to the tablespaces file (ts.xml)

Least and absolutely not last, TCB has the ability to extract info directly from the kernel, info that is not exposed through any SQL- or PL/SQL-friendly interface, so it sounds like TCB will be *THE* way to extract TCs in the future; better get familiar with it now 😉