Mauro Pagano's Blog



AutoDOP in 12c, what’s new and what’s old

The New Features guide for 12c reports enhancements to AutoDOP but unfortunately there is not much detail, so let's try to dig a little and find out what (some of) those enhancements are.

The focus of this blog post is on how the Degree of Parallelism (DoP) is computed for three SQLs in 11.2.0.4 vs 12.1.0.2 (so not considering ADAPTIVE degree policy) reading a single table; joins will come in a separate post.

The first requirement to play with AutoDOP is to have IO Calibration statistics in place (UPDATE: the requirement is in 11.2.0.x; in 12c default values are used when IO Calibration stats are missing, 200MB/s being the default for MAX_PMBPS).
Since all my tests are on VMs on my personal machine the IO response time is very unstable, so for the sake of consistent results I'll manually set the IO Calibration stats (AFAIK this isn't recommended, and the INSERT requires an instance restart to take effect; the documented alternative is sketched right after the INSERT below).
We'll use 10 as the value for MAX_PMBPS (max MB/s during parallel scan).

SQL> desc dba_rsrc_io_calibrate
 Name                  Null?    Type
 --------------------- -------- --------------
 START_TIME                     TIMESTAMP(6)
 END_TIME                       TIMESTAMP(6)
 MAX_IOPS                       NUMBER
 MAX_MBPS                       NUMBER
 MAX_PMBPS                      NUMBER
 LATENCY                        NUMBER
 NUM_PHYSICAL_DISKS             NUMBER

SQL> insert into dba_rsrc_io_calibrate values (systimestamp, systimestamp, 10000, 5, 10, 5, 10);
SQL> commit;
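
For completeness, the supported way to populate these statistics is to run the calibration itself via DBMS_RESOURCE_MANAGER.CALIBRATE_IO. This is just a minimal sketch (the num_physical_disks and max_latency values are placeholders to adapt to your storage, and keep in mind the calibration puts real IO load on the system):

SQL> set serveroutput on
SQL> declare
       l_max_iops   pls_integer;
       l_max_mbps   pls_integer;
       l_actual_lat pls_integer;
     begin
       -- placeholders: number of disks and max tolerated latency (ms)
       dbms_resource_manager.calibrate_io(
         num_physical_disks => 1,
         max_latency        => 20,
         max_iops           => l_max_iops,
         max_mbps           => l_max_mbps,
         actual_latency     => l_actual_lat);
       dbms_output.put_line('max_iops='||l_max_iops||' max_mbps='||l_max_mbps||' latency='||l_actual_lat);
     end;
     /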

And here are the three tables, with 10k, 50k and 16M rows respectively, used in the tests.
In the first two tables each row is sized to take up pretty much one block (in an 8k block_size database), so those tables grow large even with just a few rows; the third table is designed to stay very small.

SQL> drop table t10k purge;
SQL> drop table t50k purge;
SQL> drop table t16m purge;
SQL> create table t10k (n1 number, c1 char(2000), c2 char(2000), c3 char(2000));
SQL> create table t50k (n1 number, c1 char(2000), c2 char(2000), c3 char(2000));
SQL> create table t16m (n1 number, c1 char(1), c2 char(1), c3 char(1));
SQL> insert into t10k select rownum, 'a','b','c' from dual connect by rownum <= 10000;
SQL> insert into t50k select rownum, 'a','b','c' from dual connect by rownum <= 50000;
SQL> insert into t16m select rownum, 'a', 'b', 'c' from dual connect by rownum <= 250000; 
SQL> insert into t16m select * from t16m;  
SQL> /  
SQL> /  
SQL> /
SQL> /
SQL> /  
SQL> exec dbms_stats.gather_table_stats(user,'T10K');
SQL> exec dbms_stats.gather_table_stats(user,'T50K');
SQL> exec dbms_stats.gather_table_stats(user,'T16M');
SQL> select table_name, num_rows, blocks from user_tables where table_name like 'T%' order by 2;
TABLE_NAME                       NUM_ROWS     BLOCKS
------------------------------ ---------- ----------
T10K                                10000      10097
T50K                                50000      50897
T16M                             16000000      35472

We’ll use three SQLs to check different scenarios and how they show up in the 10053 trace.
EXPLAIN PLAN has been used just to force a hard parse each time (it won’t have any “negative” side-effect here).

explain plan for select /* Q1 */ * from t10k;   
explain plan for select /* Q2 */ * from t50k;   
explain plan for select /* Q3 */ count(distinct n1) from t16m;

As we already know (nice post here), in 11.2.0.x the DoP under AutoDOP is based on the "IO cost" of the operation and the calculation is performed for each table individually, at the time the CBO calculates cardinality and best access method for the table (aka you'll find it under the SINGLE TABLE ACCESS PATH section of the 10053).

##### Q1 #####

The CBO decides to execute Q1 in serial in both versions, let's look into the details.
From the 11.2.0.4 trace, the first four lines in green (the kkeCostToTime ones) are the input values used to perform the math, the next two (AutoDOP: Consider caching) are about In-Memory Parallel Execution (nothing to do with the In-Memory Option) and the last three lines are the most interesting ones.

SINGLE TABLE ACCESS PATH 
  Single Table Cardinality Estimation for T10K[T10K] 
  Table: T10K  Alias: T10K
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.00  Non Adjusted: 10000.00
  Access Path: TableScan
    Cost:  2737.52  Resp: 2737.52  Degree: 0
      Cost_io: 2736.00  Cost_cpu: 74005180
      Resp_io: 2736.00  Resp_cpu: 74005180
kkeCostToTime: using io calibrate stats 
 maxmbps=5(MB/s) maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=79(MB) time=7897(ms)
AutoDOP: Consider caching for T10K[T10K](obj#70600) 
cost:2737.52 blkSize:8192 objSize:10097.00 marObjSize:9592.15 bufSize:37240.00 affPercent:80 smallTab:NO affinitized:YES
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to NO
AutoDOP: Table/Index(#70600) access (scan) cost=2737.52 estTime=7896.70 unit=10000.00 dop=2 -> maxdop=2
  Best:: AccessPath: TableScan
         Cost: 2737.52  Degree: 1  Resp: 2737.52  Card: 10000.00  Bytes: 0

The following is based on my understanding and tests so it could easily be wrong.

The first “AutoDOP: parallel operation is set to NO” refers to the lack of a PARALLEL hint in the SQL; if you add the hint then the message becomes YES, prefixed by another one with the degree specified in the hint (if any).

The second “AutoDOP: parallel operation is set to NO” is based on the fact that this SQL is not “expensive enough” to qualify for parallel execution, basically the E-Time is lower than PARALLEL_MIN_TIME_THRESHOLD.

The last line shows the details of the math. In 11.2.0.4 the DoP is indeed computed based on how large the segment is (tot_io_size=79MB) and how fast we can read from disk (maxpmbps=10MB/s), so 79 (MB) / 10 (MB/s) = (time=) 7897ms. The value is then divided by 10k (unit=, not sure where that value comes from since it doesn't seem related to PARALLEL_MIN_TIME_THRESHOLD even though it does represent 10s) and the result (0.7) is considered as the DoP.
A DoP of 0.7 means this execution will be serial.
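
A quick back-of-the-envelope check of that math (the trace uses the exact byte count so the numbers differ slightly; 79 and 10000 below are just the tot_io_size and unit values from the trace):

SQL> select round(79 / 10 * 1000) est_time_ms,          -- tot_io_size(MB) / maxpmbps(MB/s), in ms
            round(79 / 10 * 1000 / 10000, 2) io_dop     -- divided by unit=10000
       from dual;

EST_TIME_MS     IO_DOP
----------- ----------
       7900        .79
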
The value of dop=2 -> maxdop=2 is a little misleading here (maybe it's a default in the code); PX is rejected and the SQL goes serial.
The trace confirms it just a few lines below.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  2736.00  Cost_cpu: 74005179.68
  Card:     10000.00  Bytes:    60070000.00
  Cost:     2737.52  Est_time:  7897ms
kkopqSetDopReason: Reason why we chose this DOP is: no expensive parallel operation.
No parallel scan operations
kkopqCombineDop: Dop:1 Hint:no 
Query: compute:yes forced:no  computedDop:2 forceDop:0
kkopqSetDopReason: Reason why we chose this DOP is: parallel threshold.
Serial phase is inexpensive (2738), stick to it.
Costing completed. DOP chosen: 1.

Here we can see the DoP chosen was 1 because no operation was expensive enough to warrant PX, and the E-Time for the SQL (~7.9s) is below the PARALLEL_MIN_TIME_THRESHOLD (10s).

Nothing new so far, so let's look into the 10053 from 12.1.0.2 for the same SQL, Q1, and see how it differs.

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T10K[T10K]
  SPD: Return code in qosdDSDirSetup: NOCTX, estType = TABLE
  Table: T10K  Alias: T10K
    Card: Original: 10000.000000  Rounded: 10000  Computed: 10000.000000  Non Adjusted: 10000.000000
  Scan IO  Cost (Disk) =   2736.000000
  Scan CPU Cost (Disk) =   74005179.680000
  Total Scan IO  Cost  =   2736.000000 (scan (Disk))
                       =   2736.000000
  Total Scan CPU  Cost =   74005179.680000 (scan (Disk))
                       =   74005179.680000
  Access Path: TableScan
    Cost:  2737.397801  Resp: 2737.397801  Degree: 0
      Cost_io: 2736.000000  Cost_cpu: 74005180
      Resp_io: 2736.000000  Resp_cpu: 74005180
  Best:: AccessPath: TableScan
         Cost: 2737.397801  Degree: 1  Resp: 2737.397801  Card: 10000.000000  Bytes: 0.000000

The first difference is that the AutoDOP computation does not happen during the access path decision, that's why no AutoDOP tag is present in the previous extract.

The calculations are performed at the end of the join selection for the query block (no join in this case and just one query block) and this is where the major changes show up (steps were marked with different colors on the blog to make them easier to reference):

kkecComputeAPDopCS: CPUtabBytes: 82714624.000000 CPUtabRows: 10000.000000  rowSize: 6007
kkecComputeAPDopCS: Basic Stats Level
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 82714624.000000 - Time: 78.882812
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 10000.000000 - Time: 10.000000
kkecComputeAPDopCS: Before Filter Preds - CPU Dop: 0.007888
kkecComputeAPDopCS: Final Filter Predicate Bytes: 0.000000 Rows: 0.000000
kkecComputeAPDop: Final -  CPU Dop: 0.007888 Rate: 1000.000000
AutoDOP: Consider caching for T10K[T10K](obj#138272)
cost:2737.397801 blkSize:8192 objSize:10097.00 marObjSize:9592.15 bufSize:41160.00 affPercent:80 smallTab:NO affinitized:YES
kkeCostToTime: using io calibrate stats maxpmbps=10(MB/s)
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes)
 tot_io_size=79(MB) time=7896(ms)
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to NO
AutoDOP: Table/Index(#138272) access (scan) cost=2737.397801 estTime=7896.34 unit=10000.00 dop=2.000 -> maxdop=2
kkecComputeAPDop: IO Dop: 2.000000 - CPU Dop: 0.007888

The first big difference is that AutoDOP in 12c also includes the CPU DoP of the operations in the query block, considering not only those operations that have an IO cost (e.g. a FTS) but also those that have just a CPU cost (e.g. a join).
The way the operations are normalized is via "processing rates"; in 12c there is a new view that provides the CBO with such numbers, V$OPTIMIZER_PROCESSING_RATE.

SQL> select operation_name, to_number(manual_value) manual, to_number(calibration_value) calibration, to_number(default_value) default_ from v$optimizer_processing_rate order by 1;

OPERATION_NAME                MANUAL CALIBRATION   DEFAULT_
------------------------- ---------- ----------- ----------
AGGR                             560                   1000
ALL                                                     200
CPU                                                     200
CPU_ACCESS                                              200
CPU_AGGR                                                200
CPU_BYTES_PER_SEC                560                   1000
CPU_FILTER                                              200
CPU_GBY                                                 200
CPU_HASH_JOIN                                           200
CPU_IMC_BYTES_PER_SEC                                  2000
CPU_IMC_ROWS_PER_SEC                                2000000
CPU_JOIN                                                200
CPU_NL_JOIN                                             200
CPU_RANDOM_ACCESS                                       200
CPU_ROWS_PER_SEC                                    1000000
CPU_SEQUENTIAL_ACCESS                                   200
CPU_SM_JOIN                                             200
CPU_SORT                                                200
HASH                                                    200
IO                               560                     10
IO_ACCESS                        560                     10
IO_BYTES_PER_SEC                                         10
IO_IMC_ACCESS                                          1000
IO_RANDOM_ACCESS                3000                     10
IO_ROWS_PER_SEC                                     1000000
IO_SEQUENTIAL_ACCESS             560                     10
MEMCMP                                                  500
MEMCPY                           560                   1000

Values can be manually set using DBMS_STATS.SET_PROCESSING_RATE but they are silently ignored unless the hidden parameter _OPTIMIZER_PROC_RATE_SOURCE is set to MANUAL. They can also be gathered using DBMS_STATS.GATHER_PROCESSING_RATE.
The value for an operation is inherited from its parent when it is missing (the hierarchy is visible in X$OPTIM_CALIB_STATS, where each STATID_KKECSTATS has its parent in PSTATID_KKECSTATS).
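
If you want to experiment with the manual rates, something along these lines should do it (just a sketch based on my understanding; the 2000 MB/s value is arbitrary and I'm assuming the hidden parameter is session-modifiable, so please keep this to a sandbox):

SQL> exec dbms_stats.set_processing_rate('CPU_BYTES_PER_SEC', '2000');
SQL> alter session set "_optimizer_proc_rate_source" = 'MANUAL';
SQL> select operation_name, manual_value
       from v$optimizer_processing_rate
      where operation_name = 'CPU_BYTES_PER_SEC';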

The first line in green two sections above (kkecComputeAPDopCS: CPUtabBytes) provides info about the table we want to scan: size in bytes, number of rows and avg row length.
The lines in orange (kkecComputeDOP) are where the CPU DoP is computed; my understanding is that the CPU DoP is the greatest value (expressed in ms) between "size in bytes / CPU_BYTES_PER_SEC (expressed in MB)" and "number of rows / CPU_ROWS_PER_SEC", so in this case it's 10 vs 78.8 = 78.8, which is then divided by 10k (unit=), giving a CPU DoP of 0.0079. The values used for the computation are also reported in the OTHER_XML column.
The lines in blue (kkeCostToTime / AutoDOP) are the same as the 11.2.0.4 ones, where the IO DoP is computed (again with the misleading value of 2).
The final line in red (kkecComputeAPDop) is where the IO DoP (misleading here) and the CPU DoP are combined together.
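
Double-checking that math with the exact numbers from the trace (greatest of the two times in ms, then divided by unit=10000):

SQL> select greatest(82714624 / 1048576000 * 1000,        -- bytes / CPU_BYTES_PER_SEC (1000 MB/s in bytes)
                     10000 / 1000000 * 1000) time_ms,     -- rows  / CPU_ROWS_PER_SEC
            round(greatest(82714624 / 1048576000 * 1000,
                           10000 / 1000000 * 1000) / 10000, 4) cpu_dop
       from dual;

   TIME_MS    CPU_DOP
---------- ----------
78.8828125      .0079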

As in the 11.2.0.4 case, this SQL doesn't qualify for parallel execution because it is too fast (the second NO in the blue lines).

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  2736.000000  Cost_cpu: 74005180
  Card:     10000.000000  Bytes:    60070000.000000
  Cost:     2737.397801  Est_time:  7896ms
kkopqSetDopReason: Reason why we chose this DOP is: no expensive parallel operation.
No parallel scan operations
kkopqCombineDop: Dop:1 Hint:no
Query: compute:yes forced:no  scanDop:2 cpuDop:1 forceDop:0
kkopqSetDopReason: Reason why we chose this DOP is: parallel threshold.
Serial phase is inexpensive (2737.397801), stick to it.
Costing completed. DOP chosen: 1.

##### Q2 #####

Let’s now focus on Q2, which runs on the larger table T50K.
From 11.2.0.4

SINGLE TABLE ACCESS PATH 
  Single Table Cardinality Estimation for T50K[T50K] 
  Table: T50K  Alias: T50K
    Card: Original: 50000.000000  Rounded: 50000  Computed: 50000.00  Non Adjusted: 50000.00
  Access Path: TableScan
    Cost:  13793.68  Resp: 13793.68  Degree: 0
      Cost_io: 13786.00  Cost_cpu: 372959932
      Resp_io: 13786.00  Resp_cpu: 372959932
kkeCostToTime: using io calibrate stats 
 maxmbps=5(MB/s) maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=398(MB) time=39789(ms)
AutoDOP: Consider caching for T50K[T50K](obj#70601) 
cost:13793.68 blkSize:8192 objSize:50897.00 marObjSize:48352.15 bufSize:37240.00 affPercent:80 smallTab:NO affinitized:NO
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#70601) access (scan) cost=13793.68 estTime=39789.45 unit=10000.00 dop=3 -> maxdop=3
  Best:: AccessPath: TableScan
         Cost: 13793.68  Degree: 1  Resp: 13793.68  Card: 50000.00  Bytes: 0

The table this time is around 398MB, so 398 / 10 = ~39789ms, which is larger than 10 seconds (PARALLEL_MIN_TIME_THRESHOLD), hence the SQL is a candidate to run using PX, as reported in the second "AutoDOP: parallel operation is set to YES" (again this is a guess). The DoP is computed as usual as time/unit, that is 3.9, truncated to 3.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  13786.00  Cost_cpu: 372959931.68
  Card:     50000.00  Bytes:    300400000.00
  Cost:     13793.68  Est_time:  39789ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  computedDop:3 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (13794)
Signal reparse with DOP 3.

The SQL is then re-parsed and the plan with DoP 3 is found to be cheaper than the serial one, hence accepted.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 3.
  Cost_io:  5105.93  Cost_cpu: 3888888.89
  Card:     50000.00  Bytes:    300400000.00
  Cost:     5106.01  Est_time:  14729ms
Comparing plan with dop=3 and plan with dop=1:
  dop=3 io=5105.93 cpu=3888889 cost=5106.01 card=50000 bytes=300400000 -> est=14729ms, scaled cost=7659.01
  dop=1 io=13786.00 cpu=372959932 cost=13793.68 card=50000 bytes=300400000 -> est=39789ms, scaled cost=13793.68
Plan with dop 3 is better. Scalability: 50
Current plan with dop=3 is better than best plan with dop=1
Costing completed. DOP chosen: 3.

Again, nothing new so far. Let’s now take a look at 12.1.0.2 for Q2:

kkecComputeAPDopCS: CPUtabBytes: 416948224.000000 CPUtabRows: 50000.000000  rowSize: 6008
kkecComputeAPDopCS: Basic Stats Level
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 416948224.000000 - Time: 397.632812
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 50000.000000 - Time: 50.000000
kkecComputeAPDopCS: Before Filter Preds - CPU Dop: 0.039763
kkecComputeAPDopCS: Final Filter Predicate Bytes: 0.000000 Rows: 0.000000
kkecComputeAPDop: Final -  CPU Dop: 0.039763 Rate: 1000.000000
AutoDOP: Consider caching for T50K[T50K](obj#138273) 
cost:13793.044423 blkSize:8192 objSize:50897.00 marObjSize:48352.15 bufSize:41160.00 affPercent:80 smallTab:NO affinitized:NO
kkeCostToTime: using io calibrate stats maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=398(MB) time=39788(ms)
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#138273) access (scan) cost=13793.044423 estTime=39787.63 unit=10000.00 dop=3.000 -> maxdop=3
kkecComputeAPDop: IO Dop: 3.000000 - CPU Dop: 0.039763

A slightly higher CPU DoP, but still extremely low (0.039), and the same IO DoP as in 11.2.0.4, hence the same DoP is computed here as well.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  13786.000000  Cost_cpu: 372959932
  Card:     50000.000000  Bytes:    300400000.000000
  Cost:     13793.044423  Est_time:  39788ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:3 cpuDop:1 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (13793)
Signal reparse with DOP 3.
.....
AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 3.
  Cost_io:  5105.925926  Cost_cpu: 3888889
  Card:     50000.000000  Bytes:    300400000.000000
  Cost:     5105.999379  Est_time:  14729ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:3 cpuDop:1 forceDop:0
Adopt new plan with dop=3
Costing completed. DOP chosen: 3.

So the new algorithm led to the same DoP (which is good, no surprises) because the CPU DoP was too small to make a dent.

##### Q3 #####

Let's now look into Q3, which is where things start to change and get a little trickier (to make things easier to report I turned off _optimizer_distinct_agg_transform).

From 11.2.0.4

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for T16M[T16M]
  Table: T16M  Alias: T16M
    Card: Original: 16000000.000000  Rounded: 16000000  Computed: 16000000.00  Non Adjusted: 16000000.00
  Access Path: TableScan
    Cost:  9697.62  Resp: 9697.62  Degree: 0
      Cost_io: 9643.00  Cost_cpu: 2653501900
      Resp_io: 9643.00  Resp_cpu: 2653501900
kkeCostToTime: using io calibrate stats
 maxmbps=5(MB/s) maxpmbps=10(MB/s)
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes)
 tot_io_size=280(MB) time=27974(ms)
AutoDOP: Consider caching for T16M[T16M](obj#70603)
cost:9697.62 blkSize:8192 objSize:35597.00 marObjSize:33817.15 bufSize:37240.00 affPercent:80 smallTab:NO affinitized:NO
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#70603) access (scan) cost=9697.62 estTime=27973.89 unit=10000.00 dop=2 -> maxdop=2
  Best:: AccessPath: TableScan
         Cost: 9697.62  Degree: 1  Resp: 9697.62  Card: 16000000.00  Bytes: 0

So the IO DoP is 2 and the E-Time is larger than 10 seconds, hence we decide to go PX.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  9643.00  Cost_cpu: 2653501899.68
  Card:     1.00  Bytes:    5.00
  Cost:     9697.62  Est_time:  27974ms
kkopqCombineDop: Dop:2 Hint:no
Query: compute:yes forced:no  computedDop:2 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (9698)
Signal reparse with DOP 2.
....
AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 2.
  Cost_io:  5357.22  Cost_cpu: 1333333333.33
  Card:     1.00  Bytes:    5.00
  Cost:     5384.67  Est_time:  15533ms
Comparing plan with dop=2 and plan with dop=1:
  dop=2 io=5357.22 cpu=1333333333 cost=5384.67 card=1 bytes=5 -> est=15533ms, scaled cost=5384.67
  dop=1 io=9643.00 cpu=2653501900 cost=9697.62 card=1 bytes=5 -> est=27974ms, scaled cost=9697.62
Plan with dop 2 is better. Scalability: 50
Current plan with dop=2 is better than best plan with dop=1
Costing completed. DOP chosen: 2.

So 11.2.0.4 re-parsed the SQL and decided to go with DoP = 2 because of the IO DoP.

In 12.1.0.2 this time the CPU DoP makes a difference. From the 10053:

kkecComputeAPDopCS: CPUtabBytes: 290586624.000000 CPUtabRows: 16000000.000000  rowSize: 5
kkecComputeAPDopCS: Basic Stats Level
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 290586624.000000 - Time: 277.125000
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 16000000.000000 - Time: 16000.000000
kkecComputeAPDopCS: Before Filter Preds - CPU Dop: 1.600000
kkecComputeAPDopCS: Final Filter Predicate Bytes: 0.000000 Rows: 0.000000
kkecComputeAPDop: Final -  CPU Dop: 1.600000 Rate: 1000.000000
AutoDOP: Consider caching for T16M[T16M](obj#138277) 
cost:9658.102216 blkSize:8192 objSize:35472.00 marObjSize:33698.40 bufSize:41160.00 affPercent:80 smallTab:NO affinitized:NO
kkeCostToTime: using io calibrate stats maxpmbps=10(MB/s) 
 block_size=8192 mb_io_count=4 mb_io_size=30247 (bytes) 
 tot_io_size=279(MB) time=27860(ms)
AutoDOP:parallel operation is set to NO
AutoDOP:parallel operation is set to YES
AutoDOP: Table/Index(#138277) access (scan) cost=9658.102216 estTime=27859.91 unit=10000.00 dop=2.000 -> maxdop=2
kkecComputeAPDop: IO Dop: 2.000000 - CPU Dop: 1.600000
Transfer optimizer annotations for T16M[T16M]
kkecComputeGbyObyAggrDOP: rowSize: 5.000000
kkecComputeGbyObyAggrDOP: gbyRows: 16000000.000000 gbyBytes: 80000000.000000
kkecComputeGbyObyAggrDOP: obyRows: 16000000.000000 obyBytes: 80000000.000000
kkecComputeGbyObyAggrDOP: before aggr Gby - IO Dop: 0.000000  CPU Dop: 0.000000
kkecComputeDOP: Proc Rate: 1048576000.000000 - Cost: 80000000.000000 - Time: 76.293945
kkecComputeDOP: Proc Rate: 1000000.000000 - Cost: 16000000.000000 - Time: 16000.000000
kkecComputeGbyObyAggrDOP: Basic level Aggr DOP 1.600000
kkecComputeGbyObyAggrDOP: Gby - IO Dop: 0.000000 CPU Dop: 1.600000 GbyRate: 1000.000000
kkecComputeGbyObyAggrDOP: Oby - IO Dop: 0.000000 CPU Dop: 0.000000  ObyRate: 0.000000

The important parts in the extract above are a CPU DoP of 1.6 and an IO DoP of 2 because of the access to table T16M, plus an Aggr DoP of 1.6 because of the count(distinct).
Since the E-Time is larger than 10 seconds we decide to go PX and trigger a reparse.

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 1.
  Cost_io:  9608.000000  Cost_cpu: 2652611720
  Card:     1.000000  Bytes:    5.000000
  Cost:     9658.102216  Est_time:  27860ms
kkopqCombineDop: Dop:2 Hint:no 
Query: compute:yes forced:no  scanDop:2 cpuDop:1 forceDop:0
Serial plan is expensive enough to be a candidate for parallelism (9658)
Signal reparse with DOP 2.

Now this is where things get a little tricky and my understanding could easily be even more wrong than before 😀

AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 2.
  Cost_io:  5337.777778  Cost_cpu: 1333333333
  Card:     1.000000  Bytes:    5.000000
  Cost:     5362.961620  Est_time:  15470ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:2 cpuDop:3 forceDop:0
Adopt new plan with dop=2
Signal reparse with DOP 3.
....
AUTO DOP PLANS EVALUATION
***************************************
Compilation completed with Dop: 3.
  Cost_io:  3558.518519  Cost_cpu: 888888889
  Card:     1.000000  Bytes:    5.000000
  Cost:     3575.307747  Est_time:  10313ms
kkopqCombineDop: Dop:3 Hint:no 
Query: compute:yes forced:no  scanDop:2 cpuDop:3 forceDop:0
Adopt new plan with dop=3
Costing completed. DOP chosen: 3.

I omitted the details of the DoP computation, trying to keep this extremely long post just a little shorter.
The three green lines in the previous two sections (the "Query: compute ... scanDop ... cpuDop" ones) show the IO DoP and the CPU DoP: the IO DoP is consistently 2 but the CPU DoP starts at 1 and then becomes 3.
Starting from the first re-parse the IO DoP is not computed anymore, as it would provide no benefit since table T16M is still the same size; the CBO carries annotations (some details of what an annotation is are explained here, but nothing to do with that bug) with the pre-computed DoPs

Transfer optimizer annotations for T16M[T16M]
AutoDOP: dop(io):2.000
AutoDOP: rate(cpu):1000.000000 dop(cpu):1.600

and I think at this point the Aggr DoP (1.6) is added to the CPU DoP from T16M to come up with a CPU DoP of 3 (1.6 + 1.6 = 3.2, which shows up as cpuDop:3).
I have to admit I'm not sure why this isn't done at the first iteration, maybe just to save time since a large IO DoP is enough "to get started" with a reparse.

This is where the new AutoDOP starts to behave differently and return a different DoP for a SQL (3 instead of 2).

I haven't seen enough real-life cases of AutoDOP in 12c to say whether it works better than before, but for sure it seems more aware of the whole plan and it factors in additional steps (e.g. the CPU-only ones) trying to come up with a more accurate DoP.
I suspect most SQLs won't get a different DoP just because most large parallel executions are constrained by IO scans rather than by large CPU operations, but it's exactly in those environments with large sorts or aggregations that the new AutoDOP can probably surprise you, for the good or the bad 😉



Poor man's SQL Monitor with SQLd360…kind of!

One of the most interesting sections of a SQL Monitor report is IMHO the Metrics page, where we can see on a timeline the impact/requirements of an execution in terms of CPU, IOPS, MBPS, PGA and TEMP.
Starting with 11gR2 we can get something similar (not exactly the same) from ASH; this is from DBA_HIST_ACTIVE_SESS_HISTORY

[screenshot of the TM_DELTA_* and DELTA_* columns of DBA_HIST_ACTIVE_SESS_HISTORY]

For each TM_DELTA_TIME we know how much CPU and DB Time has been spent, as well as how many RW IOPS and RW and Interconnect (M)BPS were done per DELTA_TIME. The metrics are accumulated and reported at the time of the ASH sample "over the Delta Time", which roughly matches the ASH sampling interval, so e.g. from V$ACTIVE_SESSION_HISTORY we are able to see how many RW IOPS per second (since the DELTA_TIME is roughly going to be one second) are accounted to the session we focus on. Also, each time the ASH sample is taken, the PGA and TEMP consumed by the session at that time are tracked.
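
For example, a query along these lines (just a sketch: the sql_id is a placeholder, and you'll likely want to adjust the filters and the normalization) returns a SQL Monitor-like metrics timeline from the AWR ASH data:

SQL> select sample_time,
            round(tm_delta_cpu_time / 1e6, 2) cpu_secs,
            round((delta_read_io_requests + delta_write_io_requests) / (delta_time / 1e6)) rw_iops,
            round((delta_read_io_bytes + delta_write_io_bytes) / (delta_time / 1e6) / 1048576, 1) rw_mbps,
            round(pga_allocated / 1048576) pga_mb,
            round(nvl(temp_space_allocated, 0) / 1048576) temp_mb
       from dba_hist_active_sess_history
      where sql_id = '0vy6pt4krb3gm'      -- placeholder sql_id
        and delta_time > 0
      order by sample_time;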

Starting with SQLd360 v1511 those columns are collected and aggregated/charted over time, so we can look at the consumption of a single execution via SQL Monitor (or from raw ASH data) and at the same time evaluate the historical trend from SQLd360. The Plans Analysis page in SQLd360 now looks something like this

[screenshot of the SQLd360 Plans Analysis page with the new metric charts]

Other new additions since the last post (v1507) are:

  • SQLd360 now uses a configuration file to determine how many days to collect info for and which “major” external APIs to collect / ignore
  • Several reports have been rewritten to better represent info from Parallel Executions
  • New sections provide info on
    • Fix Controls (default and non-default) settings
    • NLS settings
    • Adaptive Cursor Sharing info
    • SQL Monitoring info
    • Partition Statistics history info
    • SQLs with the same Force Matching Signature

plus as usual bug fixes here and there.

Thanks to everybody who reported issues and provided feedback for new features! 🙂



SQLd360 and ORA-7445 [evaopn3()]

A few kind people reported to me that SQLd360 disconnected in the middle of the execution. Looking into the details, the cause was an ORA-7445 evaopn3(), but such errors are a little tricky to track down because they only show up with a combination of a faulty plan (that's the bug) and specific data; that's why the error never reproduced in my labs or on several other folks' systems.

Galo Balda was kind and patient enough to report the error and test my suspects (Thanks Galo!!!), so we were able to identify a way to avoid the error.

The ORA-7445 was caused by bug 12672969 (I suggest you check the details in MOS, the bug is pretty "common") and starting today SQLd360 works around it, so your execution shouldn't disconnect anymore!

If you tried SQLd360 in the past but ignored it because of the unexpected disconnect then I suggest you give it one more shot now that the error should be taken care of!



Trouble reading an execution plan? Maybe SQLd360 can help!

Last week I was in Dallas, TX for Hotsos 2015 and I got a few questions on how to read an execution plan, so I thought: why not include in SQLd360 an alternative representation of the execution plan, maybe one that is easier to understand and closer to the academic explanation of the execution plan as a tree.

The main improvements for version v1504 (released yesterday, March 8th) are around execution plan analysis: there is a new link "Plans analysis" in the Plans section with a few new reports, all organized by PHV.
Basically this new page shows N columns, where N is the number of distinct plans for the SQL we focus on, and for each PHV we have

  • Plan Tree
  • Average elapsed time per execution (recent and historical) from ASH
  • Top 15 wait events
  • Top 15 objects accessed
  • Top 15 execution plan step id / operation

The plan tree is an organizational chart where the execution plan is represented as an upside-down tree: the first operation executed is the leaf on the extreme left, the next operation is the next leaf (walking from left to right), and each couple (or more) of nodes is combined by their parent node.
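
Under the covers it's the same parent/child information you can already query from the plan views; a sketch like this (against V$SQL_PLAN, with a hypothetical sql_id to substitute) shows the ID/PARENT_ID pairs the tree chart is built from:

SQL> select id, parent_id, depth, operation, options, object_name
       from v$sql_plan
      where sql_id = '&sql_id'       -- placeholder, substitute your sql_id
        and child_number = 0
      order by id;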

E.g. in the following execution plan the first step is #5, but somebody might be tricked into thinking it's #13 (or #7) since those are more indented

[screenshot of the traditional, indented execution plan]

while looking at the new chart it is way easier to understand that step #5 is indeed the first one executed

[screenshot of the SQLd360 plan tree chart]

Hopefully you will like it too!!

Other small additions for v1504 are Optimizer system statistics, Segments/Objects information and fixes here and there that (hopefully) reduce the number of bugs in the code 🙂



SQLd360, SQL diagnostics collection made faster

Another tool to help with SQL Tuning? Yes! Hopefully with some advantage over what you used until today 🙂

Collecting diagnostics around a SQL is not fun, if you've been there you know that. It requires extracting info from several different places, the amount is high and it's time-sensitive, i.e. you should do it as quickly as possible so that things around the SQL don't change!

SQLd360 is a free tool that provides a 360-degree overview around a SQL statement. The output is a single zip file that allows offline analysis, trend analysis and detailed root-cause investigations. The content of the zip is a set of small reports, each specific to an area of interest, easy to navigate through an index file.

SQLd360 doesn't require any installation and can be executed by any user that has access to the dictionary views. The tool can be used by DBAs, developers, sysadmins, etc. to investigate several aspects of a SQL, not only in terms of its isolated execution but also in the context of the overall database impact, and it can also help with trend analysis.
The list of contents is going to grow (fast) with time so make sure you use the latest version.

SQLd360 works successfully on 10g, 11g and 12c, specifically on Linux/Unix platforms; it has not been tested on Windows.

If what is described so far sounds familiar, then you are probably right. SQLd360 has been created to complement the eDB360 tool created by Carlos Sierra. The design is similar but at the same time different, having to deal with different challenges.
SQLd360 is also included in eDB360 starting with version v1504.

Download

SQLd360 is free, you can download it from either GitHub or from the link under the Download section on the sidebar of this blog.

Instructions

Download SQLd360 and review the README.txt file included.
Unzip sqld360-master.zip on the database server and navigate to the master directory.
Connect to the database as any other user with access to dictionary views (SYS not recommended) and start sqld360.sql

SQLd360 has two input parameters

  1. SQL ID – SQL ID for which SQLd360 will collect diagnostics.
  2. License – SQLd360 extracts information from both AWR (licensed by Oracle under the Diagnostics Pack) and the SQL Monitoring repository (part of the Oracle Tuning Pack). This parameter accepts one of 3 values: "T", "D" or "N". If your database is licensed under the Oracle Tuning Pack, enter "T". If your database is not licensed to use the Oracle Tuning Pack but is licensed to use the Oracle Diagnostics Pack, enter "D". If your site is not licensed for either of these two Oracle Packs, enter "N". Be aware that a value of "N" substantially reduces the content and value of the output, thus the preferred parameter value is "T" (Oracle Tuning Pack).

Sample

# unzip sqld360-master.zip
# cd sqld360-master
# sqlplus / as sysdba
SQL> @sqld360.sql 0vy6pt4krb3gm T

SQLd360 usually completes within a few minutes but it can take longer if the amount of data to analyze in AWR/ASH is significant. The output file can reach up to 100MB, so make sure the directory where the tool is executed from has enough free space.

Feedback
For any feedback please post on this blog or just email me directly.



Dynamic Sampling just got better

12c brought several new features in the SQL performance area and the most advertised one is probably Adaptive Query Optimization. The Adaptive Plans feature has been described in a lot of detail (especially the adaptive join methods part) and most of the focus in Adaptive Statistics has been on SQL Plan Directives and Automatic Re-optimization; not much has been said about Dynamic Statistics, aka the new dynamic sampling (DS).
Since DS is also helpful outside of the whole Adaptive Query Optimization area, let's just focus on it.

The original DS would kick in "by default" in the absence of object statistics; increasing the level would make the CBO use it under a few other conditions as well (details here).
Starting with 11.2 (ER 7452863) DS can dynamically auto-adjust its own level at query level (hence kicking in even when it otherwise would not) when the SQL runs in parallel and large tables are involved.
Basically the old DS usually does a good job in helping the CBO get good estimations *for single tables*

  • in absence of stats
  • when the level is “high” (from 4 up)
  • when the SQL runs in parallel and tables are large

What was just said implies DS doesn't help much if

  • the table has stats but the level is default
  • the SQL runs in serial
  • we are estimating anything else but single table selectivity

The "new" dynamic sampling was introduced in 12.1.0.1 and backported to 11.2.0.4; it's "hidden" behind the new level 11 or AUTO (the value AUTO is accepted only in 12c) and it brings a lot of very interesting features

  • DS can kick in automatically when the CBO realizes it doesn't have solid enough estimations
  • DS can be applied on joins, group by and whole query blocks (haven’t looked too much yet into the query blocks one)
  • DS results can be cached using result cache and exported transparently using TCB

Let's play a little with an example, the test is from a 12.1.0.2.
Two tables with uniform data distribution and strong correlation across columns within the same table and between tables.

SQL> create table t1 as select mod(rownum, 50) n1, mod(rownum, 50) n2 from dual connect by rownum <= 1000;
SQL> create table t2 as select mod(rownum, 50) n1, mod(rownum, 50) n2 from dual connect by rownum <= 1000;
SQL> exec dbms_stats.gather_table_stats(user,'T1');
SQL> exec dbms_stats.gather_table_stats(user,'T2');

By default a join selectivity/cardinality would be underestimated since there is correlation but nothing to inform the CBO about it

SQL> select count(*) from t1,t2 where t1.n1 = t2.n1 and t1.n2 = t2.n2;

Plan hash value: 906334482

---------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |
---------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |
|*  2 |   HASH JOIN         |      |      1 |   1000 |  20000 |  <---
|   3 |    TABLE ACCESS FULL| T1   |      1 |   1000 |   1000 |
|   4 |    TABLE ACCESS FULL| T2   |      1 |   1000 |   1000 |
---------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")

With such a simple join condition we can use a column group to fix the estimation and indeed it works fine

SQL> exec dbms_stats.gather_table_stats(user,'T1', method_opt => 'FOR COLUMNS (N1,N2) SIZE AUTO');
SQL> exec dbms_stats.gather_table_stats(user,'T2', method_opt => 'FOR COLUMNS (N1,N2) SIZE AUTO');

SQL> select count(*) from t1,t2 where t1.n1 = t2.n1 and t1.n2 = t2.n2;

---------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |
---------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |
|*  2 |   HASH JOIN         |      |      1 |  20000 |  20000 |
|   3 |    TABLE ACCESS FULL| T1   |      1 |   1000 |   1000 |
|   4 |    TABLE ACCESS FULL| T2   |      1 |   1000 |   1000 |
---------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1" AND "T1"."N2"="T2"."N2")
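
Under the covers the column group created above is just a hidden virtual column on the table whose expression is a hash of the two columns; a quick way to see it is the following (the SYS_STU% column name is system-generated, so yours will differ):

SQL> select column_name, data_default
       from user_tab_cols
      where table_name = 'T1'
        and hidden_column = 'YES';

COLUMN_NAME                    DATA_DEFAULT
------------------------------ --------------------------------
SYS_STU...                     SYS_OP_COMBINED_HASH("N1","N2")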

Anyway, a column group can only be used with equality conditions (a column group is basically a hash of all the columns in the group), so as soon as we change our SQL we are back to the underestimated selectivity/cardinality

SQL> select count(*) from t1,t2 where t1.n1 = t2.n1 and t1.n2 > t2.n2;

---------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |
---------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |
|*  2 |   HASH JOIN         |      |      1 |   9800 |      0 |
|   3 |    TABLE ACCESS FULL| T1   |      1 |   1000 |   1000 |
|   4 |    TABLE ACCESS FULL| T2   |      1 |   1000 |   1000 |
---------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
       filter("T1"."N2">"T2"."N2")

So let’s turn on the new DS and check how it helps

alter session set optimizer_dynamic_sampling = 11;
select count(*) from t1,t2 where t1.n1 = t2.n1 and t1.n2 > t2.n2;

---------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |
---------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |      1 |
|   1 |  SORT AGGREGATE     |      |      1 |      1 |      1 |
|*  2 |   HASH JOIN         |      |      1 |      1 |      0 |
|   3 |    TABLE ACCESS FULL| T1   |      1 |   1000 |   1000 |
|   4 |    TABLE ACCESS FULL| T2   |      1 |   1000 |   1000 |
---------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("T1"."N1"="T2"."N1")
       filter("T1"."N2">"T2"."N2")

Note
-----
   - dynamic statistics used: dynamic sampling (level=AUTO)

Notice how the join cardinality is now very accurate (and as a consequence we switched to HJ).
From the 10053 and 10046 (I wrapped the 10046 text around to avoid a long scrollbar)

Join Card:  9800.000000 = outer (1000.000000) * inner (1000.000000) * sel (0.009800)
Join cardinality for HJ/SMJ (no post filters):  20000.000000, outer: 1000.000000, inner: 1000.000000, sel: 0.009800
>> Join Card adjusted from 9800.000000 to 1.000000 due to adaptive dynamic sampling, prelen=2
Adjusted Join Cards: adjRatio=0.000102 cardHjSmj=1.000000 cardHjSmjNPF=2.040816 cardNlj=1.000000 cardNSQ=1.000000 cardNSQ_na=9800.000000
Join Card - Rounded: 1 Computed: 1.000000


PARSING IN CURSOR #140546644531144 len=332 dep=1 uid=118 oct=3 lid=118 tim=173271358375 hv=1119371978 ad='c6e539a0' sqlid='f814kux1bhhqa'
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel result_cache(snapshot=3600) */ 
SUM(C1) FROM (SELECT /*+ qb_name("innerQuery") NO_INDEX_FFS( "T1#1")  */ 1 AS C1 
FROM "T2" "T2#0", "T1" "T1#1" WHERE ("T1#1"."N1"="T2#0"."N1") AND ("T1#1"."N2">"T2#0"."N2")) innerQuery

Notice how the recursive DS SQL joins the two tables together, this wasn’t possible before.
At the same time we want to avoid triggering those SQLs over and over so the result is also cached

SELECT id, type, creation_timestamp, block_count, column_count, pin_count, row_count, name
  FROM V$RESULT_CACHE_OBJECTS
 WHERE cache_id = 'b1ytyjnksv3up3yjtn35bq96jx';  2    3  

	    ID TYPE       CREATION_ BLOCK_COUNT COLUMN_COUNT  PIN_COUNT  ROW_COUNT NAME
---------- ---------- --------- ----------- ------------ ---------- ---------- --------------------------------------------------------------------------------------------------------------------------------
	     4 Result     03-DEC-14 	      1	           1	      0	         1 SELECT /* DS_SVC */ /*+ cursor_sharing_exact dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) re

The previous test shows how the new DS can help in more complex scenarios (joins, group by, etc.) than before, but at the same time it means more time could potentially be spent on DS. For this reason the recursive SQL gets a limited amount of time, after which the execution aborts, as well as a default cap on the number of tables to join.
Those conditions as well as the result cache usage can be configured playing with the following parameters

NAME                             DESCRIPTION                                     SESSION_VALUE
-------------------------------- ----------------------------------------------- -------------
_optimizer_ads_max_table_count   maximum number of tables in a join under ADS    0
_optimizer_ads_time_limit        maximum time limit (seconds) under ADS          0
_optimizer_ads_use_result_cache  use result cache for ADS queries                TRUE
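
If you want to play with those limits, something like the following should work at session level (just a sketch: these are hidden parameters, I'm assuming they are session-modifiable like most optimizer parameters, and the values are arbitrary, so keep this to a sandbox):

SQL> alter session set "_optimizer_ads_time_limit" = 30;
SQL> alter session set "_optimizer_ads_max_table_count" = 4;
SQL> alter session set "_optimizer_ads_use_result_cache" = false;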

Adaptive Plans can help avoid picking an incorrect join method, but the join order cannot be changed at runtime; getting a more accurate estimation at parse time can instead encourage a more efficient join order, and that's why I like the new DS so much.

IMHO this new feature has a lot of potential because it helps in addressing exactly those areas (joins, group by) where it’s particularly challenging for the CBO to get consistently accurate estimations.