Mauro Pagano's Blog



Quick and dirty Pathfinder (or SQLT XPLORE)

The other day I was working on a SQL with an odd plan (JPPD with the pushed predicate not on the driver table inside the view) when, as a test, I flipped OFE back one version and got the plan I was expecting. This is (one of) the typical use case(s) for Pathfinder or SQLT XPLORE.

I didn't have a reproducible testcase, and while creating one is always a good thing (IMHO), I was working in a lower environment, which gave me a degree of freedom in testing things. I knew exactly which releases I was going between (12.1.0.1 -> 12.1.0.2) so I wrote a few lines of PL/SQL to implement a smaller Pathfinder. The idea was to let it run while I was working on a testcase to emulate the problem (and maybe run full-blown Pathfinder on it).

The goal of this post isn't really to provide a working set of files but rather to give you an idea of what those tools do, so that you have a better understanding of what to expect from them 🙂
Here is the code; comments later.

set serveroutput on timing off feed off veri off
spo driver.sql
exec dbms_output.put_line('set echo on');
begin

  for i in (select bugno fix_control, CASE WHEN value = 0 THEN 1 ELSE 0 END new_value
              from v$system_fix_control
             where optimizer_feature_enable = '&&1.'
               and value in (0,1)) LOOP

        dbms_output.put_line('CONN <>/<>@<>');
        dbms_output.put_line(q'[ALTER SESSION SET "_fix_control" = ']'||i.fix_control||':'||i.new_value||''';');
        dbms_output.put_line('@q');
        dbms_output.put_line('@x');

   end loop;

end;
/
spo off
spo result.txt
@driver.sql
spo off

The idea is simple (and partially flawed): iterate over the fixes that were introduced in the version you are testing on and write a "test" for each of them, one at a time. Two support scripts are needed: q.sql, which includes the SQL to test (with binds if needed), and x.sql, which extracts the plan via DBMS_XPLAN.DISPLAY_CURSOR (a minimal sketch of both is below).
The flaw arises from the fact that the SQL does not consider changes delivered under parameters (usually features rather than fixes), and it ignores those fixes that are configurable beyond a simple ON/OFF. Also, there are some fixes with questionable versions tracked under OFE, which screws up the test as well.
The conditions just explained make up the dirty part 🙂
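
For reference, the two support scripts can be as simple as the following (a minimal sketch; the SQL in q.sql is obviously just a placeholder for the statement you want to test):

-- q.sql, the statement to test (define/bind variables here if needed)
select /* pathfinder_lite */ count(*) from dba_objects;

-- x.sql, pulls the plan of the last execution from the cursor cache
-- (serveroutput must be OFF at this point, otherwise DISPLAY_CURSOR(null, null)
--  would target the DBMS_OUTPUT call instead of our SQL)
set serveroutput off
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));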

The PL/SQL block above creates another script, driver.sql, which is then executed. Between 12.1.0.1 and 12.1.0.2 there were 107 such fixes.
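
In case you want to verify that number on your own system, the count comes straight from the same view the driver iterates over (a quick sketch, with the same VALUE filter as above):

-- fixes introduced in 12.1.0.2 that are simple ON/OFF switches
select count(*)
  from v$system_fix_control
 where optimizer_feature_enable = '12.1.0.2'
   and value in (0,1);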

The CONNECT over and over is overkill in this case, but experience with XPLORE and Pathfinder taught me there are some parameter changes that don't trigger a new parse (even though they should) or that don't get completely "cleaned up" when set back (what I just described isn't very accurate though, so I wouldn't mind somebody scolding me for it!).

Pathfinder does something very similar, except it iterates over every CBO-related parameter plus all the fix controls, thus executing a much larger number of tests. On top of that, it takes Cardinality Feedback into account, executing the SQL multiple times until the plan stops changing (assuming it changes).
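
Just to give an idea of the concept (a sketch, not how Pathfinder is actually implemented), extending the generator above to the boolean CBO parameters could look something like this, driven by V$SYS_OPTIMIZER_ENV:

begin
  -- flip every boolean optimizer parameter, one at a time
  -- (a few entries in this view aren't real session-settable parameters,
  --  so some of the generated ALTERs may fail and need to be filtered out)
  for i in (select name,
                   case when value = 'true' then 'false' else 'true' end new_value
              from v$sys_optimizer_env
             where value in ('true','false')) loop
    dbms_output.put_line('CONN <>/<>@<>');
    dbms_output.put_line('ALTER SESSION SET "'||i.name||'" = '||i.new_value||';');
    dbms_output.put_line('@q');
    dbms_output.put_line('@x');
  end loop;
end;
/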

Feedback, corrections, etc. welcome as usual.

Oh btw, those few lines of code helped me find the fix I was after 😀



Bushy Joins – a closer look

When 12.2 came out, most of the (optimizer) focus was around SPD and how to avoid the challenges from 12.1. Still, 12.2 introduced several (less acclaimed) optimizations, including the "Bushy Join" transformation, which is interesting since (I think, corrections welcome) the Bushy Join concept isn't necessarily tied to query transformation in general, especially before 12.2 (some references about "manual" bushy joins here and here) or in other RDBMS (a manual example on SQL Server here).
Anyway, query transformations being the CBO way of improving our code, here we go again.

There isn't much on the internet about Bushy Joins and 12.2 besides this article, so I decided to take a closer look. All the tests are from a vanilla 12.2 installation with bushy joins enabled:

SQL> @hparam bushy
NAME                              DESCRIPTION                         SESSION_VA
--------------------------------- ----------------------------------- ----------
_optimizer_bushy_cost_factor      cost factor for bushy join          100       
_optimizer_bushy_fact_dim_ratio   bushy join dimension to fact ratio  20        
_optimizer_bushy_fact_min_size    minimumm fact size for bushy join   100000    
_optimizer_bushy_join             enables bushy join                  ON        
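
(@hparam is just a helper script of mine; a rough equivalent, querying the X$ tables directly and therefore requiring SYS access, could look like the following sketch.)

-- hidden parameter name, description and session value
select i.ksppinm name, i.ksppdesc description, v.ksppstvl session_value
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and lower(i.ksppinm) like '%bushy%';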

The DDL to create the objects is the following:

create table f1 as select a.* from dba_objects a, (select 1 from dual connect by rownum <= 2);
create table d1 as select object_id, object_type from dba_objects;
create table f2 as select a.* from dba_objects a, (select 1 from dual connect by rownum <= 2);
create table d2 as select object_id, object_type from dba_objects;

exec dbms_stats.gather_table_stats(user,'F1');
exec dbms_stats.gather_table_stats(user,'D1');
exec dbms_stats.gather_table_stats(user,'D2');
exec dbms_stats.gather_table_stats(user,'F2');

create index f1_idx1 on f1(object_id);
create index f2_idx1 on f2(object_id);
create index f1_idx2 on f1(object_type);
create index f2_idx2 on f2(object_type);

select table_name, num_rows from user_tables where table_name like 'F_' or table_name like 'D_';

TABLE_NAME      NUM_ROWS
------------- ----------
F1                147200
D1                 73601
F2                147204
D2                 73603

The DUAL to duplicate the number of rows in DBA_OBJECTS is just to have more than 100k rows in the two fact tables F1 and F2 (also, the *_IDX1 indexes are never used in my examples, but since I created them, in the spirit of full disclosure I included them).

select f1.*, f2.* 
  from f1, f2, d1, d2 
 where f1.object_type = f2.object_type 
   and d1.object_type = f1.object_type 
   and f2.object_type = d2.object_type 
   and d1.object_id = 123 
   and d2.object_id = 456;

---------------------------------------------------------------------
| Id |Operation                      |Name             | Rows | Cost|
---------------------------------------------------------------------
|   0|SELECT STATEMENT               |                 |  208K|  414|
|*  1| HASH JOIN                     |                 |  208K|  414|
|   2|  NESTED LOOPS                 |                 | 3132 |  207|
|   3|   NESTED LOOPS                |                 | 3132 |  207|
|*  4|    TABLE ACCESS FULL          |D1               |    1 |   58|
|*  5|    INDEX RANGE SCAN           |F1_IDX2          | 3132 |    9|
|   6|   TABLE ACCESS BY INDEX ROWID |F1               | 3132 |  148|
|   7|  VIEW                         |VW_BUSHY_A9E4AA31| 3132 |  207|
|   8|   NESTED LOOPS                |                 | 3132 |  207|
|   9|    NESTED LOOPS               |                 | 3132 |  207|
|* 10|     TABLE ACCESS FULL         |D2               |    1 |   58|
|* 11|     INDEX RANGE SCAN          |F2_IDX2          | 3132 |    9|
|  12|    TABLE ACCESS BY INDEX ROWID|F2               | 3132 |  148|
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("F1"."OBJECT_TYPE"="ITEM_1")
   4 - filter("D1"."OBJECT_ID"=123)
   5 - access("D1"."OBJECT_TYPE"="F1"."OBJECT_TYPE")
  10 - filter("D2"."OBJECT_ID"=456)
  11 - access("F2"."OBJECT_TYPE"="D2"."OBJECT_TYPE")

From the execution plan, D1 and F1 are joined together first and the result is then joined with [the result of the join between D2 and F2].
In this case this is a good idea, since otherwise the two large fact tables would be joined together without any dimension in between; the filtering introduced by the two dimensions could thus not be applied before a large (and filtered) fact table is joined with another large (and not filtered) fact table. Btw, what I just said is a bit imprecise, since a merge join cartesian between the two dimensions wouldn't be too bad an idea in this case (it's actually what the CBO does once Bushy Joins are disabled, as in the sketch below).
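
If you want to see that merge join cartesian plan for yourself, the transformation can be switched off at session level. A minimal sketch, assuming the parameter accepts OFF (matching the ON shown in the listing above):

-- switch Bushy Joins off for the session, then re-run the query above
alter session set "_optimizer_bushy_join" = off;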

Let's take a look under the hood, aka the 10053 trace 🙂
As mentioned before, Bushy Joins is a CBO transformation, plugged into the CBQT framework.
For some more details about the whole transformation stuff you can look here.
(Notice I skipped a few lines in the output just to make it shorter; the skipped content was either blank or not important here.)
The first interesting bit is that the query block is copied (line 1066); it's nothing new, but it's worth noticing. Also interesting are lines 1074 and 1075, with the same message and a bit of a grammar puzzle.

 1064 BJ: Checking validity for bushy join for query block SEL$1 (#1)
 1066 Registered qb: SEL$1 0x22686940 (COPY SEL$1)
 1071 ****************************************
 1072  Cost-Based Bushy Join
 1073 ****************************************
 1074 BJ: Checking validity of bushy join for query block SEL$1 (#1)
 1075 BJ: Checking validity for bushy join for query block SEL$1 (#1)

In this case a linear search type (details here) is used for the Bushy Joins, starting from the transformation NOT applied. Notice all four tables are involved at this point (0,0,0,0) and they don't seem to be "qualified" yet; also, this is a bit different than usual, since the four placeholders here seem to stand for tables and not for query blocks (as in other transformations, for example subquery unnesting).
The interesting part is that at the end of the costing (physical optimizer) the tables are "identified" as dimensions and facts, including validation of stats and structure (the "sructure" typo isn't mine 😛 ).

 1082 BJ: Starting iteration 1, state space = (0,0,0,0) : (0,0,0,0)
 1083 BJ: Original query
 (physical optimizer here)
 3787 BJ: fact stats valid: F2 [F2]
 3788 BJ: dim stats valid: D2 [D2]
 3789 BJ: dim structure valid: D2 [D2]
 3790 BJ: fact sructure valid: F2 [F2]
 3791 BJ: fact stats valid: F1 [F1]
 3792 BJ: dim stats valid: D1 [D1]
 3793 BJ: dim structure valid: D1 [D1]
 3794 BJ: fact sructure valid: F1 [F1]
 3798 BJ: Updated best state, Cost = 1063.107718

At this point something interesting happens: the CBQT framework (my guess here, easily wrong) starts to focus only on a subset of the objects to consider for the search space / transformation, notice the (4,3).
I couldn't reliably match those 4 and 3 with anything in the 10053 (they aren't query block numbers, since there is only one starting query block, SEL$1) but an educated guess is that 4 is the "subtree D1,F1" while 3 is the "subtree D2,F2".

 3799 BJ: Starting iteration 2, state space = (4,3) : (0,1)
 3800 Registered qb: SEL$A9E4AA31 0x22425a70 (QUERY BLOCK TABLES CHANGED SEL$1)
 3804   signature (): qb_name=SEL$A9E4AA31 nbfros=3 flg=0
 3805     fro(0): flg=0 objn=83772 hint_alias="D1"@"SEL$1"
 3806     fro(1): flg=0 objn=83771 hint_alias="F1"@"SEL$1"
 3807     fro(2): flg=5 objn=0 hint_alias="VW_BUSHY_A9E4AA31"@"SEL$A9E4AA31"
 3808
 3809 Registered qb: SEL$9F959E4D 0x22420780 (SPLIT/MERGE QUERY BLOCKS SEL$A9E4AA31)
 3813   signature (): qb_name=SEL$9F959E4D nbfros=2 flg=0
 3814     fro(0): flg=0 objn=83774 hint_alias="D2"@"SEL$1"
 3815     fro(1): flg=0 objn=83773 hint_alias="F2"@"SEL$1"
 3816
 3817 Registered qb: SEL$F04E7C56 0x22425a70 (QUERY BLOCK HAS BUSHY JOIN SEL$1; SEL$1; LIST)
 3821   signature (): qb_name=SEL$F04E7C56 nbfros=3 flg=0
 3822     fro(0): flg=0 objn=83772 hint_alias="D1"@"SEL$1"
 3823     fro(1): flg=0 objn=83771 hint_alias="F1"@"SEL$1"
 3824     fro(2): flg=1 objn=0 hint_alias="VW_BUSHY_A9E4AA31"@"SEL$A9E4AA31"

and the transformed SQL becomes the following (after reintroducing the crappy "select *" I have in my SQL and doing a little formatting), which is nothing surprising if you looked at the previously linked articles.

 3828 SELECT *  
        FROM  (SELECT * 
                 FROM "MPAGANO"."F2" "F2",
                      "MPAGANO"."D2" "D2" 
                WHERE "D2"."OBJECT_ID"=456 
                  AND "F2"."OBJECT_TYPE"="D2"."OBJECT_TYPE") "VW_BUSHY_A9E4AA31",
              "MPAGANO"."F1" "F1",
              "MPAGANO"."D1" "D1" 
       WHERE "F1"."OBJECT_TYPE"="VW_BUSHY_A9E4AA31"."ITEM_1" 
         AND "D1"."OBJECT_TYPE"="F1"."OBJECT_TYPE" 
         AND "D1"."OBJECT_ID"=123

Then the classic approach of any other transformation under CBQT is used (how cool is that? 🙂 ); some more details on how to interpret all this are in the link provided above.

 4617 BJ: Updated best state, Cost = 413.807609
 4618 BJ: Starting iteration 3, state space = (4,3) : (1,0)
 5430 BJ: Not update best state, Cost = 2148.874633
 5431 BJ: Starting iteration 4, state space = (4,3) : (1,1)
 6182 BJ: Not update best state, Cost = 560.862165
 6183 BJ: transformed final query

So the Bushy Join transformation will be applied, "grouping" together just F2 and D2 (that is, (4,3) = (0,1)) since its cost, 413, was the lowest.
Just for the sake of completeness, here is the transformed SQL (formatted, and with "*" injected again to shorten it) when both "subtrees" go under Bushy Joins.

 5458 SELECT * 
        FROM  (SELECT * 
                 FROM "MPAGANO"."F1" "F1",
                      "MPAGANO"."D1" "D1" 
                WHERE "D1"."OBJECT_ID"=123 
                  AND "D1"."OBJECT_TYPE"="F1"."OBJECT_TYPE") "VW_BUSHY_B144F3C9", 
              (SELECT * 
                 FROM "MPAGANO"."F2" "F2",
                      "MPAGANO"."D2" "D2" 
                WHERE "D2"."OBJECT_ID"=456 
                  AND "F2"."OBJECT_TYPE"="D2"."OBJECT_TYPE") "VW_BUSHY_A9E4AA31" 
       WHERE "VW_BUSHY_B144F3C9"."ITEM_1"="VW_BUSHY_A9E4AA31"."ITEM_1"

A few other things worth mentioning below.

From the 10053 it seems Star Transformation is an interleaved transformation for Bushy Joins, even though in my case it was never considered, likely due to the simple nature of my SQL.

The transformation can be controlled by the BUSHY_JOIN hint and the syntax is (surprisingly) pretty trivial for a simple SQL statement. You provide, within parentheses, the grouping you want Oracle to apply as the parameter of the hint. For example, in order to force the transformation on both F2,D2 and F1,D1 I can use BUSHY_JOIN((d1 f1) (d2 f2)), as shown below. Extended and more accurate syntax (including target query block as well as source query block for each table) would be better, but still this is much easier than writing a CONCAT hint 🙂
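
Applied to the test query from before, the trivial form looks like this:

select /*+ BUSHY_JOIN((d1 f1) (d2 f2)) */ f1.*, f2.*
  from f1, f2, d1, d2
 where f1.object_type = f2.object_type
   and d1.object_type = f1.object_type
   and f2.object_type = d2.object_type
   and d1.object_id = 123
   and d2.object_id = 456;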

There are several "heuristics" that determine whether Bushy Joins are considered, for example the size of the fact table (100k rows according to the parameters above, just guessing here), the number of tables the fact needs to be joined to (at least another 2), the size ratio between dimensions and fact, etc. I don't have a complete list, but I assume the 10053 would list the reasoning for ignoring Bushy Joins for your SQL (as it did in my case when I looked into this transformation the first time).

There is a lot of guessing in this blog post so if anybody has any correction please let me know and I’ll be happy to make adjustments (and very happy to learn!).
I'm not sure why the feature is disabled by default; maybe it's not super-solid yet, but I think it can have a nice impact on specific cases.



SQLTXPLAIN vs SQLd360, differences and similarities

When talking about tools to help with SQL Tuning, the question of SQLTXPLAIN vs SQLd360 comes up very often. "What's the difference?", "Which one should I use?", "Why another tool?" are probably the top ones 🙂

I'll try to make a fair and (as much as possible) unbiased comparison between the two on some general aspects. If you want to know about any specific area, just let me know in the comments.

Installation
  SQLTXPLAIN: Requires installation. Creates two schemas that serve as metadata and code repository; they need to be present every time the tool is executed.
  SQLd360: No installation; at the end of the execution no evidence is left behind.

Source code
  SQLTXPLAIN: Mainly PL/SQL, unwrapped (clear text). Code is Oracle property.
  SQLd360: SQL scripts, clear text, true open source released under GPL. Code available on GitHub.

Main goal
  SQLTXPLAIN: Automate every aspect of diagnostic collection for SQL tuning.
  SQLd360: Same as SQLT, but less "obsessive" 🙂

Main focus
  SQLTXPLAIN: Execution plan generation, why the CBO generated plan X, Y, Z (includes custom testcase, very flexible).
  SQLd360: Execution plan generation (but a little less than SQLT) and how plans execute at runtime.

Main advantage
  SQLTXPLAIN: Includes every bit of info needed to drill into CBO decisions, making it unlikely you'll need to go back and collect anything else.
  SQLd360: Makes diagnostic consumption faster and easier because of data visualization; lots of focus on runtime.

Main disadvantage
  SQLTXPLAIN: Requires installation. Not much focus on execution time ("same plan, different performance" cases are harder to deal with).
  SQLd360: Might miss some information under rare circumstances.

Outside the comparison table, what we use today: SQLd360
Reason(s):

  1. No installation.
  2. Root Cause Analysis of CBO decisions usually belongs to Oracle Support (for the most part); SQL Tuning in the real world focuses on easier aspects of plan generation, hence less need for a strong focus on it.
  3. Performance data available in Oracle grows with each release; charting allows quick and far more effective consumption of large amounts of such data.



Simple script to remove system-generated column groups

There seems to be a lot of interest (at least on Twitter and at OUG conferences) in Oracle's recommendation to install a couple of patches on top of 12.1.0.2, in order to emulate 12.2 behavior when it comes to SQL Plan Directives (details here, MOS account needed).
One of the things SQL Plan Directives do is trigger column group (CG) creation.
Column groups are virtual columns representing a hash (SYS_OP_COMBINED_HASH) of the multiple table columns they are defined on (that's why only equality conditions can be satisfied by a CG) and they have an ugly, long, system-generated name. According to the DECODE in ALL_STAT_EXTENSIONS, user-generated CGs get a SYS_STU prefix in the name while system-generated ones get SYS_STS. As far as I could tell, only SPD-triggered CGs are named SYS_STS% (corrections are very welcome here); even those created as a consequence of using DBMS_STATS.SEED_COL_USAGE have SYS_STU% names.
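
As a quick illustration (a minimal sketch; T1 and its columns are made-up names here), this is how a user-generated CG is created and how it shows up in the dictionary:

-- create a user-generated CG, its name will get the SYS_STU prefix
select dbms_stats.create_extended_stats(user, 'T1', '(col1, col2)') from dual;

-- list the extensions on the table
select extension_name, extension
  from user_stat_extensions
 where table_name = 'T1';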

The other day somebody asked how to remove those CGs in case one wanted to "start fresh" after applying the mentioned 12.1.0.2 patches. I wrote a little script that was by no means intended to be exhaustive (or fully tested) but was a good way to get started removing SPD-triggered CGs, so I figured I would share it; the script starts from the assumption that only SPD-triggered CGs have a SYS_STS% name.
The script does NOT remove the CGs by itself; it just creates another script that includes the DROPs, so that you can read, digest and only then execute it manually. Another script is created as well, to put the CGs back in place (just the definition, no stats are gathered) in case some are indeed needed. A little side effect is that, since you put them back in place manually, the names become SYS_STU% and not SYS_STS%.

Code below

PRO
PRO usage @drop_extended_stats.sql connected as the user that owns the table
PRO and pass the table name when requested.
PRO
DEF current_table = '&&table_name.'
SET SERVEROUTPUT ON VERI OFF FEED OFF TIMING OFF
SPO drop_extended_stats_&&current_table._driver.sql
BEGIN
 FOR i IN (SELECT extension_name, extension
             FROM user_stat_extensions
            WHERE table_name = UPPER('&&current_table.')
              AND extension_name LIKE 'SYS_STS%' -- SYS_STS are system generated from the DECODE in ALL_STAT_EXTENSIONS
           ) LOOP

   DBMS_OUTPUT.PUT_LINE('-- Dropping extension on '||i.extension);
   DBMS_OUTPUT.PUT_LINE('exec DBMS_STATS.DROP_EXTENDED_STATS(user, ''&&current_table.'', '''||i.extension||''');');

 END LOOP;

END;
/
SPO OFF
SPO create_extended_stats_&&current_table._driver.sql
BEGIN
 FOR i IN (SELECT extension_name, extension
             FROM user_stat_extensions
            WHERE table_name = UPPER('&&current_table.')
              AND extension_name LIKE 'SYS_STS%' -- SYS_STS are system generated from the DECODE in ALL_STAT_EXTENSIONS
           ) LOOP

   DBMS_OUTPUT.PUT_LINE('-- Creating extension on '||i.extension);
   DBMS_OUTPUT.PUT_LINE('SELECT DBMS_STATS.CREATE_EXTENDED_STATS(user, ''&&current_table.'', '''||i.extension||''') FROM dual;');

 END LOOP;

END;
/
SPO OFF
SET SERVEROUTPUT OFF VERI ON FEED ON TIMING ON
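
For example, against a (hypothetical) table T1 with one SPD-triggered CG on (COL1, COL2), the generated drop driver would contain something along these lines:

-- drop_extended_stats_T1_driver.sql
-- Dropping extension on ("COL1","COL2")
exec DBMS_STATS.DROP_EXTENDED_STATS(user, 'T1', '("COL1","COL2")');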

UPDATE: just learned about this MOS note



SQL Monitoring, Flamegraph and Execution Plan Temperature 2.0

Two of the things that I like the most about SQL Monitoring reports are the ability to quickly spot where in the execution plan the time is spent (Activity% column, thank you ASH) and the fact that you can collapse parts of the plan. Too bad the two don't "work" together, meaning if you collapse a part of the plan the Activity% is not rolled up at the collapsed level. I understand why it works that way (it might be confusing otherwise) but I'd still like to be able to collapse a node and get a "subtree Activity%" so I know whether that subtree is something I should be worried about or not (kind of…).

Pause that for a second; Flamegraph, on the other hand, is really good at showing how much (in %) a "subtree" (function + its callees) accounts for overall. Luca Canali did an (as usual) awesome job here with something similar, but a bit more sophisticated than just execution plans. Unfortunately, the fact that the elements are not sorted on time (I think most "Oracle people" tend to expect time on the X-axis), combined with the kind of random color used for each bar, makes this representation a little less intuitive than I'd like it to be for execution plans (don't get me wrong, I still love this representation a lot!).

SQLd360 has represented execution plans as a tree since one of its first releases, and for over a year the color of the nodes has represented a heatmap of where time has been spent (same as Activity% in SQL Monitoring); starting a couple of releases ago there are a couple of improvements that I hope (I'd like to hear your feedback on this) fixed the shortcomings of both SQL Monitoring and Flamegraph when it comes to digesting execution plans.
Each step in the execution plan can be collapsed (this has been there since day 1) and now its color will change to represent the color the node would have had if all the time the SQL spent on the node itself plus its child steps had been spent on the node itself; on expand, the colors go back to the original.
Also, when mousing over the node, a new piece of info shows up in the popup: a "subtree impact" that represents exactly what % of time the node plus its children account for.

[Figure: plan tree node popup showing "impact" and "subtree impact"]

For example, from the picture above, step 16 accounted for 15.52% of the time, while the whole subtree under the node (including the node itself) accounted for 39.6% of the time. Expanding the node, the color will go back to a lighter shade of orange.

I think one of the main advantages of this approach is the ability to consume large execution plans quickly, collapsing large parts of them and focusing on just what matters, how those parts of the plan interact with each other, etc etc.

As usual, feedback, corrections and ideas are MORE than welcome! 🙂

NB: I rewrote the SQL that is under the tree representation to make it more readable, but just to make sure I didn't break the functionality, as of now there are two plan trees per plan (and per Top N exec); the one "with subtree" implements what was just described above. The old representation will go away down the road, when I'm sure the new SQL is solid enough.



Little help in creating SQL Plan Baselines

One of the main design goals behind SQLd360 is to have no installation and no "evidence" left in the database, i.e. there is no SQLd360 repository in the database, while there is a SQLTXPLAIN one (this isn't necessarily bad, it's just a different approach).

As a consequence, several little things SQLT provided are gone with SQLd360. For example, a few years ago SQLT generated a script to create a "custom" SQL Profile for the best performing plan it identified (it's been disabled by default for a while); something similar happened for SQL Plan Baselines, with plans stored in a SQL Tuning Set for quick implementation of baselines down the road.

There is a new standalone script shipped with SQLd360, sql/sqld360_create_sql_baseline.sql, that aims at providing the same SPM+STS functionality SQLT provided, while sticking to the idea of "no evidence left behind by default".
The script is NOT executed as part of SQLd360; it can (and needs to) be manually executed at any time, which in turn means there is no need to run SQLd360 for this script to work.
The script requires three parameters:

  • SQL ID for the SQL of interest
  • Oracle Pack available in this database ([N]one, [D]iagnostic or [T]uning)
  • Plan Hash Value you wish to “freeze”

You can provide the parameters directly inline to SQL*Plus (assuming you know the PHV you want to enforce) or just enter them one by one when prompted; the script will list all the PHVs it can find around (memory + history) with their performance, helping you in case you don't recall which PHV you want.

The script does just two things:

  1. It creates a SQL Tuning Set (named s_<<SQLID>>_<<PHV>>) and loads the plan details into it; the goal is to "freeze" the plan in time so the info doesn't get lost if the plan is aged out of memory, purged from AWR, etc.
  2. It provides copy&paste instructions to create a Baseline based on the plan in the STS, either in the current system or in a remote one (a sketch of the core step is below). The script DOES NOT execute such steps, it only prints them on screen. This way you can read, understand, digest and validate them before YOU execute them.
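
Just to give an idea of what the printed instructions boil down to (a sketch, assuming Tuning Pack, a local implementation, and made-up SQL ID / PHV values; the actual script prints the exact steps for your case):

-- invoked as: @sqld360_create_sql_baseline.sql abcd1234efgh T 5678
-- core of the printed instructions: load the plan from the STS into SPM
declare
  n pls_integer;
begin
  n := dbms_spm.load_plans_from_sqlset(
         sqlset_name  => 's_abcd1234efgh_5678',   -- s_<<SQLID>>_<<PHV>>
         basic_filter => 'plan_hash_value = 5678');
  dbms_output.put_line(n||' plan(s) loaded');
end;
/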

Hopefully it will make it a little bit easier to play with SPM.

Feedback, corrections and recommendations are welcome as usual!



Presentations on Slideshare

Every once in a while I get asked if I can email the PPT for a session that I delivered. I always say YES (of course), so I figured why not be proactive and upload the material for the presentations I delivered over the last several months. Under the "Pages" section on the right side of the page there is a new link, "Presentations", that takes you to Slideshare.

It's my first experience with Slideshare and I'm pretty sure I made mistakes along the way, so if you see something wrong just let me know (and let me know how to fix it, PLEASE 🙂 )

The list of presentations is probably incomplete, so if you attended one and see that I forgot to upload it, just let me know and I'll fix that. Also, every session comes with trace files / dumps / testcases built to support the investigations, but I found no easy way to upload them, so I'd still rely on the old "provided upon request, via email" approach for them.