Tim Hall inspired me with his #ThanksOTN (or “OTN Appreciation Day”) campaign to continue my mini-series on leveraging SQL Developer Reports for DBA tasks. Today: visualizing historical System Statistics from Statspack for performance analysis.
#ThanksOTN I have three favourite Oracle features to rave about today:
- Analytic SQL
- System Statistics
- SQL Developer (particularly the reports)!
As the name implies, Oracle’s System Statistics are a collection of metrics that describe what the system (in this case, the RDBMS) has done so far, such as executing SQL or fetching data blocks from disk. They are mostly counters that start from zero at instance startup.
As of Oracle 18.104.22.168, there are 1178 of them, so it’s hard to visualize them all at once. :-) Let me start with my personal picks for most of my performance analysis cases:
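The exact count varies by version and patch level; if you want to check on your own system, the statistic names live in the dynamic view v$statname, for instance:

```sql
-- Count the system statistics known to this instance
SELECT COUNT(*) AS stat_count
  FROM v$statname;
```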
The metrics I picked are all normalized to per second counts within the snapshot period. Short overview:
- execute count: Number of SQL executions
- parse count: Number of SQL statements parsed; the ratio of executes to parses gives a first indication of the efficiency of the applications running on that DB. The absolute number correlates with your system’s CPU load, because parsing is a CPU-intensive task.
- recursive calls: So-called “background SQL” that is executed to support user SQL.
- user calls: The number of user calls into the Oracle kernel. The relation to recursive calls can give another efficiency indication, as can the relation to the execute count.
- user commits: self-explanatory
- user rollbacks: self-explanatory; together with commits it gives me a picture about the transactional activity of the applications.
- user logons cumulative: self-explanatory; useful to discover “logon storms”, for instance.
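Before building the graph, it can help to peek at the current cumulative values of exactly these counters; a minimal sketch against v$sysstat, the dynamic view the Statspack snapshots are taken from:

```sql
-- Current cumulative values since instance startup
SELECT name, value
  FROM v$sysstat
 WHERE name IN ( 'execute count', 'parse count (total)'
               , 'recursive calls', 'user calls'
               , 'user commits', 'user rollbacks'
               , 'user logons cumulative' )
 ORDER BY name;
```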
This graph comes from a production VM running on two Xeon cores that sometimes shows degraded response times. I found a few points remarkable here:
- Just two cores can handle over 2,600 parses per second and over 3,300 SQL executes/s. That doesn’t seem too bad!
- The high number of parses seems to be the cause of high CPU load. In relation to the executes this looks like an inefficient application that doesn’t use bind variables.
- Sometimes there are far more rollbacks than commits. Although not necessarily linked to performance, that’s something I find interesting as I don’t see such a signature usually.
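Whether the parse load really points to missing bind variables can be narrowed down by comparing total and hard parses; a quick sketch (both statistic names are documented for v$sysstat):

```sql
-- A high share of hard parses hints at missing bind variables
SELECT name, value
  FROM v$sysstat
 WHERE name IN ( 'parse count (total)', 'parse count (hard)' );
```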
Defining The Report
From the first article in this mini-series you already know how to build a report. I don’t want to bore you, but for first-time readers I’ll repeat the steps here.
Create a new report (in the “Reports” pane, right-click “User Defined Reports” and choose “New Report”) and set the initial properties for the report – choose “Graph” as the report type.
In the “SQL” text area, enter the following:
```sql
-- Get System Statistics from Statspack
WITH snaps AS (
  SELECT sn.snap_id
       , sn.snap_time
       , ROUND( ( sn.snap_time - LAG( sn.snap_time, 1 )
                    OVER( PARTITION BY sn.instance_number
                          ORDER BY sn.snap_id ) ) * 24 * 3600, 2 ) AS time_diff
       , sn.instance_number
       , sn.dbid
    FROM stats$snapshot sn
   WHERE sn.instance_number = ( SELECT instance_number FROM v$instance )
)
SELECT TO_CHAR( snap_time, 'dd.mm. hh24:mi' ) AS time
     , stat_name
     , ROUND( value_diff / time_diff, 2 ) AS per_sec
  FROM (
    SELECT sn.snap_id
         , ss.value
         , ss.value - LAG( ss.value, 1 )
             OVER( PARTITION BY ss.name ORDER BY ss.snap_id ) AS value_diff
         , sn.time_diff
         , sn.snap_time
         , ss.name AS stat_name
      FROM stats$sysstat ss
         , snaps sn
     WHERE ss.name IN ( 'user commits', 'user rollbacks', 'user calls'
                      , 'recursive calls', 'user logons cumulative'
                      , 'execute count', 'parse count (total)' )
       AND sn.snap_id = ss.snap_id
       AND ss.dbid = sn.dbid
       AND ss.instance_number = sn.instance_number
  )
 ORDER BY snap_time, stat_name
```
With the SQL entered, choose an appropriate graph type. In our case, all metrics can be displayed as separate lines, so choose “Line” as the graph type.
Under “Property” – “Y-Axis”, check “Logarithmic Scale”. The metrics differ by several orders of magnitude; a logarithmic display keeps the smaller values visible alongside the large ones.
I mentioned this as number 1 on today’s favourite list. Why? Because in my humble opinion, analytic functions are still a widely underestimated feature!
For instance, in times before analytic SQL you had to query the snapshot tables twice or – in the following, inefficient example – three times to calculate the difference between values of different snapshots:
```sql
SELECT sn.snap_id
     , sn2.snap_time - sn.snap_time AS time_diff
  FROM stats$snapshot sn
     , stats$snapshot sn2
 WHERE sn2.snap_id = ( SELECT MIN( snap_id )
                         FROM stats$snapshot sn3
                        WHERE sn3.snap_id > sn.snap_id )
 ORDER BY sn.snap_id;
```
Finding a previous value can now be done with LAG() over a rowset that is sorted by snap_id (using OVER). Piece of cake! :-)
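Applied to the self-join example above, the analytic version reads the table only once; the per-snapshot time differences come out of a single pass:

```sql
-- Difference to the previous snapshot, computed in one pass
SELECT snap_id
     , snap_time - LAG( snap_time ) OVER( ORDER BY snap_id ) AS time_diff
  FROM stats$snapshot
 ORDER BY snap_id;
```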
And that wraps up today’s favourite Oracle features. There are so many more that I surely can’t wait another year until the next OTN Appreciation Day comes!
Get this report from my SQL Developer Repository on GitHub. In case you don’t want to walk through this article, feel free to download the report from there. If the single report doesn’t import correctly, try downloading the whole ZIP and importing it from there. And if you’d like to add some enhancements to this report, feel free to contribute!
Do you have questions or suggestions? Then let me know in the comments section below!
“Our favourite feature is system statistics … system statistics and wait events … wait events and system statistics … our TWO favourite features are system statistics and wait events … and ruthless use of SQL Developer reporting … Our THREE favourite features are system statistics, wait events, ruthless use of SQL Developer reporting and an almost fanatical devotion to Statspack … Our FOUR … no … amongst our favourite features are such elements as system statistics, wait events … I’ll come in again.”