- What is ASH?
- AWR and Activity Types
- Average Active Sessions (this page)
- Graphical Visualizations
In the first two chapters of this series I queried the ASH data with respect to distinct sample times. While this is helpful for maximizing the granularity of the analysis, it can sometimes be too much detail: too many data points for the eye to follow, or, if run through a graphical engine, so many elements that the resulting chart becomes slow to render. It’s also possible the resolution of the resulting chart is too low to display all of the points distinctly, making the abundance of data redundant and useless.
Furthermore, sometimes what you’re looking for isn’t a detailed play-by-play of all activity but a general picture. You know there will be peaks and valleys in the system activity, but how busy is it on average?
Any one sample might be unusually high or unusually low, yet over a wider view the overall activity within a given period may be exactly what you’d expect.
All of this leads to one of the most common aggregations across time with ASH data: Average Active Sessions (AAS). The idea behind the calculation is simple enough: count up the sessions for each second of sampling and then divide by the total number of seconds within the period, yielding the average number of active sessions during that interval of time. The math is equally simple for V$ACTIVE_SESSION_HISTORY because that view’s contents are sampled each second. DBA_HIST_ACTIVE_SESS_HISTORY, however, is only populated with samples once every 10 seconds, so creating an average requires extrapolating for the missing time slices by multiplying the counts you do have by 10.
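The extrapolation can be illustrated outside of SQL as well. Below is a minimal Python sketch of the same arithmetic, using hypothetical per-sample session counts; the factor of 10 accounts for AWR retaining only one sample every 10 seconds.

```python
from datetime import datetime

# Hypothetical AWR-style data: one retained sample every 10 seconds,
# mapping sample time -> number of active sessions seen in that sample.
samples = {
    datetime(2018, 9, 15, 13, 0, s): n
    for s, n in [(0, 5), (10, 6), (20, 4), (30, 5), (40, 6), (50, 4)]
}

# Each AWR sample stands in for 10 seconds of wall-clock time, so multiply
# the session-count total by 10 before dividing by the period length.
period_seconds = 60
aas = 10 * sum(samples.values()) / period_seconds
print(round(aas))  # prints 5: the average active sessions over the minute
```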
In the following example the active session counts are averaged over each minute (thus averaging over 60 seconds) within a half-hour span.
SQL> SELECT TRUNC(sample_time, 'mi') time,
            ROUND(10 * COUNT(*) / 60) avg_sess_per_minute
       FROM dba_hist_active_sess_history
      WHERE sample_time >= TIMESTAMP '2018-09-15 13:00:00'
        AND sample_time < TIMESTAMP '2018-09-15 13:30:00'
      GROUP BY TRUNC(sample_time, 'mi')
      ORDER BY 1;

TIME                 AVG_SESS_PER_MINUTE
2018-09-15 13:00:00                    5
2018-09-15 13:01:00                    5
2018-09-15 13:02:00                    3
2018-09-15 13:03:00                    3
2018-09-15 13:04:00                    5
2018-09-15 13:05:00                    7
2018-09-15 13:06:00                    5
2018-09-15 13:07:00                    4
2018-09-15 13:08:00                    3
2018-09-15 13:09:00                    5
2018-09-15 13:10:00                    4
2018-09-15 13:11:00                    4
2018-09-15 13:12:00                    7
2018-09-15 13:13:00                    3
2018-09-15 13:14:00                    3
2018-09-15 13:15:00                    5
2018-09-15 13:16:00                    8
2018-09-15 13:17:00                    6
2018-09-15 13:18:00                   10
2018-09-15 13:19:00                    6
2018-09-15 13:20:00                    9
2018-09-15 13:21:00                    6
2018-09-15 13:22:00                    6
2018-09-15 13:23:00                    4
2018-09-15 13:24:00                    3
2018-09-15 13:25:00                    3
2018-09-15 13:26:00                    4
2018-09-15 13:27:00                    4
2018-09-15 13:28:00                    7
2018-09-15 13:29:00                    3

30 rows selected.
Another approach is to count the sessions for each sample and then apply the averaging outside. This can be less efficient since it involves multiple aggregations, but it can be a simpler construction.
SQL> WITH
       sample_counts
       AS
         (  SELECT sample_time, COUNT(*) cnt
              FROM dba_hist_active_sess_history
             WHERE sample_time >= TIMESTAMP '2018-09-15 13:00:00'
               AND sample_time < TIMESTAMP '2018-09-15 13:30:00'
          GROUP BY sample_time)
       SELECT TRUNC(sample_time, 'mi'),
              ROUND(10 * SUM(cnt) / 60) avg_sess_per_minute
         FROM sample_counts
     GROUP BY TRUNC(sample_time, 'mi')
     ORDER BY 1;

TRUNC(SAMPLE_TIME,'MI')  AVG_SESS_PER_MINUTE
2018-09-15 13:00:00                        5
2018-09-15 13:01:00                        5
2018-09-15 13:02:00                        3
2018-09-15 13:03:00                        3
2018-09-15 13:04:00                        5
2018-09-15 13:05:00                        7
2018-09-15 13:06:00                        5
2018-09-15 13:07:00                        4
2018-09-15 13:08:00                        3
2018-09-15 13:09:00                        5
2018-09-15 13:10:00                        4
2018-09-15 13:11:00                        4
2018-09-15 13:12:00                        7
2018-09-15 13:13:00                        3
2018-09-15 13:14:00                        3
2018-09-15 13:15:00                        5
2018-09-15 13:16:00                        8
2018-09-15 13:17:00                        6
2018-09-15 13:18:00                       10
2018-09-15 13:19:00                        6
2018-09-15 13:20:00                        9
2018-09-15 13:21:00                        6
2018-09-15 13:22:00                        6
2018-09-15 13:23:00                        4
2018-09-15 13:24:00                        3
2018-09-15 13:25:00                        3
2018-09-15 13:26:00                        4
2018-09-15 13:27:00                        4
2018-09-15 13:28:00                        7
2018-09-15 13:29:00                        3

30 rows selected.
Next, we’ll extend the previous query by adding a maximum value. It’s important to remember that DBA_HIST_ACTIVE_SESS_HISTORY is a collection of samples of samples, so the maximum value seen within the retained samples could still be lower than the actual active-session peak your system experienced. While these results should not be taken as absolutes, they can still offer an expanded, approximate view of what was happening.
SQL> WITH
       sample_counts
       AS
         (  SELECT sample_time, COUNT(*) cnt
              FROM dba_hist_active_sess_history
             WHERE sample_time >= TIMESTAMP '2018-09-15 13:00:00'
               AND sample_time < TIMESTAMP '2018-09-15 13:30:00'
          GROUP BY sample_time)
       SELECT TRUNC(sample_time, 'mi'),
              ROUND(10 * SUM(cnt) / 60) avg_sess_per_minute,
              MAX(cnt) max_sess_per_minute
         FROM sample_counts
     GROUP BY TRUNC(sample_time, 'mi')
     ORDER BY 1;

TRUNC(SAMPLE_TIME,'MI')  AVG_SESS_PER_MINUTE  MAX_SESS_PER_MINUTE
2018-09-15 13:00:00                        5                    9
2018-09-15 13:01:00                        5                    9
2018-09-15 13:02:00                        3                    3
2018-09-15 13:03:00                        3                    6
2018-09-15 13:04:00                        5                    6
2018-09-15 13:05:00                        7                   24
2018-09-15 13:06:00                        5                    6
2018-09-15 13:07:00                        4                    6
2018-09-15 13:08:00                        3                    3
2018-09-15 13:09:00                        5                    6
2018-09-15 13:10:00                        4                    6
2018-09-15 13:11:00                        4                    6
2018-09-15 13:12:00                        7                   24
2018-09-15 13:13:00                        3                    3
2018-09-15 13:14:00                        3                    3
2018-09-15 13:15:00                        5                    9
2018-09-15 13:16:00                        8                   24
2018-09-15 13:17:00                        6                    6
2018-09-15 13:18:00                       10                   27
2018-09-15 13:19:00                        6                    9
2018-09-15 13:20:00                        9                   27
2018-09-15 13:21:00                        6                    9
2018-09-15 13:22:00                        6                    6
2018-09-15 13:23:00                        4                    9
2018-09-15 13:24:00                        3                    3
2018-09-15 13:25:00                        3                    3
2018-09-15 13:26:00                        4                    6
2018-09-15 13:27:00                        4                    6
2018-09-15 13:28:00                        7                   24
2018-09-15 13:29:00                        3                    3

30 rows selected.
As with previous queries, these numbers can then be turned into a text-based bar chart to visualize the system activity. Here the average active sessions are charted as a bar of stars, with the high-water mark of maximum sessions within each minute indicated by a bracket.
SQL> WITH
       sample_counts
       AS
         (  SELECT sample_time, COUNT(*) cnt
              FROM dba_hist_active_sess_history
             WHERE sample_time >= TIMESTAMP '2018-09-15 13:00:00'
               AND sample_time < TIMESTAMP '2018-09-15 13:30:00'
          GROUP BY sample_time)
     SELECT time,
            RPAD('*', avg_sess_per_minute, '*') ||
            LPAD(']', max_sess_per_minute - avg_sess_per_minute) chart2
       FROM (  SELECT TRUNC(sample_time, 'mi') time,
                      ROUND(10 * SUM(cnt) / 60) avg_sess_per_minute,
                      MAX(cnt) max_sess_per_minute
                 FROM sample_counts
             GROUP BY TRUNC(sample_time, 'mi'))
      ORDER BY 1;

TIME                 CHART2
2018-09-15 13:00:00  *****   ]
2018-09-15 13:01:00  *****   ]
2018-09-15 13:02:00  ***
2018-09-15 13:03:00  ***  ]
2018-09-15 13:04:00  *****]
2018-09-15 13:05:00  *******                ]
2018-09-15 13:06:00  *****]
2018-09-15 13:07:00  **** ]
2018-09-15 13:08:00  ***
2018-09-15 13:09:00  *****]
2018-09-15 13:10:00  **** ]
2018-09-15 13:11:00  **** ]
2018-09-15 13:12:00  *******                ]
2018-09-15 13:13:00  ***
2018-09-15 13:14:00  ***
2018-09-15 13:15:00  *****   ]
2018-09-15 13:16:00  ********               ]
2018-09-15 13:17:00  ******
2018-09-15 13:18:00  **********                ]
2018-09-15 13:19:00  ******  ]
2018-09-15 13:20:00  *********                 ]
2018-09-15 13:21:00  ******  ]
2018-09-15 13:22:00  ******
2018-09-15 13:23:00  ****    ]
2018-09-15 13:24:00  ***
2018-09-15 13:25:00  ***
2018-09-15 13:26:00  **** ]
2018-09-15 13:27:00  **** ]
2018-09-15 13:28:00  *******                ]
2018-09-15 13:29:00  ***
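The RPAD/LPAD trick can be reproduced in any language. Here is a small Python sketch of the same charting logic (the function name `bar` is my own, and the values fed to it are hypothetical): the average is drawn as a run of stars and the per-minute maximum appears as a right-aligned bracket, so the bracket lands at column `max`.

```python
def bar(avg, mx):
    # RPAD('*', avg, '*') -> a bar of `avg` stars.
    chart = '*' * avg
    # LPAD(']', mx - avg) -> the marker right-aligned in a field of
    # width (mx - avg); when mx == avg Oracle's LPAD returns NULL,
    # so emit nothing extra in that case.
    if mx > avg:
        chart += ']'.rjust(mx - avg)
    return chart

print(bar(5, 9))   # 5 stars, then ']' at column 9
print(bar(3, 3))   # just '***'
```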
As shown above, aggregating to the minute, or to any of the other native TRUNC levels (minute, hour, day, week, etc.), isn’t too difficult. If you want to aggregate to another range, for example windows of 15 minutes, the math is a little trickier but the idea is still the same: modify the truncation logic to produce times on the hour and at 15, 30, and 45 minutes past the hour, and divide the count totals by 900 seconds (15 minutes * 60 seconds per minute).
SQL> WITH
       sample_counts
       AS
         (  SELECT sample_time, COUNT(*) cnt
              FROM dba_hist_active_sess_history
             WHERE sample_time >= TIMESTAMP '2018-09-15 07:00:00'
               AND sample_time < TIMESTAMP '2018-09-15 17:00:00'
          GROUP BY sample_time)
     SELECT time,
            RPAD('*', avg_sess_per_window, '*') ||
            LPAD(']', max_sess_per_window - avg_sess_per_window) chart
       FROM (  SELECT TRUNC(sample_time, 'hh24')
                      + FLOOR((TO_NUMBER(TO_CHAR(sample_time, 'mi'), '99')) / 15) * 15 / 1440 time,
                      ROUND(10 * SUM(cnt) / (15 * 60)) avg_sess_per_window,
                      MAX(cnt) max_sess_per_window
                 FROM sample_counts
             GROUP BY TRUNC(sample_time, 'hh24')
                      + FLOOR((TO_NUMBER(TO_CHAR(sample_time, 'mi'), '99')) / 15) * 15 / 1440)
      ORDER BY 1;

TIME                 CHART
2018-09-15 07:00:00  ****************** ]
2018-09-15 07:15:00  *********** ]
2018-09-15 07:30:00  ********** ]
2018-09-15 07:45:00  ********* ]
2018-09-15 08:00:00  ********** ]
2018-09-15 08:15:00  ********** ]
2018-09-15 08:30:00  ********** ]
2018-09-15 08:45:00  ********* ]
2018-09-15 09:00:00  ******** ]
2018-09-15 09:15:00  ****** ]
2018-09-15 09:30:00  ***** ]
2018-09-15 09:45:00  ***** ]
2018-09-15 10:00:00  ******** ]
2018-09-15 10:15:00  ******* ]
2018-09-15 10:30:00  ****** ]
2018-09-15 10:45:00  ******** ]
2018-09-15 11:00:00  ****** ]
2018-09-15 11:15:00  **** ]
2018-09-15 11:30:00  ****** ]
2018-09-15 11:45:00  ***** ]
2018-09-15 12:00:00  ***** ]
2018-09-15 12:15:00  **** ]
2018-09-15 12:30:00  **** ]
2018-09-15 12:45:00  **** ]
2018-09-15 13:00:00  **** ]
2018-09-15 13:15:00  ***** ]
2018-09-15 13:30:00  **** ]
2018-09-15 13:45:00  **** ]
2018-09-15 14:00:00  **** ]
2018-09-15 14:15:00  ***** ]
2018-09-15 14:30:00  ***** ]
2018-09-15 14:45:00  **** ]
2018-09-15 15:00:00  **** ]
2018-09-15 15:15:00  **** ]
2018-09-15 15:30:00  **** ]
2018-09-15 15:45:00  **** ]
2018-09-15 16:00:00  **** ]
2018-09-15 16:15:00  **** ]
2018-09-15 16:30:00  **** ]
2018-09-15 16:45:00  **** ]
This method can easily be modified to use 5-minute, 10-minute, or other windows simply by replacing each “15” above with the corresponding window length in minutes. The window length doesn’t even need to divide the hour evenly: for example, with 13-minute sampling windows you’d get windows beginning at 00, 13, 26, and 39 minutes after the hour, each lasting 13 minutes, plus a final remainder window of 8 minutes starting at 52 minutes after the hour.
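The truncation arithmetic is easier to see outside of SQL. This Python sketch (the helper name `window_start` is mine) floors a timestamp to the start of its N-minute window within the hour, mirroring the TRUNC-plus-FLOOR construction used above, including the uneven 13-minute case.

```python
from datetime import datetime

def window_start(ts, minutes):
    # Equivalent to TRUNC(ts, 'hh24') + FLOOR(mi / N) * N minutes:
    # keep the hour, floor the minute down to a multiple of N.
    return ts.replace(minute=(ts.minute // minutes) * minutes,
                      second=0, microsecond=0)

print(window_start(datetime(2018, 9, 15, 13, 22, 41), 15))  # 2018-09-15 13:15:00
print(window_start(datetime(2018, 9, 15, 13, 55, 0), 13))   # 2018-09-15 13:52:00
```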
And finally, the same idea can be used with arbitrary time windows. Instead of a fixed duration for each window, you simply divide each window’s count by that window’s own duration in seconds. Below I’ve selected 5 different windows ranging in duration from 1 minute up to an hour; the same approach as above still applies.
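The per-window division can be sketched as follows. The function name and the row count are hypothetical, but the arithmetic matches the query below: multiply the AWR row count by 10 and divide by the window's own length in seconds (the role played by (end_time - start_time) * 86400 in the SQL).

```python
from datetime import datetime

def aas_for_window(start, end, sample_row_total):
    # (end - start) is a timedelta; total_seconds() corresponds to
    # (end_time - start_time) * 86400 in the SQL, where the date
    # subtraction yields days.
    duration_seconds = (end - start).total_seconds()
    # Each AWR row represents roughly 10 seconds of activity.
    return 10 * sample_row_total / duration_seconds

# Hypothetical: 1,800 AWR rows retained within a one-hour window.
print(round(aas_for_window(datetime(2018, 9, 15, 7, 0),
                           datetime(2018, 9, 15, 8, 0), 1800)))  # prints 5
```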
SQL> WITH
       sample_times
       AS
         (SELECT TO_DATE('2018-09-15 07:00:00', 'yyyy-mm-dd hh24:mi:ss') start_time,
                 TO_DATE('2018-09-15 08:00:00', 'yyyy-mm-dd hh24:mi:ss') end_time
            FROM DUAL
          UNION ALL
          SELECT TO_DATE('2018-09-15 08:00:00', 'yyyy-mm-dd hh24:mi:ss') start_time,
                 TO_DATE('2018-09-15 08:05:00', 'yyyy-mm-dd hh24:mi:ss') end_time
            FROM DUAL
          UNION ALL
          SELECT TO_DATE('2018-09-15 08:30:00', 'yyyy-mm-dd hh24:mi:ss') start_time,
                 TO_DATE('2018-09-15 09:00:00', 'yyyy-mm-dd hh24:mi:ss') end_time
            FROM DUAL
          UNION ALL
          SELECT TO_DATE('2018-09-15 13:00:00', 'yyyy-mm-dd hh24:mi:ss') start_time,
                 TO_DATE('2018-09-15 13:01:00', 'yyyy-mm-dd hh24:mi:ss') end_time
            FROM DUAL
          UNION ALL
          SELECT TO_DATE('2018-09-15 13:30:00', 'yyyy-mm-dd hh24:mi:ss') start_time,
                 TO_DATE('2018-09-15 14:00:00', 'yyyy-mm-dd hh24:mi:ss') end_time
            FROM DUAL),
       sample_counts
       AS
         (  SELECT ash.sample_time, COUNT(ash.sample_id) cnt
              FROM dba_hist_active_sess_history ash, sample_times t
             WHERE ash.sample_time >= t.start_time
               AND ash.sample_time < t.end_time
          GROUP BY ash.sample_time)
     SELECT start_time,
            end_time,
            RPAD('*', avg_sess_per_window, '*')
            || LPAD(']', max_sess_per_window - avg_sess_per_window) chart
       FROM (  SELECT t.start_time,
                      t.end_time,
                      ROUND(10 * SUM(c.cnt) /
                            ((t.end_time - t.start_time) * 86400)) avg_sess_per_window,
                      MAX(c.cnt) max_sess_per_window
                 FROM sample_counts c, sample_times t
                WHERE c.sample_time >= t.start_time
                  AND c.sample_time < t.end_time
             GROUP BY t.start_time, t.end_time)
      ORDER BY 1;

START_TIME           END_TIME             CHART
2018-09-15 07:00:00  2018-09-15 08:00:00  ************ ]
2018-09-15 08:00:00  2018-09-15 08:05:00  ********** ]
2018-09-15 08:30:00  2018-09-15 09:00:00  ********** ]
2018-09-15 13:00:00  2018-09-15 13:01:00  ***** ]
2018-09-15 13:30:00  2018-09-15 14:00:00  **** ]
Hopefully these queries and the previous articles give you a good start in pulling ASH data with standard SQL tools, both numerically and with text-based charting. In the next chapter we’ll look at using tools with graphical capabilities to make more sophisticated charts with finer visual granularity.