Some data is more easily absorbed in visual form. Trending and relative comparisons in particular can often be digested at a glance when presented as a line or bar chart.
While SQL itself doesn't have any graphing capabilities, some of the Unicode symbol characters make it possible to create simple charts within a text column of a SQL result set. Specifically, Unicode characters 2581 through 2588, where each character, in order, fills a larger portion of a box:
2581 ▁
2582 ▂
2583 ▃
2584 ▄
2585 ▅
2586 ▆
2587 ▇
2588 █
A set of values that can be ranked from 1 to 8 can then be displayed in series as a bar chart. These could be exact values, increments, or relative values. To represent 0, use a blank space, an underscore, a period, or any other character that is visually distinct from the block characters.
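Since the code points are consecutive, the rank digit can simply be appended to the '\258' prefix of the UNISTR escape; that is the trick used in the queries below. A quick throwaway query shows the whole scale, including the underscore for 0:
SELECT LEVEL - 1 rank_value,
       CASE LEVEL - 1
           WHEN 0 THEN UNISTR('\005F')
           ELSE UNISTR('\258' || TO_CHAR(LEVEL - 1, 'fm0'))
       END block_char
  FROM dual
CONNECT BY LEVEL <= 9;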
As an example, I will make a simple bar chart showing relative salaries within each job type of the EMP table.
SQL> SELECT job,
  2         LISTAGG(
  3             CASE ROUND(8 * sal / maxsal)
  4                 WHEN 0 THEN UNISTR('\005F')
  5                 ELSE UNISTR('\258' || TO_CHAR(ROUND(8 * sal / maxsal), 'fm0'))
  6             END)
  7         WITHIN GROUP (ORDER BY sal) relative_salaries,
  8         LISTAGG(sal, ',') WITHIN GROUP (ORDER BY sal) salaries
  9    FROM (SELECT job, sal, MAX(sal) OVER (PARTITION BY job) maxsal FROM emp)
 10   GROUP BY job
 11   ORDER BY job;
JOB RELATIVE_SALARIES SALARIES
ANALYST ██ 3000,3000
CLERK ▅▆▇█ 800,950,1100,1300
MANAGER ▇██ 2450,2850,2975
PRESIDENT █ 5000
SALESMAN ▆▆██ 1250,1250,1500,1600
The CASE logic above to turn numbers into Unicode characters isn’t overly complicated, but it is cumbersome. It would be more convenient to use a function on the aggregated values.
SQL> SELECT job,
  2         sparkline(CAST(COLLECT(sal ORDER BY sal) AS numtab)) relative_salaries,
  3         LISTAGG(sal, ',') WITHIN GROUP (ORDER BY sal) salaries
  4    FROM (SELECT job, CAST(sal AS NUMBER) sal FROM emp)
  5   GROUP BY job
  6   ORDER BY job;
JOB RELATIVE_SALARIES SALARIES
ANALYST ██ 3000,3000
CLERK ▅▆▇█ 800,950,1100,1300
MANAGER ▇██ 2450,2850,2975
PRESIDENT █ 5000
SALESMAN ▆▆██ 1250,1250,1500,1600
I created the SPARKLINE function to take a collection type of numbers I call NUMTAB and return a character string. By default, the function ranks the values relative to the maximum value.
Alternatively, a fixed increment can be used, with the bar climbing from 0 to 8 steps of that increment; any value greater than 8 increments is treated as 8. The function also assumes all input values are non-negative.
CREATE OR REPLACE TYPE numtab IS TABLE OF NUMBER;
/

CREATE OR REPLACE FUNCTION sparkline(p_nums IN numtab, p_increment IN NUMBER DEFAULT 0)
    RETURN VARCHAR2
IS
    v_max    NUMBER := p_nums(1);
    v_height PLS_INTEGER;
    v_result VARCHAR2(4000);
BEGIN
    IF p_increment <= 0 THEN
        -- If the increment isn't a positive value then rank each value
        -- in proportion to the maximum value.
        FOR i IN 2 .. p_nums.COUNT LOOP
            v_max := GREATEST(v_max, p_nums(i));
        END LOOP;

        FOR i IN 1 .. p_nums.COUNT LOOP
            v_height := ROUND(8 * p_nums(i) / v_max);
            v_result := v_result
                     || CASE v_height
                            WHEN 0 THEN UNISTR('\005F')
                            ELSE UNISTR('\258' || TO_CHAR(v_height, 'fm0'))
                        END;
        END LOOP;
    ELSE
        -- If the increment is positive then each step of height represents
        -- an increase in value by the increment amount with a maximum of 8 steps.
        FOR i IN 1 .. p_nums.COUNT LOOP
            v_height := LEAST(8, ROUND(p_nums(i) / p_increment));
            v_result := v_result
                     || CASE v_height
                            WHEN 0 THEN UNISTR('\005F')
                            ELSE UNISTR('\258' || TO_CHAR(v_height, 'fm0'))
                        END;
        END LOOP;
    END IF;

    RETURN v_result;
END;
/
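For example, calling the function with an increment (500 here is just an arbitrary value chosen for illustration) makes each block step represent 500 of salary rather than a share of each job's maximum, so the bars become comparable across jobs and PRESIDENT's 5000 simply caps out at the full block:
SELECT job,
       sparkline(CAST(COLLECT(sal ORDER BY sal) AS numtab), 500) salaries_by_500,
       LISTAGG(sal, ',') WITHIN GROUP (ORDER BY sal) salaries
  FROM (SELECT job, CAST(sal AS NUMBER) sal FROM emp)
 GROUP BY job
 ORDER BY job;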
Another set of Unicode characters I've used is the shading characters:
2591 ░
2592 ▒
2593 ▓
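A quick throwaway query previews them, the same way as the block characters above:
SELECT UNISTR('\2591\2592\2593') shades
  FROM dual;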
Combined with a blank and the full block, these characters give a visualization with 5 levels of magnitude, which can be useful for representing fill levels. For example, the query below shows the relative amount of used space within each data file. The denser the block, the fuller that area of the file; the lighter the shade, the emptier it is.
SELECT tablespace_name,
       file_id,
       LISTAGG(
           CASE
               WHEN data_blocks = 0 THEN UNISTR('\005F')
               WHEN data_blocks = bucket_width THEN UNISTR('\2588')
               WHEN data_blocks / bucket_width < 0.50 THEN UNISTR('\2591')
               WHEN data_blocks / bucket_width < 0.75 THEN UNISTR('\2592')
               WHEN data_blocks / bucket_width >= 0.75 THEN UNISTR('\2593')
           END)
       WITHIN GROUP (ORDER BY bucket) extent_map,
       file_name
  FROM (SELECT file_id,
               file_name,
               tablespace_name,
               bucket,
               bucket_width,
               SUM(block_type) data_blocks
          FROM (SELECT file_id,
                       block_id,
                       block_type,
                       file_blocks,
                       file_name,
                       tablespace_name,
                       CEIL(file_blocks / 100) bucket_width,
                       FLOOR(block_id * 100 / file_blocks) bucket
                  FROM (SELECT x.file_id,
                               x.block_id + COLUMN_VALUE block_id,
                               x.block_type,
                               ddf.blocks file_blocks,
                               ddf.file_name,
                               ddf.tablespace_name
                          FROM (SELECT file_id, block_id, blocks, 1 block_type
                                  FROM dba_extents
                                UNION ALL
                                SELECT file_id, block_id, blocks, 0 block_type
                                  FROM dba_free_space) x,
                               dba_data_files ddf,
                               TABLE(SELECT COLLECT(LEVEL - 2)
                                       FROM DUAL
                                    CONNECT BY LEVEL <= x.blocks)
                         WHERE ddf.file_id = x.file_id))
         GROUP BY file_id, bucket, bucket_width, file_name, tablespace_name)
 GROUP BY file_id, file_name, tablespace_name
 ORDER BY tablespace_name, file_id;
Below I have pulled an excerpt from one system's file map. Note that the first file is almost completely full, while the second and fifth files have barely been used and most of their free space sits at the end, so those two could be resized to be much smaller.
The fourth and sixth files are interesting and are possible candidates for coalescing, or otherwise moving objects around to consolidate unused space. That is especially true of the fourth file, which still has a fair amount of free space, just not in large contiguous chunks.
EXTENT_MAP
▒███████████████████████████████████████████████████████████████████████████████░___________________
░█████████████████░_________________________________________________________________________________
░██████████████████████████████████████████░░░█░░██████████████████████████░░░░▒░████████░███▒░░____
░░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░█░░░░
░░________________________________________░_________________________________________________________
░███░████░████░████░████░████░████░████░████░████░████░████░████░████░████░████░████░█░██░██░█░░____
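If you do decide to shrink files like the second and fifth ones above, where the free space sits at the end, the resize itself is a one-liner; the file name and target size below are just placeholders, not values from this system:
ALTER DATABASE DATAFILE '/u01/oradata/mydb/users02.dbf' RESIZE 200M;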
Using the raw numeric data alone, these patterns might not be so easily recognized, but a few simple pictures constructed of special characters describe the situation quite nicely. Of course, the CASE in this example could be replaced with a function similar to the sparkline function declared earlier. Or, put both character sets in a single function and choose between them by passing in an option. This idea could also be extended with ODCI functionality to create your own aggregate function rather than using LISTAGG or COLLECT to bring the values together. I will leave that exercise to the reader, but it shouldn't be too difficult starting from the examples here.
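As a rough sketch of that single-function idea (my own variation, under a hypothetical sparkline2 name so it doesn't collide with the version above; it only ranks against the maximum and drops the increment option for brevity):
CREATE OR REPLACE FUNCTION sparkline2(
    p_nums  IN numtab,
    p_style IN VARCHAR2 DEFAULT 'BARS')
    RETURN VARCHAR2
IS
    -- five-step shading scale: empty, light, medium, dark, full
    v_shades CONSTANT VARCHAR2(10 CHAR) := UNISTR('\005F\2591\2592\2593\2588');
    v_max    NUMBER := p_nums(1);
    v_height PLS_INTEGER;
    v_result VARCHAR2(4000);
BEGIN
    FOR i IN 2 .. p_nums.COUNT LOOP
        v_max := GREATEST(v_max, p_nums(i));
    END LOOP;

    FOR i IN 1 .. p_nums.COUNT LOOP
        IF p_style = 'SHADE' THEN
            -- rank each value 0-4 against the maximum and pick a shading character
            v_height := ROUND(4 * p_nums(i) / v_max);
            v_result := v_result || SUBSTR(v_shades, v_height + 1, 1);
        ELSE
            -- default: rank 0-8 and build the eighth-block code point as before
            v_height := ROUND(8 * p_nums(i) / v_max);
            v_result := v_result
                     || CASE v_height
                            WHEN 0 THEN UNISTR('\005F')
                            ELSE UNISTR('\258' || TO_CHAR(v_height, 'fm0'))
                        END;
        END IF;
    END LOOP;

    RETURN v_result;
END;
/
Passing p_style => 'SHADE' then gives a coarser five-level bar using the shading characters, while the default keeps the eighth-block behavior shown earlier.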
I hope you find these useful, and I would be interested to hear of other use cases where these or other characters are used to create simple sparkline-type visualizations within a result set.
I changed the data file mapping query to use a skewed scale instead of a linear scale of shading. I think the new picture conveys the usage better than the original.