Advent of Code 2025 - Day 11

Reactor

Part 1 is a simple recursion with limited paths, so I used CONNECT BY, where the device (source) of each row is contained within the outputs (targets) of the prior row. In the sample data, the line “bbb: ddd eee” has source “bbb” and targets “ddd eee”. We can see “bbb” in the targets of the initial line “you: bbb ccc”, thus connecting those rows.

I make all of the connections, and every path eventually ends at “out”. I didn’t check for this explicitly, but if it were not true, there would be a loop in some of the paths and the CONNECT BY would fail. Since it doesn’t fail, we know every path terminates. I show the paths using SYS_CONNECT_BY_PATH, and count how many leaf rows are generated to get the final result.
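As a minimal illustration, here are just the two sample lines quoted above, inlined as literals instead of being read through my advent.data2rows helper, showing how the CONNECT BY condition links the rows:

```sql
-- Sketch with inlined sample data: PRIOR targets LIKE '%' || source || '%'
-- links the "bbb" row under the "you" row because "bbb" appears in
-- the targets of "you: bbb ccc".
WITH data (source, targets) AS
    (SELECT 'you', 'bbb ccc' FROM DUAL
     UNION ALL
     SELECT 'bbb', 'ddd eee' FROM DUAL)
SELECT LPAD(' ', 2 * (LEVEL - 1)) || source AS tree
  FROM data
 START WITH source = 'you'
CONNECT BY PRIOR targets LIKE '%' || source || '%';
```

The indented output makes the parent-child linkage visible before scaling the same condition up to the real input.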

WITH data AS (SELECT SUBSTR(str, 1, 3) source,
                     SUBSTR(str, 6) targets
                FROM advent.data2rows('advent2025-11'))
SELECT COUNT(*) cnt
  FROM (SELECT source,
               targets,
               CONNECT_BY_ISLEAF isleaf
          FROM data
         START WITH source = 'you'
        CONNECT BY PRIOR targets LIKE '%' || source || '%') x
 WHERE isleaf = 1;
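The final query keeps only the leaf count; the SYS_CONNECT_BY_PATH output I mentioned came from a variant along these lines (the paths themselves aren’t needed for the answer):

```sql
-- Variant that displays each complete route; the number of rows
-- returned is the same as the leaf count in the query above.
WITH data AS (SELECT SUBSTR(str, 1, 3) source,
                     SUBSTR(str, 6) targets
                FROM advent.data2rows('advent2025-11'))
SELECT SYS_CONNECT_BY_PATH(source, ' -> ') path
  FROM data
 WHERE CONNECT_BY_ISLEAF = 1
 START WITH source = 'you'
CONNECT BY PRIOR targets LIKE '%' || source || '%';
```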


Part 2 is, in theory, just more of the same, except the search tree grows too large to handle efficiently in SQL (my data had over 500 trillion possible paths). But a simple PL/SQL collection to memoize sub-results collapses the work very quickly. I did run into a slight snag in my recursion: my first attempts saved the partial result for each node as I reached it. The problem with that is that many, if not all, of the nodes have some path from svr to out that does not pass through the required dac and fft nodes. Thus my cached results for those nodes were 0, and so the sums built from them were 0 as well.
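Reconstructed from memory rather than my actual first attempt, the flawed caching looked roughly like this:

```sql
-- Sketch of the bug: keying the cache by node name alone. A node first
-- visited before passing through dac and fft caches 0, and that 0 is
-- then wrongly reused on every later path through that node, including
-- paths that did pass through both required nodes.
IF g_cache.EXISTS(p_target)
THEN
    RETURN g_cache(p_target);   -- ignores HOW we reached p_target
END IF;
```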

I spent a little while trying to find a flaw in my recursive search before realizing my mistake. The recursion itself was fine, but my memoization needed to keep track of how I reached each node. That is, reaching a particular node without passing through dac or fft, or through just one of them, is different from reaching it after passing through both. So, instead of indexing by just the node name, I indexed by the name with a T or F appended for each of the required nodes.

Thus, I might reach node xzl under keys xzlFF, xzlTF, xzlFT, or xzlTT, and only totals cached under the last one, with both flags true, can contribute to the final count.

Once I realized my error and changed my memoization key, my answer came out correctly on the next run.

DECLARE
    TYPE cache_tab IS TABLE OF INTEGER
        INDEX BY VARCHAR2(5);

    TYPE connect_tab IS TABLE OF vctab
        INDEX BY VARCHAR2(3);

    g_connections   connect_tab;

    -- memoize previously visited paths
    g_cache         cache_tab;

    FUNCTION search(p_target VARCHAR2, p_dac BOOLEAN, p_fft BOOLEAN)
        RETURN NUMBER
    IS
        v_result   INTEGER;
        v_key      VARCHAR2(5)
                       := p_target || 
                          CASE WHEN p_fft THEN 'T' ELSE 'F' END ||
                          CASE WHEN p_dac THEN 'T' ELSE 'F' END;
    BEGIN
        CASE
            WHEN g_cache.EXISTS(v_key)
            THEN
                v_result := g_cache(v_key);
            WHEN p_target = 'out'
            THEN
                IF p_dac AND p_fft
                THEN
                    g_cache(v_key) := 1;
                ELSE
                    g_cache(v_key) := 0;
                END IF;

                v_result := g_cache(v_key);
            ELSE
                v_result := 0;

                FOR i IN 1 .. g_connections(p_target).COUNT
                LOOP
                    v_result :=
                          v_result
                        + search(g_connections(p_target)(i), 
                                 p_dac OR (p_target = 'dac'), 
                                 p_fft OR (p_target = 'fft')
                          );
                END LOOP;

                g_cache(v_key) := v_result;
        END CASE;

        RETURN v_result;
    END search;
BEGIN
    FOR x
        IN (
               SELECT SUBSTR(str, 1, 3) source, 
                      str2tbl(SUBSTR(str, 6), ' ') targets
                 FROM advent.data2rows('advent2025-11')
           )
    LOOP
        g_connections(x.source) := x.targets;
    END LOOP;

    DBMS_OUTPUT.put_line('Path count: ' || search('svr', FALSE, FALSE));
END;

Despite the solution space being much larger than in part 1 (500+ trillion possible paths vs 448), this finished in less time than the SQL solution I used then (130ms vs 185ms).
Neither is slow, but obviously the part 2 solution scales better.

The part 2 solution can still be used for part 1 though, simply by changing the inputs to the initial search call. Even though the dac and fft nodes aren’t relevant, initializing both flags to TRUE forces the search to treat every path as countable.

DBMS_OUTPUT.put_line('Path count: ' || search('you', TRUE, TRUE));

When I isolate just the search time, it is only about 0.0003 seconds for part 1 and 0.005 seconds for part 2.

This was a fun one for me, making me rethink my recursion and memoization to get the right answer. Readers who have been following along with my recent posts about the puzzles might have noticed I skipped Day 10.
I have to concede it’s because I haven’t thought of a solution for part 2 yet. I might post an article with my part 1 solution anyway at some point, but I’m hoping I’ll think of something and can post them together. I have looked at Day 12, but nothing came to me right away, so I’ll go back to Day 10 and try that again before tackling the last one.

If I don’t get around to those, hopefully these first articles, with 20 solutions, have been interesting and helpful. I’m already looking forward to the 2026 puzzles later this year.

Addendum

I was explaining my code to someone, and when asked about the term memoize I said it was like an Oracle function Result Cache. Then it hit me: why did I reinvent the wheel? Why didn’t I just use that?

Using an anonymous block as in my original solution, I can’t declare a result cache function; but I thought it was worth testing out. So, I created a stand-alone procedure with an embedded search function using RESULT_CACHE. The results were comparable to my own implementation, maybe slightly faster.

CREATE OR REPLACE PROCEDURE day11
AS
    TYPE connect_tab IS TABLE OF vctab
        INDEX BY VARCHAR2(3);

    g_connections   connect_tab;

    v_time          TIMESTAMP WITH TIME ZONE;

    FUNCTION search(p_target VARCHAR2, p_dac BOOLEAN, p_fft BOOLEAN)
        RETURN NUMBER
        RESULT_CACHE
    IS
        v_result   INTEGER;
    BEGIN
        CASE
            WHEN p_target = 'out'
            THEN
                IF p_dac AND p_fft
                THEN
                    v_result := 1;
                ELSE
                    v_result := 0;
                END IF;
            ELSE
                v_result := 0;

                FOR i IN 1 .. g_connections(p_target).COUNT
                LOOP
                    v_result :=
                          v_result
                        + search(g_connections(p_target)(i), p_dac OR (p_target = 'dac'), p_fft OR (p_target = 'fft'));
                END LOOP;
        END CASE;

        RETURN v_result;
    END search;
BEGIN
    v_time := SYSTIMESTAMP;

    FOR x
        IN (
               SELECT SUBSTR(str, 1, 3) source, str2tbl(SUBSTR(str, 6), ' ') targets
                 FROM advent.data2rows('advent2025-11')
           )
    LOOP
        g_connections(x.source) := x.targets;
    END LOOP;

    DBMS_OUTPUT.put_line('Path count: ' || search('svr', FALSE, FALSE));
    DBMS_OUTPUT.put_line('Elapsed: ' || (SYSTIMESTAMP - v_time));
END;
/

BEGIN
    day11;
END;
/

One advantage to using my own cache collection is that I can run this on databases where no result cache memory is allocated; and if there is a lot of contention for the result cache, my memoized data stays isolated from that shared memory space.

Another advantage is in debugging and performance testing. My collection is cleared each time I run, so I know I’m starting clean every time. With the result cache, results could already be saved from a prior run and thus skew my measurements. It is possible to flush the result cache, but doing so requires elevated privileges or forcing an invalidation.
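For reference, a full flush looks like this; note that EXECUTE on DBMS_RESULT_CACHE is typically a DBA-granted privilege:

```sql
-- Clears the entire server result cache, which is shared across all
-- sessions, so this is best reserved for test environments.
BEGIN
    DBMS_RESULT_CACHE.FLUSH;
END;
/
```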

Finally, as mentioned above, defining a result cache function requires creating a stored object, and on some databases I might not have privileges to do so. Of course, I could just use Oracle’s FreeSQL if I ran into that problem.

I can’t really say one way is strictly better than the other, but it is nice that Oracle provides the caching natively, even if adding it manually wasn’t much extra code or complexity.
