Advent of Code, in Erlang: Day 21

Published Tuesday, December 21, 2021 by Bryan

I don't often rewrite everything between Part 1 and Part 2 of the Advent of Code puzzles. But I sure did for Day 21.

Part 1 started off with a simple game simulation. There's a bit more state than usual to keep track of, but the loop/recursion should be familiar to people who have done the earlier puzzles.

-record(p, {    % player
             i, % index
             p, % position
             s  % score
           }).

-record(d, {    % die
             n, % next roll
             c  % count of rolls
           }).

play_game(P1Start, P2Start) ->
    play_game(#p{p=P1Start, s=0}, #p{p=P2Start, s=0}, #d{n=1, c=0}).

play_game(Player1, Player2, Die) ->
    case player_roll(Player1, Die) of
        {NewPlayer1, Die1} when NewPlayer1#p.s >= 1000 ->
            Player2#p.s * Die1#d.c;
        {NewPlayer1, Die1} ->
            case player_roll(Player2, Die1) of
                {NewPlayer2, Die2} when NewPlayer2#p.s >= 1000 ->
                    NewPlayer1#p.s * Die2#d.c;
                {NewPlayer2, Die2} ->
                    play_game(NewPlayer1, NewPlayer2, Die2)
            end
    end.

player_roll(#p{p=Start, s=Score}, Die) ->
    {Move1, Die1} = die_roll(Die),
    {Move2, Die2} = die_roll(Die1),
    {Move3, Die3} = die_roll(Die2),
    Move = Move1+Move2+Move3,
    NewPosition = case (Start + Move) rem 10 of
                      0 -> 10;
                      P -> P
                  end,
    {#p{p=NewPosition, s=Score+NewPosition}, Die3}.

die_roll(#d{n=N, c=C}) ->
    {N, #d{n=case (N + 1) of 101 -> 1; V -> V end, c=C+1}}.

I went a little overboard in making nicer data structures and breaking each piece of the game into separate functions. I could tell that the number of things to keep track of was right in that spot where I could manage it with "just" a handful of integers, but I'd be happier to have things bundled with names. So, there is a player record p and a die record d, and a play_game loop that calls out to a player_roll function that does three die_rolls for a player and then moves the player. State in, state out - simple.
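To make the position wrapping concrete, here's the first turn of the puzzle's example game traced through player_roll by hand (these numbers come straight from the sample):

%% Player 1 starts on space 4; the deterministic die is at 1:
%%   rolls 1+2+3 = 6
%%   (4 + 6) rem 10 = 0, which the case clause maps back to space 10
%%   new score = 0 + 10 = 10; the die now shows 4 and has counted 3 rolls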

739785 = puzzle21:play_game(4, 8).

Part 1 was eerily low-difficulty, compared to other puzzles this late in the calendar. I guessed that Part 2 would have something to do with predicting which player would win, but I didn't see quantum dice coming.

I went down a few incorrect paths in figuring out how to deal with the quantum dice. It sounded like a probability problem at first, and I did not enjoy probability in college. Maybe it is a probability problem after all, and those who did enjoy it were able to solve it without a second simulation. I got to work simulating, hoping that I'd recognize some pattern that would help me recall the right formula.

%% 4 -1 5(5) -1 6(11) -1 7(18) -1 8(26)W
%%                             -2 9(27)W
%%                             -3 10(28)W
%%                    -2 8(19) -1 9(28)W
%%                             -2 10(29)W
%%                             -3 1(20) -1 2(22)W
%%                                      -2 3(23)W
%%                                      -3 4(24)W
%%                    -3 9(20) -1 10(30)W
%%                             -2 1(21)W
%%                             -3 2(22)W
%%           -2 7(12) -1 8(20) -1 9(29)W
%%                             -2 10(30)W
%%                             -3 1(21)W
%%                    -2 9(21)W
%%                    -3 10(22)W
%%           -3 8(13) -1 9(22)W
%%                    -2 10(23)W
%%                    -3 1(14) -1 2(16)
%%                             -2 3(17)
%%                             -3 4(18)

That started with hand-simulating one player's paths to victory. My thinking at this point was that I needed a table listing the relative likelihoods that a player, starting on a given space, won in N moves. While my text notes above may look tedious, this actually went pretty quickly, and convinced me that the search space wasn't large.

But then I looked back at my Part 1 implementation and remembered that one move consists of three rolls. My hand simulation was for only one roll. Three rolls would have twenty-seven outcomes! … or would it?

%% 111 = 3 2xx 4 3xx 5
%% 112 = 4     5     6
%% 113 = 5     6     7
%% 121 = 4     5     6
%% 122 = 5     6     7
%% 123 = 6     7     8
%% 131 = 5     6     7
%% 132 = 6     7     8
%% 133 = 7     8     9

After momentary panic at the thought of a branching factor of 27, I saw the twist I was looking for. The puzzle describes a player's turn as three rolls of a die. For the purposes of counting universes, three rolls with three potential values each is 27 outcomes. But for the purposes of counting score, rolls of something like 1,1,3 produce exactly the same result as rolls of 1,3,1 and 3,1,1! In fact, there are only seven unique movement results. A branching factor of seven isn't small, but it's better than twenty-seven. If games go similarly to my hand simulation of one die roll, I'm looking at only five or six rounds to a win. That makes only 117,649 (= 7^6) outcomes!
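If you want to double-check that collapse, a throwaway shell expression (not part of my solution) tallies the 27 three-roll combinations into their seven sums:

lists:foldl(fun(Sum, Counts) ->
                    maps:update_with(Sum, fun(C) -> C + 1 end, 1, Counts)
            end,
            #{},
            [A + B + C || A <- [1,2,3], B <- [1,2,3], C <- [1,2,3]]).
%% => #{3 => 1,4 => 3,5 => 6,6 => 7,7 => 6,8 => 3,9 => 1}

Those counts are exactly what goes into the constant below.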

%% {roll value (sum of three dice), ways that roll can come up}
-define(ALL_ROLL_COUNTS, [{3, 1},
                          {4, 3},
                          {5, 6},
                          {6, 7},
                          {7, 6},
                          {8, 3},
                          {9, 1}]).

dirac_player_win_histo(Start) ->
    dirac_player_win_histo(Start, 0, 0, 1, ?ALL_ROLL_COUNTS, #{}).

dirac_player_win_histo(_Space, _Score, _, _, [], Histo) ->
    Histo;
dirac_player_win_histo(Space, Score, RollCount, LeafMult,
                       [{Roll, RollMult}|Rest], Histo) ->
    NewSpace = case (Space + Roll) rem 10 of 0 -> 10; S -> S end,
    case Score + NewSpace of
        Win when Win > 21 ->
            dirac_player_win_histo(
              Space, Score, RollCount, LeafMult, Rest,
              maps:update_with(RollCount+1,
                               fun(C) -> C+(RollMult*LeafMult) end,
                               RollMult*LeafMult,
                               Histo));
        NewScore ->
            NewHisto = dirac_player_win_histo(NewSpace, NewScore, RollCount+1,
                                              LeafMult*RollMult,
                                              ?ALL_ROLL_COUNTS, Histo),
            dirac_player_win_histo(Space, Score, RollCount, LeafMult, Rest,
                                   NewHisto)
    end.

So, I set about simulating. At this point, I was still thinking probability, so I was looking for the number of games won after N rounds. The ALL_ROLL_COUNTS constant is a list of possible roll values (each the sum of three dice) and how many ways that roll can come up. The function below it produces a map from the number of game rounds to the number of universes that took that many rounds to win. The implementation is yet another depth-first traversal. If we find a win, we don't have to look deeper. If we're not at a win yet, we consider all roll values from this point. The key to counting universes is the RollMult * LeafMult. Whenever we consider a roll value, we have to account for the RollMult universes that can roll that value (the second number in ALL_ROLL_COUNTS), and for the LeafMult universes that might have arrived at this choice.
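A quick aside, in case maps:update_with/4 is unfamiliar: it applies the fun to the existing value when the key is present, and otherwise stores the init value. A throwaway shell example (made-up key, just to show the shape):

maps:update_with(wins, fun(C) -> C + 5 end, 5, #{}).          % => #{wins => 5}
maps:update_with(wins, fun(C) -> C + 5 end, 5, #{wins => 2}). % => #{wins => 7}

That's exactly the increment-or-initialize the histogram needs.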

puzzle21:dirac_player_win_histo(4).
% #{3 => 3427,4 => 244332,5 => 3784741,6 => 34277144,
%   7 => 139217660,8 => 164170968,9 => 44423487,10 => 1165428}

puzzle21:dirac_player_win_histo(8).
% #{3 => 820,4 => 206916,5 => 6134645,6 => 48218921,
%   7 => 171609162,8 => 165277231,9 => 33755399,10 => 755271}

There are three interesting things here:

  1. I was wrong about the depth of the tree. It's not a maximum of six moves to win. It's 10.
  2. These functions are fast anyway! I'm measuring about 6ms on my laptop.
  3. The universe count in these results is at least six orders of magnitude too low!

That last point sent me spinning for a bit. The example in the puzzle description says that player 1 wins in 444 trillion universes, but my results count only a few hundred million. I tried adding the results up in different ways for a bit (i.e. maybe it's the sum of values for rounds 3-4, plus 3-5, plus 3-6, whenever the other player has fewer wins for that value). But adding up a few millions is not how one reaches trillions. What was I missing?

Thinking through the N-choose-M of it all, I got the answer. Player 1 only wins after 4 rounds if Player 2 didn't win in 3. Or, to reword it: for one of Player 1's 4-round-win universes to count, we can't be in one of Player 2's 3-round-win universes. The number of ways a player can win in three rounds isn't independent of the other player. We have to interleave the player-universe choices.
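One way to write that down (my notation, not the puzzle's): if wins1(N) is the weighted count of roll sequences in which Player 1 first wins on its Nth turn, and alive2(K) is the weighted count of K-turn sequences in which Player 2 has not yet won, then Player 1's total should be the sum over N of wins1(N) * alive2(N-1), since Player 1 only gets an Nth turn if Player 2 is still alive after N-1 turns. The histograms above give me the wins1 half, but not the alive2 half.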

The independent calculation isn't worthless. Knowing the maximum number of rounds to a win tells us the maximum depth of the tree: 10+10 = 20. *cough* Woah, 7^20 is almost 8x10^16! That's even larger than the 4x10^14 the example says we should get, and we haven't even multiplied by the independent die orderings yet!

Luckily, 20 is only the maximum depth of the tree. Looking at the independent win counts, most branches should max out at depth 7+7=14 (a mere 7^14, about 678 billion), and many are even less than that. So the actual number of leaves we need to reach is much smaller than a full depth-20 tree.
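As a sanity check on those sizes, exact integer powers are easy to get in the shell (math:pow/2 returns floats, so I'd fold with bignums instead; this snippet is just illustration, not part of the solution):

lists:foldl(fun(_, Acc) -> Acc * 7 end, 1, lists:seq(1, 14)).
%% => 678223072849       (the 678 billion above)
lists:foldl(fun(_, Acc) -> Acc * 7 end, 1, lists:seq(1, 20)).
%% => 79792266297612001  (roughly 8x10^16)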

dirac_game(P1Start, P2Start) ->
    dirac_game([#p{i=1, p=P1Start, s=0}, #p{i=2, p=P2Start, s=0}], 1, {0, 0}).

dirac_game([Up,Next], LeafMult, Wins) ->
    lists:foldl(fun({Roll, RollMult}, AccWins) ->
                        NP = case (Up#p.p + Roll) rem 10 of 0 -> 10; S -> S end,
                        case Up#p.s + NP of
                            Win when Win >= 21 ->
                                setelement(Up#p.i, AccWins,
                                           LeafMult*RollMult
                                           +element(Up#p.i, AccWins));
                            NS ->
                                dirac_game([Next,Up#p{p=NP,s=NS}],
                                           LeafMult*RollMult,
                                           AccWins)
                        end
                end,
                Wins,
                ?ALL_ROLL_COUNTS).

This loop has a similar structure to the earlier ones. I felt that handling the Rest of the current universe's recursion made the subtree recursion less obvious, so I've restructured it to fold across the current universe. I also didn't like the explicit handling of player 1 and player 2 in the simple simulation, so I changed them to a list that swaps back and forth depending on whose turn it is (who is "Up"). The accumulator is a 2-tuple, where the first number is player 1's win count, and the second is player 2's win count.
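If element/2 and setelement/3 are unfamiliar, they're how the fold bumps the right player's counter: they read, and copy-with-one-change, a tuple by 1-based index. A quick shell illustration (throwaway values):

element(1, {10, 20}).        % => 10
setelement(2, {10, 20}, 99). % => {10, 99}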

{444356092776315,341960390180808} = puzzle21:dirac_game(4, 8).

Hooray! Those are the numbers we should expect. This function takes about seven seconds to calculate 800 trillion universes on my laptop. Not bad. … but can we do better? Though I've written this whole series in Erlang, I haven't yet used one tool people rave about: easy parallelization.

dirac_game_parallel(P1Start, P2Start) ->
    Me = self(),
    Pids = lists:map(
             fun(Roll) ->
                     spawn(puzzle21, dirac_game_worker,
                           [P1Start, Roll, P2Start, Me])
             end,
             ?ALL_ROLL_COUNTS),
    dirac_game_collector(Pids, {0,0}).

dirac_game_worker(P1Start, {Roll, RollMult}, P2Start, Collector) ->
    NP = case (P1Start + Roll) rem 10 of 0 -> 10; S -> S end,
    Result = dirac_game([#p{i=2, p=P2Start, s=0}, #p{i=1, p=NP, s=NP}],
                        RollMult, {0,0}),
    Collector ! {self(), Result}.

dirac_game_collector([], Wins) ->
    Wins;
dirac_game_collector(Workers, {P1Wins, P2Wins}) ->
    receive {Pid, {P1WorkerWins, P2WorkerWins}} ->
            dirac_game_collector(
              lists:delete(Pid, Workers),
              {P1Wins + P1WorkerWins, P2Wins + P2WorkerWins})
    end.

No, I'm not going to go the Erlang-demo route, and spin up one process per universe. I'm just going to take one step down the tree, and spawn seven processes - one for each of the first seven universe-groups. Each of those is a dirac_game_worker that starts the same tree-exploring loop we used before, but with P1 already having moved once. When a worker finishes looking at its subtree, it sends its result back to the original spawning process. That process collects and sums all of the responses together.
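One detail worth calling out if you try this at home: spawn/3 calls the worker by module and function name, so dirac_game_worker/4 must be exported. I haven't shown my module header, but it needs something along these lines (the exact list here is a guess):

-export([play_game/2,
         dirac_player_win_histo/1,
         dirac_game/2,
         dirac_game_parallel/2,
         dirac_game_worker/4,
         dirac_game_memoize/2]).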

timer:tc(puzzle21, dirac_game, [4,8]).
% {6918003,{444356092776315,341960390180808}}

timer:tc(puzzle21, dirac_game_parallel, [4,8]).
% {3726102,{444356092776315,341960390180808}}

On my dual-core laptop, parallelization nearly halves the time - from just under 7 seconds to just over 3.5 seconds. That's about as good as you can hope for.

But can we do better still? When I was facing another huge result space on Day 14, I used "memoization" to remember results of intermediate tree nodes that I had already calculated. At first, I didn't think that would work here, because I didn't see how subuniverses would ever repeat each other. But, just as we don't care whether the roll was 1,1,3 or 3,1,1, we don't care if players got to their current position and score in 3 moves or 7. The state of the game - each player's position and score - is independent of history.

dirac_game_memoize(P1Start, P2Start) ->
    Players = [#p{i=1, p=P1Start, s=0}, #p{i=2, p=P2Start, s=0}],
    {Wins, _} = dirac_game_memoize_i(Players, #{}),
    Wins.

dirac_game_memoize_i([Up,Next], Memo) ->
    case Memo of
        #{[Up,Next] := Wins} -> {Wins, Memo};
        _ ->
            {Wins, NewMemo} =
                lists:foldl(
                  fun({Roll, RollMult}, {AccWins, AccMemo}) ->
                          NP = case (Up#p.p + Roll) rem 10
                               of 0 -> 10; S -> S end,
                          case Up#p.s + NP of
                              Win when Win >= 21 ->
                                  {setelement(Up#p.i, AccWins,
                                              RollMult
                                              +element(Up#p.i, AccWins)),
                                   %% don't bother memoizing leaves -
                                   %% recomputing them is cheap
                                   AccMemo};
                              NS ->
                                  SubPlayers = [Next,Up#p{p=NP,s=NS}],
                                  {{P1SubWins,P2SubWins},SubMemo} =
                                      dirac_game_memoize_i(SubPlayers,
                                                           AccMemo),
                                  {{RollMult*P1SubWins+element(1, AccWins),
                                    RollMult*P2SubWins+element(2, AccWins)},
                                   SubMemo}
                          end
                  end,
                  {{0,0}, Memo},
                  ?ALL_ROLL_COUNTS),
            {Wins, NewMemo#{[Up,Next] => Wins}}
    end.

This implementation flips the universe-count tracking upside down. Instead of passing down how many universes got us to a result, we pass back up how many universes from this point in the game lead to wins for each player. Apologies for the scrunch toward the right. The important note is that AccWins now counts only wins "from here", not total wins seen so far in the whole tree, and that an AccMemo is now passed along, holding the computed results for game states we've already seen.

timer:tc(puzzle21, dirac_game_memoize, [4,8]).
% {66857,{444356092776315,341960390180808}}

No, I didn't mis-paste. Memoization cuts the runtime to 66ms - a 100x improvement. A little back-of-the-envelope math helps it make sense. We said our typical tree depth would produce something like 678 billion leaves. If each player has a potential 0..21 points, and can be in any of 10 positions, that's only 48,400 (=22*22*10*10) potential game states. We could see each state well over 10 million times! The details of the actual tree shape, and the overhead of using the memoization table, reduce our expected returns, but it's no surprise that the speedup is big.

I'm not going to attempt to parallelize this for even more gain. For one, a 2x speedup after a 100x speedup just doesn't feel as awesome. For another, I'm not sure I'd see a 2x speedup this time. Memoization works because later computations can be skipped if we did them earlier. If I'm computing in parallel, I either have to duplicate work to make those findings in each thread, or share findings between threads. Either of those will decrease the returns on parallelization. For an even larger dataset, it could be worth it, but not here.

In any case, lesson learned, again: it's faster to reduce duplicate work than to do it in parallel.

Whew, I got to avoid dredging up too much of my rusty probability course after all. Give me a shout on Twitter (@hobbyist) if you did use those methods. I've decided to not clean up today's code. I think it's good for folks to see that we all get confused, take wrong turns, double-back, and make mistakes in general. A polished end result does not indicate a clean path to it.