I liked day 8. It made me feel clever when I worked it out. Spoilers follow as usual, and my solution is here.
There are a few classes of puzzle that show up time and time again in Advent of Code. One of these is the virtual machine puzzle. That’s where it gives you some computer code, and the task is to make something that can run it.
Remember I mentioned Intcode from 2019? That was a virtual machine puzzle writ large. It was built up over multiple puzzle-days and ended up as a fully functional computer, capable of performing any computation and even able to talk to external hardware. It was a masterpiece of puzzle design.
Today's puzzle is nowhere near that elaborate. We've got a simple instruction set: there are three things our hypothetical computer can do. It can add a number to a running total, jump forward or backward in its instruction list, or just do nothing and move on to the next step. It's not really a proper computer, just a slightly convoluted adding machine.
(Maybe I should go into exactly what a universal computer and Turing-completeness mean some time. Be a fun writing exercise.)
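In Haskell terms, those three instructions map naturally onto a small data type. A sketch of what I mean – the constructor names here are mine, not anything from the puzzle:

```haskell
-- One constructor per thing the machine can do. Names are illustrative.
data Instruction
  = Add Int   -- add this number to the running total
  | Jump Int  -- move this many steps through the instruction list
  | Nop       -- do nothing; fall through to the next instruction
  deriving (Show, Eq)
```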
Something this simple wouldn’t take me long at all in a language I already know, but as previously established, I’ve been learning Haskell this year. Haskell is a pure functional programming language, which means it doesn’t follow step-by-step instructions the way a language like C or Python does. Instead, it feeds the results of functions into one another in a long pipeline. That makes having it pretend to be a step-by-step computer… challenging.
A challenge I greatly enjoyed.
The thing with a pure functional programming language is that it doesn’t really have state. If you run the same function with the same inputs twice, you will get the same output both times, always. There are two ways around this.
The first of these you’ve already seen me use: recursion – the function calls itself for the next step with slightly different input. That would have worked here, because the pretend adding machine only keeps track of two numbers internally – the running total from its adding, and the number of the instruction it’s on. It’d be pretty easy to keep passing those to the next step.
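A sketch of that recursive shape – hypothetical names, and not my actual solution, but it shows how the two numbers just ride along from call to call:

```haskell
-- Hypothetical recursive interpreter: the machine's entire "state" is just
-- the two numbers we pass along to the next call.
data Instruction = Add Int | Jump Int | Nop

run :: [Instruction] -> Int -> Int -> Int
run program total pos
  | pos >= length program = total               -- ran off the end: done
  | otherwise = case program !! pos of
      Add n  -> run program (total + n) (pos + 1)
      Jump n -> run program total (pos + n)
      Nop    -> run program total (pos + 1)
```

For instance, `run [Add 1, Jump 2, Add 99, Nop] 0 0` jumps over the `Add 99` and finishes with a total of 1.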
That is insufficiently needlessly elaborate.
The second way, which is what I actually did, is to use something called a monad. It’s a way for a functional language like Haskell to pretend to have state. Behind the scenes, it’s still doing recursion for you. It’s just easier to write. My friend Emi did an excellent introduction to the topic on their own site, and I heartily encourage you to have a read.
To summarise: when working with monads, you have actions instead of functions. Those take the existing state, do something with it, and give back the resulting changed state. You step through various actions not unlike how you would in an imperative programming language.
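To give a flavour of what that looks like, here's a toy example using the `State` monad from `Control.Monad.State` (in the mtl package) – this is an illustration, not my puzzle code:

```haskell
import Control.Monad.State

-- An action that adds a number to the total held in the state.
addTo :: Int -> State Int ()
addTo n = modify (+ n)

-- Actions sequence top to bottom, imperative-style. execState runs the
-- whole thing from a starting state of 0 and hands back the final state.
total :: Int
total = execState (do addTo 3; addTo 4; addTo 5) 0  -- 12
```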
Definitely hit a few pitfalls on the way, though! This is a new way of working in Haskell, with its own syntax and rules that are slightly different from non-monadic stuff.
I fell into the trap of thinking the return function does the same thing a return statement does in an imperative language – drops you out of the function with the value you give it. Nope, nope, nope, it’s just badly named. What it actually does is give you an action that always produces the value you give it, regardless of the current state.
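A toy `State` example of the difference (again, an illustration rather than my puzzle code):

```haskell
import Control.Monad.State

-- 'return 42' is just an action that yields 42 and leaves the state alone.
-- Crucially, it does NOT exit early: the lines after it still run.
example :: State Int Int
example = do
  _ <- return 42  -- produces 42, which we throw away; state untouched
  put 1           -- still executes, because return didn't jump out
  return 99       -- the value the whole action finally yields

-- runState example 0 == (99, 1)
```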