The lazy state monad doesn’t work the way you think it works. Thinking of the lazy state monad in terms of imperative programming is a very useful first approximation, at least in terms of what is computed, but in terms of how things are computed this intuition is beyond useless. I hope to disabuse you of the latter part of this intuition in this post, by demonstrating two of the more interesting things that can be done with the lazy state monad.
Albert Y.C. Lai recently shared a program that demonstrates the lazy state monad in a rather interesting way:
import Control.Monad.State.Lazy

pro :: State [Bool] ()
pro = do
    pro
    s <- get
    put (True : s)
Instead of tail recursion, this employs head recursion. The question is, what does pro return when you run it, and why? I found it easy to guess the correct answer, but my reasoning was completely wrong. Of course, if viewed through a wholly imperative mindset, this leads to non-termination, but the lazy state monad extracts usable information from this definition.
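If you want to poke at this yourself, here is one way to do it from GHCi, with the definition above in scope. I am assuming the lazy State from Control.Monad.State.Lazy here; the strict variant simply diverges on this program.

-- A finite prefix of the final state.  The initial state is never
-- demanded, so undefined serves perfectly well as a starting point;
-- I would expect this to evaluate to [True,True,True].
check :: [Bool]
check = take 3 (execState pro undefined)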
In my recent Monad Reader article, I disentangled the bits of circular programming from Lloyd Allison’s queue, and for the last year I have been obsessed with pulling the same trick with Jones and Gibbons’ breadth-first labeling algorithm. At times, this obsession has been a bit of a disease. It’s also worth pointing out Chris Okasaki’s discussion of a particular instance of this algorithm, and the mention of this algorithm on the FP Lunch Weblog. Here is code for the special case of breadth-first numbering:
data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

lazyRenum :: Num n => Tree a b -> Tree n n
lazyRenum t = t'
  where
    -- The input list supplies the starting number for each level; its tail
    -- is the output list itself, so each level starts numbering where the
    -- previous level stops.
    (ns, t') = renumber (0:ns, t)

    renumber (n:ns, Leaf   _    ) = (n+1:ns  , Leaf n        )
    renumber (n:ns, Branch _ l r) = (n+1:ns'', Branch n l' r')
      where
        (ns' , l') = renumber (ns , l)
        (ns'', r') = renumber (ns', r)
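As a quick sanity check, here is a small example tree (just something I made up for illustration) and the numbering I would expect from it:

example :: Tree () ()
example = Branch () (Branch () (Leaf ()) (Leaf ())) (Leaf ())

-- Numbering proceeds level by level: 0 at the root, 1 and 2 on the
-- second level, 3 and 4 on the third, so I would expect
--   Branch 0 (Branch 1 (Leaf 3) (Leaf 4)) (Leaf 2)
renumbered :: Tree Integer Integer
renumbered = lazyRenum example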
I finally disentangled the corecursive bits from this example today. The circular programming occurs in the list argument, not the tree. Note that the flow of the list ns from one call of renumber to the next is like the state monad. From this observation, I wrote the following whimsical code:
lazyRenum :: Num n => Tree a b -> Tree n n
lazyRenum = runFresh . renumber
  where
    renumber (Leaf   _    ) = fresh (\n -> do
        return (Leaf n))
    renumber (Branch _ l r) = fresh (\n -> do
        l' <- renumber l
        r' <- renumber r
        return (Branch n l' r'))
Once I had this right, it was pretty easy to fill in the definitions for fresh and runFresh, by cribbing off Chris Okasaki’s simplification of Jones and Gibbons’ algorithm:
type Fresh n a = State [n] a

-- Tie the knot: the final state records, level by level, the next unused
-- number; feeding it back in (with 0 in front) means each level starts
-- numbering where the previous level stops.
runFresh :: Num n => Fresh n a -> a
runFresh m = a
  where
    (a, ns) = runState m (0:ns)

-- Grab the next number for the current level, run the body one level
-- deeper, then push this level's counter back on top, incremented.
fresh :: Num n => (n -> Fresh n a) -> Fresh n a
fresh f = do
    (n:ns) <- get
    put ns
    a <- f n
    ns' <- get
    put ((n+1) : ns')
    return a
And finally, we have arrived at a way to disentangle Jones and Gibbons’ algorithm. This easily generalizes from breadth-first numbering to breadth-first labeling, and like the original, it is capable of renumbering the Stern-Brocot tree. The key insights here are the use of the lazy state monad, and getting the type of fresh correct. Everything else is relatively straightforward, once this groundwork is laid.
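To give a flavour of the labeling generalization, here is how I would sketch it: replace the per-level counters with per-level label supplies, so that each level picks up wherever the previous level’s supply leaves off. The names FreshL, runFreshL, and freshL are my own inventions for this sketch, not code from the paper:

type FreshL l a = State [[l]] a

-- Tie the knot: level k+1 starts with whatever labels level k leaves over.
runFreshL :: [l] -> FreshL l a -> a
runFreshL labels m = a
  where
    (a, lss) = runState m (labels : lss)

-- Take one label for the current level, run the body one level deeper,
-- then push the rest of this level's supply back on top.  (The lazy
-- pattern keeps this free of any MonadFail constraint.)
freshL :: (l -> FreshL l a) -> FreshL l a
freshL f = do
    ~((l:ls) : lss) <- get
    put lss
    a <- f l
    lss' <- get
    put (ls : lss')
    return a

The renumber above goes through unchanged, with fresh replaced by freshL and runFresh by runFreshL labels, labeling the tree breadth-first from a single list of labels.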
It’s interesting to perform a post-mortem analysis of why coming up with this proved to be so difficult for me. I’ll admit that I spent a few weeks trying to decipher the operational characteristics of Jones and Gibbons’ labeling algorithm, and while I think I have a reasonable grasp on it, I’m still not completely comfortable with it. However, this monadic abstraction seems perfectly obvious in hindsight.
This contrasts starkly with my own understanding of Lloyd Allison’s queue: I was completely comfortable with the operational aspects of the algorithm, but the monadic abstraction was quite difficult to come to grips with. So my difficulties with Jones and Gibbons’ algorithm stemmed from an over-emphasis on the operational aspects of the algorithm, and from too much focus on the continuation monad as part of the solution. Basically, I was hopeful that the same methodologies from my Monad Reader article would be directly useful, so much so that I had a case of tunnel vision.
While it is not obvious how the continuation monad might be applicable to this problem, continuations still did play a role in the solution: look at the type of fresh; it’s the same (a -> r) -> r pattern that the continuation monad uses. It seems to me that there is something deeper here, although I don’t know what that would be.
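To make the resemblance explicit, compare the two shapes side by side, writing out the textbook definition of Cont by hand:

-- fresh              :: Num n => (n -> Fresh n a) -> Fresh n a
-- Cont, unwrapped, is a function of shape (a -> r) -> r:
newtype Cont r a = Cont { runCont :: (a -> r) -> r }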
These examples might make a stronger case for the lazy state monad than the example in my previous post. While foldrByLevel is relatively easy to adapt to the continuation passing state monad, I don’t know how to do the same with either of these two examples.