Melding Monads

2009 December 30

Fun with the Lazy State Monad

Filed under: continuations, corecursion, haskell, monads — lpsmith @ 9:20 pm

The lazy state monad doesn’t work the way you think it works. Thinking of the lazy state monad in terms of imperative programming is a very useful first approximation, at least with respect to what is computed; but with respect to how things are computed, this intuition is beyond useless. I hope to disabuse you of the latter part of this intuition in this post, by demonstrating two of the more interesting things that can be done with the lazy state monad.

Albert Y.C. Lai recently shared a program that demonstrates the lazy state monad in a rather interesting way:

pro :: State [Bool] ()
pro = do
  pro
  s <- get
  put (True : s)

Instead of tail recursion, this employs head recursion. The question is: what final state does pro produce when you run it, and why? I found it easy to guess the correct answer, but my reasoning was completely wrong. Of course, viewed through a wholly imperative mindset, this is a non-terminating loop, but the lazy state monad extracts usable information from this definition.
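As a quick sanity check (my own demo, not part of Lai’s example), we can run pro with an initial state that is never demanded; head recursion unwinds productively, and the final state is an infinite list of True:

import Control.Monad.State.Lazy

-- 'pro' as defined above; the initial state is never forced,
-- so even a bottom value works there.
demo :: [Bool]
demo = take 5 (execState pro (error "initial state is never forced"))
-- demo == [True,True,True,True,True]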

In my recent Monad Reader article, I disentangled the bits of circular programming from Lloyd Allison’s queue, and for the last year I have been obsessed with pulling the same trick with Jones and Gibbons’ breadth-first labeling algorithm. At times, this obsession has been a bit of a disease. It’s also worth pointing out Chris Okasaki’s discussion of a particular instance of this algorithm, and the mention of this algorithm on the FP Lunch Weblog. Here is code for the special case of breadth-first numbering:

data Tree a b = Leaf a | Branch b (Tree a b) (Tree a b)

lazyRenum :: Num n => Tree a b -> Tree n n
lazyRenum t = t'
   where
     -- Tie the knot: the supply 0:ns starts level 0 at 0, and starts
     -- each deeper level at the label where the level above it ends.
     (ns, t') = renumber (0:ns, t)

     -- The head of the supply is the next free label at the current
     -- node's depth; the tail is the supply for the levels below,
     -- threaded through the children from left to right.
     renumber (n:ns,  Leaf    _    ) = (n+1:ns  , Leaf    n      )
     renumber (n:ns,  Branch  _ l r) = (n+1:ns'', Branch  n l' r')
       where
         (ns' , l')  = renumber (ns , l)
         (ns'', r')  = renumber (ns', r)

I finally disentangled the corecursive bits from this example today. The circular programming occurs in the list argument, not the tree. Note that the list ns is threaded from one call of renumber to the next exactly as state is threaded through the state monad. From this observation, I wrote the following whimsical code:

lazyRenum :: Num n => Tree a b -> Tree n n
lazyRenum = runFresh . renumber 
   where
     renumber (Leaf    _    ) 
        = fresh (\n -> do 
                          return (Leaf n))
     renumber (Branch  _ l r) 
        = fresh (\n -> do
                          l' <- renumber l
                          r' <- renumber r
                          return (Branch n l' r'))

Once I had this right, it was pretty easy to fill in the definitions for fresh and runFresh, by cribbing off Chris Okasaki’s simplification of Jones and Gibbons’ algorithm:

type Fresh n a = State [n] a 

runFresh :: Num n => Fresh n a -> a
runFresh m = a
      where
          -- The same knot as before: the initial supply 0:ns is defined
          -- in terms of the final supply ns that the computation leaves
          -- behind.
          (a, ns) = runState m (0:ns)

fresh :: Num n => (n -> Fresh n a) -> Fresh n a
fresh f = do
   (n:ns) <- get            -- n is the next free label at this depth
   put ns                   -- f sees only the supply for deeper levels
   a <- f n
   ns' <- get               -- the deeper supply, as updated by f
   put ((n+1) : ns')        -- restore this depth's counter, incremented
   return a

And finally, we have arrived at a way to disentangle Jones and Gibbons’ algorithm. This easily generalizes from breadth-first numbering to breadth-first labeling, and like the original, it is capable of renumbering the Stern-Brocot tree. The key insights here are the use of the lazy state monad, and getting the type of fresh correct. Everything else is relatively straightforward, once this groundwork is laid.
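Here is a small exercise of these definitions (my own demo, not from the original post). The renumbering stays productive on an infinite tree, because each level of a binary tree is finite; this is what makes renumbering the Stern-Brocot tree work:

-- Breadth-first numbering labels the root 0, then each successive
-- level from left to right:
finite :: Tree Int Int
finite = lazyRenum (Branch () (Leaf ()) (Branch () (Leaf ()) (Leaf ())))
-- finite == Branch 0 (Leaf 1) (Branch 2 (Leaf 3) (Leaf 4))

-- An infinite tree with the shape of the Stern-Brocot tree:
infiniteShape :: Tree () ()
infiniteShape = Branch () infiniteShape infiniteShape

-- Truncate to a finite depth for inspection:
cut :: Int -> Tree n n -> Tree n n
cut _ t@(Leaf _)     = t
cut 0 (Branch n _ _) = Leaf n
cut d (Branch n l r) = Branch n (cut (d-1) l) (cut (d-1) r)

-- cut 1 (lazyRenum infiniteShape) == Branch 0 (Leaf 1) (Leaf 2)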

It’s interesting to perform a post-mortem analysis of why coming up with this proved to be so difficult for me. I’ll admit that I spent a few weeks trying to decipher the operational characteristics of Jones and Gibbons’ labeling algorithm, and while I think I have a reasonable grasp on it, I’m still not completely comfortable with it. However, this monadic abstraction seems perfectly obvious in hindsight.

This contrasts starkly with my own understanding of Lloyd Allison’s queue: I was completely comfortable with the operational aspects of the algorithm, but the monadic abstraction was quite difficult to come to grips with. So my difficulty with Jones and Gibbons’ algorithm was an over-emphasis on its operational aspects, and too much focus on the continuation monad as part of the solution. Basically, I was so hopeful that the methodologies of my Monad Reader article would be directly applicable that I had a case of tunnel vision.

While it is not obvious how the continuation monad might apply to this problem, continuations still played a role in the solution: look at the type of fresh, and you will recognize the same (a -> r) -> r pattern that the continuation monad uses. It seems to me that there is something deeper here, although I don’t know what it would be.

These examples might make a stronger case for the lazy state monad than the example in my previous post. While foldrByLevel is relatively easy to adapt to the continuation-passing state monad, I don’t know how to do the same with either of these two examples.

2009 December 20

Are continuations really the mother of all monads?

Filed under: continuations, haskell, monads — lpsmith @ 8:34 am

So Issue 14 of the Monad Reader is out, which includes my paper “Lloyd Allison’s Corecursive Queues” with a few significant revisions. Compared to earlier versions mentioned on this blog, I moved a few sections around to improve the flow: the first 8 pages now contain an uninterrupted informal derivation of the queue operators. I also demonstrated a new fixpoint operator on continuations, an implementation of the standard mfix fixpoint on the codensity monad, and argued that mapCont cannot be implemented in terms of callCC.

However, this post focuses on the section that I moved, entitled “Monad Transformers”, now on pages 51 to 56. It’s basically a short rant about why I don’t like monad transformers, an argument that later culminates in a mostly broken corecursive queue transformer. In retrospect, it’s somewhat unfortunate that I did not reread the classic paper Monad Transformers and Modular Interpreters sometime before Issue 14 came out. I read that paper approximately nine or ten years ago, and it helped me understand how monads could be shaped to my own ends; but now that I actually understand its contents, it feels rather crufty.

Section 8.1 defines correctness criteria for a monad transformer and associated lifted operations, which I quote as follows:

The basic requirement of lifting is that any program which does not use the added features should behave in the same way after a monad transformer is applied.

The thrust of my argument is that this requirement is indeed very basic; one would hope that certain properties useful for reasoning inside a given monadic language would also be preserved. This additional requirement seems rather hopeless. However, the pièce de résistance of the argument is that the continuation transformer is incorrect by their own criteria, at least in the context of a lazy language such as Haskell or Gofer.

I demonstrate an expression that depends on the laziness of the lazy state monad and fails to terminate after a continuation transformer is applied. (As an aside, it doesn’t matter whether this is the ContT r (StateT st Identity) monad or StateT st (ContT r Identity); they are the same monad with the same operations.) In retrospect, this seems obvious: something written in continuation-passing style specifies an evaluation order independent of the host language, and applying the continuation transformer corresponds to a call-by-value CPS transformation of part of the program.
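To see why, compare the two binds with the newtypes stripped away (a simplified rendering of my own, not code from the paper):

-- Lazy state bind: the let-bound pair is matched lazily, so k receives
-- thunks and can start producing output before m has finished.
bindState :: (s -> (a, s)) -> (a -> s -> (b, s)) -> (s -> (b, s))
bindState m k = \s -> let (a, s') = m s in k a s'

-- CPS bind: k is spliced into m's continuation, so the top-level
-- continuation c sees nothing until the entire chain has run to
-- completion.  This is the call-by-value sequencing.
bindCont :: ((a -> r) -> r) -> (a -> (b -> r) -> r) -> ((b -> r) -> r)
bindCont m k = \c -> m (\a -> k a c)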

The example involves a definition that computes a right fold over a level-order traversal of a tree:

-- enQ and deQ are the monadic queue operations from the paper, acting
-- on a queue held in the monad's state.
foldrByLevel :: (MonadState (Queue a) m)
             => (a -> [a])
             -> (a -> b -> b) -> b -> [a] -> m b 
foldrByLevel childrenOf f b as = fold as 
  where
    -- out of siblings: dequeue the next node and descend to its children
    fold []     = deQ >>= maybe (return b)
                          (\a -> fold (childrenOf a))
    -- visit a node: enqueue it so its children are reached after the
    -- rest of the current level, then combine on the way out
    fold (a:as) = do 
                     enQ a 
                     b <- fold as
                     return (f a b)

If we use this definition to traverse an infinite tree, it will be productive when run with Control.Monad.State.Lazy, but will get stuck in an infinite non-productive loop when combined with Control.Monad.Cont. You can download a complete file that demonstrates this phenomenon. There are “simpler” expressions that demonstrate this, but the example I gave is itself interesting because it is useful in other contexts.
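For readers who want to reproduce this without the paper’s corecursive queue, here is a self-contained sketch; the plain list queue below, along with this enQ and deQ, is my own stand-in, not the paper’s implementation:

{-# LANGUAGE FlexibleContexts #-}
import Control.Monad.State.Lazy

-- A naive list-based queue, standing in for the paper's corecursive queue.
type Queue a = [a]

enQ :: MonadState (Queue a) m => a -> m ()
enQ a = modify (++ [a])

deQ :: MonadState (Queue a) m => m (Maybe a)
deQ = do
  q <- get
  case q of
    []     -> return Nothing
    (a:as) -> put as >> return (Just a)

-- An infinite tree: node n has children 2n and 2n+1.
children :: Integer -> [Integer]
children n = [2*n, 2*n+1]

levelOrder :: [Integer]
levelOrder = evalState (foldrByLevel children (:) [] [1]) []
-- take 10 levelOrder == [1,2,3,4,5,6,7,8,9,10], productively, in the
-- lazy state monad; the same fold under a continuation transformer
-- never delivers its first element.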

As demonstrated in my paper, the laziness can be restored by tweaking the definition of foldrByLevel to use mapCont. However, this is a leaky abstraction: in order to maintain the same behavior, we must modify existing code to make use of the “added” features in ways that are not backwards compatible. I do not know how to write a lazy state monad using continuations, or a sensible way of writing a single definition of foldrByLevel that behaves the same on both the lazy state monad and the continuation state monad.

(I use the word “sensible” because one could provide an unfaithful definition of mapCont for the lazy state monad that happens to work in the case of foldrByLevel, but fails in other contexts.)

What impact does this have on the notion that continuations are the mother of all monads? Is there a monad that corresponds to a call-by-name CPS transformation? Is it even possible to express the lazy state monad using continuations?

I do not yet have answers to these questions. One thing is clear, however: it’s tricky to generalize Filinski’s Representing Monads to lazy evaluation, if indeed it is possible to do so fully.
