Melding Monads

2010 July 21

Iteratees and a (very) Minimal Model of Shared-State Concurrency

Filed under: continuations, haskell, monads — lpsmith @ 6:26 pm

In May, I gave a brief presentation to the Chicago Haskell User’s Group on modelling shared-state concurrency in a purely functional way without using the IO or STM monads. You can download a copy of the slides and Haskell source files. The slides may be of particular interest to those who are baffled by the continuation monad but are comfortable with the state monad.

The problem motivating the presentation is the Thread Game, attributed to J Strother Moore: given two threads each running the doubler pseudo-code below, with local variables x and y and a single shared global variable a initialized to 1, what values can appear in the global variable? Can you prove your answer?

doubler = loop
            x := a
            y := a
            a := x + y
          end

It should be noted that no synchronization is used, that each line represents an atomic instruction, and that arbitrary schedules are allowed (e.g. they need not be fair).

The talk covered two implementations of a function from finite schedules to the final value of a, providing a model of the Thread Game. The first implementation used a relatively straightforward technique reminiscent of automata and Turing Machines from theoretical computer science. The local configuration for a thread consists of the local variables plus an instruction pointer. The delta function step_doubler takes a local configuration and the current value of the global variable, and produces a new local configuration and value for the global variable.

type IP     = Int       -- Instruction Pointer
type Local  = Integer   -- Local Variable
type Global = Integer   -- Global Variable

type LocalConf  = (IP, Local, Local)

step_doubler :: (LocalConf,Global) → (LocalConf,Global)
step_doubler ( (ip,x,y) , a )
   = case ip of
      0 → ( (1,a,y) , a   )
      1 → ( (2,x,a) , a   )
      2 → ( (0,x,y) , x+y )
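
For example, here is a freshly initialized thread stepped through one full pass by hand; the two reads leave the global variable untouched, and the write doubles it:

>>> step_doubler ((0,0,0), 1)     -- x := a
((1,1,0),1)
>>> step_doubler ((1,1,0), 1)     -- y := a
((2,1,1),1)
>>> step_doubler ((2,1,1), 1)     -- a := x + y
((0,1,1),2)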

The global configuration consists of the value of the global variable, and one local configuration for each doubler process. The global step function takes the global configuration and a process identifier, and produces another global configuration.

type PID        = Int    -- Process Identifier
type GlobalConf = (LocalConf, LocalConf, Global)

step_global :: GlobalConf → PID → GlobalConf
step_global (d0, d1, a) pid
  = case pid of
     0 → let (d', a') = step_doubler (d0, a) in (d', d1, a')
     1 → let (d', a') = step_doubler (d1, a) in (d0, d', a')

The function from schedules to the final value in a is just a left fold, resulting in the first model of the thread game:

threadGame :: [PID] → Integer
threadGame pids = a 
  where
    ( _, _, a ) = foldl step_global ( (0,0,0), (0,0,0), 1 ) pids
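
A few hand-worked schedules make the model concrete; the last one shows that values other than powers of two are reachable:

>>> threadGame [0,0,0]           -- thread 0 completes one full pass
2
>>> threadGame [0,0,0,1,1,1]     -- each thread completes a full pass in turn
4
>>> threadGame [0,1,1,1,0,0]     -- thread 0 reads a both before and after thread 1's pass
3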

While this technique is generic and relatively straightforward, the code itself is not generic, and it is fairly far removed from the original specification. It would seem natural to model doubler using the state monad, as shown below. The state monad can handle the local variables and the global variable, but it cannot represent the instruction pointer or provide the co-routine facility needed to model the given problem.

doubler = do
    x ← get
    y ← get
    put (x+y)
    doubler

The continuation monad can represent the instruction pointer, but because Haskell is a pure language, further tricks are needed to mediate between the thread scheduler and each doubler process.

Iteratee-based Shared-State Concurrency

The second implementation used an idea derived from Iterators, as shown to me by Roshan James. These are a more generic variant of Oleg’s Iteratees. Iteratees are something of a hot topic these days, being a core feature of the Snap Framework and the subject of assorted blog posts.

data Iterator i o r
   = Done r
   | Result o (i → Iterator i o r)

The idea here is that an iterator is either done, and returns a final result (of type r), or it might want to produce more, so it returns an intermediate result (of type o) along with a continuation to be called if and when further computation is required. Interestingly, Stephen Blackheath also used an equivalent structure in today’s post, Co-routines in Haskell.

Basically, each thread is an Iteratee that accepts the value of the global variable as input, and produces a new value for the global variable and its own continuation as output. This continuation represents the local configuration, which consists of the instruction pointer and the two local variables. The thread scheduler manages these continuations and feeds the global variable into the corresponding continuation.
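
As a rough sketch of this protocol (just an illustration of mine, not the talk’s Iterator solution), doubler can be written directly against the Iterator type: the thread is a function from the current value of a to an Iterator, and each Result yields the new value of a along with the continuation for the next instruction.

doublerIter :: Global → Iterator Global Global r
doublerIter x =                  -- the scheduler passes in the current a; binding it is  x := a
  Result x $ \y →                -- yield a unchanged; the second read, y := a, may see a newer value
  Result y $ \_ →                -- yield a unchanged again; the write ignores the value it is handed
  Result (x + y) doublerIter     -- a := x + y, then loop

A scheduler for this representation pattern-matches on Result, stores the yielded value into the global variable, and replaces the thread with the continuation.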

I had (almost) invented iteratees myself a number of years ago, although I didn’t realize the connection until after I had begun to write this blog post. Back in the year 2000, I helped write a paper about implementing finite automata and Turing machines in functional languages by directly using the standard textbook definition. In the process I stumbled on an alternate representation of a DFA that I did not publish, mentioned briefly here.

data DFA st ab = DFA (st → ab → st) st (st → Bool)

data ContDFA ab = ContDFA Bool (ab → ContDFA ab)

The first representation is the standard textbook definition: an automaton consists of a set of states st, an alphabet ab, a transition function, a start state, and (the characteristic function of) a set of final states. Interestingly, this representation was revisited in the most recent issue of the Monad Reader.

The second representation is a special case of the Iterator above, without the Done alternative. Basically, a ContDFA represents a single state in the machine: it has a boolean field saying whether or not the state is final, and a function that accepts a letter and returns the next state.
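
A tiny example (mine, not from the paper) may make the correspondence clearer: the machine accepting strings with an even number of '1' characters, in both representations.

-- Textbook representation: the states are Booleans tracking the parity seen so far.
evenOnes :: DFA Bool Char
evenOnes = DFA step True id
  where
    step parity '1' = not parity
    step parity _   = parity

-- Continuation-style representation: each state is itself a ContDFA value,
-- carrying its finality and the transition to the next state.
evenOnes' :: ContDFA Char
evenOnes' = evenSt
  where
    evenSt = ContDFA True  (\c → if c == '1' then oddSt  else evenSt)
    oddSt  = ContDFA False (\c → if c == '1' then evenSt else oddSt)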

When I was first exposed to Iterators, I was confused by them and the way my friend was using them. Funnily enough, I completely missed the connection to my ContDFA invention, even though I was immediately reminded of Mealy Machines and thought of that project! In a way, it was a fortuitous slip of the mind. While thinking about implementing Mealy Machines with continuations in an effort to understand Roshan’s work, I came up with “functional iterators”. (If anybody can suggest a better name, please speak up!)

newtype FunIter i o r
      = FunIter (FunIter o i r → i → r)

Compared to Iterator, FunIter assumes that the consumer of the intermediate values is itself written in continuation-passing style. I don’t understand the exact relationship between FunIter and Iterator; they are remarkably similar, but there are trade-offs between what these two structures can represent, trade-offs I don’t understand very well.

Both Iterator and FunIter are capable of expressing the coroutine facility needed to model the thread game; however, my Iterator solution has a few vestigial features, and I like my FunIter solution a bit better.

newtype FunIter i o r
      = FunIter {runIter :: FunIter o i r → i → r}

type Thread r st a = Cont (FunIter st st r) a

runThread :: Thread r st r → FunIter st st r
runThread m = runCont m (\r → FunIter (\_ _ → r))

I used a continuation monad to capture the local continuations, while FunIter mediates the transition between the local continuations and the scheduler continuation. We can then define get and put, which yield to the scheduler continuation:

get    :: Thread r st st
get    = Cont (\k → FunIter (\(FunIter sk) st →  sk (k st) st))
put    :: st → Thread r st ()
put st = Cont (\k → FunIter (\(FunIter sk) _  →  sk (k ()) st))
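
With get and put in place, the doubler from the state-monad sketch above can be reused verbatim in the Thread monad, which is what threadGame' below relies on; with an explicit signature (my addition), it reads:

doubler :: Thread r Integer a
doubler = do
    x ← get
    y ← get
    put (x+y)
    doubler

Since doubler never returns, its result type stays polymorphic, so runThread can instantiate it at r.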

Here is a more generic thread scheduler:

update :: Int → a → [a] → [a]
update n x [] = []
update n x (y:ys) 
  | n <= 0    =  x : ys 
  | otherwise =  y : update (n-1) x ys

runSchedule :: [FunIter st st st] → [PID] → st → st
runSchedule threads []          st = st
runSchedule threads (pid:pids)  st = runIter (threads !! pid) sched st
   where
     sched = FunIter $ \iter st' → 
               runSchedule (update pid iter threads) pids st'

And finally, we can bring everything together into an alternate model of the thread game:

threadGame' :: [PID] → Integer
threadGame' sched = runSchedule [runThread doubler, runThread doubler] sched 1
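
As a quick sanity check (not part of the original talk; it assumes QuickCheck is available), the two models ought to agree on every finite schedule of valid PIDs:

import Test.QuickCheck (quickCheck)

-- Both models should compute the same final value of a for any schedule.
-- Booleans serve as the generator, so only the valid PIDs 0 and 1 appear.
prop_modelsAgree :: [Bool] → Bool
prop_modelsAgree bits = threadGame pids == threadGame' pids
  where pids = map fromEnum bits

-- run with:  quickCheck prop_modelsAgree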

I don’t know who first invented Iteratees, but the idea has been lurking in a variety of different contexts for some time. The functional iterators presented here might be original. The second solution is a bit longer and more esoteric, but it’s more generic and should scale better in terms of source code and maintenance.
