Alright, it’s been almost a month since I last posted, and I wanted to post weekly. Perhaps bi-weekly would be a better goal, as long as I really can stick to it.

So, a few months ago, while working on a combinatorial problem for Project Euler, I was re-implementing polynomial arithmetic, and made an interesting observation. I was using a sparse list representation, with each element of the list being `M c k`, representing the monomial `cx^k`.

> data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

A polynomial is represented by a list of monomials, which is assumed to be sorted in ascending order of the power `k` above, and every power is assumed to be unique in our representation. Thus we can easily define addition and subtraction using standard techniques for dealing with ordered lists:

> mergeBy :: (Integer -> Integer -> Integer) -> [Mono] -> [Mono] -> [Mono]
> mergeBy f xs ys = loop xs ys
>   where
>     loop [] ys = ys
>     loop xs [] = xs
>     loop (M c0 k0:xs) (M c1 k1:ys)
>       = case compare k0 k1 of
>           LT -> M c0 k0 : loop xs (M c1 k1:ys)
>           EQ -> let c' = f c0 c1
>                  in if c' == 0
>                       then loop xs ys
>                       else M c' k0 : loop xs ys
>           GT -> M c1 k1 : loop (M c0 k0:xs) ys
>
> add = mergeBy (+)
> sub = mergeBy (-)
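For concreteness, here is a small example of `add` in action (a self-contained sketch, with `Mono` and `mergeBy` repeated from above so it compiles on its own): adding 1 + x and −1 + x² hits the `EQ` case at power 0, where the coefficients cancel and the term is dropped.

```haskell
-- Self-contained sketch; Mono and mergeBy are copied from the post.
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

mergeBy :: (Integer -> Integer -> Integer) -> [Mono] -> [Mono] -> [Mono]
mergeBy f xs ys = loop xs ys
  where
    loop [] ys = ys
    loop xs [] = xs
    loop (M c0 k0:xs) (M c1 k1:ys) =
      case compare k0 k1 of
        LT -> M c0 k0 : loop xs (M c1 k1:ys)
        EQ -> let c' = f c0 c1
              in if c' == 0 then loop xs ys else M c' k0 : loop xs ys
        GT -> M c1 k1 : loop (M c0 k0:xs) ys

add :: [Mono] -> [Mono] -> [Mono]
add = mergeBy (+)

main :: IO ()
main = print (add [M 1 0, M 1 1] [M (-1) 0, M 1 2])
-- the constant terms cancel, leaving x + x^2
```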

It’s also very straightforward to multiply two monomials, and to perform a “scalar” multiplication of a single monomial with a polynomial. Note that the degree of each term is changed by a constant amount, so we don’t have to worry about re-ordering the list, and that the first case of `smul` is a completely optional minor optimization.

> mmul (M c0 k0) (M c1 k1) = M (c0 * c1) (k0 + k1)
>
> smul (M 1 0) ys = ys
> smul x ys = map (mmul x) ys
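Two tiny examples may help (a self-contained sketch, with the definitions repeated from above): `mmul` multiplies coefficients and adds exponents, and `smul` distributes a monomial over a polynomial without disturbing the ordering.

```haskell
-- Self-contained sketch; Mono, mmul, and smul are copied from the post.
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

-- multiply two monomials: coefficients multiply, exponents add
mmul :: Mono -> Mono -> Mono
mmul (M c0 k0) (M c1 k1) = M (c0 * c1) (k0 + k1)

-- "scalar" multiplication of a monomial by a polynomial;
-- the first case short-circuits multiplication by the monomial 1
smul :: Mono -> [Mono] -> [Mono]
smul (M 1 0) ys = ys
smul x ys = map (mmul x) ys

main :: IO ()
main = do
  print (mmul (M 2 1) (M 3 2))        -- 2x * 3x^2 = 6x^3
  print (smul (M 2 1) [M 1 0, M 3 2]) -- 2x * (1 + 3x^2) = 2x + 6x^3
```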

Taking into account the possibility of polynomials with an infinite number of terms, it’s now fairly easy to implement polynomial multiplication in terms of `add`, `smul`, and `mmul`:

> pmul _ [] = []
> pmul [] (_:_) = []
> pmul (x:xs) (y:ys) = mmul x y : add (smul x ys) (pmul xs (y:ys))
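As a quick sanity check (a self-contained sketch repeating the post’s definitions), squaring 1 + x with `pmul` produces 1 + 2x + x²:

```haskell
-- Self-contained sketch; all definitions are copied from the post.
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

mergeBy :: (Integer -> Integer -> Integer) -> [Mono] -> [Mono] -> [Mono]
mergeBy f xs ys = loop xs ys
  where
    loop [] ys = ys
    loop xs [] = xs
    loop (M c0 k0:xs) (M c1 k1:ys) =
      case compare k0 k1 of
        LT -> M c0 k0 : loop xs (M c1 k1:ys)
        EQ -> let c' = f c0 c1
              in if c' == 0 then loop xs ys else M c' k0 : loop xs ys
        GT -> M c1 k1 : loop (M c0 k0:xs) ys

add :: [Mono] -> [Mono] -> [Mono]
add = mergeBy (+)

mmul :: Mono -> Mono -> Mono
mmul (M c0 k0) (M c1 k1) = M (c0 * c1) (k0 + k1)

smul :: Mono -> [Mono] -> [Mono]
smul (M 1 0) ys = ys
smul x ys = map (mmul x) ys

pmul :: [Mono] -> [Mono] -> [Mono]
pmul _ [] = []
pmul [] (_:_) = []
pmul (x:xs) (y:ys) = mmul x y : add (smul x ys) (pmul xs (y:ys))

main :: IO ()
main = print (pmul [M 1 0, M 1 1] [M 1 0, M 1 1])
-- (1 + x)^2 = 1 + 2x + x^2
```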

This is based around the algebraic identity `(x + xs) * (y + ys) == x * y + x * ys + xs * (y + ys)`. It works on infinite polynomials because `x` and `y` are the terms of least degree to be multiplied, and every other product will have a higher degree. Unfortunately, this code turns out to be hopelessly inefficient.

The second thing I tried was simply changing the order in which arguments are passed to the recursive call to `pmul`:

> pmul2 _ [] = []
> pmul2 [] (_:_) = []
> pmul2 (x:xs) (y:ys) = mmul x y : add (smul x ys) (pmul2 (y:ys) xs)
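For example (a self-contained sketch with the definitions repeated), `pmul2` happily yields a finite prefix of the product of two infinite series; squaring the geometric series 1 + x + x² + ⋯ gives Σ (k+1)xᵏ, so the first three coefficients are 1, 2, 3.

```haskell
-- Self-contained sketch; all named definitions are copied from the post.
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

mergeBy :: (Integer -> Integer -> Integer) -> [Mono] -> [Mono] -> [Mono]
mergeBy f xs ys = loop xs ys
  where
    loop [] ys = ys
    loop xs [] = xs
    loop (M c0 k0:xs) (M c1 k1:ys) =
      case compare k0 k1 of
        LT -> M c0 k0 : loop xs (M c1 k1:ys)
        EQ -> let c' = f c0 c1
              in if c' == 0 then loop xs ys else M c' k0 : loop xs ys
        GT -> M c1 k1 : loop (M c0 k0:xs) ys

add :: [Mono] -> [Mono] -> [Mono]
add = mergeBy (+)

mmul :: Mono -> Mono -> Mono
mmul (M c0 k0) (M c1 k1) = M (c0 * c1) (k0 + k1)

smul :: Mono -> [Mono] -> [Mono]
smul (M 1 0) ys = ys
smul x ys = map (mmul x) ys

pmul2 :: [Mono] -> [Mono] -> [Mono]
pmul2 _ [] = []
pmul2 [] (_:_) = []
pmul2 (x:xs) (y:ys) = mmul x y : add (smul x ys) (pmul2 (y:ys) xs)

-- the geometric series 1 + x + x^2 + ... (an infinite polynomial)
geo :: [Mono]
geo = [ M 1 k | k <- [0..] ]

main :: IO ()
main = print (take 3 (pmul2 geo geo))
-- first terms of 1 + 2x + 3x^2 + ...
```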

Empirically, this is asymptotically faster than the first attempt. I do not know why. Any insight would be fun to read about!

This seems very similar to the approach described in McIlroy’s old pearl, “The music of streams”: http://www.cs.dartmouth.edu/~doug/music.ps.gz

Comment by Jade NB — 2009 April 13 @ 8:54 pm

Yep, they are indeed similar, thanks for the reference! The difference is that McIlroy’s paper uses a list of coefficients, whereas this attempt uses a possibly more compact coefficient-power representation. In fact, my original attempt at polynomial arithmetic in Haskell some years ago used McIlroy’s representation and re-invented some of his techniques.

This difference might explain my observation: I suspect that the observed difference in asymptotic behavior is due to some kind of interaction with the merging algorithm.

Comment by lpsmith — 2009 April 13 @ 10:02 pm

The interesting thing to me is that the first version attempts to process one list completely first, whereas the other processes both lists “at the same speed”. I don’t know anything about Haskell internals, so can’t hope to see what difference that would make from that point of view; but maybe it has a purely theoretical explanation (from the point of view of counting operations). What were your sample inputs that evoked the asymptotically different behaviour?

Comment by Jade NB — 2009 April 13 @ 10:50 pm

Sorry for the slow reply. Unfortunately I didn’t have access to my source code for a few days to review, which didn’t turn out to be too enlightening anyway as I didn’t take careful notes of what I had done. (Other than praising or cursing various experimental implementations.)

There shouldn’t be any need to delve into nitty gritty implementation details to explain this behavior; just a basic understanding of lazy evaluation should suffice.

As for sample inputs, they consisted of products of subsets of this family of polynomials:

> pstep n = [ M 1 k | k <- [0,n..] ]
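For instance, `pstep 3` is the sparse representation of the power series 1 + x³ + x⁶ + ⋯ (a self-contained sketch, with `Mono` repeated from the post):

```haskell
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

-- pstep n enumerates the monomials x^0, x^n, x^2n, ... with coefficient 1
pstep :: Integer -> [Mono]
pstep n = [ M 1 k | k <- [0,n..] ]

main :: IO ()
main = print (take 4 (pstep 3))
-- first terms of 1 + x^3 + x^6 + x^9 + ...
```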

Comment by lpsmith — 2009 April 17 @ 8:20 am

I am terrible at Haskell, but pretty good at math:

In pmul, (y:ys) is passed down the recursive chain until it is multiplied with []. This means (x:xs) must be completely expanded just to get the second term in the resulting polynomial.

On the other hand, pmul2 does an excellent job of swapping the polynomials: effectively performing the complete expansion in two steps:

(x + xs) * (y + ys) == x*y + x*ys + xs*y + xs*ys.

——————————

Looking at the expansion for pmul: there will be many `(smul xi ys)` terms, one for each `xi` in `(x:xs)`, whereas pmul2 will call smul with arguments of decreasing size. It seems the many-term smul and the resulting adds (and sorting?) consume more resources in the former.

Comment by Kyle Lahnakoski — 2009 April 22 @ 8:58 am

It sounds like you are onto something, but you are going to have to clarify. This part of your reply, at least, is wrong.

The base cases are completely optional, so long as both polynomials being multiplied are infinite. So in a sense, these are “optionally corecursive” functions.

For example, if you want to find out how many ways you can add together 2, 3, and 5 to get, say, 1000, then all you have to do is look at the coefficient of the 1000th power of the product of `pstep 2`, `pstep 3`, and `pstep 5`, where `pstep n` represents the polynomial 1 + xⁿ + x²ⁿ + ⋯

Comment by lpsmith — 2009 April 22 @ 1:21 pm
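The product being described can be sketched as follows (self-contained, repeating the post’s definitions; `coefficient` is a helper added here for illustration, and the target is scaled down to 10 so it runs instantly). The four ways of making 10 from 2, 3, and 5 are 2+2+2+2+2, 2+2+3+3, 2+3+5, and 5+5.

```haskell
-- Self-contained sketch; pmul2 and friends are copied from the post,
-- coefficient is a hypothetical helper added for this example.
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

mergeBy :: (Integer -> Integer -> Integer) -> [Mono] -> [Mono] -> [Mono]
mergeBy f xs ys = loop xs ys
  where
    loop [] ys = ys
    loop xs [] = xs
    loop (M c0 k0:xs) (M c1 k1:ys) =
      case compare k0 k1 of
        LT -> M c0 k0 : loop xs (M c1 k1:ys)
        EQ -> let c' = f c0 c1
              in if c' == 0 then loop xs ys else M c' k0 : loop xs ys
        GT -> M c1 k1 : loop (M c0 k0:xs) ys

add :: [Mono] -> [Mono] -> [Mono]
add = mergeBy (+)

mmul :: Mono -> Mono -> Mono
mmul (M c0 k0) (M c1 k1) = M (c0 * c1) (k0 + k1)

smul :: Mono -> [Mono] -> [Mono]
smul (M 1 0) ys = ys
smul x ys = map (mmul x) ys

pmul2 :: [Mono] -> [Mono] -> [Mono]
pmul2 _ [] = []
pmul2 [] (_:_) = []
pmul2 (x:xs) (y:ys) = mmul x y : add (smul x ys) (pmul2 (y:ys) xs)

pstep :: Integer -> [Mono]
pstep n = [ M 1 k | k <- [0,n..] ]

-- extract the coefficient of x^k from an ascending sparse polynomial;
-- takeWhile keeps this total even on infinite inputs
coefficient :: Integer -> [Mono] -> Integer
coefficient k p =
  case [ c | M c j <- takeWhile (\(M _ j') -> j' <= k) p, j == k ] of
    (c:_) -> c
    []    -> 0

main :: IO ()
main = do
  let prod = pmul2 (pstep 2) (pmul2 (pstep 3) (pstep 5))
  print (coefficient 10 prod)
  -- 4 ways: 2+2+2+2+2, 2+2+3+3, 2+3+5, 5+5
```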

Yes, I was wrong: I had forgotten that add() also has a recursive definition.

I tried to apply your definitions with hope that the inefficiency of pmul would be apparent. Here is what I got before I gave up.

pmul (x1x2…xn) (y1y2y3…yn) =
    cat
      (mmul x1 y1)
      (cat
        (mmul x2 y1)
        (add
          (cat
            (mmul x1 y2)
            (map (mmul x1) (y3y4…yn)))
          (add
            (cat
              (mmul x2 y2)
              (map (mmul x2) (y3y4…yn)))
            (cat
              (mmul x3 y1)
              (add
                (smul x3 (y2y3…yn))
                (pmul (x4x5…xn) (y1y2y3…yn)))))))

I had to assume (once) the add() function expanded using the LT case, just to save myself from three of these things. Also I used “cat” for concatenation; it was easier to indent without infix operators. I must conclude Haskell has an expansion/reduction debugger because it is too laborious to do much more by hand.

It could be that the more complex pattern matching required in pmul is causing the slowdown. But all this is speculation now. And my comments are no better than Jade NB’s, who saw that the equi-speed consumption of both lists may be the factor. Assuming Haskell is pattern matching, I must say, this is the most inefficient polynomial multiplier I have ever seen. :)

PS, I do not know what your point was with your solution to “how many ways you can add together 2, 3, and 5 to get say, 1000”. It seems to me you have found another inefficient way to frame a solution. And, combined with the most inefficient polynomial multiplier I have ever seen, you have probably made a measurable contribution to the heat-death of the universe. :)

Please forgive my bad joke.

Comment by Kyle Lahnakoski — 2009 April 22 @ 6:58 pm

Nothing to forgive, in my opinion. :-) If you find this interesting, by all means, keep working at it!

Using `pmul` might be somewhat inefficient, but it should still be much faster than many of the more “obvious” alternatives for solving that particular counting problem. Of course, `pmul2` is a drop-in replacement, and is definitely the way to go.

But of course, to the trained mind, this solution is perfectly obvious. It’s covered in many books on Combinatorics, such as my copy of Brualdi (I have a much older edition).

Take a look at the first chapter or two of Generatingfunctionology. It’s a good book and available free of charge; I definitely recommend it.

Comment by lpsmith — 2009 April 23 @ 12:16 am

Hmm… Generatingfunctionology is a good book, but my recollection was wrong. It doesn’t have the introduction to using polynomial arithmetic to count things. I don’t know of an internet resource that contains a good introduction to what I’m trying to get at.

This technique is often a good choice because it easily generalizes to problem instances that are tricky to handle with simple applications of the multiplication rule and the principle of inclusion and exclusion, and it’s efficient because it has a high degree of memoization built in.

I was being a bit hyperbolic in saying that the first solution is hopelessly inefficient, but it is decidedly not good.

Comment by lpsmith — 2009 April 23 @ 2:28 am

Ran across this paper today; it looks rather applicable:

Paul Vrbik and Michael Monagan. Lazy and Forgetful Polynomial Arithmetic and Applications. (Older version?)

I’m not going to have the chance to look at this carefully for a couple of weeks, and it looks like it will take some effort to determine the exact relationship here, since the algorithms are expressed in C.

Comment by lpsmith — 2009 May 3 @ 8:21 pm

Yeah it is an older version. I have a newer (and in my opinion much better) version if you would like.

Comment by Paul Vrbik — 2009 July 4 @ 11:48 pm

Well, it’s been a little over a year now since I made this observation, and I still haven’t made a serious effort at figuring out why. On the other hand, this problem shouldn’t be that difficult, and your paper might be useful if/when I do. So by all means, send me a link or a copy, though I won’t promise to dig into it anytime soon.

I am curious, though, whether you’ve ever heard of or experimented with Haskell. :-)

Comment by lpsmith — 2010 January 19 @ 7:57 am

This post is a bit old now, but here goes anyway…

The definition of “sub” does not work. The coefficients in the monomials left over from the second list at the end of the recursion, and those emitted in the GT branch, need to be negated in order to get polynomial subtraction.

You can define sub properly with “sub x = add x . map (\(M c k) -> M (negate c) k)”. But if you do that, you might as well make “mergeBy” be the addition function, using the “+” operator instead of the merging function parameter.
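In other words (a self-contained sketch of the fix, with the post’s definitions repeated), every coefficient of the subtrahend is negated before merging, so even terms that survive the merge untouched come out with the right sign:

```haskell
-- Self-contained sketch; Mono, mergeBy, and add are copied from the post,
-- sub is the corrected definition from this comment.
data Mono = M !Integer !Integer deriving (Eq, Ord, Show)

mergeBy :: (Integer -> Integer -> Integer) -> [Mono] -> [Mono] -> [Mono]
mergeBy f xs ys = loop xs ys
  where
    loop [] ys = ys
    loop xs [] = xs
    loop (M c0 k0:xs) (M c1 k1:ys) =
      case compare k0 k1 of
        LT -> M c0 k0 : loop xs (M c1 k1:ys)
        EQ -> let c' = f c0 c1
              in if c' == 0 then loop xs ys else M c' k0 : loop xs ys
        GT -> M c1 k1 : loop (M c0 k0:xs) ys

add :: [Mono] -> [Mono] -> [Mono]
add = mergeBy (+)

-- corrected subtraction: negate the subtrahend's coefficients, then add
sub :: [Mono] -> [Mono] -> [Mono]
sub x = add x . map (\(M c k) -> M (negate c) k)

main :: IO ()
main = print (sub [M 1 1] [M 1 0])
-- x - 1, with the constant term correctly negated
```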

Comment by Jon Shapcott — 2012 August 3 @ 9:48 pm

Good catch!

Comment by lpsmith — 2012 August 4 @ 8:25 am