
Good catch!

---

The definition of “sub” does not work. The coefficients of the monomials left over at the end of the recursion, as well as those produced in the LT and GT branches, need to be negated in order to get polynomial subtraction.

You can define sub properly with “sub x = add x . map (\(M c k) -> M (negate c) k)”. But if you do that, you might as well make “mergeBy” the addition function, using the “+” operator instead of the merging-function parameter.
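For concreteness, here is a hedged sketch of that suggestion. The monomial type `M`, the ascending-exponent list representation, and the merging `add` are my own assumptions for illustration; only the `sub` one-liner comes from the comment above.

```haskell
-- Illustrative monomial type: M coefficient exponent.
-- Assumption: a polynomial is a list of monomials in ascending exponent order.
data M = M Integer Integer deriving (Eq, Show)

-- A plausible merging addition (the LT/GT/EQ cases mentioned above).
add :: [M] -> [M] -> [M]
add xs [] = xs
add [] ys = ys
add l@(M c1 k1 : xs) r@(M c2 k2 : ys) = case compare k1 k2 of
  LT -> M c1 k1        : add xs r
  GT -> M c2 k2        : add l  ys
  EQ -> M (c1 + c2) k1 : add xs ys

-- The suggested fix: negate every coefficient of the second
-- polynomial, then add.
sub :: [M] -> [M] -> [M]
sub x = add x . map (\(M c k) -> M (negate c) k)
```

With this, `sub [M 3 0, M 2 1] [M 1 0]` merges the negated second operand through all three comparison branches, so every leftover coefficient comes out with the right sign.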

---

Well, it’s been a little over a year now since I made this observation, and I still haven’t made a serious effort to figure out why. On the other hand, this problem shouldn’t be that difficult, and your paper might be useful if/when I do. So by all means, send me a link or a copy, though I won’t promise to dig into it anytime soon.

I am curious though, if you’ve ever heard of or experimented with Haskell. :-)

---

Yeah, it is an older version. I have a newer (and in my opinion much better) version if you would like it.

---

Paul Vrbik and Michael Monagan. Lazy and Forgetful Polynomial Arithmetic and Applications. (Older version?)

I’m not going to have a chance to look at this carefully for a couple of weeks, and it looks like it will take some effort to determine the exact relationship here, since the algorithms are expressed in C.

---

Hmm… Generatingfunctionology is a good book, but my recollection was wrong: it doesn’t have an introduction to using polynomial arithmetic to count things. I don’t know of an online resource with a good introduction to what I’m trying to get at.

This technique is often a good choice because it easily generalizes to problem instances that are tricky to handle with simple applications of the multiplication rule and the principle of inclusion and exclusion, and it’s efficient because it has a high degree of memoization built in.
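A minimal sketch of the technique, under my own naming (nothing here is from the original post): represent a polynomial as a dense list of coefficients, where exponents encode the quantity being counted; multiplying polynomials then tallies all the ways the quantities combine. For example, the number of ways two dice sum to 7:

```haskell
-- Dense representation: [1,0,2] stands for 1 + 2x^2.
polyMul :: [Integer] -> [Integer] -> [Integer]
polyMul xs ys =
  [ sum [ (xs !! i) * (ys !! (k - i))
        | i <- [0 .. k], i < length xs, k - i < length ys ]
  | k <- [0 .. length xs + length ys - 2] ]

-- One die contributes x + x^2 + ... + x^6 (one term per face).
die :: [Integer]
die = 0 : replicate 6 1

-- Coefficient of x^7 in (x + ... + x^6)^2: ways two dice total 7.
waysToSeven :: Integer
waysToSeven = polyMul die die !! 7   -- 6
```

The partial sums computed along the way are exactly the counts for all the smaller totals, which is the built-in memoization alluded to above.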

I was being a bit hyperbolic in saying that the first solution is *hopelessly* inefficient, but it is decidedly not good.

Nothing to forgive, in my opinion. :-) If you find this interesting, by all means, keep working at it!

Using `pmul` might be somewhat inefficient, but it should still be much faster than many of the more “obvious” alternatives for solving that particular counting problem. Of course, `pmul2` is a drop-in replacement, and is definitely the way to go.

But of course, to the trained mind, this solution is perfectly obvious. It’s covered in many books on combinatorics, such as my copy of Brualdi (I have a much older edition).

Take a look at the first chapter or two of Generatingfunctionology. It’s a good book and available free of charge; I definitely recommend it.

---

I tried to expand your definitions by hand, hoping the inefficiency of pmul would become apparent. Here is what I got before I gave up.

```
pmul (x1x2…xn) (y1y2y3…yn) =
  cat
    (mmul x1 y1)
    (cat
      (mmul x2 y1)
      (add
        (cat
          (mmul x1 y2)
          (map (mmul x1) (y3y4…yn))
        )
        (add
          (cat
            (mmul x2 y2)
            (map (mmul x2) (y3y4…yn))
          )
          (cat
            (mmul x3 y1)
            (add
              (smul x3 (y2y3…yn))
              (pmul (x4x5…xn) (y1y2y3…yn))
            )
          )
        )
      )
    )
```

I had to assume (once) that the add function expanded using the LT case, just to save myself from writing out three more of these. Also, I used “cat” for concatenation; it was easier to indent without infix operators. I have to conclude that Haskell must have an expansion/reduction debugger, because it is too laborious to do much more of this by hand.

It could be that the more complex pattern matching required in pmul is causing the slowdown. But all this is speculation at this point. And my comments are no better than Jade NB’s, who observed that the equal-speed consumption of both lists may be the factor. Assuming it is the pattern matching, I must say this is the most inefficient polynomial multiplier I have ever seen. :)

P.S. I do not know what your point was with your solution to “how many ways you can add together 2, 3, and 5 to get, say, 1000”. It seems to me you have found another inefficient way to frame a solution. And, combined with the most inefficient polynomial multiplier I have ever seen, you have probably made a measurable contribution to the heat-death of the universe. :)

Please forgive my bad joke.

---

In pmul, (y:ys) is passed down the recursive chain until it is multiplied with []. This means (x:xs) must be completely expanded just to get the second term in the resulting polynomial.

It sounds like you are onto something, but you are going to have to clarify. This part of your reply, at least, is wrong.

The base cases are completely optional, so long as both polynomials being multiplied are infinite. So in a sense, these are “optionally corecursive” functions.
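To illustrate the corecursive flavor, here is the classic power-series-as-streams formulation, written from scratch for this sketch (it is not the post’s pmul): with dense, infinite coefficient streams, multiplication needs no base case at all.

```haskell
type Series = [Integer]   -- infinite dense coefficient stream

addS :: Series -> Series -> Series
addS = zipWith (+)

-- Deliberately no base case: both operands are assumed infinite.
mulS :: Series -> Series -> Series
mulS (x:xs) ys@(_:ys') = x * head ys : addS (map (x *) ys') (mulS xs ys)

ones :: Series            -- 1 + x + x^2 + ... = 1/(1-x)
ones = 1 : ones

-- Coefficient of x^n in 1/(1-x)^2 is n + 1; e.g. n = 3 gives 4.
coeff3 :: Integer
coeff3 = mulS ones ones !! 3   -- 4
```

Laziness makes this terminate for any finite prefix you demand, which is exactly the sense in which the base cases become optional.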

For example, if you want to find out how many ways you can add together 2, 3, and 5 to get, say, 1000, then all you have to do is look at the coefficient of the 1000th power of:

    (pstep 2 `pmul` pstep 3) `pmul` pstep 5

where `pstep n` represents the polynomial