Yes, there's lots of free and open-source computer algebra systems out there, of which Sage is probably the most-used. I have access to Mathematica and others on the school's computers, but I think it's obvious why I'd prefer something open-source -- a proof should be verifiable, and if you use a CAS computation for a proof, then your CAS's source code is implicitly part of the proof.
You see, I've got this program that I've written in Haskell that needs to do a bunch of comparisons of numbers made from logs. Lots of numbers like 2 - 3log₃2 floating around, you see[0]. Now, so far double precision has been more than enough, but as I compute farther that may well cease to be the case. So I downloaded an arbitrary-precision arithmetic library, ERA, which unfortunately is rather slower than just using doubles, but does the job... mostly.
There's actually a serious problem with ERA -- when I say "arbitrary precision", that means I want comparisons to be *correct*. If I ask whether one number is less than another, it should compute until it can tell. Unfortunately that's not what it does, meaning it doesn't do what I care about. 40 digits is better than what you'll get with doubles, and almost certainly good enough, but do I have a guarantee that it's good enough? No? Then I don't have a proof.
And of course there's a more fundamental problem: No matter what your arbitrary precision arithmetic library, there's no way you can verify equality in finite time by computing more digits! Currently in my program I've worked around this by doing such tests via integer arithmetic whenever equality is actually possible... but if I want this thing to accept more general inputs, I'm not going to be able to predict in advance when that could happen and route around it. No, if you want to verify equality, you need to be able to do symbolic computations, and that means a CAS. Well, either that or rolling my own linear-combination-of-logs type and then converting all comparisons to integer arithmetic (since I'm too lazy to learn what the fast algorithms might be). Seriously though, this thing is slow enough already.
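For what it's worth, if every quantity really is an integer combination of 1 and log₃2 (as with 2 - 3log₃2), the linear-combination-of-logs type doesn't have to be much code. Here's a minimal sketch, with made-up names, assuming every value has the form a + b·log₃2 with integer a and b, and reducing every comparison to comparing powers of 2 and 3:

```haskell
-- Hypothetical sketch: values of the form a + b*log_3 2, a and b integers.
-- Since log_3 2 is irrational, the representation is unique, so the
-- derived structural equality is exact.
data LogLin = LogLin Integer Integer
  deriving (Eq, Show)

-- Exact comparison by clearing the logarithm:
--   a + b*log_3 2  <  c + d*log_3 2
--     <=>  (a - c)  <  (d - b) * log_3 2
--     <=>  3^(a-c)  <  2^(d-b)
-- Moving any negative exponent to the other side keeps everything in
-- nonnegative integer powers, which (^) on Integer computes exactly.
cmpLogLin :: LogLin -> LogLin -> Ordering
cmpLogLin (LogLin a b) (LogLin c d) =
    compare (3 ^ max 0 p * 2 ^ max 0 (negate q))
            (2 ^ max 0 q * 3 ^ max 0 (negate p))
  where
    p = a - c  -- difference of the integer parts
    q = d - b  -- difference of the log_3 2 coefficients
```

So e.g. cmpLogLin (LogLin 2 (-3)) (LogLin 0 0) confirms that 2 - 3log₃2 is positive, with no floating point anywhere. The powers can get large, which is presumably where the slowness comes in; but it always terminates and it's never wrong.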
So why not just use Sage? Well, there's a good chance I will, but... Sage is based on Python. Now I wrote this program in Haskell. No, rewriting it in another language doesn't in general bother me, but there's a reason I picked Haskell. You know what I like about Haskell? It's really easy to define nice, simple, algebraic datatypes. You know, like if you wanted to define linked lists -- they're built in so you don't need to, of course, but if you wanted to, it would just be something like
data List a = Nil | Cons a (List a)
...and boom, you've got linked lists, and you can start defining functions on them.
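And defining those functions is just pattern matching, one equation per constructor -- e.g. the usual length function:

```haskell
data List a = Nil | Cons a (List a)

-- One equation per constructor; the compiler checks that
-- every case is covered.
len :: List a -> Integer
len Nil         = 0
len (Cons _ xs) = 1 + len xs
```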
Python... well, it's been a long time since I looked at Python, but as I recall, it goes the object-oriented route when it comes to defining new datatypes. And, well, ew. I want immutable datatypes I can define simply and operate on algebraically. Does it even support union types? I doubt it. I mean, you can work without union types, but I'm thinking in terms of Haskell which makes them simple and convenient, and to me they just seem the right way to do things, rather than having members which might be null or whatever the equivalent is. Not to mention Python is dynamically typed, so you can't even tell whether you've written something meaningful until you run it.
Now I don't need much, I just need the ability to compare numbers with logarithms in them (and take their floors and ceilings), and get the comparison correct, guaranteed, in finite time. You shouldn't, strictly speaking, need a whole CAS for that; but it's the sort of thing that you don't tend to find outside of one. I don't think I see a Haskell package that can do it (though I suppose I could have missed it). I don't want to take the time to learn how to do it myself, and I don't want to convert everything to integer arithmetic, because I'm afraid that would be slow -- well, unless that *is* how a CAS would do it, in which case I assume it really is the best way.
In short, I like Haskell because it's so algebraic, it's a lot like just writing math, and I would like a CAS that also has that property -- not just when you're using its built-in data types, but when working with your own weird tree type which you've implemented as a union type and you want it to do compile-time type checking so you can be sure you've written something meaningful.
...I guess I mostly just want a CAS based on Haskell, when you get down to it. I guess ML would be fine too. :P Anyway, I don't suppose there are any open-source CAS'es out there that do what I want? Anyone know one that would be close, at any rate? Or am I just going to have to end up either using Sage or rolling my own slow comparisons?
-Harry
[0]I'll write about just what it does later. I'm sure the constant involved has tipped you off that it's related to integer complexity.
no subject
Date: 2011-12-14 04:42 pm (UTC)

I'm not sure what you mean by this. Generally when you declare a variable in Python you'll do it with either a literal ("variable = 3") or a constructor ("variable = ReallySpecialClass()"). In both cases it'll be pretty clear what type you're using. There can be some ambiguity when you're using a variable that you assigned from a function call, but if so it's because the function you're calling is poorly documented, not because the language is inherently ambiguous.
Anyway, I googled "arbitrary precision math python" and I got this. I don't know if it'll solve your problem or not. It might help to install it, open a Python shell (just type 'python' into a shell) and write out some code with it by hand just to test it out.
no subject
Date: 2011-12-15 05:55 am (UTC)

If you've used Haskell you've used union types, if nothing else because in Haskell linked lists are a union type. :) (Well, OK, they're built-in, but...) Hell, so are bools! Whenever you use a '|' in a data declaration you're making a union type. So e.g. in this big mess I have (with the comments stripped out):
data APTTree = LTimes [APTTree] | Clump3 Integer | PlusOne APTTree | OneLeaf | Plus APTTree APTTree deriving (Eq, Ord)
...and I mean doing that without union types would be... oh wait. Python is dynamically typed, which I guess means that it largely doesn't *need* union types -- instead you make each one a different type and let your function just test which one it is and handle all of them! I think that would work, right? Hm. It's not as clean, but I guess it's not nearly as much of a problem as I thought.
Come to think of it, the same goes for other statically typed languages. Like, say you wanted to do it in Java. You could make an interface that represents the union type, and then make the different possibilities be classes implementing it. I think that would work? Again, it's messy, but it's not as terrible as I was thinking.
See, what I was thinking was, you'd have to make a type with *all* those fields, and a new field whose type would be an enum stating what sort of thing your object was, and the ones not in use would be nulled or set to invalid values... and if you wanted to risk confusion you could have some reuse of fields...
OK, I guess this is just much less of a problem than I thought, unless I wanted to do it in, like, straight C. (Which is what I use most of the time, after all. :) )
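(For reference, consuming a union type like my APTTree above really is just one equation per constructor -- the function below is hypothetical, just counting constructors, since what the constructors actually mean isn't the point here:)

```haskell
data APTTree = LTimes [APTTree] | Clump3 Integer | PlusOne APTTree
             | OneLeaf | Plus APTTree APTTree
  deriving (Eq, Ord)

-- Hypothetical example: count the constructors in a tree.
-- One equation per alternative; leave one out and the compiler warns you.
size :: APTTree -> Integer
size (LTimes ts) = 1 + sum (map size ts)
size (Clump3 _)  = 1
size (PlusOne t) = 1 + size t
size OneLeaf     = 1
size (Plus s t)  = 1 + size s + size t
```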
I'm not sure what you mean by this. Generally when you declare a variable in Python you'll do it with either a literal ("variable = 3") or a constructor ("variable = ReallySpecialClass()"). In both cases it'll be pretty clear what type you're using.
Variables? Pah! I want the compiler to check that my big terrible expressions are meaningful and return the type I claim they do! Like, from the current version of the program I'm talking about:

(Maybes are another good example of a union type. :) The reason this has no comments aside from a statement of what the function does is that I haven't gotten around to writing them yet. I will eventually, since I really do intend for people to read it. Also, this might look better with a "where" clause, but I kept forgetting about those until recently.)
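(Since Maybe came up: it's about the simplest union type there is, and the standard illustration -- not my code, just the textbook example -- is making "this list might be empty" part of the return type:)

```haskell
-- Maybe is itself a two-constructor union type:
--   data Maybe a = Nothing | Just a
-- safeHead puts the possibility of an empty list into the type,
-- instead of a null value or an exception.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Any consumer is then forced by the compiler to handle both cases.
describe :: Maybe Integer -> String
describe Nothing  = "empty"
describe (Just n) = "head is " ++ show n
```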
Anyway, I googled "arbitrary precision math python" and I got this. I don't know if it'll solve your problem or not. It might help to install it, open a Python shell (just type 'python' into a shell) and write out some code with it by hand just to test it out.
Eh, if I'm fine with using Python, I'll just use Sage. It does, like, everything. :P That was why Python was coming up in the first place, after all.
For now I've decided to try just rolling my own; sure, it's probably slow, but it occurs to me that (for equality testing) a CAS probably has to do it the same way I am! Now what I'm doing will probably be slow when it comes to inequalities, but... well, we'll see just how slow it is, and if it's unacceptable, then I'll try another solution.
no subject
Date: 2011-12-15 09:07 pm (UTC)

If you have a bunch of types that share some common data fields between them, the way to do that in an OO language is inheritance, which Python and Java both do. Unless I'm misunderstanding something about what you're doing...
Having used duck-typed languages pretty extensively in the past few years, I've never had a problem with this; again, if your functions are being awkwardly ambiguous about what they're returning, that's because your code could be a lot clearer. It's never been a problem for me, because the times in which you'd ever want to have a function return different types by situation are rare.
no subject
Date: 2011-12-16 10:56 pm (UTC)

Ah, but I don't, I have a union type! To use my example above, an APTTree could store an APTTree, two APTTrees, a list of APTTrees, an Integer, or nothing, and you'd best know which. By "field reuse" I meant "field abuse" -- using the same field to mean different things depending on context. E.g. in C you could do this with 3 fields: an enum indicating which of the above it is; an integer; and a (pointer to a) list of APTTrees; but what that list means depends on the value of the enum.
It's never been a problem for me, because the times in which you'd ever want to have a function return different types by situation are rare.
Well, they never are in Haskell, what with it being statically typed and all. :) I'm not worried about "This function could return multiple possible types; which one is it?". I'm worried about, say, you've got some expression and you want to do something to it, but you've gotten mixed up and think it has type [Integer] when actually it has type [[Integer]], so you attempt to map an Integer->Integer over it, resulting in something meaningless. I'd like that to be a compile error. Because if the language is dynamically typed, and the list being mapped over happens to be empty, it won't detect anything wrong, even though the result is meaningless. (Yes, in real testing, that sort of error won't last for long, because there's no way that list is always going to be empty. Still, I'd like the compiler to prevent me from writing things that don't mean anything.)
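Here's the kind of thing I mean, as a tiny concrete example (names made up):

```haskell
xss :: [[Integer]]
xss = [[1, 2], [3]]

-- Fine: the inner map's argument type matches the element type [Integer].
doubled :: [[Integer]]
doubled = map (map (* 2)) xss

-- This, on the other hand, gets rejected at compile time, even when xss
-- happens to be empty -- which is exactly the guarantee I want:
--   bad = map (+ 1) xss   -- type error: (+ 1) wants Integer, got [Integer]
```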
no subject
Date: 2011-12-19 08:46 pm (UTC)

In all honesty, Haskell is literally the only language I've ever used where I would routinely get confused trying to remember whether a given type is [Integer], [[Integer]] or Integer (and this was on a pretty simple task, implementing merge sort). Generally in Python, if I have a list of something, it'll be declared as such, and I'll use a variable name that's either plural or clearly signifies an aggregate.
no subject
Date: 2011-12-21 10:39 pm (UTC)

Obviously if you never get your types confused it can only be because you're not doing anything really complicated, so clearly Haskell must be better for allowing you to express such complicated ideas succinctly. :D (OK, fine, so you said it happened implementing merge sort...) (Also I think the obvious conclusion to this line of reasoning is that APL and J must be even better! :D )
I have to wonder if the difference is related to mutability. I imagine in Python you do a lot of in-place operations, since lists are mutable; this means they had better be explicitly named at some point. In Haskell everything is immutable so there's no such thing as an in-place operation, and so it just isn't necessary to name things a lot of the time.
no subject
Date: 2011-12-22 04:44 am (UTC)

no subject
Date: 2011-12-22 07:12 am (UTC)

I'm not really clear on how I'm talking as though imperative programming is alien to me? Oh, wait, I think I see it. I just hadn't explicitly noticed that before -- in imperative programming you're storing things in variables all the time, in functional programming (or at least pure functional programming) you don't bother. I mean I guess it's obvious now that I think about it but I hadn't explicitly before.
Heh I guess I actually have time to go on IM now. Back in Glen Rock for two weeks. We should meet up.
no subject
Date: 2011-12-19 09:29 pm (UTC)

no subject
Date: 2011-12-21 10:31 pm (UTC)

Definitely glad you pushed me to use Haskell for that other monstrosity. I doubt I would want to use it for anything heavy on I/O, but for pure computation, it's just so much better for organizing my thoughts than anything else I know of... well, OK, except ML. Which is pretty close to Haskell already.