sniffnoy: (Kirby)
So, here's an idea I had the other day: My inequality that o(Rev(α,B))≤o(Rev(α,o(B))) doesn't work if you replace o(B) by B', an extension of B. But what if you replace it by B', an extension of B satisfying o(B')=o(B)?

(We can also more generally replace α by A, a WPO, although then we have to make the additional assumption that everything involved is well-defined, and also in that case I haven't actually proven the original inequality in general)

I mean, it's a fairly natural generalization, right? And like obviously it works if A and B are both finite. And there are also some cases where, based on what I already know, one can see that they have to actually be equal (both being equal to o(Rev(α,o(B)))).

But in fact I'm pretty sure now that this generalization is false! One can take α=ω, B=N²+N³, and B'=N+N³. Since N² can be extended to a total order of type ω (i.e., to N), B' is an extension of B; and o(B)=o(B')=ω³. And yet things don't work out quite the same way when you take weakly decreasing maps from ω into these partial orders. I haven't checked this totally thoroughly, but I'm pretty sure that (with the additional restriction that the functions must be eventually 0) the left hand side is equal to ω^(ω5), while the right hand side is ω^(ω4).

(Without that restriction, it's only slightly different; ω^(ω5+2)ω3+3 vs ω^(ω4+1)ω3+3.)

So, so much for that! That said, it can probably still be proven under more restrictive conditions. For instance, maybe if A is finite. And I'd definitely bet on it being true if A=α is a power of ω and B is finite; probably more generally if A=α is total and B is finite. (Probably also if o(A) is initial and B is finite.) Beyond that it's hard to say. One can probably get it to work with other smallness conditions -- I had to take o(B)=ω³ to get a counterexample, can one prove that's the smallest? -- but I don't know to what extent I'll bother...

EDIT: OK, here's a counterexample with B finite: Take α=ω once again, take B'=2×2, and take B that looks as follows:
*   *
 \ /
  *
Maybe it can still work if A is finite, though, at least!

EDIT: Nope, it fails then too! Take A=2, and take B=(1∪ω)+(1∪ω), where here by "union" I mean "disjoint union"; you can extend the lower copy of 1∪ω to an ω to get B'=ω+(1∪ω). Then o(Rev(2,B))=ω²3+ω+2, but o(Rev(2,B'))=ω²3+ω+1.

(I think this case should work also with A=ω, but I haven't done a proper check. I could be wrong there.)

So, basically, this one is unsalvageable. It works in the trivial case when A and B are both finite, and some other trivial or not-so-trivial cases I can prove, but not in any decent generality. I'm calling this one a wrap.

sniffnoy: (Kirby)
EDIT 8/3: One tiny correction, see struck out portion below. Also, pretty sure I really have proved this now.


OK, so it turns out that despite my big "EDIT: This entry is wrong" warning on my last entry, almost everything in it is correct. Just, not quite for the reasons I thought.

My formula for "multidimensional natural multichoose", in the case of powers of ω, was indeed correct. What was not correct was the part I didn't write down -- what if the inputs are not powers of ω? In this entry I will describe the correct formula for this.

...OK, I say "correct" but in reality I haven't actually sat down and fully proved it. I'm pretty certain this has to be correct, though. If somehow I'm wrong, well, I'll put a big edit on this later. :P

So. Recall, we're given ordinals α1 through αn, and we want to compute the maximum extending ordinal of the poset of lower sets of α1×...×αn that are "strictly bounded in each dimension", i.e., none of the projections onto any of the αi are surjective. (To ignore some stupid stuff, I'm going to assume all αi are nonzero. You can rephrase the below so it's still correct even if one of them is zero, but I don't want to have to phrase it that way here.)

Here is what we do. First, we write out each ordinal in Cantor normal form; we're going to use the "weak" Cantor normal form, where exponents can be repeated but there are no coefficients. Each ordinal can be broken down into "segments" corresponding to the Cantor normal form, each of length a power of ω. Now, we're looking at the product of these ordinals, so this breaks up the product into finitely many "boxes"; again, the dimensions of these boxes are all powers of ω.

For ease of talking about it, I am going to place these boxes on a grid. Suppose that αi has ri segments. We will associate each box to an element of {1,...,r1}×...×{1,...,rn}.

Now, as per ancient mathematical tradition, we are going to take a (natural) sum of (natural) products (I'll omit the "natural" from here on out). (Indeed, if we were in 2 dimensions -- the case I'd solved earlier -- it would even be a sum over paths, with the value of each path being a product based on what it passed through. A wonderful mathematical tradition! Unfortunately, things don't quite work out that way in the general case.) The set we are going to be summing over is the set of lower sets of {1,...,r1-1}×...×{1,...,rn-1}. That's right -- we only want lower sets that don't use the highest possible coordinate in any dimension. Not unlike the problem we're trying to solve.

So for each such lower set, we need to get a value. What we're going to do is take the "frontier" of the lower set; to define that, let me introduce some notation. I'll use ε to denote the vector (1,...,1). A point p will be on the frontier of S if it is not in S, but p-ε is; or if p is along one of the lower boundaries (p has 1 among its coordinates), so that p-ε is "out of bounds".

So, are we going to take the product of values associated to the boxes on the frontier of S? Not quite. There's an additional restriction. For each point p, I'll define ε(p) to be the vector which is 1 in each dimension where the corresponding box is finite (i.e. has length 1), and 0 in each dimension where it's infinite. The box associated to the point p will only be included in the product if both p and p-ε(p) are on the frontier. (Note that if the box is infinite in all dimensions -- if it's "full-dimensional" -- then p-ε(p) is just p again.)
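For concreteness, here's a small Python sketch of the frontier computation just defined (the encoding is my own: points are integer tuples in the full grid {1,...,r_i}, and a lower set is given as a set of such tuples from the inner grid):

```python
from itertools import product

def frontier(S, dims):
    """Frontier of a lower set S, per the definition above.

    S is a set of integer points (tuples) from the inner grid
    {1,...,r_1 - 1} x ... x {1,...,r_n - 1}; dims = (r_1, ..., r_n).
    A point p of the full grid is on the frontier if p is not in S but
    p - (1,...,1) is in S, or if p - (1,...,1) is "out of bounds"
    (i.e. p has a 1 among its coordinates).
    """
    pts = product(*(range(1, r + 1) for r in dims))
    out = set()
    for p in pts:
        if p in S:
            continue
        q = tuple(x - 1 for x in p)
        if 1 in p or q in S:  # q out of bounds, or q inside S
            out.add(p)
    return out

# On a 2x2 grid of boxes, the frontier of the empty lower set is the
# lower staircase; adding the box (1,1) pushes the staircase up past it.
assert frontier(set(), (2, 2)) == {(1, 1), (1, 2), (2, 1)}
assert frontier({(1, 1)}, (2, 2)) == {(1, 2), (2, 1), (2, 2)}
```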

So now we can associate a value to each box included in our product. How do we do that? Well, write down the box's dimensions; let's drop all of them that are finite (i.e. 1), so we'll be looking at a box, possibly lower-dimensional, all of whose dimensions are infinite.

If the box has dimension 0, its value is 1. If the box has dimension 1, its value is equal to its length.

If the box has dimension 2 or more, its value is given according to the formula I stated earlier; if its dimensions are ω^β_1×...×ω^β_k, then its value is ω^(ω^g(β_1,...,β_k)), where g(β_1,...,β_k) is given as follows:
1. If at least two of the βi are infinite, or at least one of them is of the form ε+k where ε is an ε-number and k is finite, then g(β_1,...,β_k) is the natural sum of the βi.
2. Otherwise, it is the natural sum minus 1. (And it is guaranteed in this case that the natural sum will be a successor, so you will be able to subtract 1.)

(Note how different the rule is in dimension 2 or higher from in dimension 1! In dimension 2 or higher, the value of the box always has the form ω^(ω^γ); this is not true at all in dimension 1.)

(Isn't the rule about the value of a box of dimension 0 unnecessary, seeing as such a box can never be included in the product anyway? Yes... unless you're in dimension 0 in the first place. I'm a believer in including trivial cases. Yes. You could actually leave the value undefined; I decided not to, but whatever, it's arbitrary.)

So, to summarize: You sum over lower sets of the set of boxes, excluding the outer boundaries; for each lower set S you multiply over boxes p on the frontier of S such that p-ε(p) is also on the frontier of S; and the value of a box is given by the formula above. Not too bad, really. Much simpler than the general rule for "right-multichoose", certainly!
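As a sanity check on that summary: in the all-finite case every box is 1×...×1, every ε(p) is (1,...,1), and every value is 1, so the sum should just count lower sets of the inner grid -- and that should in turn match a direct count of the strictly bounded lower sets. A brute-force Python sketch (function names are mine) confirming this on tiny grids:

```python
from itertools import combinations, product

def grid(dims):
    """All points of the product of chains {0,...,d_i - 1}."""
    return list(product(*(range(d) for d in dims)))

def is_lower(S, pts):
    """Is S downward closed in the product order on pts?"""
    return all(q in S for p in S for q in pts
               if all(a <= b for a, b in zip(q, p)))

def count_lower_sets(pts):
    return sum(1 for r in range(len(pts) + 1)
                 for sub in combinations(pts, r)
                 if is_lower(set(sub), pts))

def direct_count(dims):
    """Lower sets of the full grid strictly bounded in each dimension:
    no projection hits the top element of its chain."""
    pts = grid(dims)
    total = 0
    for r in range(len(pts) + 1):
        for sub in combinations(pts, r):
            S = set(sub)
            if is_lower(S, pts) and all(p[i] < dims[i] - 1
                                        for p in S for i in range(len(dims))):
                total += 1
    return total

def formula_count(dims):
    """The summation formula in the all-finite case: every box has value
    1, so the sum just counts lower sets of the inner grid (re-indexed
    here to start from 0)."""
    return count_lower_sets(grid([d - 1 for d in dims]))

for dims in [(3, 3), (2, 3, 2), (4, 2)]:
    assert direct_count(dims) == formula_count(dims)
```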

Now that that's out of the way, let's talk about inequalities. Specifically, the inequality o(Rev(A,B))≤o(Rev(A,o(B))), where A and B are WPOs, and we assume that Rev(A,B) and Rev(A,o(B)) are also WPOs.

Previously, I had proved this inequality when A is total, by mimicking the proof of the 2-dimensional case of the above. I then claimed (in the entry below) that by iterating this, one could prove it more generally when A is a product of ordinals, rather than just an ordinal.

That iterative proof does not work. Fortunately, now that I know how to handle the multidimensional case, indeed I can say that it is true when A is a product of ordinals, by mimicking the proof of the multidimensional case.

But I've been able to prove some other nice special cases of this inequality as well -- ones dependent only on the values of o(A) and o(B). Unfortunately, the methods used here are seriously unlikely to extend to other cases. Still, a month or two ago I would have been surprised to make basically any progress at all on the case where A might not be total.

So, first recall that, for trivial reasons, it works if A and B are both finite. Also, using the stuff above, it's not too hard to see that it works if A is finite, and o(B) is a power of ω (although somehow I apparently missed this earlier).

But I've also got it working now if we assume that B is finite... and that o(A) is an initial ordinal. Yeah, that's a pretty restrictive condition. Oh well. It'll have to do for now. (Note that a WPO A with o(A) initial is in some sense "close to total". Though maybe not too close; you can have A with o(A)=ω_1 but Rev(A,2) not a WPO.)

In fact I've also got it working if o(B)=ω and o(A) is an initial ordinal. Or if we reverse it -- if o(A)=ω and o(B) is an initial ordinal. Unfortunately, I have not been able to "complete the rectangle" and get it working when o(A) and o(B) are both initial ordinals (unless one of them is ω, obviously).

Still, considerable progress! Maybe this inequality really is true in general. Or maybe I just haven't been clever enough in trying to come up with a counterexample. Well, we'll see -- hopefully, anyway.

sniffnoy: (Kirby)
EDIT 7/27: The statements in this entry may very well be FALSE! It turns out my formula is very wrong in more than two dimensions! Now, admittedly, the case I actually bothered to state here might well still be right; in fact, I'm guessing that more likely than not, it is. (OK, having gone over it again now actually I'm pretty sure it is.) But the more general formula they are a part of is false, and I will have to see about correcting this. More on this in an entry soon to come.

So I wrote this down in an email to John, and then realized I should probably post it here as well.

So let's consider "big multichoose". I want to consider the variant that I denoted ((α multichoose β))₀, where we consider all weakly decreasing functions from β to α that are eventually 0.

This can also be considered as the set of lower sets of α×β that are "bounded in each direction", in the sense that neither projection is surjective.

Now of course if α and β are finite -- say n and m -- then this is just counting the number of partitions that fit in an (n-1)×(m-1) box, i.e., it's (n+m-2 choose n-1). (Which is different from multichoose, obviously, but nicer for the purposes I want to consider here.)
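This finite case is easy to check by brute force; a quick Python sketch:

```python
from itertools import product
from math import comb

def partitions_in_box(rows, cols):
    """Count partitions fitting in a rows x cols box, i.e. weakly
    decreasing sequences of length `rows` with entries in 0..cols."""
    count = 0
    for seq in product(range(cols + 1), repeat=rows):
        if all(a >= b for a, b in zip(seq, seq[1:])):
            count += 1
    return count

# partitions in an (n-1) x (m-1) box = (n+m-2 choose n-1)
for n, m in [(3, 4), (5, 5), (2, 7)]:
    assert partitions_in_box(n - 1, m - 1) == comb(n + m - 2, n - 1)
```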

So suppose we generalize from two ordinal factors in the product to multiple. What happens now? Can we get a formula similar to my formula for dimension 2?

Answer: We can! It's not hard, either, once you know the dimension 2 case (OK, I haven't sat down and fully proven it, but the proof should be similar).

The thing is that I'd considered this before but always dismissed it because, well -- counting partitions in a given box is easy; counting plane partitions in a given box is less easy; and in dimensions higher than 3, things apparently get pretty wild pretty fast. So I figured, well, if the finite case is intractable, what's the point of attempting the infinite case?

Of course, this is silly, because if you can reduce the infinite case to the finite one, that's still significant. Like really, in the dimension 2 case, if you plug in finite ordinals, my formula doesn't actually say "OK, it's (n+m-2 choose n-1)", rather it says, OK, count the partitions in this box. We just know what that is. So in higher dimensions it will just fail to reduce the problem in the finite case, but in the infinite case, at least now it is a finite description.

So anyway, as before, the main core of the problem is when all the factors are powers of ω (and not equal to 1). So let's say we're putting in ω^α_1, ..., ω^α_n. Also, say n≥2; if n=1 (or n=0) the problem (a) is trivial and (b) is *not* given by the formula below, so let's exclude that. (And yes, there is a good reason that dimension ≥2 turns out different from dimension 1, but I'm not going to go into that here.)

So what we get out is ω^(ω^g(α_1,...,α_n)), for some g. What's g? Well, it's basically the same as before:

If either:
1. one of the αi is of the form epsilon+finite, or
2. at least two of the αi are infinite
then g(α1,...,αn) is just the natural sum of the αi.
But otherwise, it's the natural sum, minus 1.

(Before I did it out, I had assumed that condition (2) would require that all of the αi be infinite, but in fact, only two of them need to be. This is the influence of dimension 2, the starting point for the pattern, being felt all the way up the chain.)

(Note again that in dimensions 1 and 0, the pattern is broken and there is no g; in dimension 0 it's always 1 and in dimension 1 it's the identity. We don't generally get something of the form ω^(ω^α).)
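Here's a toy Python sketch of g, with ordinals encoded as nested tuples of weak-CNF exponents (my own encoding). One caveat: no ε-numbers are representable this way, so the "ε plus finite" clause of case 1 never fires here; this only exercises the generic part of the rule.

```python
def nat_sum(*ords):
    """Natural (Hessenberg) sum: merge the Cantor normal forms.
    An ordinal is a tuple of exponents in weakly decreasing order,
    each exponent itself an ordinal: () is 0, ((),) is 1,
    (((),),) is omega, etc. (weak CNF: no coefficients)."""
    merged = [e for o in ords for e in o]
    return tuple(sorted(merged, reverse=True))

def is_infinite(o):
    return any(e != () for e in o)

ZERO, ONE = (), ((),)
OMEGA = (ONE,)

def g(*alphas):
    """The exponent rule above (dimension >= 2 only).  NOTE: the
    'epsilon + finite' clause of case 1 never fires in this toy
    encoding, since no epsilon-numbers are representable."""
    s = nat_sum(*alphas)
    if sum(1 for a in alphas if is_infinite(a)) >= 2:
        return s
    # otherwise the natural sum is guaranteed to be a successor:
    # subtract 1 by dropping one exponent-0 term
    assert s and s[-1] == ZERO
    return s[:-1]

assert g((ZERO,) * 3, (ZERO,)) == (ZERO,) * 3  # g(3, 1) = 3 + 1 - 1 = 3
assert g(OMEGA, OMEGA) == (ONE, ONE)           # g(w, w) = w + w = w*2
```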

This got me wondering if I could prove some bound on o(Rev(α1×...×αn,X)) for any WPO X by similar means, like how earlier I proved a bound on o(Rev(α,X)), but then I realized I could actually just prove that by taking my earlier bound and iterating it; I don't need to get into anything fancy there (which is good, it's ugly enough in the two-dimensional case I did already need to do by hand).

EDIT 7/25: The above paragraph seems to be false! Yikes, I may have to do that case by hand as well!

So, one more addition to the lore of "big multichoose"...

sniffnoy: (Kirby)
It works! OK, so...

All ordinal arithmetic in this entry will be natural. All addition will be natural addition and all multiplication will be natural multiplication. Basically, for now, I am embedding the ordinals in the surreals. Also, all rings will be commutative.

So you'll recall I came up with notions of "natural multichoose" and "natural choose" (aka "big multichoose" and "big choose") for ordinals. These are defined order-theoretically, remember: ((α multichoose β)) is the largest ordinal extending the partial order of weakly decreasing functions from β to α, and (α choose β) is the same with strictly decreasing functions. (Obviously, this latter is zero unless β is finite.)

So I spent quite a bit of time figuring out how to compute these, but that's a problem I solved a while back. Also you'll recall I figured out some inequalities for them. But just a few weeks ago I realized something -- that while I was looking at inequalities, I had missed at least one straight-up identity!

While the description of how to compute ((α multichoose β)) in general is more complicated (though not that complicated!), the description of how to compute ((α multichoose k)), where k is finite, is much simpler. Here's one simple way of stating it, in the form of two rules that determine it completely: 1. We'll start with powers of ω; ((ω^α multichoose k))=ω^(αk). 2. Now extend by Vandermonde's convolution. (Vandermonde's identity, while usually stated for choose, is also valid for multichoose, with the same proof.)

Now I had never described it in that form before; and yet, that's plainly what it was. Somehow, until the other week, I never realized that, wait, natural multichoose satisfies Vandermonde's identity! (So, for that matter, does natural choose. The rule for computing natural choose is the same as the rule for computing natural multichoose, except that of course (1 choose k) is 0 instead of 1 for k>1.)
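Stated as code: here's a Python sketch computing ((α multichoose k)) from those two rules, for α below ω^ω (the encoding and function names are mine; an ordinal is an {exponent: coefficient} dict, and all arithmetic is natural):

```python
from collections import Counter

def nat_sum(x, y):
    """Natural sum of ordinals below omega^omega, encoded as
    {exponent: coefficient} dicts (exponents are natural numbers)."""
    return dict(Counter(x) + Counter(y))

def nat_prod(x, y):
    """Natural product: omega^a (x) omega^b = omega^(a+b), distributed."""
    out = Counter()
    for a, ca in x.items():
        for b, cb in y.items():
            out[a + b] += ca * cb
    return dict(out)

def multichoose(exps, k):
    """((alpha multichoose k)) for alpha = omega^e_1 + ... + omega^e_r,
    given as the list `exps` of weak-CNF exponents (natural numbers),
    via rule 1 (((omega^a multichoose k)) = omega^(a k)) and Vandermonde."""
    if not exps:
        return {0: 1} if k == 0 else {}  # ((0 multichoose 0)) = 1
    head, rest = exps[0], exps[1:]
    total = {}
    for j in range(k + 1):
        term = nat_prod({head * j: 1}, multichoose(rest, k - j))
        total = nat_sum(total, term)
    return total

# ((omega + 1 multichoose 2)) = omega^2 + omega + 1
assert multichoose([1, 0], 2) == {2: 1, 1: 1, 0: 1}
# ((omega * 2 multichoose 2)) = omega^2 * 3
assert multichoose([1, 1], 2) == {2: 3}
```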

This got me wondering about the possibility of a version of Vandermonde's identity holding even when the thing on the bottom is infinite -- after all, we're using natural addition here, and a given ordinal has only finitely many natural summands. Unfortunately, the obvious analogue doesn't work. For choose it does... but only because if you have something infinite on the bottom, both sides are automatically 0. So that's not interesting. Let's forget about putting infinite things on the bottom; while it's a fruitful subject (that I've written about elsewhere!), for the remainder of this entry, I will only consider choosing or multichoosing whole numbers.

So this got me wondering -- what about further identities? There are various combinatorial identities we could try, and I could go over how I came up with ones to try, but, well, most of them didn't work, so I think I'll just skip that.

However, one thing did appear to work. Let's talk about λ-rings. A λ-ring is a ring equipped with operations λ^k for each whole number k, which must satisfy certain identities. The quantity λ^k(x) is meant to be analogous to (x choose k) -- or, more analogously, the k'th exterior power of x. An equivalent description is in terms of operations σ^k, analogous to ((x multichoose k)), or, more analogously, to the k'th symmetric power of x; these satisfy similar identities but not the same ones. The λ^k and σ^k are interdefinable via σ^k(x)=(-1)^k λ^k(-x).

In fact one simple way of constructing λ-rings is via binomial rings. If you have a ring R with no Z-torsion in which, for any x, (x choose n) lies in R -- that's (x choose n) in the usual polynomial sense -- then you get a λ-ring this way, with λ^k(x)=(x choose k) and σ^k(x)=((x multichoose k)). These are basically the simplest examples of λ-rings. We'll return to these later.
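For binomial rings, the σ/λ interdefinition and Vandermonde can be checked directly with exact rational arithmetic; a quick Python sketch:

```python
from fractions import Fraction
from math import factorial

def choose(x, k):
    """(x choose k) in the polynomial sense: x(x-1)...(x-k+1)/k!"""
    num = Fraction(1)
    for i in range(k):
        num *= (x - i)
    return num / factorial(k)

def multichoose(x, k):
    """((x multichoose k)) = (x + k - 1 choose k)"""
    return choose(x + k - 1, k)

# sigma^k(x) = (-1)^k lambda^k(-x), with lambda^k = choose, sigma^k = multichoose
for x in [Fraction(5), Fraction(-3, 2), Fraction(7, 3)]:
    for k in range(6):
        assert multichoose(x, k) == (-1) ** k * choose(-x, k)

# Vandermonde for multichoose: ((x+y mc k)) = sum_j ((x mc j))((y mc k-j))
for x, y in [(Fraction(2), Fraction(5)), (Fraction(-1, 2), Fraction(3))]:
    for k in range(5):
        assert multichoose(x + y, k) == sum(
            multichoose(x, j) * multichoose(y, k - j) for j in range(k + 1))
```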

So, the one thing that appeared to work is that natural multichoose appeared to satisfy the identities of a λ-ring! The σ-identities, that is, not the λ-identities. (Unfortunately, natural choose does *not* satisfy the identities of a λ-ring. You do of course get operations λ^k from taking σ^k to be natural multichoose, but while I can find an order-theoretic interpretation of these, it's rather artificial.) I haven't listed out what the identities are, but basically, aside from a few trivial ones, there's identities for λ^k(x+y), for λ^k(xy), and for λ^m(λ^n(x)). (And of course one can write down analogous identities for σ^k.) The addition identity is just Vandermonde, and works the same way for σ^k; the multiplication and iteration identities are more complicated, and are different for σ^k; I won't attempt to describe them here.

And actually there's a pretty simple proof that ordinal multichoose satisfies these σ-identities, though it took me a while to realize, and I spent quite a while just wondering. Basically it follows quite easily once you realize that if you have an ordinal α=ω^α_1+...+ω^α_r (written in "weak Cantor normal form", where there are no coefficients but the exponents are weakly decreasing), then a simple way of describing ((α multichoose k)) is as h_k(ω^α_1,...,ω^α_r), where h_k is the complete homogeneous symmetric polynomial. (If you replace h_k here with e_k, the elementary symmetric polynomial, you get the λ^k corresponding to this σ^k. Which, remember, is not natural choose.)
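The h_k description is easy to test on small examples; a Python sketch for α below ω^ω (encoding mine: results are {exponent: coefficient} dicts), checked against the values the two rules above give:

```python
from collections import Counter
from itertools import combinations_with_replacement

def h_k_of_omega_powers(exps, k):
    """h_k(omega^a_1, ..., omega^a_r) in natural arithmetic, for
    exponents a_i that are natural numbers (so alpha < omega^omega):
    h_k sums, over all size-k multisets of the variables, the natural
    product of the chosen variables, and omega^a (x) omega^b = omega^(a+b)."""
    out = Counter()
    for combo in combinations_with_replacement(range(len(exps)), k):
        out[sum(exps[i] for i in combo)] += 1
    return dict(out)

# matches the values rules 1 and 2 above give:
assert h_k_of_omega_powers([1, 0], 2) == {2: 1, 1: 1, 0: 1}  # ((w+1 mc 2))
assert h_k_of_omega_powers([1, 1], 2) == {2: 3}              # ((w2 mc 2))
```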

So this is pretty good -- natural multichoose satisfies the identities of a λ-ring. There's just one thing lacking, and that's that ordinals don't form a ring. So actually they're a "λ-semiring". Or a "σ-semiring", I guess; I'm a little doubtful that the equivalence works without negatives (even though in this case the λ^k do take ordinals to ordinals). (EDIT: OK actually it probably works. Whatever.)

So that raises the question -- can we extend these operations, these σ^k, to differences of ordinals in a sensible manner, and make an actual λ-ring? Or, hell, let's go all the way -- how about we extend them all the way to the surreals? Can we make the surreals into a λ-ring like this, so that when we restrict the σ^k down to the ordinals, we get back the order-theoretically meaningful natural multichoose?

Note of course that there's already a λ-ring structure on the surreals coming from the fact that, being a field of characteristic 0, they're certainly a binomial ring. But this would be a different λ-ring structure; as a binomial ring, we'd have σ²(ω)=ω²·½+ω·½, whereas under this one, we'd have σ²(ω)=ω².

Basically, with ordinals, if we had a "monomial" ω^α·a, then what we had was σ^k(ω^α·a)=ω^(αk)·((a multichoose k)), and then you can extend by Vandermonde. (For λ instead of σ, just use choose instead of multichoose.) Well, surreal numbers have something like Cantor normal form -- Conway normal form. The differences are that, firstly, the exponents can be arbitrary surreal numbers rather than just ordinals; secondly, the coefficients can be real numbers rather than just whole numbers; and thirdly, rather than being a finite sum, it can be an arbitrary well-ordered decreasing sum. So this gives us a way of extending it; define σ^k(ω^α·a)=ω^(αk)·((a multichoose k)), where now α can be surreal and a can be real, and then extend in the obvious fashion. (Or you can do the same with λ instead of σ.) Of course, the question remained: Does this form a λ-ring? Vandermonde (and the trivialities) are easy enough to prove, but the multiplication and iteration identities are nasty and hard to reason about.

Now, since Conway normal forms can be added and multiplied in the obvious way, this means that surreal numbers are isomorphic to the field of Hahn series over the real numbers with the value group being the surreal numbers themselves; the isomorphism is given by just replacing ω with T^(-1). (Yes, the value group is also the surreal numbers; that's the sort of weird thing that happens when you've got surreal numbers.) So the question now becomes -- if we have a λ-ring, and we take a ring of Hahn series over that λ-ring, is it also a λ-ring? Where here, in this more general case, we're defining λ^k(T^α·a) to be T^(αk)·λ^k(a). (Or the same thing but with σ instead of λ.) Right, in the case we care about, that of the surreals, the base ring is the reals, and we're considering it as a λ-ring by considering it as a binomial ring.

Well, I learned, from Darij Grinberg's text on λ-rings, that in fact this is well-known if, rather than Hahn series, we are just taking polynomials. So, could I extend it? To power series, to Laurent series, to Puiseux series -- to Hahn series?

Well, long story short, yes. At first I had some trouble because Hahn series, unlike power series or Laurent series or Puiseux series, aren't actually the limit of finite sums of their monomials; so when I tried to imitate the proofs Grinberg gives, which work by first doing it for monomials and then extending by Vandermonde, I ran into a problem. I wanted to just do the same thing but extending by Vandermonde and also continuity, and this worked for power series and Laurent series and Puiseux series, but not for Hahn series. I was just going to go ask MathOverflow, honestly. But today I made one last try -- I imitated Grinberg's proofs, but this time, instead of doing it for monomials like he does, I just plugged in the whole damn sum. And it worked! (Note that Grinberg's proof basically offloads the hard part to the verification that for a λ-ring A, Λ(A), which I'm not going to define here, is a λ-ring.) I was able to keep all the infinite sums and products to contexts where they truly made sense and could be manipulated sensibly without having to worry about any sort of convergence (or where such convergence is easily proved, like rings of formal power series).

So that's it then! Hahn series over a λ-ring form a λ-ring; in particular the surreal numbers, considered as Hahn series over the real numbers, form a λ-ring; and when you restrict the σ^k operations of this λ-ring down to the ordinals, you get natural multichoose, which is order-theoretically meaningful. And therefore (although we knew this already by easier means) the natural multichoose operations also satisfy the identities of the σ^k in a λ-ring, because they are the σ^k in a λ-ring.

So, that's one more thing to write up someday...

EDIT: Fixed interdefinition of λk and σk, oops.
EDIT: Found a reference for the equivalence, thanks to Darij Grinberg! Hazewinkel, Gubareni, and Kirichenko's "Algebra, Rings, and Modules" volume 3 goes over it. It uses σ rather than S, so I'm changing that in the entry.

sniffnoy: (Sonic)
EDIT, 2/25, 7 PM: I'm no longer confident that the claims in this entry are correct. I've found a serious mistake in my proof. It seems fixable, but I haven't succeeded so far.

EDIT, 2/27, 1:30 AM: THIS ENTRY AS ORIGINALLY POSTED IS FALSE. I've edited to be true, but, uh, much less interesting. False statements are in strikethrough, added statements are in bold.

So I've been thinking more about big multichoose and WPOs recently. I'll use Rev(A,B) to denote the set of order-reversing maps from A to B.

So recall, I figured out how to compute o(Rev(α,β)) (where α and β are ordinals) and showed that for any WPO X, o(Rev(α,X))≤o(Rev(α,o(X))). Or to state it another way -- which is the way the proof actually goes -- I figured out how to compute an upper bound on o(Rev(α,X)) based on α and o(X), and showed that when X is total, this upper bound is attained.

The other day I noticed that in fact, if o(X) is an (infinite) initial ordinal, then this upper bound is also attained. (Because in this case, o(X) embeds in X.) In other words, in this case, the value of o(Rev(α,X)) depends only on α and o(X), and not on any further details about X.

Could there be other values of o(X) for which this is true, I wondered? More specifically, could it be true whenever o(X) is a power of ω? (It's certainly not true when o(X) isn't a power of ω.)

It took me quite a while to find a counterexample, but eventually I did: Take α=ω, and take X to be Rado's order. In this case, o(X)=ω², and o(Rev(ω,ω²))=ω^(ω²+2), while o(Rev(ω,X))=ω^(ω2+2).

It should not have taken me so long to find a counterexample. A simpler counterexample is α=ω, X=ω×ω. I computed o(Rev(ω,ω×ω)) incorrectly earlier, and got ω^(ω²+2); in fact it is ω^(ω2+2), the same as for Rado's order. What's more, this is really easy to see, since Rev(X,Y×Z) = Rev(X,Y) × Rev(X,Z), so it's just the natural product of two copies of ((ω multichoose ω)), which is ω^(ω+1). Somehow I entirely failed to notice this.
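Spelled out, the computation is:

```latex
o(\mathrm{Rev}(\omega,\omega\times\omega))
  = o(\mathrm{Rev}(\omega,\omega)) \otimes o(\mathrm{Rev}(\omega,\omega))
  = \omega^{\omega+1} \otimes \omega^{\omega+1}
  = \omega^{(\omega+1)\oplus(\omega+1)}
  = \omega^{\omega\cdot 2+2}.
```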

So that's it -- problem solved, right? Well, not quite. Rado's order is a famous example of a WPO which is not a BPO. This leaves the question: Could the statement be true if we require that X be a BPO? Or what if instead of requiring that X be a BPO, we just insist on the weaker requirement that Rev(X,2) be a WPO, or something like that? ~~So far, I have no counterexample to this.~~ It's definitely not true, see above.

Unfortunately, I know very little about the theory of BPOs, so I think I am unlikely to resolve such a question (though perhaps a bit more likely if it turns out that only a weaker hypothesis is needed). But I thought this was worth putting out there.

Note, by the way, that if o(X) is a power of ω and, in addition, we have that α is finite, then the statement is true without any further hypothesis on X; hence why my counterexample above had α=ω. Note that o(X) is also as small as a counterexample can be, as o(X)=ω². (This remark is still true.)

By the way, I also tried proving the statement, by means analogous to how I proved that the bound is attained if X is total, and found that much of a proof went surprisingly well. There were three problems that arose, essentially:
1. How do we go from the case where α is a power of ω, to the case where it isn't? I was able to solve this problem.
2. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a limit ordinal? I was able to solve this problem (assuming an inductive hypothesis, of course).
3. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a successor ordinal? This turned out to be a real problem, even assuming an inductive hypothesis! And indeed, the counterexample I mentioned above has o(X)=ω². But the fact that it's just this one part of the proof that turned out to be a problem does incline me to think that it might indeed be possible to prove it with additional hypotheses on X.

Note: If X has a bottom element, everything I'm saying still works if we restrict to functions which are eventually 0. Pretty sure. And of course if α finite we can restrict to strictly decreasing functions.

EDIT: It totally works! Just using the hypothesis that Rev(X,2) is a WPO, too! This is amazing! Not only that, but there's a generalization that covers both this case and the case where X is total! (Though not also the case where o(X) is initial.) Namely, you require that Rev(X,2) be a WPO, and that X=X_1+...+X_r, where each o(X_i) is a power of ω. In the case where X is total you have Cantor normal form, and in the amazing case, r=1. And once again it works if we remove the requirement that Rev(X,2) be a WPO but add the requirement that α be finite.

EDIT 2: Sorry, just noticed a mistake in my proof. I think this is fixable.

sniffnoy: (Dead face)
Well, it's time for me to declare a big "oops".

My paper Transfinitely iterated natural multiplication of ordinal numbers was recently rejected from the Notre Dame Journal of Formal Logic, and for good reason.

The mathematical assertions aren't wrong. But contrary to what I said, it's actually quite easy to prove Jacobsthal's distributivity law by transfinite induction, and I entirely missed this. My thanks to the anonymous referee for pointing this out. The referee also claimed this is true of my theorem that a^⊗(b⊕c)=a^⊗b⊗a^⊗c, which I haven't yet checked, but which is presumably also true.

EDIT Nov 12: OK, now I'm confused. I've tried to check it today, and... all I could get was an inequality. I'm not sure what to make of this.
EDIT Nov 13: OK, I figured out the mistake, disregard the above.

This cuts out a lot of what I said; I'll have to rewrite a lot! It also turns it into, well, more of an expository paper. I mean, I think it's some good exposition; I would still really like to see it published somewhere, because it summarizes and connects up some things that haven't been connected up elsewhere. But I'm going to need to find somewhere else to publish it, that's for sure.

Mind you, there still is the problem of finding an order-theoretic proof of these laws. Of course, that first requires an order-theoretic interpretation of a×b (Jacobsthal multiplication) and a^⊗b (super-Jacobsthal exponentiation).
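For reference, the recursions in question (as I recall them: Jacobsthal multiplication iterates natural addition, and super-Jacobsthal exponentiation iterates natural multiplication, with suprema at limits):

```latex
a\times 0 = 0,\qquad
a\times(b+1) = (a\times b)\oplus a,\qquad
a\times b = \sup_{c<b}\,(a\times c)\ \text{ for limit } b;
```

```latex
a^{\otimes 0} = 1,\qquad
a^{\otimes(b+1)} = a^{\otimes b}\otimes a,\qquad
a^{\otimes b} = \sup_{c<b}\,a^{\otimes c}\ \text{ for limit } b.
```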

But the former of those now does in fact exist! Take a look at this paper: Some iterated natural sums, by Paolo Lipparini. It gives an order-theoretic interpretation of the infinitary natural sum, and therefore, implicitly, Jacobsthal multiplication.

This order-theoretic interpretation does not make it immediately obvious that a×(b⊕c)=(a×b)⊕(a×c), nor that a×(b×c)=(a×b)×c; but just having an interpretation at all is a huge start. I have written to Lipparini and he agrees the problem is interesting.

One thing worth noting about this order-theoretic characterization of a×b is that while it makes it obvious that ab≤a×b, it doesn't make it obvious that a×b≤a⊗b. Of course, what Lipparini did wasn't meant specifically as an interpretation of a×b at all, but the generic infinitary natural sum, where you can't assume that all the summands are the same. It's possible that for the specific case of a×b, one might be able to find a reformulation of his characterization that does make this obvious. Of course, proving that this reformulation is the same might be far from easy! Who knows.

So, that's the state of things. Guess I've got quite a bit of work to do... (Well, I guess that's true regardless. In fact I might want to put off fixing this for now to get more writing up of integer complexity done.)

sniffnoy: (SMPTE)
EDIT Oct 12: Corrected a mistake in question 0, and also expanded it slightly. Also pretty sure I've proved question 1 and therefore 2 and 3; see note below.

So, OK, I know how to compute big multichoose now. But I was going through some old notes of mine and I found a few questions about it I still hadn't answered. Two of them, I suppose, could potentially be solved based on my computation rule, but that seems like it would be quite ugly. I still don't have a resolution to these questions; I'm going to state them here.

Notation: I'll write ((α multichoose β)) for my "big multichoose" operation. Given posets A and B, I'll write Rev(A,B) for the set of order-reversing maps from A to B, considered as a poset. Now, all orders considered here will be WPOs; in general, given WPOs A and B, the order Rev(A,B) is not necessarily a WPO, but I'll only consider cases where it is. In particular, for any ordinal α and a WPO B, Rev(α,B) is a WPO. Remember, ((α multichoose β)) is by definition o(Rev(β,α)).

So suppose A, B, and Rev(A,B) are WPOs; then it's easy to see that o(Rev(A,B))≥o(Rev(o(A),B)). Adding more relations to A just removes points from Rev(A,B), so o(Rev(A,B)) goes down.

However, it's less clear what happens if we instead take o(Rev(A,o(B))) instead of o(Rev(o(A),B)). Adding more relations to B adds points to Rev(A,B), which would appear to make o(Rev(A,B)) go up; however, it also adds relations to Rev(A,B), which would appear to make o(Rev(A,B)) go down.

So naively, it could go either up or down. However, I have never once seen it go down! I've only seen it go up or stay the same.

So, question 0: If A, B, Rev(A,B), and Rev(A,o(B)) are all WPOs, does one have o(Rev(A,B))≤o(Rev(A,o(B)))? (And if A, B, and Rev(A,o(B)) are all WPOs, does this imply that Rev(A,B) is too?)
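
For finite A and B this question is concrete: o of a finite poset is just its cardinality, so it reduces to counting order-reversing maps into B versus into a chain of size |B|. Here's a brute-force check on a couple of tiny posets (the boolean-matrix poset encoding and helper names are mine, just for illustration):

```python
from itertools import product

def is_antitone(f, le_A, le_B):
    """x <= y in A must imply f(x) >= f(y) in B (le[u][v] means u <= v)."""
    n = len(le_A)
    return all(le_B[f[y]][f[x]] for x in range(n) for y in range(n) if le_A[x][y])

def count_rev(le_A, le_B):
    """|Rev(A,B)| -- for finite posets this is o(Rev(A,B))."""
    n, m = len(le_A), len(le_B)
    return sum(is_antitone(f, le_A, le_B) for f in product(range(m), repeat=n))

def chain(n):
    """A total order on n points; for finite B, o(B) = |B|, so a chain plays o(B)."""
    return [[i <= j for j in range(n)] for i in range(n)]

antichain2 = [[i == j for j in range(2)] for i in range(2)]
v_poset = [[True, True, True], [False, True, False], [False, False, True]]  # 0 below 1, 2
for le_B in (antichain2, v_poset):
    # question 0, finite case: maps into B vs. maps into a |B|-chain
    assert count_rev(chain(2), le_B) <= count_rev(chain(2), chain(len(le_B)))
```

(For A a 2-chain and B the 2-antichain, this is 2 ≤ 3; for the V-shaped B it is 5 ≤ 6.)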

And in case that's false, the more specific question 1: If α is an ordinal and B is a WPO, does one have o(Rev(α,B))≤o(Rev(α,o(B)))?

Note that if this is true, it cannot be simply because o(B) is an extension of B! One can find other extensions that make this false. For instance, suppose α=2, and B is the disjoint union of ω and 1; one may extend B to an order of type ω, but this will cause o(Rev(α,B)) to go down.

So far, I've been able to prove this in the case when α is finite, by making use of my computation rule. (I think attempting to prove this without making use of my computation rule will be impractical.)

The above two questions don't purely deal with big multichoose and thus can't be settled by my computation rules. The next two questions potentially could.

With Rev, one has the obvious isomorphisms Rev(A⨿B,C)=Rev(A,C)×Rev(B,C), Rev(A,B×C)=Rev(A,B)×Rev(A,C), and Rev(A,Rev(B,C))=Rev(A×B,C).

If one combines the first of these isomorphisms with the inequality o(Rev(A,B))≥o(Rev(o(A),B)) mentioned above, one obtains the inequality

((α multichoose β⊕γ)) ≤ ((α multichoose β)) ⊗ ((α multichoose γ)).

However, attempting to turn the other two isomorphisms into inequalities would require the use of the inequality in question 1. So this gives rise to questions 2 and 3 -- keep in mind that a positive answer to question 1 (or, y'know, question 0) would settle both of these in the affirmative:

Question 2: Is ((α⊗β multichoose γ)) ≥ ((α multichoose γ)) ⊗ ((β multichoose γ))?

Question 3: Is ((α multichoose β⊗γ)) ≤ (( ((α multichoose β)) multichoose γ))?

Because I can currently prove question 1 in the case where α is finite, I can prove both of these in the case where γ is finite.
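
For finite ordinals the natural operations are just ordinary arithmetic and big multichoose is ordinary multichoose, so the all-finite cases of questions 2 and 3 can be checked directly (mc is my shorthand for ordinary multichoose):

```python
from math import comb

def mc(a, b):
    """Ordinary multichoose: ((a|b)) = C(a+b-1, b). For finite ordinals,
    big multichoose agrees with this, and ⊕/⊗ are ordinary +/*."""
    return comb(a + b - 1, b)

for a in range(1, 6):
    for b in range(1, 6):
        for c in range(0, 5):
            # Question 2, finite case: ((a*b | c)) >= ((a|c)) * ((b|c))
            assert mc(a * b, c) >= mc(a, c) * mc(b, c)
            # Question 3, finite case: ((a | b*c)) <= (( ((a|b)) | c ))
            assert mc(a, b * c) <= mc(mc(a, b), c)
```

The tightest small case of question 2 is a=b=c=2: 10 ≥ 9; of question 3, also a=b=c=2: 5 ≤ 6.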

EDIT Oct 12: I'm pretty sure I have now proven question 1 in general, thereby proving questions 2 and 3. Unfortunately the proof is not simple. As for question 0... well, it's pretty obviously true when A and B are both finite. Other than that I haven't gotten very far, but I can't find a counterexample. If it is indeed false, perhaps it's worth asking with the assumptions that A and B are both BPOs instead of merely WPOs. (We can call that question 1/2.)

Both of these could potentially be answered by purely computational means, though that would be somewhat unsatisfying. Right now, though, I have not done even that. Although I can at least verify that they're true when the inputs are powers of ω; that case, at least, is straightforward computationally...

sniffnoy: (Kirby)
So I have a number of neat things to report about ordinals! In order from least to most exciting:

#1: I can now compute big multichoose, in general

At least, I'm pretty sure I can prove it. More on this in a moment.

(Unfortunately, this means I have no "easy math" I can work on right now -- I need to get back to the hard math, y'know.)

Hm -- I just looked and saw I never defined "big multichoose" here. In short: If α and β are ordinals, we can consider the poset of all weakly decreasing functions from β to α; this is a WPO, so we can take the largest ordinal extending it. This is what I mean by "α multichoose β", which I will denote here as ((α|β)) because what else am I going to do.
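
For finite α and β that poset is finite, so its o is just its cardinality, and big multichoose reduces to ordinary multichoose. A quick sanity check (helper name mine):

```python
from itertools import combinations_with_replacement
from math import comb

def count_weakly_decreasing(alpha, beta):
    """Weakly decreasing functions from beta to alpha (both finite): each one
    is determined by the multiset of its values."""
    return sum(1 for _ in combinations_with_replacement(range(alpha), beta))

# for finite arguments, ((a|b)) is ordinary multichoose C(a+b-1, b)
assert all(count_weakly_decreasing(a, b) == comb(a + b - 1, b)
           for a in range(1, 7) for b in range(0, 7))
```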

Point is, I've been puzzling over it for a while, working out special cases, because for obvious reasons this is quite a bit harder to compute than left and right multichoose from earlier.

#2: There really, really, is no natural exponentiation

In appendix B of my ordinal arithmetic paper, I showed that there's no operation satisfying several desiderata that we'd expect of a "natural exponentiation". However, one of the desiderata was a bit less, uh, to be desired than the others. But I've now managed to eliminate the dependence on that hypothesis! Best of all, the new proof is only a slight modification of the existing one. The super short version is, instead of defining f(n) = deg e(n,ω), you define f(n) to be the "leading coefficient" of deg e(n,ω).

I'll update the paper shortly -- I'll also need to notify the journal I submitted it to about the update...

#3: Big multichoose is related to the surreal exponential!

Don't ask me why -- I don't actually understand the surreal exponential at all. But it totally is! The thing is, this doesn't actually depend on any of the new stuff in #1; what's related to the surreal exponential is the rule for computing ((ω|α)), or rather ((ω|α))_0. (I'm using the subscript 0 to mean that I'm restricting to functions which are eventually 0.) It turns out for α a limit ordinal,

((ω|α))_0 = exp(α)

where exp is the surreal exponential.

Again, I don't have any idea why this is so, I just noticed that the rules for computing them are the same. I didn't notice it before because I hadn't written the rule for computing the former in a way that made it obvious.

So let me go into more detail about #1 now -- the final piece of the puzzle was figuring out how to compute ((ω^α|β))_0 for α,β>0. What I eventually figured out was that this is equal to ω^ω^G(α,β) for a certain function G. Specifically, G(α,β) is equal to α⊕β if either α or β is of the form ε_γ+k for some ordinal γ and some finite k, or if both are infinite. Otherwise -- if one is finite and the other is not of the form ε_γ+k -- then instead you get (α⊕β)-1, which necessarily makes sense since one of them is finite and nonzero.

Now, Harry Gonshor, in his book "An Introduction to the Theory of Surreal Numbers" -- where he defined the surreal exponential in the first place -- showed that there's a function g such that for a surreal number α>0, exp(ω^α)=ω^ω^g(α), and showed how to compute g. Of course, we only care about the case where α is an ordinal, not just any surreal. And in that case, g(α) is equal to α+1 if α is of the form ε_γ+k, and is equal to α otherwise.

See the similarity? G(1,β) is exactly equal to g(β). And from this follows the equation above.
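
The case analysis behind that coincidence can be spelled out in a toy symbolic check (the flags and string representation are mine; this just encodes the two stated rules, using that 1⊕β = β+1 for every ordinal β):

```python
def g(beta_finite, beta_eps_plus_k):
    """Gonshor's rule for ordinals: g(beta) = beta+1 if beta = eps_gamma + k, else beta."""
    return "beta+1" if beta_eps_plus_k else "beta"

def G(alpha_finite, alpha_eps_plus_k, beta_finite, beta_eps_plus_k):
    """The stated rule for G, specialized to alpha = 1."""
    assert alpha_finite and not alpha_eps_plus_k  # alpha = 1
    both_infinite = (not alpha_finite) and (not beta_finite)
    if alpha_eps_plus_k or beta_eps_plus_k or both_infinite:
        return "beta+1"                      # alpha (+) beta = 1 (+) beta = beta+1
    return "beta"                            # (1 (+) beta) - 1 = beta

# the three possible shapes of beta > 0: finite; infinite non-epsilon; eps_gamma + k
for beta_finite, beta_eps in [(True, False), (False, False), (False, True)]:
    assert G(True, False, beta_finite, beta_eps) == g(beta_finite, beta_eps)
```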

Now as I said this only requires knowing how to compute G(1,β), i.e., how to compute ((ω|β))0, which I already knew how to do. But I didn't notice the similarity until I wrote it in this form depending on G. And I wouldn't have written it that way if I weren't trying to determine ((α|β)) more generally. So there is that bit of relation there. I determined the form of G, and then noticed that G(1,β) reminded me of the rule for g(β); and I ran to the library to look it up, and indeed it was the same.

So yay! That's finally solved. Now back to harder math, I guess...

sniffnoy: (SMPTE)
In their paper "Well-partial orderings and hierarchies", D. H. J. De Jongh and Rohit Parikh showed that given any WPO X, we can define an ordinal o(X), which is the largest order type of any well-order extending X. (As in, yes, the maximum is achieved.) Then they proved some things about how to compute o(X) and computed some specific cases. In particular, o(X⨿Y)=o(X)⊕o(Y) and o(X×Y)=o(X)⊗o(Y), where ⊕ and ⊗ denote natural sum and natural product, respectively.

I'm surprised, however, that nowhere did they note the following rule:

o(X) is equal to the smallest ordinal greater than any o(L), where L is a lower set of X other than X itself.

Equivalently, if we use their notation l(x) to mean o({y: y≱x}), then one can state this as, o(X) is the smallest ordinal greater than any l(x).

I mean, this is kind of the rule you would expect if you know that o(X⨿Y)=o(X)⊕o(Y) and you know the recursion for natural addition. But it's surprising that they don't note this because it's basically just a rephrasing of what they did prove. You could say that it's an unnecessary rephrasing, but I think it's a helpful one, not to mention a clean one.

To be clear, this follows from the following facts that they proved:
1. If o(X) is a limit, it's the sup of all the l(x)
2. l(x) is always less than o(X)
3. o(X) is a successor iff X has a maximal element (call it x); in which case, o(X) is equal to o(X\{x})+1

Number 1 just is the statement above in the case where o(X) is a limit, and the case where it's a successor follows pretty easily from 2 and 3. There's also of course the case where o(X)=0, which is trivial.

So, that's the basic recursion. If you want to go computing o(X) for some WPOs but you don't know how to get an upper bound, it's an available tool (if possibly a hard to apply one).
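
For finite posets the recursion can be run directly, where it simply recovers o(X) = |X| (a finite poset always linearizes to a chain on all its points). A minimal sketch, with my own boolean-matrix poset encoding:

```python
from itertools import combinations

def lower_sets(elems, le):
    """All downward-closed subsets of a finite poset (le[x][y] means x <= y)."""
    elems = sorted(elems)
    out = []
    for r in range(len(elems) + 1):
        for s in combinations(elems, r):
            s = frozenset(s)
            if all(x in s for y in s for x in elems if le[x][y]):
                out.append(s)
    return out

def o(elems, le):
    """o(X) for a finite poset X via the lower-set recursion:
    the least ordinal greater than o(L) for every lower set L other than X."""
    elems = frozenset(elems)
    if not elems:
        return 0
    return 1 + max(o(L, le) for L in lower_sets(elems, le) if L != elems)

V = [[True, True, True], [False, True, False], [False, False, True]]  # 0 < 1, 0 < 2
assert o({0, 1, 2}, V) == 3
antichain3 = [[i == j for j in range(3)] for i in range(3)]
assert o({0, 1, 2}, antichain3) == 3
```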

(Also, unrelatedly, remember I said that as best as I could tell, nobody had ever studied super-Jacobsthal exponentiation before? Well, they don't give a name to it, and they don't really "study" it, but somehow, even though I'd looked at this paper a number of times before, I never noticed that De Jongh and Parikh briefly make use of it. I'd better edit my paper to mention that before I resubmit it!)

sniffnoy: (SMPTE)

I just learned from a discussion on Wikipedia that Jacobsthal multiplication was rediscovered in the 1980s by Conway. Possibly because they didn't have Wikipedia back then, he didn't know it was originally due to Jacobsthal. As far as I can tell, this rediscovery resulted in only two papers: One by Harry Gonshor, and one by John Hickman, both of which explored the arithmetic of this multiplication.

I think I'm going to go edit references to these into my own paper on the subject now. The history deserves to be connected up!

sniffnoy: (SMPTE)
As discussed last entry... here it is! Again, hopefully I'm not embarrassing myself with this.

If you look at nothing else, look at the two tables. Especially the first table, it's so fancy! :)

sniffnoy: (Dead face)
So remember that Jacobsthal thing? (Here's the version on my UMich website, it's a bit easier for me to find than the LJ version. Also, I've updated the terminology there; see below.)

Well -- so, a while ago Jeff told me that yeah I should write this up as an actual paper. (I may have said this here before.) So I did. I didn't post it on arXiv, however; I wasn't too sure whether it was actually original, and I didn't want to potentially embarrass myself by posting something unoriginal. I mean, I hadn't been able to turn it up in my searches, but I'm not a set theorist, and it's not like I did a super-thorough search. (Integer complexity is a pretty small area, and addition chains isn't a huge area either. But ordinal arithmetic seems rather larger.) Maybe for all I knew it had been done a hundred years ago!

So instead I figured I'd try to get it published in a journal first -- I'd know it was original if it got accepted! Unfortunately, the first place I submitted it didn't accept it, and after that I was really busy so I haven't resubmitted it anywhere else yet.

(I also changed "semi-Jacobsthal" to "super-Jacobsthal". Jeff suggested I should really change "semi-Jacobsthal" to something else, and I'm glad he did, because "super-Jacobsthal" is much better and I wouldn't have thought of it if he hadn't suggested that.)

Point is -- Jeff, and also Dan Hathaway, recently pointed out to me this paper by Paolo Lipparini. And this is certainly not the same thing I'm doing, but it was close enough in flavor that I was basically just like "Oh crap, I should probably really put this up." So I'm doing that! I can't link to it yet, because it doesn't go up till Monday, but I'm doing it. I guess we'll see whether I end up embarrassing myself...

But if nothing else, at least my paper has two really nice tables in it! One of them is pretty fancy and took a decent amount of work in TeX. :)

(Meanwhile -- do you remember the problem of "ordinal multichoose"? Well, I figured out how to compute it in general! Both the "left" and "right" versions, and both choose and multichoose. I also came up with a third one, "natural" multichoose (you can guess the definition), but I can't compute that one except in very special cases, with lower bounds and conjectures for a few more cases. I can write more about this if people want. I haven't here partly because the rules for computing these are, unfortunately, really complicated. I'm probably not going to get around to writing this up formally for quite a while -- finally getting back to my integer complexity and addition chains work, after the constant not-having-time that was last semester, is higher priority, I'm pretty sure.)

sniffnoy: (Sonic)

I should really be working on writing up integer complexity stuff at the moment. But, the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. Done thinking about this for now, back to real work now, but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ, that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals. One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particular nice cases. In particular, the ↓ operations when β is finite.

Say α = ω^(α_0)·a_0 + ... + ω^(α_r)·a_r + a, where I'm writing α in Cantor normal form, and separating out the finite part "a" as worthy of special attention. Then we have, for k finite and nonzero,

((α↓k)) = ω^(α_0·k)·a_0 + ... + ω^(α_r·k)·a_r + ((a|k)).

Pretty nice, no? The choose version is the same, except the multichoose at the end becomes a choose. Unfortunately, once k becomes infinite, things become complicated fast.

Also last time I was trying to resolve the question, for k finite, does one always have ((α↑k))≤((α↓k))? (And the same for the choose versions.) I had thought this was true, and spent some time trying to prove it, but now I can report a counterexample: α=ω^(ω+1)+1, k=2. On the left hand side we get ω^(ω·2+1)+ω^(ω+1)+1, and on the right hand side we get ω^(ω·2+1)+1. At least, I'm pretty sure I calculated both of those correctly. It's also a counterexample for the choose versions; in that case, we get the same things but without the +1s on each.

So, there's that. But the big thing is... how did I not notice this before? There's a symmetry law! The two operations are very closely related!

With ordinary multichoose, we have ((n+1|k))=((k+1|n)), since both are equal to (n+k|n,k) (I write it that way, rather than (n+k|k), to emphasize the symmetry.) With these ordinal versions of multichoose, we get

((α+1↑β)) = ((β+1↓α))

The proof is pretty simple, too! As in, you can straight-up construct an order isomorphism between these. I feel a little silly for not noticing this, but, this is really cool!
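
Here is a brute-force check of the symmetry law for small finite parameters. The entry doesn't spell out the isomorphism, so conjugating the "Young diagram" of f is my guess at it; at least on these cases it does carry the lex order on one side to the reverse-lex order on the other:

```python
from itertools import combinations_with_replacement

def weakly_decreasing(length, top):
    """All weakly decreasing tuples of the given length, values in 0..top-1."""
    return [tuple(sorted(c, reverse=True))
            for c in combinations_with_replacement(range(top), length)]

def transpose(f, alpha):
    """Conjugate partition: g(j) = #{i : f(i) > j}, a map alpha -> beta+1."""
    return tuple(sum(1 for v in f if v > j) for j in range(alpha))

for alpha in range(4):
    for beta in range(4):
        # ((alpha+1 ↑ beta)): weakly decreasing maps beta -> alpha+1, lex order
        left = sorted(weakly_decreasing(beta, alpha + 1))
        # ((beta+1 ↓ alpha)): weakly decreasing maps alpha -> beta+1, reverse-lex
        right = sorted(weakly_decreasing(alpha, beta + 1),
                       key=lambda g: tuple(reversed(g)))
        # conjugation lines the two orders up point for point
        assert [transpose(f, alpha) for f in left] == right
```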

I feel like this also indicates that ((α↓β)) is somehow the "more fundamental" of the two operations. Because, see, ((α↑β)), well, if α=0, we know what it is; if α is a successor ordinal, we can apply the symmetry law to express it in terms of ↓ and if α is a limit ordinal, well, it's continuous in α, so it's a limit of things that can be expressed in terms of ↓. With ((α↓β)), if α=0 we know what it is, and if α is a successor we can switch it around, but if α is a limit ordinal, well, we don't have anything like that. (EDIT next day: Although, ((α↓β)) is pretty easy to compute in the case where α is a limit ordinal -- see below. The successor case is the only actually hard case.)

So, yeah. Now to put that away (for now, anyway) and get back to work...

EDIT next day: Actually, let me say a bit more about the computation of ((α↓β)) that I left out at first. Say we write α=α'+a, where α' is either 0 or a limit ordinal, and a is finite. Then in fact this breaks down into ((α'↓β))+((a↓β)). And the first of these is easy -- if β=0 we know it; if β is positive and finite it's given above (just multiply all the exponents in the Cantor normal form on the right by β); and if β is infinite, it's also easy (multiply by β+1 instead of β). Problem is the ((a↓β)) part! That seems to be complicated. Well, by the symmetry rule above, figuring out ((n↓β)) is equivalent to figuring out ((α↑k)), but, well, you'll notice I didn't give a formula for that -- that seems complicated in general. It might be doable, though. (Note that for any given β, since ((α↓β)) is strictly increasing in α, one has ((n↓β))<((ω↓β)) which does at least mean that when you compute ((α↓β)), the purely infinite part and the finite part do not interfere with one another.)

sniffnoy: (Kirby)
It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^b.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c)=p(a,b)⊗p(a,c)
3. p(a,b⊗c)=p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: A degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^b≤a. I.e. it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,
deg(a^b) = deg(a^(×b)) = deg(a)·b.
And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,
deg(a^(⊗b)) = deg(a) × b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b

And with this, it becomes impossible! Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f(n) is a function from naturals to naturals which A. is weakly increasing, and B. satisfies f(n^k)=k·f(n), and these together are sufficient to show that f(n)/f(m) = (log n)/(log m), contradiction.
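
The squeeze at the end can be made concrete: for any integer candidates p = f(2) and q = f(3), a small search (helper name mine) finds exponents witnessing a violation of weak monotonicity, since p/q can never equal the irrational (log 2)/(log 3):

```python
def witness(p, q, bound=120):
    """Search for a monotonicity violation given f(2)=p, f(3)=q.
    Property B forces f(2^a) = a*p and f(3^b) = b*q, and property A
    forces 2^a <= 3^b  =>  a*p <= b*q (and symmetrically)."""
    for a in range(1, bound):
        for b in range(1, bound):
            if 2**a <= 3**b and a * p > b * q:
                return a, b
            if 3**b <= 2**a and b * q > a * p:
                return a, b
    return None  # no violation found within the search bound

# every integer candidate for (f(2), f(3)) gets ruled out
assert all(witness(p, q) is not None for p in range(1, 9) for q in range(1, 9))
```

For instance f(2)=2, f(3)=3 fails at (a,b)=(20,13): 2^20 ≤ 3^13 but 20·2 > 13·3.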

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it, and if I missed something in the literature and it's not new, I'll find out...)

sniffnoy: (SMPTE)
So, do you remember this old entry? Well, Jeff is having me turn it into an actual paper. We'll see if it's new; I think it is, but I should actually, y'know, ask a set theorist.

(At this point you may be asking: Wait, why is Jeff relevant? Didn't you, y'know, finish your degree? Yes, but he's continuing to help me out for the time being. Ordinarily he'd have me forge ahead with complexity-related stuff, but I said I could get this done super-quick, so he's OK with it.)

Anyway, in my previous exploration of the subject, I mentioned that continuing past exponentiation into tetration and general hyper is pretty stupid for ordinals, but I never explained why. I thought I'd do that here.

I could actually go into quite a bit of detail on this, because I spent quite a bit of time thinking about it a few days ago, but I expect people would find it mind-numbing so I'll keep this short.

(Note: I am not claiming anything in this entry is new, except the second-to-last parenthetical. And while that's new, it's also kind of stupid. :P )

So what's wrong with ordinal hyper?

Let's start with ordinal addition. This is built up from the successor operation. To compute a+b, you apply the successor operation b times to a. Note that the resulting operation is continuous in b, by definition.

OK, that was pretty simple. How about ordinal multiplication? To compute ab, you add a to itself b times. Now here we have a choice; when we say "add a to itself b times", what we really mean is "start with 0, then add a to it b times". But are we adding a on the left or on the right? It makes a difference!

The correct choice is to add a on the right. As long as b is finite, of course, this makes no difference. But addition, recall, is continuous in the right summand. Which would mean that if we were to take a·ω under this weird modified multiplication (the one that adds a on the left), we would get a fixed point of left-addition by a. Multiplying by any higher ordinal would still get you a·ω. That isn't what we want at all.

Thus we have to add on the right. Similarly, when it comes to defining exponentiation, we have to multiply on the right. But what about tetration?

For natural numbers, we define tetration by doing our exponentiation on the left, and there's a good reason for this. (x^x)^x is the same as x^(x*x), which makes doing it on the right a bit silly. Addition and multiplication don't have this problem. They're associative, sure, but associativity of, say, multiplication, doesn't involve any operations simpler/smaller than multiplication. By contrast, this relation turns two exponentiations into an exponentiation and a multiplication, and in general (if you keep putting more exponents on the right) turns n exponentiations into 1 exponentiation and n-1 multiplications. So this is not very good.
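
The collapse of right-parenthesized towers into one exponentiation and a stack of multiplications is easy to see numerically (helper names mine):

```python
def left_tower(x, n):
    """x exponentiated n times, parenthesized on the left: ((x^x)^x)^... """
    r = x
    for _ in range(n - 1):
        r = r ** x
    return r

def right_tower(x, n):
    """x exponentiated n times, parenthesized on the right: x^(x^(x^...))"""
    r = x
    for _ in range(n - 1):
        r = x ** r
    return r

# the left-parenthesized tower is just x^(x^(n-1)): n exponentiations
# collapse to 1 exponentiation and n-1 multiplications in the exponent
assert all(left_tower(x, n) == x ** (x ** (n - 1))
           for x in range(2, 5) for n in range(1, 5))
assert left_tower(3, 3) == 3 ** 9          # (3^3)^3 = 3^(3*3)
assert right_tower(3, 3) == 3 ** 27        # genuinely towers
```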

Thus when we try to generalize to ordinals, we have a conflict of which way things need to go in order to be nonstupid. If we continue to do things on the right, we run into the same problem we do for finite numbers. If, on the other hand, we break that convention and switch to the left, we run into the problem of continuity and stabilization. (We can't have "been on the left all along", or we'd have run into that problem even sooner!)

Now the reader may point out here that "left" and "right" are just directions, without any particular meaning, but in fact they have been used here with a consistent meaning: Left is what's getting iterated, right is the number of iterations. So this is indeed a real problem.

And so -- assuming we do switch to the left, because we want finite things to work -- we run into the continuity problem and things become stupid pretty quickly. Tetration is pretty stupid; hyper-n for 5≤n<ω is very stupid; and for n≥ω is maximally stupid.

(There is also the problem that H_4(0,ω) is undefined, but oh well.)

(Note, by the way, that if you're defining hyper so that things are done on the right, you should define H_0(a,b)=Sa, not Sb.)

(Also of note is the fact that while "ordinary tetration", "Jacobsthal tetration", and "semi-Jacobsthal tetration" all remain distinct, once you go to hyper-5, they all become the same.)

(Tangentially -- the other day I had the idea: while trying to define "natural exponentiation" using surreal exponentials doesn't work... what if you just rounded up to the next ordinal? Turns out, this is a pretty bad idea. No algebraic relations there that I can see.)

sniffnoy: (Sonic)
So here's an interesting paper that really deserves to be better known: On On_p, by Joseph DiMuro.

Here, On_2 refers to the nimbers. Let me review those briefly.

One can put on the ordinals two unusual recursively-defined operations, known as nim-addition and nim-multiplication, that turn them into an algebraically closed field of characteristic 2. (Except of course that they're a proper class and not a set.) If one restricts to just the finite ordinals, i.e. the whole numbers, these form a subfield.

Nim addition is very easy to describe. On whole numbers, it's just bitwise xor, or binary addition without carries, and on general ordinals, it's much the same. (Write them in Cantor normal form and do bitwise xor on corresponding coefficients.) Nim multiplication is complicated and I find it quite confusing, but it's certainly computable on whole numbers, as is nim inversion.
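
The recursive (mex) definition of nim-addition and the xor description can be checked against each other on small whole numbers -- a standard sketch, not code from the entry:

```python
from functools import lru_cache

def mex(s):
    """Minimum excludant: the least natural number not in s."""
    m = 0
    while m in s:
        m += 1
    return m

@lru_cache(maxsize=None)
def nim_add(a, b):
    """Recursive definition: a +' b = mex { a' +' b, a +' b' : a' < a, b' < b }."""
    return mex({nim_add(x, b) for x in range(a)} | {nim_add(a, y) for y in range(b)})

# on the whole numbers, nim-addition is exactly bitwise xor
assert all(nim_add(a, b) == a ^ b for a in range(16) for b in range(16))
```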

One thing worth noting is that the nimbers provide a very concrete (if impractical) description of the algebraic closure of F_2; it consists of precisely the nimbers below ω^ω^ω. I mean, nim multiplication and inversion and square roots and such are all quite confusing, but they're certainly computable on ordinals that aren't too large (at least below ε_0, I should think). Maybe even finding roots of polynomials is computable? I'm less certain of this. This is really not something I'm an expert on.

Anyway, naturally there's the question of, can you come up with an analogue of the nimbers for other (positive) characteristics? Exactly what counts as an "analogue" is arguable, but few people seem to have really gotten anywhere close. S. Norton, back in the 70s, found a recursive characterization of ternary addition without carries, and here F. Laubie provides a way of doing the same thing for any prime p, but it's multiplication that's truly the hard part. (Side note: Laubie's characterization is really complicated. I'm pretty sure I have a much simpler one. Maybe it's publishable?)

Well, in "On On_p", DiMuro finally provides a characteristic p analogue of the nimbers, and it sure seems like he's done a pretty good job of it. Now, I've only skimmed the paper; I don't really understand nimbers, so actually reading it would be a bit difficult. Still, he's got a lot of the analogy down. He dispenses with recursive definitions for the operations, in favor of just making them work. He does though prove that addition in On_p, by his definition, turns out to be just base-p addition without carries (extended to the ordinals in the obvious way), so that at least can be handled by Laubie's recursion (or mine). But yeah, there's a lot of analogy there. In particular, ω^ω^ω ends up being the algebraic closure of F_p. And the operations are computable! So this gives a concrete description of the algebraic closure of F_p! He doesn't give any simple description of multiplication like there is for the nimbers (well, to the extent that that can be called "simple"), but it's still computable. He doesn't address the question of solving polynomials effectively; hopefully someone else will take this further and do that.
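
Base-p addition without carries is easy to sketch on whole numbers (helper name mine); p = 2 recovers nim-addition:

```python
def carry_free_add(a, b, p):
    """Base-p addition without carries: add corresponding digits mod p."""
    total, place = 0, 1
    while a or b:
        total += ((a % p + b % p) % p) * place
        a, b, place = a // p, b // p, place * p
    return total

# p = 2 is bitwise xor, i.e. nim-addition on the whole numbers
assert all(carry_free_add(a, b, 2) == a ^ b for a in range(32) for b in range(32))
# p = 3: 5 = (1 2)_3 and 7 = (2 1)_3 are additive inverses, digitwise mod 3
assert carry_free_add(5, 7, 3) == 0
```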

At the end he raises the suggestion of "On_0", which perhaps might give a concrete (if impractical) description of the algebraic closure of Q (though you'd need to go beyond ω^ω^ω). This is funny; I'd always thought of the nimbers as basically the characteristic 2 analogue of the surreals, but obviously that analogy is... well, yeah, I guess there really isn't much of an analogy there. So this is interesting. But he doesn't pursue it as it would be harder. (It's worth noting that, in the nimbers, if you want the algebraic closure of F_2(t), you have to go well beyond ε_0, and according to DiMuro finding the exact ordinal is still an open problem, though Lenstra's old paper on the matter offers an upper bound and conjecture.)

So, yeah, this is not a topic I intend to pursue or anything (though maybe I should write up that recursion). But -- how did I not know about DiMuro's paper? This really should be better-known.

EDIT: Perhaps I should note -- maybe part of the reason is because it's so hard to find. I only stumbled across it incidentally; it doesn't say "nim" or "nimber" anywhere in the abstract, so I couldn't turn it up in searches. If I had thought to search on "on_2", that would have turned it up, but...

sniffnoy: (Sonic)
Edit: Added a new paragraph pointing out that the two "coincidences" are in fact related (the first is used in the proof of the second). Later: Added the mostly-irrelevant footnote.

Edit next day: Fixed typos (natural mult is ⊗ not ⊕!); added clarifying footnote about 0 and 1.

So! Once again, let's review ordinal arithmetic. Ordinals have the successor operation; if we transfinitely iterate this, we get ordinal addition. If we transfinitely iterate this, we get ordinal multiplication; and if we transfinitely iterate that, we get ordinal exponentiation. One could keep going and get ordinal hyper but hyper has no nice algebraic properties so we won't be concerned with it here.

The ordinals also have the operation known as natural addition, denoted ⊕ (or sometimes #) which is defined by a different recursion. It is commutative and associative. Jacobsthal in 1907 defined a new multiplication by transfinitely iterating natural addition instead of ordinary addition; I'll call this "Jacobsthal multiplication" and denote it ×, as he did. He then defined an exponentiation by transfinitely iterating ×, which I'll call "Jacobsthal exponentiation"; he denoted it a^b, but I'll denote it a^(×b) because... well, you'll see.

Finally there is natural multiplication, denoted ⊗ (or sometimes by a weird 16-pointed asterisk?), which is defined in terms of ⊕ by a different recursion. It is commutative, associative, and distributes over natural addition. We can, of course, transfinitely iterate this to get yet a third notion of exponentiation; I'll call this "semi-Jacobsthal exponentiation", and denote it a^(⊗b). (Hence why I'm using the notation a^(×b) above -- if I called it a^b, I wouldn't have a good parallel notation for semi-Jacobsthal exponentiation. In my own writing I've been denoting it by a^b with a bar over the b instead of under it, but while that's easy in TeX I can't do that in HTML (unless I'm going to start pulling out the Unicode combining characters, and that wouldn't look right anyway if the exponent was more than one character long, as it frequently will be).)

(And yes, all these operations really are different. I'll leave finding examples to you.)
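Actually, for ordinals below ω^ω the differences can be checked mechanically, since there Cantor normal forms are just polynomials in ω with natural-number exponents. Here's a minimal Python sketch; the dict representation and the names oadd/omul/nsum/nmul/jmul are my own, not anything standard, and the closed form used for × takes for granted Jacobsthal's computation that a×λ = a·λ for limit λ:

```python
# Ordinals below omega^omega as dicts {exponent: coefficient}, i.e. the
# Cantor normal form w^e1*c1 + ... + w^ek*ck with natural-number exponents.
# {} is 0, {0: n} is the finite ordinal n, {1: 1} is omega.

def oadd(a, b):
    """Ordinary addition: terms of a below b's leading term are absorbed."""
    if not b:
        return dict(a)
    e = max(b)                                   # b's leading exponent
    res = {f: c for f, c in a.items() if f > e}
    res[e] = a.get(e, 0) + b[e]
    res.update({f: c for f, c in b.items() if f < e})
    return res

def omul(a, b):
    """Ordinary multiplication, via a*(t1+...+tk) = a*t1 + ... + a*tk."""
    if not a or not b:
        return {}
    d = max(a)
    res = {}
    for e in sorted(b, reverse=True):
        if e > 0:
            term = {d + e: b[e]}     # a * w^e*c = w^(d+e)*c
        else:
            term = dict(a)           # a * n: only the leading coefficient scales
            term[d] = a[d] * b[0]
        res = oadd(res, term)
    return res

def nsum(a, b):
    """Natural (Hessenberg) sum: coefficients just add exponent-wise."""
    res = dict(a)
    for e, c in b.items():
        res[e] = res.get(e, 0) + c
    return res

def nmul(a, b):
    """Natural product: multiply CNFs like polynomials in omega."""
    res = {}
    for e, c in a.items():
        for f, d in b.items():
            res[e + f] = res.get(e + f, 0) + c * d
    return res

def jmul(a, b):
    """Jacobsthal product, via a x (limit + n) = a*(limit) (+) (a x n)."""
    if not a or not b:
        return {}
    d = max(a)
    res = {d + e: c for e, c in b.items() if e > 0}   # a * (limit part of b)
    if b.get(0):
        for e, c in a.items():                        # (+) n-fold natural sum of a
            res[e] = res.get(e, 0) + c * b[0]
    return res

w, two, wp1 = {1: 1}, {0: 2}, {1: 1, 0: 1}   # omega, 2, omega+1

print(omul(two, w))     # 2*w = w            -> {1: 1}
print(jmul(two, w))     # 2 x w = w          -> {1: 1}
print(nmul(two, w))     # 2 (x) w = w*2      -> {1: 2}
print(omul(wp1, two))   # (w+1)*2 = w*2+1    -> {1: 2, 0: 1}
print(jmul(wp1, two))   # (w+1) x 2 = w*2+2  -> {1: 2, 0: 2}
```

So 2⊗ω = ω·2 separates ⊗ from both · and ×, while (ω+1)·2 = ω·2+1 versus (ω+1)×2 = ω·2+2 separates · from ×.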

You could continue on to get additional notions of hyper, but again, this does not seem very interesting. (Sudden thought: I have not tested whether continuing on to tetration causes some of these operations to collapse. I should test that[0].)

So what algebraic relations do these satisfy? Of course for the ordinary and natural operations these are well-known, but let's do this from scratch regardless. I won't be giving any proofs (most of them are straightforward), just sort of heuristic reasons.

I'm going to leave out relations about how 1 and 0 behave as these are all obvious and exactly what you expect[3]. 0 and 1 are never problematic.

Ordinary addition:

a+(b+c)=(a+b)+c. + is just S iterated transfinitely; to start with a and apply S b+c many times, is to start with a, apply S b many times, and then apply S c many times.

Ordinary multiplication:

a(b+c)=ab+ac. · is just + iterated transfinitely; to add together b+c copies of a, is to add together b copies of a, and add to that the sum of c copies of a. Note this requires the associativity of ordinary addition, and the fact that ordinary addition -- being a transfinite iteration -- is continuous on the right.

a(bc)=(ab)c. To add together bc copies of a, is to add together b copies of a, and then add together c copies of the result. Note this requires a(b+c)=ab+ac.
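Both of these laws can be spot-checked below ω^ω, where ordinary addition and multiplication of Cantor normal forms are finite computations. A self-contained sketch (the dict representation and names are my own):

```python
# Ordinals below omega^omega as dicts {exponent: coefficient}:
# {} is 0, {0: n} is n, {1: 1} is omega.

def oadd(a, b):
    """Ordinary addition on Cantor normal forms."""
    if not b:
        return dict(a)
    e = max(b)                                   # b's leading exponent
    res = {f: c for f, c in a.items() if f > e}  # smaller terms of a absorbed
    res[e] = a.get(e, 0) + b[e]
    res.update({f: c for f, c in b.items() if f < e})
    return res

def omul(a, b):
    """Ordinary multiplication, distributing a over the terms of b."""
    if not a or not b:
        return {}
    d = max(a)
    res = {}
    for e in sorted(b, reverse=True):
        if e > 0:
            term = {d + e: b[e]}     # a * w^e*c = w^(d+e)*c
        else:
            term = dict(a)           # a * n: only the leading coefficient scales
            term[d] = a[d] * b[0]
        res = oadd(res, term)
    return res

samples = [{}, {0: 1}, {0: 3}, {1: 1}, {1: 1, 0: 1}, {2: 1, 1: 2}, {1: 3, 0: 2}]
for a in samples:
    for b in samples:
        for c in samples:
            assert omul(a, oadd(b, c)) == oadd(omul(a, b), omul(a, c))
            assert omul(a, omul(b, c)) == omul(omul(a, b), c)
print("a(b+c)=ab+ac and a(bc)=(ab)c hold on all samples")
```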

Ordinary exponentiation:

a^(b+c) = a^b·a^c. To multiply together b+c copies of a, is to multiply together b copies of a, and multiply that by the product of c copies of a. Note this requires the associativity of ordinary multiplication.

a^(bc) = (a^b)^c. To multiply together bc copies of a, is to multiply together b copies of a, and then multiply together c copies of the result. Note this requires a^(b+c) = a^b·a^c.

Again, both these require the fact that ordinary addition and ordinary multiplication, being transfinite iterations, are both continuous on the right.

OK. So far so standard. Now for the next series.

Natural addition:

a⊕(b⊕c)=(a⊕b)⊕c, a⊕b=b⊕a. I'm just going to take the properties of natural addition and natural multiplication as given; I'm not going to try to explain them.

Jacobsthal multiplication:

This is where things get weird. One might be tempted to write down relations about a×(b+c) and a×(bc) like those above. However, while ⊕ is associative, and even commutative, it is not continuous on the right, which prevents such things from working.

That's not the weird part. The weird part is that, as Jacobsthal discovered, we can get a relation involving this multiplication anyway:
a×(b⊕c) = (a×b)⊕(a×c).

Yes, it's algebraically nice -- but what does it *mean*? Why should this be true? What does it mean to add something together b⊕c times? Unfortunately, I can give no really satisfying answer. Jacobsthal proved it by figuring out how to compute × on Cantor normal forms; this is pretty easy, and once you have that, it's a straightforward verification. But it seems very much like a mathematical coincidence, and that bugs me. If there's a better reason for it, I don't know it. (Of course, it's been 100 years since he discovered this, so there's a good chance this has since been resolved, but if there's a good reason it's certainly not obvious.)

Now, once we have this, it is pretty easy to get the relation:
a×(b×c)=(a×b)×c. What a strange operation to be associative! But it's a straightforward consequence of the relation above.
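Both the distributive law a×(b⊕c) = (a×b)⊕(a×c) and this associativity can be spot-checked below ω^ω. A self-contained sketch (the dict representation and names are mine; the closed form used for × takes for granted Jacobsthal's computation that a×λ = a·λ at limits, not the distributive law itself, so the check isn't circular):

```python
# Ordinals below omega^omega as dicts {exponent: coefficient}:
# {} is 0, {0: n} is n, {1: 1} is omega.

def nsum(a, b):
    """Natural sum: coefficients just add exponent-wise."""
    res = dict(a)
    for e, c in b.items():
        res[e] = res.get(e, 0) + c
    return res

def jmul(a, b):
    """Jacobsthal product a x b, via a x (limit + n) = a*(limit) (+) (a x n),
    using a x lambda = a*lambda at limits and a x n = n-fold natural sum."""
    if not a or not b:
        return {}
    d = max(a)                                      # a's leading exponent
    res = {d + e: c for e, c in b.items() if e > 0} # a * (limit part of b)
    if b.get(0):                                    # (+) n-fold natural sum of a
        for e, c in a.items():
            res[e] = res.get(e, 0) + c * b[0]
    return res

samples = [{}, {0: 2}, {1: 1}, {1: 1, 0: 1}, {2: 1, 0: 3}, {1: 2, 0: 1}]
for a in samples:
    for b in samples:
        for c in samples:
            assert jmul(a, nsum(b, c)) == nsum(jmul(a, b), jmul(a, c))
            assert jmul(a, jmul(b, c)) == jmul(jmul(a, b), c)
print("x distributes over (+) on the left, and x is associative, on all samples")
```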

Jacobsthal exponentiation:

Continuing on, Jacobsthal found relations involving his exponentiation. These are straightforward consequences of the above:

a^(×(b+c)) = a^(×b) × a^(×c). To multiply together b+c copies of a, is to multiply together b copies of a, and multiply that by the product of c copies of a.

a^(×(bc)) = (a^(×b))^(×c). To multiply together bc copies of a, is to multiply together b copies of a, and then multiply together c copies of the result. This requires the above relation.

So these are some pretty natural-looking relations... except for the fact that the first one, and therefore both of them, relies on the fact that × is associative! Notice neither of my heuristic justifications makes any sense if × is not associative. And × being associative rests on the apparent mathematical coincidence of it distributing over natural addition (on one side, anyway). (They also rely on +, ·, and × all being transfinite iterations and therefore continuous on the right, but that's not surprising.)
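For finite exponents, where Jacobsthal exponentiation is just iterated ×, both laws -- a^(×(m+n)) = a^(×m) × a^(×n) and a^(×(mn)) = (a^(×m))^(×n) -- can at least be spot-checked with an infinite base. A sketch (dict representation and names are my own, as is the CNF rule for ×):

```python
# Ordinals below omega^omega as dicts {exponent: coefficient}:
# {} is 0, {0: 1} is 1, {1: 1, 0: 1} is omega+1.

def jmul(a, b):
    """Jacobsthal product on Cantor normal forms, via
    a x (limit + n) = a*(limit) (+) (n-fold natural sum of a)."""
    if not a or not b:
        return {}
    d = max(a)
    res = {d + e: c for e, c in b.items() if e > 0}
    if b.get(0):
        for e, c in a.items():
            res[e] = res.get(e, 0) + c * b[0]
    return res

def jpow(a, n):
    """a^(x n) for finite n: iterated Jacobsthal multiplication."""
    res = {0: 1}                      # a^(x 0) = 1
    for _ in range(n):
        res = jmul(res, a)
    return res

# omega+1 and omega^2 + omega*2 + 3 as bases, finite exponents up to 4:
for base in ({1: 1, 0: 1}, {2: 1, 1: 2, 0: 3}):
    for m in range(5):
        for n in range(5):
            assert jpow(base, m + n) == jmul(jpow(base, m), jpow(base, n))
            assert jpow(base, m * n) == jpow(jpow(base, m), n)
print("both exponent laws hold for these finite exponents")
```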

So. Let's take a look at the third series.

Natural multiplication:
a⊗(b⊕c)=(a⊗b)⊕(a⊗c), (a⊕b)⊗c=(a⊗c)⊕(b⊗c), a⊗(b⊗c)=(a⊗b)⊗c, a⊗b=b⊗a. Again, I'm just going to take these as givens.

Semi-Jacobsthal exponentiation:

This is where things get even weirder. Again, it's tempting to try to state a relation about a^(⊗(b+c)) or a^(⊗(bc)), but because ⊗ is not continuous on the right, this won't work.

But playing around a bit reveals the following surprising relation:
a^(⊗(b⊕c)) = a^(⊗b) ⊗ a^(⊗c).

Once again, algebraically nice, but leaving the question -- why should this be true? What does it mean to multiply a together b⊕c times? Once again, I can't give a really satisfying answer. I proved this by coming up with a rule to compute a^(⊗b) on Cantor normal forms, and then it's a straightforward verification. (I assume this is not original to me, but I was not working from anything else; I've only ever seen Jacobsthal's operations in that one paper of his, and I've never seen this operation anywhere.)

And not only that, but the earlier surprising coincidence is used in the proof of this one! Not in some sort of induction, because it wasn't done by induction, but because the Jacobsthal product appears in the rule for computing a^(⊗b) on Cantor normal forms. (Because × is iterated natural addition, and when one performs a natural multiplication, the exponents of the ω's undergo natural addition.)

Let's continue on. From this we can then deduce
a^(⊗(b×c)) = (a^(⊗b))^(⊗c). Yes, that's Jacobsthal multiplication, not ordinary or natural multiplication! Because that's what you get when you iterate natural addition. This relies on the above relation (and the continuity of × on the right).

So these strange hybrid operations end up having some nice relations... but, it would seem, based on two mathematical coincidences. Two related and very similar mathematical coincidences. One alone would be a bit suspect; two would already be suggestive; these practically scream that there must be something going on here, some better reason for this that I don't know.

But I've no idea how I might investigate that (other than ask on the internet and see if anyone knows -- I'm certainly not going to study it myself). Back to writing this integer complexity paper...


[0]Mostly irrelevant footnote: They all remain distinct at tetration, but I suspect they do collapse at the next level. I hadn't realized before that ordinal hyper actually is kind of silly for infinite ordinals due to continuity on the right. Lower hyper I suppose does not have this problem, if you wanted to look at that. Also I didn't realize that you have to exclude 0 as the left argument at tetration and above if you want the right argument to be infinite, because otherwise it just alternates between 0 and 1. Also it occurs to me that when both the level and the right argument are infinite, I don't know which limit to apply first, or whether it even matters. Well, all this is really irrelevant to the main point.

[3]Well, except perhaps how 1 behaves wrt ordinary addition. While a+1=Sa, this doesn't work for 1+a. Rather, 1+a=Sa if a is finite, and 1+a=a if a is infinite. However it works fine with natural addition; a⊕1=1⊕a=Sa.
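In Cantor normal form terms this is easy to see: adding 1 on the right bumps the constant term, adding 1 on the left gets absorbed, and the natural sum bumps the constant term from either side. A tiny sketch (the dict representation is my own):

```python
# Ordinals below omega^omega as dicts {exponent: coefficient}:
# {0: 1} is 1, {1: 1} is omega.

def oadd(a, b):
    """Ordinary ordinal addition on Cantor normal forms."""
    if not b:
        return dict(a)
    e = max(b)                                   # b's leading exponent
    res = {f: c for f, c in a.items() if f > e}  # smaller terms of a absorbed
    res[e] = a.get(e, 0) + b[e]
    res.update({f: c for f, c in b.items() if f < e})
    return res

def nsum(a, b):
    """Natural sum: coefficients add exponent-wise."""
    res = dict(a)
    for e, c in b.items():
        res[e] = res.get(e, 0) + c
    return res

w, one = {1: 1}, {0: 1}
print(oadd(w, one))   # omega+1 = S(omega):          {1: 1, 0: 1}
print(oadd(one, w))   # 1+omega = omega:             {1: 1}
print(nsum(one, w))   # 1 (+) omega = omega+1:       {0: 1, 1: 1}
print(nsum(w, one))   # omega (+) 1 = omega+1 too:   {1: 1, 0: 1}
```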
sniffnoy: (Chu-Chu Zig)
So over on MathOverflow, Philip Ehrlich confirms that Gonshor's definition is the same as Kruskal's, and that Alling's definition is also the same as this except restricted to infinitesimals. Also, when people talk about surreal exponentiation, this is the definition they're talking about.

Which leaves the question of where the definition currently appearing on Wikipedia ever came from. Tracing through the history, it seems that whole section was added by one Michael K. Edwards. So I'd ask him, but seems he's on wikibreak till November. Well, actually, seems he hasn't been seen on WP since June of 2011... oh well, I'll put it on his talk page anyway... :-/

Addendum: Seems that wikibreak notice has been up since 2006. Guess it's pretty unlikely he'll show up again. I suppose I'll just go and remove that section later...

sniffnoy: (Dead face)
Oops next day: Not ordinary exponentiation. See below.

I've posted this whole mess as a question to MathOverflow. After all, I don't even really know surreals, I'm clearly not equipped to answer it! Nor do I have the time; I still have to finish revising the first of these integer complexity papers, except of course not till Tuesday because first I have a problem set for Schubert calculus.

But for those too lazy to click through the link... things are bad.

Note: To avoid confusion between exponentiation of ω in the usual ordinal sense and in the sense of the exponentiation under discussion, I'll write the former as ω^x and the latter as ω↑x.

Recall that Wikipedia's definition (I still have no idea where it's taken from, so WP will just have to take the credit for now) can be extended in one of two ways: By replacing "2" with other numbers, or by doing the usual thing with logs. These don't agree! And of course you could replace 2 with another number and *then* do the usual thing with logs, in your new base, to get an *entirely different definition*. Hooray!

(Why don't these agree? Well, as an example, consider that if we try to generalize by replacing the base, we still get 3↑ω=ω, while if we try using logs, we get 3↑ω>ω.)

In fact, the "replace the base" definition seems pretty suspect for another reason: it looks like if we plug in two ordinals, it agrees with ordinary ordinal exponentiation. Now I don't know, maybe ordinary ordinal exponentiation actually works really well with natural addition and natural multiplication. I haven't checked, after all, because I only just noticed this problem. But it doesn't seem very likely.

Oops: Not ordinary exponentiation, but rather the analogue of that based on natural multiplication. The exponentiation without a name that Jacobsthal didn't consider...

In fact if we use Gonshor's definition, ω↑ω actually won't be an ordinal at all. Disappointing, but perhaps to be expected -- if surreal exponentiation really did yield a "natural" ordinal exponentiation, I think someone would have noticed in the past 30 years. If we apply Gonshor's definition, and the various theorems he proves about it, we find ω↑ω = exp(ω·log ω) = exp(ω·ω^(1/ω)) = exp(ω^(1+1/ω)) = ω^(ω^(1+1/ω)).

So that's a strike against the idea of "natural exponentiation", unless of course ordinary exponentiation turns out to be suitably "natural". There is Jacobsthal's definition -- or you could do the equivalent but working from natural multiplication rather than his weird multiplication -- but, blech.

I mean, let's say you wanted to try to define it. If you try to define it recursively, powers of 0 will be a problem. If you try to define it "computationally" -- well, you can't raise polynomials to transfinite powers! That leaves the "constructive" approach, which, I don't know, maybe it's possible -- but part of what makes the other operations nice in the first place is the fact that there's multiple ways to think about them, not just the one.

In short, I'm leaving standing the question on MathOverflow (because really guys, why is this a big mess, this should not be a big mess, this should be answered), but there is probably no such thing as "natural exponentiation" (I don't count Jacobsthal's definition).

sniffnoy: (Chu-Chu Zig)
Edit again Feb 20: Added note about Wikipedia's definition and ordinals.

Edit: Added small note about how one could verify equivalence with Kruskal's definition. Also, added a note to this entry pointing out that the possible definition of "natural exponentiation" I suggest therein fails very badly.

So having looked at Jacobsthal's paper I think his "natural exponentiation" does not deserve the name. At least, if this is the right paper and the right operation. It's called "Zur Arithmetik der transfiniten Zahlen". In it, he considers natural addition, and then uses it to define a new multiplication and exponentiation operation on the ordinals, so I figure this must be it, right?

But the multiplication he defines is *not* natural multiplication. Rather, he uses the recursion for ordinary multiplication, using natural addition in place of ordinary addition. This is a new multiplication based on natural addition alright, but it's not natural multiplication (consider that under this multiplication, 2×ω=ω, just as with ordinary multiplication).

Then he defines a new exponentiation the same way (usual recursion, but based on his new multiplication). I think it's pretty clear why this can't sensibly be called "natural exponentiation".

But there does seem to be such a thing as surreal exponentiation. Or, let's for now at least assume there really is such a thing. (I mean, certainly there exist definitions of it, the question is whether they're all the same.) Then if we had ordinals x and y, we could take x^y in the surreals... and if the result is always another ordinal, perhaps we could call that natural exponentiation? It would have the right algebraic properties, though that leaves the question of how it could be defined in a purely ordinal manner (recursively? constructively? computationally?). Question is, for ordinals x and y, is x^y always another ordinal in the first place? I have no idea.

Edit: If we use Wikipedia's definition of 2^x and generalize it by allowing other ordinals in place of 2, it would appear that for ordinals x and y, x^y is indeed again an ordinal.

So. Surreal exponentiation. Last entry I mentioned there was a definition due to Kruskal. I should note that I have never seen this definition; On Numbers and Games, and Gonshor's book, both mention it, but neither state it. They also both mention that it hasn't been published, and, well, it looks like it still hasn't.

But Gonshor's book does suggest that his definition is equivalent to Kruskal's older one. Annoyingly, it doesn't explicitly state it. So I gather it's probably equivalent, but perhaps I should still see if there's someone I can ask about it. (But who would have seen Kruskal's definition? Well, I guess there's enough people on MathOverflow for there to be a chance someone has.) Meanwhile, I'm not too certain what Alling is doing -- remember, I don't really know surreal numbers, I was just looking this up for fun! -- but it looks like it's probably the same as Gonshor's, except that he only defines it for non-infinite surreals? I'm not too sure.

Addendum: Actually, despite Kruskal's work not being published, there is a way one could verify equivalence with it (though I'm certainly not going to). Conway mentions that Kruskal's definition of exp was inverse to a certain definition of log; and it looks like he gives enough information to determine what that definition of log was.

This leaves the question of where the hell Wikipedia's uncited definition of 2^x comes from. With any luck maybe that'll turn out to be Kruskal's original definition. :P Still, at least I have reason to believe that there's at most two definitions and not, say, four of them. And hey, there's a good chance those remaining two are the same, right? :-/

