sniffnoy: (Dead face)
So remember that Jacobsthal thing? (Here's the version on my UMich website, it's a bit easier for me to find than the LJ version. Also, I've updated the terminology there; see below.)

Well -- so, a while ago Jeff told me that yeah I should write this up as an actual paper. (I may have said this here before.) So I did. I didn't post it on arXiv, however; I wasn't too sure whether it was actually original, and I didn't want to potentially embarrass myself by posting something unoriginal. I mean, I hadn't been able to turn it up in my searches, but I'm not a set theorist, and it's not like I did a super-thorough search. (Integer complexity is a pretty small area, and addition chains isn't a huge area either. But ordinal arithmetic seems rather larger.) Maybe for all I knew it had been done a hundred years ago!

So instead I figured I'd try to get it published in a journal first -- I'd know it was original if it got accepted! Unfortunately, the first place I submitted it didn't accept it, and after that I was really busy so I haven't resubmitted it anywhere else yet.

(I also changed "semi-Jacobsthal" to "super-Jacobsthal". Jeff suggested I should really change "semi-Jacobsthal" to something else, and I'm glad he did, because "super-Jacobsthal" is much better and I wouldn't have thought of it if he hadn't suggested that.)

Point is -- Jeff, and also Dan Hathaway, recently pointed out to me this paper by Paolo Lipparini. And this is certainly not the same thing I'm doing, but it was close enough in flavor that I was basically just like "Oh crap, I should probably really put this up." So I'm doing that! I can't link to it yet, because it doesn't go up till Monday, but I'm doing it. I guess we'll see whether I end up embarrassing myself...

But if nothing else, at least my paper has two really nice tables in it! One of them is pretty fancy and took a decent amount of work in TeX. :)

(Meanwhile -- do you remember the problem of "ordinal multichoose"? Well, I figured out how to compute it in general! Both the "left" and "right" versions, and both choose and multichoose. I also came up with a third one, "natural" multichoose (you can guess the definition), but I can't compute that one except in very special cases, with lower bounds and conjectures for a few more cases. I can write more about this if people want. I haven't here partly because the rules for computing these are, unfortunately, really complicated. I'm probably not going to get around to writing this up formally for quite a while -- finally getting back to my integer complexity and addition chains work, after the constant not-having-time that was last semester, is higher priority, I'm pretty sure.)

-Harry
sniffnoy: (Sonic)
So!

I should really be working on writing up integer complexity stuff at the moment. But the other day I noticed these old entries of mine on "ordinal multichoose" and I caught the bug again. I'm done thinking about it for now, and back to real work, but I wanted to make some notes on what I found.

First off, new notation. The notation I've actually been using can't really go in HTML; I've been denoting these operations α multichoose β, except in between the α and the β is a fraction bar, except the fraction bar is an arrow -- pointing rightwards for lexicographic order and leftwards for reverse-lexicographic. (Had to look a few things up to figure out how to typeset that in TeX.) There's also the choose version, though that's 0 if β is infinite.

Anyway. I'll use the notations ((α↑β)), ((α↓β)), (α↑β), and (α↓β).

So, definitions: For ordinals α and β, ((α↑β)) is the order type of the set of all weakly decreasing functions from β to α, ordered lexicographically. This is well-ordered. ((α↓β)) is the same, except the order is reverse-lexicographic -- as in, higher places in β matter more, not as in reversing the whole order! This too is (well-defined and) well-ordered. (α↑β) and (α↓β) are the same, but restricting to strictly decreasing functions.
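
(A quick toy case, just unwinding the definitions: take α=β=2. The weakly decreasing functions from 2 to 2 are (0,0), (1,0), and (1,1), so ((2↑2)) and ((2↓2)) both come out to 3 -- which is also what ordinary multichoose of 2 from 2 gives.)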

Note that if you try to do something similar with increasing functions, there is just no way you get a well-order.

When I thought about these previously, I think I considered ((α↑β)) to be nicer than ((α↓β)), in particular because it's continuous in α, while ((α↓β)) is continuous in neither variable. But now I don't think of either of them as particularly nicer.

I will use (n|k) to denote ordinary choose, and ((n|k)) to denote ordinary multichoose.

I wrote down some recursions for them last time, but I missed a few. Well -- my goal here isn't to put all my notes up on LJ, that would be pretty boring. Note that some of the recursions only work if an appropriate variable is finite.

Anyway. I had several goals. One was to figure out how to compute these operations on Cantor normal forms. I did not succeed at that in general, because, well, that appears to be really hard. But! There are some particular nice cases. In particular, the ↓ operations when β is finite.

Say α = ω^(α_0)·a_0 + ... + ω^(α_r)·a_r + a, where I'm writing α in Cantor normal form, and separating out the finite part "a" as worthy of special attention. Then we have, for k finite and nonzero,

((α↓k)) = ω^(α_0·k)·a_0 + ... + ω^(α_r·k)·a_r + ((a|k)).

Pretty nice, no? The choose version is the same, except the multichoose at the end becomes a choose. Unfortunately, once k becomes infinite, things become complicated fast.
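
For a worked instance of the formula (my own, so double-check me): take α=ω^ω·2+3 and k=2. Then α_0=ω, a_0=2, a=3, so ((α↓2)) = ω^(ω·2)·2 + ((3|2)) = ω^(ω·2)·2 + 6.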

Also last time I was trying to resolve the question, for k finite, does one always have ((α↑k))≤((α↓k))? (And the same for the choose versions.) I had thought this was true, and spent some time trying to prove it, but now I can report a counterexample: α=ω^(ω+1)+1, k=2. On the left hand side we get ω^(ω·2+1)+ω^(ω+1)+1, and on the right hand side we get ω^(ω·2+1)+1. At least, I'm pretty sure I calculated both of those correctly. It's also a counterexample for the choose versions; in that case, we get the same things but without the +1s on each.

So, there's that. But the big thing is... how did I not notice this before? There's a symmetry law! The two operations are very closely related!

With ordinary multichoose, we have ((n+1|k))=((k+1|n)), since both are equal to (n+k|n,k) (I write it that way, rather than (n+k|k), to emphasize the symmetry.) With these ordinal versions of multichoose, we get

((α+1↑β)) = ((β+1↓α))

The proof is pretty simple, too! As in, you can straight-up construct an order isomorphism between these. I feel a little silly for not noticing this, but, this is really cool!
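
(A finite sanity check: taking α=2, β=3, the law says ((3↑3))=((4↓2)), and indeed in the finite case both sides reduce to ordinary multichoose: ((3|3))=((4|2))=10.)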

I feel like this also indicates that ((α↓β)) is somehow the "more fundamental" of the two operations. Because, see, ((α↑β)), well, if α=0, we know what it is; if α is a successor ordinal, we can apply the symmetry law to express it in terms of ↓ and if α is a limit ordinal, well, it's continuous in α, so it's a limit of things that can be expressed in terms of ↓. With ((α↓β)), if α=0 we know what it is, and if α is a successor we can switch it around, but if α is a limit ordinal, well, we don't have anything like that. (EDIT next day: Although, ((α↓β)) is pretty easy to compute in the case where α is a limit ordinal -- see below. The successor case is the only actually hard case.)

So, yeah. Now to put that away (for now, anyway) and get back to work...

EDIT next day: Actually, let me say a bit more about the computation of ((α↓β)) that I left out at first. Say we write α=α'+a, where α' is either 0 or a limit ordinal, and a is finite. Then in fact this breaks down into ((α'↓β))+((a↓β)). And the first of these is easy -- if β=0 we know it; if β is positive and finite it's given above (just multiply all the exponents in the Cantor normal form on the right by β); and if β is infinite, it's also easy (multiply by β+1 instead of β). Problem is the ((a↓β)) part! That seems to be complicated. Well, by the symmetry rule above, figuring out ((n↓β)) is equivalent to figuring out ((α↑k)), but, well, you'll notice I didn't give a formula for that -- that seems complicated in general. It might be doable, though. (Note that for any given β, since ((α↓β)) is strictly increasing in α, one has ((n↓β))<((ω↓β)) which does at least mean that when you compute ((α↓β)), the purely infinite part and the finite part do not interfere with one another.)
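
As a worked instance of the above (mine, so take with appropriate salt): let α=ω+2, β=ω. Here α'=ω and a=2, so ((ω+2↓ω)) = ((ω↓ω)) + ((2↓ω)). For the first part, ω=ω^1 and we multiply the exponent by ω+1, getting ω^(ω+1). For the second part, the symmetry rule gives ((2↓ω)) = ((ω+1↑1)) = ω+1. So ((ω+2↓ω)) = ω^(ω+1)+ω+1.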

-Harry
sniffnoy: (Kirby)
It was only today that it occurred to me -- I say in the paper I'm writing that we're not going to consider "natural" exponentiation, because the one coming from the surreals doesn't work, and so there doesn't seem to be a natural exponentiation (unless you count Jacobsthal or "semi-Jacobsthal" exponentiation); but could I sit down and prove that there isn't one, from a list of desiderata, and perhaps add this as an appendix?

(Note that I've already tried the approach of "take surreal exponentiation and then round up to the next ordinal". This has little in the way of nice properties.)

Well, that depends on your desiderata. I wrote down a list of 10 (all of which are satisfied by surreal exponentiation, except for the whole part where it doesn't return an ordinal). Let's use p(a,b) to mean the hypothesized "natural" exponentiation a^b.

Then I think we can agree on the following desiderata:

1. p(a,1) = a
2. p(a,b⊕c)=p(a,b)⊗p(a,c)
3. p(a,b⊗c)=p(p(a,b),c)
4. p(a,b) is weakly increasing in a
5. For a>0, p(a,b) is weakly increasing in b

Thing is -- the problem of finding a natural exponentiation is, it seems to me, severely underconstrained. Even with my full list, you could still probably define it in a completely silly way.

But let's add another restriction: A degree law. For an ordinal a>0, I'll define deg(a) to be the largest b such that ω^b≤a. I.e. it's the largest exponent appearing in the Cantor normal form.

All the other operations have degree laws, or something like them. In particular, for ordinary exponentiation and for Jacobsthal exponentiation, we have, assuming a≥ω,
deg(a^b) = deg(a^×b) = deg(a)·b.
And for "semi-Jacobsthal" exponentiation, we have, again assuming a≥ω,
deg(a^⊗b) = deg(a)×b.

(Let's ignore for now what happens when a<ω; it's easy to describe, but whatever.)

Since this is supposed to be natural exponentiation, let's add the following degree law as a desideratum:

6. For a≥ω, deg(p(a,b)) = deg(a) ⊗ b
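
(To see that these three right-hand sides genuinely differ, take deg(a)=ω+1: for b=2, deg(a)·b = ω·2+1 while deg(a)×b = deg(a)⊗b = (ω+1)⊕(ω+1) = ω·2+2; for b=ω, deg(a)·ω = deg(a)×ω = ω^2, while deg(a)⊗ω = ω^2+ω.)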

And with this, it becomes impossible! Because with this restriction, one can show that if we define f(n) = deg(p(n,ω)), then f(n) is a function from naturals to naturals which A. is weakly increasing, and B. satisfies f(n^k)=k·f(n), and these together are sufficient to show that f(n)/f(m) = (log n)/(log m), contradiction.
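
(Filling in that last step, since it's just the standard argument: fix m,n≥2, and for each j pick k with m^k ≤ n^j < m^(k+1). Weak monotonicity then gives k·f(m) ≤ j·f(n) ≤ (k+1)·f(m); dividing by j and letting j→∞, since k/j → (log n)/(log m), we get f(n)/f(m) = (log n)/(log m). Note f(m)>0 here: desiderata 1, 2, 4, and 5 give p(m,ω) ≥ p(2,n) = 2^n for all finite n, so p(m,ω) is infinite and has positive degree. But now n=2, m=3 makes the right-hand side irrational, while the left-hand side is a ratio of whole numbers.)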

Whoo. Now I need to decide whether to add this as an appendix.

(Jeff has told me not to worry particularly about whether my paper really is new, and just get it ready and submit it, and if I missed something in the literature and it's not new, I'll find out...)

-Harry
sniffnoy: (SMPTE)
So, do you remember this old entry? Well, Jeff is having me turn it into an actual paper. We'll see if it's new; I think it is, but I should actually, y'know, ask a set theorist.

(At this point you may be asking: Wait, why is Jeff relevant? Didn't you, y'know, finish your degree? Yes, but he's continuing to help me out for the time being. Ordinarily he'd have me forge ahead with complexity-related stuff, but I said I could get this done super-quick, so he's OK with it.)

Anyway, in my previous exploration of the subject, I mentioned that continuing past exponentiation into tetration and general hyper is pretty stupid for ordinals, but I never explained why. I thought I'd do that here.

I could actually go into quite a bit of detail on this, because I spent quite a bit of time thinking about it a few days ago, but I expect people would find it mind-numbing so I'll keep this short.

(Note: I am not claiming anything in this entry is new, except the second-to-last parenthetical. And while that's new, it's also kind of stupid. :P )

So what's wrong with ordinal hyper?

Let's start with ordinal addition. This is built up from the successor operation. To compute a+b, you apply the successor operation b times to a. Note that the resulting operation is continuous in b, by definition.

OK, that was pretty simple. How about ordinal multiplication? To compute ab, you add a to itself b times. Now here we have a choice; when we say "add a to itself b times", what we really mean is "start with 0, then add a to it b times". But are we adding a on the left or on the right? It makes a difference!

The correct choice is to add a on the right. As long as b is finite, of course, this makes no difference. But addition, recall, is continuous in the right summand. Which would mean that if we were to take a·ω under this weird modified multiplication, we would get a fixed point of left-addition by a. Multiplying by any higher ordinal would still get you a·ω. That isn't what we want at all.
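
Concretely: take a=1 and iterate x ↦ 1+x starting from 0. We get 1, 2, 3, ..., with supremum ω; and 1+ω=ω, so the iteration is stuck there. Under the weird modified multiplication, 1 times β would be ω for every β≥ω -- when of course 1 times β had better be β.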

Thus we have to add on the right. Similarly, when it comes to defining exponentiation, we have to multiply on the right. But what about tetration?

For natural numbers, we define tetration by doing our exponentiation on the left, and there's a good reason for this. (x^x)^x is the same as x^(x·x), which makes doing it on the right a bit silly. Addition and multiplication don't have this problem. They're associative, sure, but associativity of, say, multiplication, doesn't involve any operations simpler/smaller than multiplication. By contrast, this relation turns two exponentiations into an exponentiation and a multiplication, and in general (if you keep putting more exponents on the right) turns n exponentiations into 1 exponentiation and n-1 multiplications. So this is not very good.
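
To spell that out: if we define right-handed tetration by t_1=a and t_(n+1)=(t_n)^a, then t_2=a^a, t_3=(a^a)^a=a^(a·a), and in general t_n=a^(a^(n-1)) -- the tower never gets more than two exponents high.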

Thus when we try to generalize to ordinals, we have a conflict of which way things need to go in order to be nonstupid. If we continue to do things on the right, we run into the same problem we do for finite numbers. If, on the other hand, we break that convention and switch to the left, we run into the problem of continuity and stabilization. (We can't have "been on the left all along", or we'd have run into that problem even sooner!)

Now the reader may point out here that "left" and "right" are just directions, without any particular meaning, but in fact they have been used here with a consistent meaning: Left is what's getting iterated, right is the number of iterations. So this is indeed a real problem.

And so -- assuming we do switch to the left, because we want finite things to work -- we run into the continuity problem and things become stupid pretty quickly. Tetration is pretty stupid; hyper-n for 5≤n<ω is very stupid; and for n≥ω is maximally stupid.

(There is also the problem that H_4(0,ω) is undefined, but oh well.)

(Note, by the way, that if you're defining hyper so that things are done on the right, you should define H_0(a,b)=Sa, not Sb.)

(Also of note is the fact that while "ordinary tetration", "Jacobsthal tetration", and "semi-Jacobsthal tetration" all remain distinct, once you go to hyper-5, they all become the same.)

(Tangentially -- the other day I had the idea: while trying to define "natural exponentiation" using surreal exponentials doesn't work... what if you just rounded up to the next ordinal? Turns out, this is a pretty bad idea. No algebraic relations there that I can see.)

-Harry
sniffnoy: (Sonic)
So here's an interesting paper that really deserves to be better known: "On On_p", by Joseph DiMuro.

Here, On_2 refers to the nimbers. Let me review those briefly.

One can put on the ordinals two unusual recursively-defined operations, known as nim-addition and nim-multiplication, that turn them into an algebraically closed field of characteristic 2. (Except of course that they're a proper class and not a set.) If one restricts to just the finite ordinals, the whole numbers, these form a subfield.

Nim addition is very easy to describe. On whole numbers, it's just bitwise xor, or binary addition without carries, and on general ordinals, it's much the same. (Write them in Cantor normal form and do bitwise xor on corresponding coefficients.) Nim multiplication is complicated and I find it quite confusing, but it's certainly computable on whole numbers, as is nim inversion.
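
Since the whole-number case is so concrete, here's a little Python sketch (mine, purely for illustration): nim-addition as xor, and nim-multiplication via the standard mex recursion from On Numbers and Games.

    from functools import lru_cache

    def nim_add(a, b):
        # Nim-addition of whole numbers is binary addition without carries, i.e. xor.
        return a ^ b

    @lru_cache(maxsize=None)
    def nim_mul(a, b):
        # Conway's recursion: a*b is the mex (minimal excludant) of the nim-sums
        # a'*b + a*b' + a'*b' over all a' < a, b' < b.
        options = {nim_mul(ap, b) ^ nim_mul(a, bp) ^ nim_mul(ap, bp)
                   for ap in range(a) for bp in range(b)}
        m = 0
        while m in options:
            m += 1
        return m

(Quick check: nim_mul(2, 2) comes out to 3, as it should. Hopeless as an efficient algorithm, but it makes "computable" concrete.)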

One thing worth noting is that the nimbers provide a very concrete (if impractical) description of the algebraic closure of F_2; it consists of precisely the nimbers below ω^ω^ω. I mean, nim multiplication and inversion and square roots and such are all quite confusing, but they're certainly computable on ordinals that aren't too large (at least below ε_0, I should think). Maybe even finding roots of polynomials is computable? I'm less certain of this. This is really not something I'm an expert on.

Anyway, naturally there's the question of, can you come up with an analogue of the nimbers for other (positive) characteristics? Exactly what counts as an "analogue" is arguable, but few people seem to have really gotten anywhere close. S. Norton, back in the 70s, found a recursive characterization of ternary addition without carries, and here F. Laubie provides a way of doing the same thing for any prime p, but it's multiplication that's truly the hard part. (Side note: Laubie's characterization is really complicated. I'm pretty sure I have a much simpler one. Maybe it's publishable?)
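
The operation itself, as opposed to a recursive characterization of it, is of course trivial to compute on whole numbers -- it's just digitwise addition mod p. A throwaway Python version (mine), just to fix ideas:

    def padd(a, b, p):
        # Base-p addition without carries: add corresponding base-p digits mod p.
        total, place = 0, 1
        while a or b:
            total += ((a % p) + (b % p)) % p * place
            a, b, place = a // p, b // p, place * p
        return total

With p=2 this is exactly xor; the interesting part, which is what Norton and Laubie did, is characterizing the operation by a recursion in the style of the nimber operations.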

Well, in "On Onp", DiMuro finally provides a characteristic p analogue of the nimbers, and it sure seems like he's done a pretty good job of it. Now, I've only skimmed the paper; I don't really understand nimbers, so actually reading it would be a bit difficult. Still, he's got a lot of the analogy down. He dispenses with recursive definitions for the operations, in favor of just making them work. He does though prove that addition in Onp, by his definition, turns out to be just base-p addition without carries (extended to the ordinals in the obvious way), so that at least can be handled by Laubie's recursion (or mine). But yeah, there's a lot of analogy there. In particular, ω^ω^ω ends up being the algebraic closure of Fp. And the operations are computable! So this gives a concrete description of the algebraic closure of Fp! He doesn't give any simple description of multiplication like there is for the nimbers (well, to the extent that that can be called "simple"), but it's still computable. He doesn't address the question of solving polynomials effectively; hopefully someone else will take this further and do that.

At the end he raises the suggestion of "On_0", which perhaps might give a concrete (if impractical) description of the algebraic closure of Q (though you'd need to go beyond ω^ω^ω). This is funny; I'd always thought of the nimbers as basically the characteristic 2 analogue of the surreals, but obviously that analogy is... well, yeah, I guess there really isn't much of an analogy there. So this is interesting. But he doesn't pursue it as it would be harder. (It's worth noting that, in the nimbers, if you want the algebraic closure of F_2(t), you have to go well beyond ε_0, and according to DiMuro finding the exact ordinal is still an open problem, though Lenstra's old paper on the matter offers an upper bound and conjecture.)

So, yeah, this is not a topic I intend to pursue or anything (though maybe I should write up that recursion). But -- how did I not know about DiMuro's paper? This really should be better-known.

EDIT: Perhaps I should note -- maybe part of the reason is because it's so hard to find. I only stumbled across it incidentally; it doesn't say "nim" or "nimber" anywhere in the abstract, so I couldn't turn it up in searches. If I had thought to search on "on_2", that would have turned it up, but...

-Harry
sniffnoy: (Sonic)
Edit: Added a new paragraph pointing out that the two "coincidences" are in fact related (the first is used in the proof of the second). Later: Added the mostly-irrelevant footnote.

Edit next day: Fixed typos (natural mult is ⊗ not ⊕!); added clarifying footnote about 0 and 1.

So! Once again, let's review ordinal arithmetic. Ordinals have the successor operation; if we transfinitely iterate this, we get ordinal addition. If we transfinitely iterate this, we get ordinal multiplication; and if we transfinitely iterate that, we get ordinal exponentiation. One could keep going and get ordinal hyper but hyper has no nice algebraic properties so we won't be concerned with it here.

The ordinals also have the operation known as natural addition, denoted ⊕ (or sometimes #), which is defined by a different recursion. It is commutative and associative. Jacobsthal in 1907 defined a new multiplication by transfinitely iterating natural addition instead of ordinary addition; I'll call this "Jacobsthal multiplication" and denote it ×, as he did. He then defined an exponentiation by transfinitely iterating ×, which I'll call "Jacobsthal exponentiation"; he denoted it a^b, but I'll denote it a^×b because... well, you'll see.

Finally there is natural multiplication, denoted ⊗ (or sometimes by a weird 16-pointed asterisk?), which is defined in terms of ⊕ by a different recursion. It is commutative, associative, and distributes over natural addition. We can, of course, transfinitely iterate this to get yet a third notion of exponentiation; I'll call this "semi-Jacobsthal exponentiation", and denote it a^⊗b. (Hence why I'm using the notation a^×b above -- if I called it a^b, I wouldn't have a good parallel notation for semi-Jacobsthal exponentiation. In my own writing I've been denoting it by a^b with a bar over the b instead of under it, but while that's easy in TeX I can't do that in HTML (unless I'm going to start pulling out the Unicode combining characters, and that wouldn't look right anyway if the exponent was more than one character long, as it frequently will be).)

(And yes, all these operations really are different. I'll leave finding examples to you.)

You could continue on to get additional notions of hyper, but again, this does not seem very interesting. (Sudden thought: I have not tested whether continuing on to tetration causes some of these operations to collapse. I should test that[0].)

So what algebraic relations do these satisfy? Of course for the ordinary and natural operations these are well-known, but let's do this from scratch regardless. I won't be giving any proofs (most of them are straightforward), just sort of heuristic reasons.

I'm going to leave out relations about how 1 and 0 behave as these are all obvious and exactly what you expect[3]. 0 and 1 are never problematic.

Ordinary addition:

a+(b+c)=(a+b)+c. + is just S iterated transfinitely; to start with a and apply S b+c many times, is to start with a, apply S b many times, and then apply S c many times.

Ordinary multiplication:

a(b+c)=ab+ac. · is just + iterated transfinitely; to add together b+c copies of a, is to add together b copies of a, and add to that the sum of c copies of a. Note this requires the associativity of ordinary addition, and the fact that ordinary addition -- being a transfinite iteration -- is continuous on the right.

a(bc)=(ab)c. To add together bc copies of a, is to add together b copies of a, and then add together c copies of the result. Note this requires a(b+c)=ab+ac.

Ordinary exponentiation:

a^(b+c)=a^b·a^c. To multiply together b+c copies of a, is to multiply together b copies of a, and multiply that by the product of c copies of a. Note this requires the associativity of ordinary multiplication.

a^(bc)=(a^b)^c. To multiply together bc copies of a, is to multiply together b copies of a, and then multiply together c copies of the result. Note this requires a^(b+c)=a^b·a^c.

Again, both these require the fact that ordinary addition and ordinary multiplication, being transfinite iterations, are both continuous on the right.

OK. So far so standard. Now for the next series.

Natural addition:

a⊕(b⊕c)=(a⊕b)⊕c, a⊕b=b⊕a. I'm just going to take the properties of natural addition and natural multiplication as given; I'm not going to try to explain them.

Jacobsthal multiplication:

This is where things get weird. One might be tempted to write down relations about a×(b+c) and a×(bc) like those above. However, while ⊕ is associative, and even commutative, it is not continuous on the right, which prevents such things from working.
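
(A concrete failure, to see the problem: (ω+1)×(1+ω) = (ω+1)×ω = ω^2, since 1+ω=ω; but ((ω+1)×1)⊕((ω+1)×ω) = (ω+1)⊕ω^2 = ω^2+ω+1. The left side collapses exactly because of the continuity issue.)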

That's not the weird part. The weird part is that, as Jacobsthal discovered, we can get a relation involving this multiplication anyway:
a×(b⊕c)=(a×b)⊕(a×c)

Yes, it's algebraically nice -- but what does it *mean*? Why should this be true? What does it mean to add something together b⊕c times? Unfortunately, I can give no really satisfying answer. Jacobsthal proved it by figuring out how to compute × on Cantor normal forms; this is pretty easy, and once you have that, it's a straightforward verification. But it seems very much like a mathematical coincidence, and that bugs me. If there's a better reason for it, I don't know it. (Of course, it's been 100 years since he discovered this, so there's a good chance this has since been resolved, but if there's a good reason it's certainly not obvious.)

Now, once we have this, it is pretty easy to get the relation:
a×(b×c)=(a×b)×c. What a strange operation to be associative! But it's a straightforward consequence of the relation above.

Jacobsthal exponentiation:

Continuing on, Jacobsthal found relations involving his exponentiation. These are straightforward consequences of the above:

a^×(b+c)=(a^×b)×(a^×c). To multiply together b+c copies of a, is to multiply together b copies of a, and multiply that by the product of c copies of a.

a^×(bc)=(a^×b)^×c. To multiply together bc copies of a, is to multiply together b copies of a, and then multiply together c copies of the result. This requires the above relation.

So these are some pretty natural-looking relations... except for the fact that the first one, and therefore both of them, relies on the fact that × is associative! Notice neither of my heuristic justifications makes any sense if × is not associative. And × being associative rests on the apparent mathematical coincidence of it distributing over natural addition (on one side, anyway). (They also rely on +, ·, and × all being transfinite iterations and therefore continuous on the right, but that's not surprising.)

So. Let's take a look at the third series.

Natural multiplication:
a⊗(b⊕c)=(a⊗b)⊕(a⊗c), (a⊕b)⊗c=(a⊗c)⊕(b⊗c), a⊗(b⊗c)=(a⊗b)⊗c, a⊗b=b⊗a. Again, I'm just going to take these as givens.

Semi-Jacobsthal exponentiation:

This is where things get even weirder. Again, it's tempting to try to state a relation about a^⊗(b+c) or a^⊗(bc), but because ⊗ is not continuous on the right, this won't work.

But playing around a bit reveals the following surprising relation:
a^⊗(b⊕c)=(a^⊗b)⊗(a^⊗c)

Once again, algebraically nice, but leaving the question -- why should this be true? What does it mean to multiply a together b⊕c times? Once again, I can't give a really satisfying answer. I proved this by coming up with a rule to compute a^⊗b on Cantor normal forms, and then it's a straightforward verification. (I assume this is not original to me, but I was not working from anything else; I've only ever seen Jacobsthal's operations in that one paper of his, and I've never seen this operation anywhere.)

And not only that, but the earlier surprising coincidence is used in the proof of this one! Not in some sort of induction, because it wasn't done by induction, but because Jacobsthal product appears in the rule for computing a^⊗b on Cantor normal forms. (Because × is iterated natural addition, and when one performs a natural multiplication, the exponents of the ω's undergo natural addition.)

Let's continue on. From this we can then deduce
a^⊗(b×c)=(a^⊗b)^⊗c. Yes, that's Jacobsthal multiplication, not ordinary or natural multiplication! Because that's what you get when you iterate natural addition. This relies on the above relation (and the continuity of × on the right).

So these strange hybrid operations end up having some nice relations... but, it would seem, based on two mathematical coincidences. Two related and very similar mathematical coincidences. One alone would be a bit suspect; two would already be suggestive; these practically scream that there must be something going on here, some better reason for this that I don't know.

But I've no idea how I might investigate that (other than ask on the internet and see if anyone knows -- I'm certainly not going to study it myself). Back to writing this integer complexity paper...

-Harry

[0]Mostly irrelevant footnote: They all remain distinct at tetration, but I suspect they do collapse at the next level. I hadn't realized before that ordinal hyper actually is kind of silly for infinite ordinals due to continuity on the right. Lower hyper I suppose does not have this problem, if you wanted to look at that. Also I didn't realize that you have to exclude 0 as the left argument at tetration and above if you want the right argument to be infinite, because otherwise the tower just alternates between 0 and 1. Also it occurs to me that I don't know, when both the level and the right argument are infinite, which limit to apply first, or whether it matters. Well, all this is really irrelevant to the main point.

[3]Well, except perhaps how 1 behaves wrt ordinary addition. While a+1=Sa, this doesn't work for 1+a. Rather, 1+a=Sa if a is finite, and 1+a=a if a is infinite. However it works fine with natural addition; a⊕1=1⊕a=Sa.
sniffnoy: (Chu-Chu Zig)
So over on MathOverflow, Philip Ehrlich confirms that Gonshor's definition is the same as Kruskal's, and that Alling's definition is also the same as this except restricted to infinitesimals. Also, when people talk about surreal exponentiation, this is the definition they're talking about.

Which leaves the question of where the definition currently appearing on Wikipedia ever came from. Tracing through the history, it seems that whole section was added by one Michael K. Edwards. So I'd ask him, but seems he's on wikibreak till November. Well, actually, seems he hasn't been seen on WP since June of 2011... oh well, I'll put it on his talk page anyway... :-/

Addendum: Seems that wikibreak notice has been up since 2006. Guess it's pretty unlikely he'll show up again. I suppose I'll just go and remove that section later...

-Harry
sniffnoy: (Dead face)
Oops next day: Not ordinary exponentiation. See below.

I've posted this whole mess as a question to MathOverflow. After all, I don't even really know surreals, I'm clearly not equipped to answer it! Nor do I have the time; I still have to finish revising the first of these integer complexity papers, except of course not till Tuesday because first I have a problem set for Schubert calculus.

But for those too lazy to click through the link... things are bad.

Note: To avoid confusion between ω-to-the-x in the usual sense and in the sense of the exponentiation under discussion, I'll write the former as ω^x and use ↑ for the latter (so 2↑x, ω↑ω, etc.).

Recall that Wikipedia's definition (I still have no idea where it's taken from, so WP will just have to take the credit for now) can be extended in one of two ways: By replacing "2" with other numbers, or by doing the usual thing with logs. These don't agree! And of course you could replace 2 with another number and *then* do the usual thing with logs, in your new base, to get an *entirely different definition*. Hooray!

(Why don't these agree? Well, as an example, consider that if we try to generalize by replacing the base, we still get 3↑ω=ω, while if we try using logs, we get 3↑ω>ω.)

In fact, the "replace the base" definition seems pretty suspect for another reason: it looks like if we plug in two ordinals, it agrees with ordinary ordinal exponentiation. Now I don't know, maybe ordinary ordinal exponentiation actually works really well with natural addition and natural multiplication. I haven't checked, after all, because I only just noticed this problem. But it doesn't seem very likely.

Oops: Not ordinary exponentiation, but rather the analogue of that based on natural multiplication. The exponentiation without a name that Jacobsthal didn't consider...

In fact if we use Gonshor's definition, ω↑ω actually won't be an ordinal at all. Disappointing, but perhaps to be expected -- if surreal exponentiation really did yield a "natural" ordinal exponentiation, I think someone would have noticed in the past 30 years. If we apply Gonshor's definition, and the various theorems he proves about it, we find ω↑ω = exp(ω·log ω) = exp(ω·ω^(1/ω)) = exp(ω^(1+1/ω)) = ω^(ω^(1+1/ω)).

So that's a strike against the idea of "natural exponentiation", unless of course ordinary exponentiation turns out to be suitably "natural". There is Jacobsthal's definition -- or you could do the equivalent but working from natural multiplication rather than his weird multiplication -- but, blech.

I mean, let's say you wanted to try to define it. If you try to define it recursively, powers of 0 will be a problem. If you try to define it "computationally" -- well, you can't raise polynomials to transfinite powers! That leaves the "constructive" approach, which, I don't know, maybe it's possible -- but part of what makes the other operations nice in the first place is the fact that there's multiple ways to think about them, not just the one.

In short, I'm leaving standing the question on MathOverflow (because really guys, why is this a big mess, this should not be a big mess, this should be answered), but there is probably no such thing as "natural exponentiation" (I don't count Jacobsthal's definition).

-Harry
sniffnoy: (Chu-Chu Zig)
Edit again Feb 20: Added note about Wikipedia's definition and ordinals.

Edit: Added small note about how one could verify equivalence with Kruskal's definition. Also, added a note to this entry pointing out that the possible definition of "natural exponentiation" I suggest therein fails very badly.

So having looked at Jacobsthal's paper I think his "natural exponentiation" does not deserve the name. At least, if this is the right paper and the right operation. It's called "Zur Arithmetik der transfiniten Zahlen". In it, he considers natural addition, and then uses it to define a new multiplication and exponentiation operation on the ordinals, so I figure this must be it, right?

But the multiplication he defines is *not* natural multiplication. Rather, he uses the recursion for ordinary multiplication, using natural addition in place of ordinary addition. This is a new multiplication based on natural addition alright, but it's not natural multiplication (consider that under this multiplication, 2ω=ω, for the same reason it does with ordinary multiplication).

Then he defines a new exponentiation the same way (usual recursion, but based on his new multiplication). I think it's pretty clear why this can't sensibly be called "natural exponentiation".

But there does seem to be such a thing as surreal exponentiation. Or, let's for now at least assume there really is such a thing. (I mean, certainly there exist definitions of it, the question is whether they're all the same.) Then if we had ordinals x and y, we could take x^y in the surreals... and if the result is always another ordinal, perhaps we could call that natural exponentiation? It would have the right algebraic properties, though that leaves the question of how it could be defined in a purely ordinal manner (recursively? constructively? computationally?). Question is, for ordinals x and y, is x^y always another ordinal in the first place? I have no idea.

Edit: If we use Wikipedia's definition of 2^x and generalize it by allowing other ordinals in place of 2, it would appear that for ordinals x and y, x^y is indeed again an ordinal.

So. Surreal exponentiation. Last entry I mentioned there was a definition due to Kruskal. I should note that I have never seen this definition; On Numbers and Games, and Gonshor's book, both mention it, but neither state it. They also both mention that it hasn't been published, and, well, it looks like it still hasn't.

But Gonshor's book does suggest that his definition is equivalent to Kruskal's older one. Annoyingly, it doesn't explicitly state it. So I gather it's probably equivalent, but perhaps I should still see if there's someone I can ask about it. (But who would have seen Kruskal's definition? Well, I guess there's enough people on MathOverflow for there to be a chance someone has.) Meanwhile, I'm not too certain what Alling is doing -- remember, I don't really know surreal numbers, I was just looking this up for fun! -- but it looks like it's probably the same as Gonshor's, except that he only defines it for non-infinite surreals? I'm not too sure.

Addendum: Actually, despite Kruskal's work not being published, there is a way one could verify equivalence with it (though I'm certainly not going to). Conway mentions that Kruskal's definition of exp was inverse to a certain definition of log; and it looks like he gives enough information to determine what that definition of log was.

This leaves the question of where the hell Wikipedia's uncited definition of 2^x comes from. With any luck maybe that'll turn out to be Kruskal's original definition. :P Still, at least I have reason to believe that there's at most two definitions and not, say, four of them. And hey, there's a good chance those remaining two are the same, right? :-/

-Harry
sniffnoy: (Dead face)
So in the previous entry I asked the question, what the hell is natural exponentiation, and does it agree with surreal exponentiation? This of course assumes that there is such a thing as "surreal exponentiation". After all, the surreals are always described as a place where crazy expressions like (ω+3)^√π make sense, so everyone has to agree what surreal exponentiation means, right?

Well, now I'm not so certain. There's a definition due to Kruskal and a definition due to Gonshor and I have no idea if these are actually the same. They both have the required properties. (Note these are definitions of exp, not of ^, but since both have exp being onto the positive surreals one can therefore easily turn this into a definition of ^.) There may also be one due to Alling? Oy... though that one at least seems to be similar to Gonshor's...

Then Wikipedia -- not citing any source -- lists a recursive definition of 2^x, which, I suppose, could be generalized to a^x by just replacing "2" by "a". Does this agree with the others? Again, no idea.

Let's note here something that doesn't fit in -- the function ω^x, commonly used in studying surreals, is *not* part of what I'm looking for. It agrees with the ordinary ω^x when x is an ordinal, and is not supposed to fit in with any more general notion of exponentiation; the above definitions of exponentiation actually do not agree with it. Wikipedia's definition of 2^x seems rather similar to the recursive definition of ω^x, so maybe it agrees with *that* and ordinary ordinal exponentiation instead? :-/ But that would be a surprise -- there have been some extensions of ordinary ordinal operations to surreals, but these are not actually well-defined on all surreals.

And then on top of that we still have the question of whether any of these agree with "natural exponentiation" -- whatever the hell that is!

Guess I can try reading the original German for an answer to that latter question, anyway. Well, once the people at the library get SpringerLink working here again...

-Harry
sniffnoy: (SMPTE)
OK, so I just went and asked Math.SE about the ordinal operations introduced in the previous two posts, since I don't really want to spend my time figuring out more about them.

So! Let's review ordinal arithmetic. Ordinals have their ordinary addition, multiplication, and exponentiation. These can be defined either recursively, or constructively in terms of what order types they represent. The recursive definition allows these to be extended to a general hyper operation on the ordinals, but mostly I won't consider anything further than exponentiation here.

Let's recall the constructive definition of these. α+β is a well-order on α⨿β, with the elements of α coming first and then those of β. αβ is a well-order on α×β, namely, reverse-lexicographic. And α^β is a well-order on the finitely-supported functions from β to α, again reverse-lexicographic.
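
(A quick illustration of the reverse-lexicographic convention: in 2·ω, the pairs (i,n) with i<2, n<ω get ordered with n dominant -- (0,0),(1,0),(0,1),(1,1),... -- giving order type ω, in agreement with 2·ω=ω; whereas ω·2 comes out to ω+ω.)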

(Is there some construction that works for general hyper? Probably not.)

One could also describe these operations in terms of how they work on Cantor normal form, though I don't think you could consider this a "definition". Not sure how exponentiation would work on Cantor normal form, actually; maybe that's just terrible and you can't sensibly describe it that way. Let's perhaps leave that one out in that case.

Edit: Actually, thinking about it some more, it's not too bad. It's kind of bad, but it's not terrible. The point is, it's doable.

But then we also have the natural operations! These give up the continuity properties of the ordinary operations (in the second argument, anyway) to get better algebraic properties. There are natural sum, natural product, and natural exponentiation. (Natural hyper? I really doubt it. Higher operations don't even have nice properties, except for those that the ordinary hyper would already get you anyway.)

Let's shelve natural exponentiation for now. So we have natural sum and natural product. These can each be defined three ways:

1. Recursively. Note that the recursion for multiplication is usually defined in terms of natural subtraction -- since natural addition is cancellative, one can make a group of formal differences of ordinals under addition. But it isn't necessary to do it this way, because one can just unwind the subtraction and state it purely additively. It's just a little nasty.

(Incidentally, if one changes the (usual) recursive definitions by replacing "smallest ordinal larger than all of..." with "smallest ordinal different from all of..." one gets nimber addition and nimber multiplication. I really doubt there's a nimber exponentiation...)

Edit: Duh, of course there's no such thing as exponentiation in finite characteristic -- or at least not in a domain, anyway. Consider the implications of a^(b+c)=a^b·a^c... well, OK, the results aren't contradictory, it's just that we end up with a^b=1 for all a,b, which is not exactly satisfactory.
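
(Spelled out: in characteristic p the exponents satisfy b+b+...+b=0 (p copies), so 1 = a^0 = (a^b)^p; and in a domain x^p=1 forces (x-1)^p = x^p-1 = 0, i.e. x=1. Hence a^b=1 for all a,b.)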

2. Constructively. For α⊕β, we take the poset α⨿β, and then take the largest well-order that extends it. It's not obvious that the supremum is indeed a maximum, but it is. For α⊗β, we take the poset α×β, and then take the largest well-order that extends it. Same comment about supremum vs. maximum.

3. "Computationally". To take the natural sum of two ordinals, one just adds their Cantor normal forms in the obvious way. To take the natural product, one just multiplies their Cantor normal forms in the obvious way, using natural sum to add the exponents.

Worth noting is that while the ordinals are often said to embed in the surreals, really it's the ordinals with the natural operations that embed in the surreals. They can't embed with the ordinary operations since those aren't even commutative!

These operations have basically all the properties you would expect of addition and multiplication (they embed in the surreals -- an ordered field -- after all).

Now we come to natural exponentiation. Natural exponentiation is...? I have no idea. According to Wikipedia, Jacobsthal defined it in 1907, and... that's all it says. Well, that and that it's rarely used. I actually dug up Jacobsthal's paper where he defines it, but firstly it's in German, and secondly even ignoring that it seems like it'll be hard to read. Also, that's just where it's defined, and I'd like to know how it fits into the above framework; I doubt it would even answer all my questions about it. But when I looked for other papers on it, I couldn't find any...

I'll denote the natural exponential as α^β, because that is how Jacobsthal denoted it.

So... can it be defined recursively? Can it be defined "constructively"? There's a nice analogy between how ordinary sum and product are defined, and how natural sum and product are defined. Is this what natural exponentiation is as well? Take finitely-supported functions from β to α, give this the obvious poset structure, and then take the supremum (maximum?) of well-orders extending it? (Note that the category of posets does have an exponential object, but it's the set of all monotonic maps from one to the other, and this is not only totally dissimilar to how ordinary exponential is constructed, but it gives the wrong answer for finite numbers.) Can we give a "computational" definition, too?

Edit: The above "constructive" definition fails very badly. Observe that it only depends on the cardinality of the base and not its order type! Thus it gives answers that are way too large. Still, perhaps it could be salvaged by adding more relations to the poset...

What properties does it have? I would hope it would satisfy:
α^0=1
α^1=α
0^β=0 for β>0
1^β=1
n^k is the ordinary n^k for n,k finite
α^(β⊕γ)=α^β⊗α^γ
α^(β⊗γ)=(α^β)^γ
(α⊗β)^γ=α^γ⊗β^γ
α^β is strictly increasing in β for α≥2
α^β is strictly increasing in α for β≥2 (This one seems like a stretch, for reasons that should become clear below. But it had damn well better at least be weakly increasing or something's *really* wrong...)

But... does it? No idea. (α⊗β)^γ=α^γ⊗β^γ seems like kind of a stretch... except ⊗ is, in fact, commutative, making it not so much of a stretch.

I'd expect it agrees with the exponential in the surreals, but do the surreals even have an exponential? I had assumed so because they're kind of an "infinity kitchen sink", but Wikipedia only mentions powers of 2 and powers of ω. I should probably look this up. (I had kind of also hoped maybe we might get 2^α>α, but if it agrees with powers of 2 in the surreals, this is false, since that satisfies 2^ω=ω. Oh well -- that one was a real stretch anyway.)

I would also hope that ω^α comes out to the ordinary ω^α, come to think of it. That would contradict the property in the above paragraph, but like I said, that one was a real stretch in the first place. And certainly, what with it being a natural operation in the first place, it had certainly better satisfy that α^β is at least the ordinary α^β. I'd also like α^n to be the ordinary α^n for finite n, but there's no way that's true; consider α=ω+1. And it's not like we have α⊗n=α·n for n finite, either (again, consider α=ω+1).

So... time to either try Math.SE, or try reading Jacobsthal...? Well, quite possibly ask MathOverflow -- even though it's not really a research question, I doubt MathUnderflow will be able to answer it. Maybe I should check out Hausdorff's book on set theory and see if it mentions it by any chance? Well, at least I should easily be able to look up the exponential in the surreals...

Edit: Having checked out Hausdorff's book, it only discusses natural sum and natural product, not natural exponentiation. But I may finally have found another paper that discusses it... well, it discusses surreal exponentiation, anyway... actually it seems to be easy to find papers discussing that...

Edit again: None of these papers are helpful. Well, at least I now know that the surreals do have exponentiation and logarithms, and these do have the usual properties, and so they therefore have powers in the obvious way, and these have the usual properties. One thing of note: exp and log are both strictly increasing, meaning powers would be strictly increasing in the base as well as the exponent. Hm... a bit odd to have 2^ω=ω but 3^ω>ω... assuming 2^ω=ω in the first place... damn people not bothering to verify compatibility!

-Harry
sniffnoy: (SMPTE)
Edit: Argh missed some obvious things -- (α|k) and ((α|β))' aren't so terrible after all; I can indeed write down recursions for them. Entry now updated to reflect this.

Note: Last entry has been substantially revised since it was written.

OK. So last entry I introduced α↓β and α↵β, which are analogous to multichoose. I expect these have standard names, but I'm going to hold off on looking them up until I've worked out some more basics myself. We can also consider (α choose k), as well as the reverse-lexicographic version of that, which for some reason I didn't consider.

Also I want to use some different names because the above are too unwieldy for writing here in HTML. So I'll abbreviate (n choose m) as (n|m), and (n multichoose m) as ((n|m)). So I'll write ((α|β)) instead of α↓β, and, uh, ((α|β))' instead of α↵β; similarly (α|β) instead of (α choose β), and (α|β)' for the reverse version. (I think α↓β is a nicer operation than α↵β, so the latter will get the prime. Similarly with why (α|β)' gets the prime.)

Anyway. Let's state some properties of these. Most are just copied over from last entry (with some variable names changed), but some are new:

Properties! )

Notice how (α|k), and ((α|β)), can be expressed recursively via the above properties; we know the case α=0, the case of α+1 is given by the analogues of Pascal's Identity, and finally, both these operations are continuous in α, handling the limit ordinal case. I do not know of a similar recursion for (α|k)' and ((α|β))'; this, along with their discontinuity, is why I consider them less nice. Although ((α|β))' does have better monotonicity/cancellation properties than ((α|β)). Still, these I have had to come up with "tricks" for computing each time.

Addendum: Oops -- now included above are ways for computing these as well (more "Pascal's Identities"); they're not nearly as bad as I thought, even though they lack continuity. Thing is that you have to primarily recurse on β (or k), rather than on α. Of course, really both involve recursing on both...

Examples up to ω, as last time, but now actually correct (I hope):
Examples! )

Questions I still have: Is (α|k)≤(α|k)'? For α infinite and k≥2, is this strict? Can an inequality between ((α|β)) and ((α|β))' be obtained if conditions are placed on the sizes of α and β? (Probably not.)

OK, probably about time I stop thinking about this and just look it up (i.e. ask math.SE...)

Addendum: But not before checking whether I can perhaps prove this via the above recursions, now that I have them...

-Harry
sniffnoy: (Sonic)
Last edit: OOPS! Some computations below are wrong. I've fixed them, made some strikeouts to some other things, etc.; any further corrections will be put in a new entry.


Edit next day: Strict increasingness in "base" added; continuity question answered. Also note about finite exponent.

OOPS next day: Some of the computations below were incorrect. Fixed now.

Also: New question at bottom. Adding notes about (α choose n).

Another possible OOPS: Previously I asserted α↓n=α↵n for n finite, but rethinking it's not clear to me that this is true. Will have to think more on this.

I assume these have been studied before but I've no idea what they're called. I suppose I should ask MathUnderflow.

Julian posed the following problem the other day: Say we have well-ordered sets α and β, and consider the set of all weakly decreasing functions from β to α. Order them lexicographically; is this a well-order?

The answer, unsurprisingly, is yes; this isn't too hard (though it took him and Jordan and me like an hour to figure it out), so I'll not give a proof here. Actually Julian was really only interested in the case α=β=ω, but he figured it ought to be true in general.

Furthermore, if we order reverse-lexicographically, this is a well-order as well. (This one is easier to prove.)

I'll denote the first of these by α↓β, and the latter by α↵β since I have no idea what they're normally called. (Actually I'm using \searrow and \swarrow, respectively, but those don't exist in HTML unless I want to look up the Unicode code points.)

Anyway. The point is, Julian and Ashley were looking at a certain decreasing sequence in ω↓ω, wanted to be sure it terminated, hence the question. But what's noteworthy is that the sequence was only well-defined because, for 1≤n<ω, ω↓n=ω↵n=ω, which is lower than I would have expected.

It's easy to compute in the finite case; m↓n=m↵n=(m multichoose n). I'd call the operation (α multichoose β) except there's two of them. It's tempting to define (α choose β) by restricting to strictly decreasing functions, but of course this is 0 for any β≥ω! Still, it's perhaps worth thinking about for n finite...

To state some trivialities, obviously (α choose 0)=α↓0=α↵0=1 for all α, (α choose 1)=α↓1=α↵1=α for all α, 0↓α=0↵α=0 for α>0, and 1↓α=1↵α=1 for all α. And α↓β and α↵β are strictly increasing in α (for β>0), and weakly increasing in β (for α>0). While (α choose n) is strictly increasing in α for n>0, and weakly increasing in n for α infinite. Also α↓(β+γ)≤(α↓γ)(α↓β), and α↵(β+γ)≤(α↵β)(α↵γ). And of course (α choose n)≤α↓n=α↵n≤α^n for n finite.

Since ω↓ω was the original motivation, I've computed α↓β and α↵β for α,β≤ω. At least, I think I've computed these correctly -- other people may want to check. In the below, n and m will always be finite. I'm not listing special cases which are obvious and which would be trouble to list.

m↓n=(m multichoose n)
ω↓n=ω for n≥1
m↓ω=ω^(m-1)+1 for m≥2
ω↓ω=ω^ω

m↵n=(m multichoose n)
ω↵n=ω^n
m↵ω=ω·(m-1)+1 for m≥1
ω↵ω=ω^(ω+1)
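
The finite rows are easy to sanity-check by brute force; here's a quick Python snippet (mine) that enumerates the weakly decreasing tuples and compares the count against multichoose:

    from itertools import product
    from math import comb

    def weakly_decreasing(m, n):
        # All weakly decreasing n-tuples with entries below m.
        return [t for t in product(range(m), repeat=n)
                if all(t[i] >= t[i + 1] for i in range(n - 1))]

    for m in range(1, 6):
        for n in range(1, 6):
            # In the finite case only the count matters, and it should be
            # (m multichoose n) = C(m+n-1, n).
            assert len(weakly_decreasing(m, n)) == comb(m + n - 1, n)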

One thing worth noting: Assuming my computations above are correct, α↵β is not continuous in either variable. α↓β, however, while discontinuous in β, might be continuous in α. I would expect not, but it is consistent with the above computations.

Update next day: Yeah, it's continuous in α; this is actually pretty easy, though I personally still find it a little surprising. (α choose n) is continuous in α as well.

New question: Is α↵β≤α↓β? I have no idea why this might be so, but it holds in all examples so far...

Answer added in last edit: No, I miscomputed ω↵n; the correct value shows this is false.

So that's kind of neat. I don't know, anyone know anything about these operations? (Or have any corrections?) I guess I should probably ask Math.SE and find out what they're actually called.

Tangentially, I think I'll also try asking there if anyone knows anything about this Jacobsthal natural power operation Wikipedia mentions...

-Harry
