RETRACTION: This proof doesn't work. It's not actually surjective like I said. It does, however, work when γ=2. That's... something, but not at all what I claimed. Oops. The lower bound works, but is kind of pointless without the upper bound. Anyway, original entry follows.
So. Let's talk about transfinite sequences with finite image over a WPO. We have a WPO X; we have an ordinal α and we are looking at sequences of length less than α with elements drawn from X, but each sequence is only allowed to use finitely many elements of X. I'll denote the result s_α(X).
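To fix notation (I'll use this order constantly): the order here is the usual embedding order, the transfinite analogue of Higman's subword order. Writing ≤_X for the order on X:

$$ (u_\xi)_{\xi<\lambda} \;\le\; (v_\eta)_{\eta<\mu} \quad\iff\quad \text{there is a strictly increasing } f:\lambda\to\mu \text{ with } u_\xi \le_X v_{f(\xi)} \text{ for all } \xi<\lambda. $$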
Note that when α>ω, then assuming |X|≥2, this is no longer antisymmetric, so I'm implicitly considering it modulo equivalences. (For example, if a and b are incomparable, the length-ω sequences abab… and baba… each embed into the other but are not equal.)
Anyway. This, as proved by Nash-Williams, is a WQO. We do need the finite-image restriction; the theorem doesn't work without it (unless our starting WPO was in fact a BPO, in which case it's a theorem that we get a BPO out). Of course, if α≤ω, we also don't need to include the finite-image restriction, because, hey, our strings are finite anyway, that restriction is implicit. (This obviously also happens if X is finite, although in that case it's also true that X is a BPO.) When α=ω, s_ω(X) is just X*, the set of finite words on X. This case is, of course, just the well-known Higman's Lemma, which is not too hard. Higman's proof was published in 1952.
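For reference, the statement being invoked: if X is a WQO, then X* is again a WQO under the word-embedding order

$$ x_1\cdots x_m \;\le\; y_1\cdots y_n \quad\iff\quad \text{there exist } j_1<\cdots<j_m \text{ with } x_i \le_X y_{j_i} \text{ for each } i. $$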
But proving it for general α, apparently, was harder. In 1954 Rado managed to prove it for α=ω³. In 1959, together with Erdős, he managed it for α=ω^ω. Finally, in 1965, Nash-Williams proved it for general α.
But in fact, I claim, the general case is actually not much harder than Higman's Lemma, and follows from it. I have come up with an astonishingly simple proof of Nash-Williams's theorem, one whose essence I will relate in this blog post. (When will I write this up as a proper paper? Um... eventually. I have a lot of backlog.)
We're going to restrict to the case where α=ω^γ, since that is enough to prove the theorem. Before I give the proof of the general case, though, I want to give a proof of a special case that is nicer and simpler, namely, when X is not just a WPO but in fact a well order -- say it's an ordinal β.
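(Why is α=ω^γ enough? Every α satisfies α≤ω^γ for some γ, and

$$ \alpha \le \omega^\gamma \;\implies\; s_\alpha(X) \subseteq s_{\omega^\gamma}(X), $$

and a subset of a WQO, with the restricted order, is again a WQO.)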
So, why is s_{ω^γ}(β) a WPO? Because (β×γ)* surjects monotonically onto it. The map simply sends each individual symbol (x,y) to a string of ω^y copies of x, and then concatenates the results together. Now, if you regard s_{ω^γ}(β) simply as a set and ignore the equivalences -- remember, it's not antisymmetric! -- then this is not a surjection. But once you account for equivalences, it is. There you go. Done.
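In symbols (my notation, nothing standard): the map is

$$ \varphi : (\beta\times\gamma)^* \to s_{\omega^\gamma}(\beta), \qquad (x_1,y_1)\cdots(x_n,y_n) \;\mapsto\; x_1^{\omega^{y_1}} \frown x_2^{\omega^{y_2}} \frown \cdots \frown x_n^{\omega^{y_n}}, $$

where x^{ω^y} means the sequence of ω^y copies of x and ⌢ is concatenation. The image lands in s_{ω^γ}(β) since each ω^{y_i}<ω^γ and ω^γ is closed under addition; monotonicity boils down to the fact that (x,y)≤(x',y') makes x^{ω^y} embed into x'^{ω^{y'}}. Higman's Lemma makes (β×γ)* a WQO, and the image of a WQO under a monotone surjection is a WQO.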
For the general case, though, this isn't good enough. Let's define D(X) to be the set of lower sets in X which can be written as the downward closure of a finite set (these are ordered under inclusion). (This is the same as considering finite subsets of X and ordering them by domination (F ≤ G iff every element of F is ≤ some element of G), which also lacks antisymmetry; or rather, it's the same once you mod out by equivalences.) Our alphabet, now, is no longer just going to be X×γ. Instead, our alphabet is (X×{0})∪((D(X)\{∅})×(γ\{0})). But the proof is much the same, and we get Nash-Williams's theorem with a simple application of Higman's Lemma.
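I won't spell the map out in full here, but the idea is this (take the details as a sketch): a symbol (x,0) contributes the one-term sequence x, while a symbol (D,y) with y>0, say with D the downward closure of {x_1,…,x_k}, contributes a block of order type ω^y cycling through the generators:

$$ (x,0) \mapsto x, \qquad (D,y) \mapsto (x_1 x_2 \cdots x_k)^{\omega^y}, $$

with a word mapping to the concatenation of its symbols' contributions. (Note that k·ω^y = ω^y for y>0, so the block really has order type ω^y; and up to equivalence the enumeration of the generators shouldn't matter, since any two such blocks embed into each other.)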
But that's not all! Because, you see, this proof can be used to get upper bounds on the maximal order type of s_α(X). And the result blows the existing bounds out of the water.
Well, OK -- I say "the existing bounds", but in a sense there aren't really existing bounds. Schmidt gave upper bounds for o(s_α(X)) in her habilitation thesis, but her proof actually has a hole in it that, to my knowledge, has never been corrected. (I cannot take any credit for spotting this hole; it was pointed out to me by Andreas Weiermann.) Nonetheless, they're still what I compared against. And they were after all correct, seeing as I was able to drastically improve on them! :)
Like, the bounds you get from this are, at worst, triply exponential in β and doubly exponential in γ (so, you might say, singly exponential in α). Schmidt's bounds are, um, much larger than exponential. We're talking Klammersymbols and Veblen functions. So yeah. This is much better.
The question is, are the bounds we can get from this proof the best possible? (Excluding the trivial case where β≤1.) Of course, this only directly applies when α is a power of ω; when it's not, you can get bounds by piecing together the cases where it is, but I'm not going to go into detail on that here -- I want to focus on the core case where α is a power of ω. My suspicion is that the answer is yes, but proving matching lower bounds can be quite difficult. So far I can prove optimality in a few cases, but not many. I don't know if I'll actually get much further with this anytime soon, or really have time to work on it further, but with such a large improvement I just had to put this out there...
ADDENDUM: OK, actually, I found a simple way to get lower bounds -- not matching lower bounds, just lower bounds of roughly the same order, although in a few special cases they happen to match.