sniffnoy: (Sonic)
By "irresponsible speculation", I mean "speculation without having done my homework first", i.e., without having even tried to look at the existing literature beyond this paper or actually really tried anything at all.

So, Juan Arias de Reyna recently pointed out to me the following paper: Commuting Probabilities of Finite Groups, by Sean Eberhard.

EDIT DEC 1: The paper has been updated; the following refers to the original version of the paper. See the comments for discussion of the updated version.

So: If you have a finite group, you can consider the probability that a randomly chosen ordered pair of elements from it commutes. You could then consider the set of all probabilities obtained this way; this is some set of rational numbers between 0 and 1.
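
As a concrete illustration (a minimal sketch of my own, in Haskell since that's what my code is in; none of this is from the paper): the commuting probability of a finite group, given its elements and its operation.

    import Data.List (permutations)
    import Data.Ratio

    -- Commuting probability: the fraction of ordered pairs (a,b)
    -- with ab = ba.
    commutingProbability :: Eq a => [a] -> (a -> a -> a) -> Rational
    commutingProbability els op =
      fromIntegral (length [ () | a <- els, b <- els, op a b == op b a ])
        % fromIntegral (length els * length els)

    -- Example: S_3 as the permutations of [0,1,2] under composition.
    s3 :: [[Int]]
    s3 = permutations [0, 1, 2]

    compose :: [Int] -> [Int] -> [Int]
    compose p q = map (p !!) q  -- (p.q)(i) = p(q(i))

    -- commutingProbability s3 compose == 1 % 2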

In this paper, Eberhard shows that this set is in fact reverse well-ordered! Its order type is of the form ω^α where 2≤α≤ω^2. (Though Juan Arias de Reyna points out that it's not too hard to find examples that show that in fact α≥ω, so the order type is at least ω^ω.) He also shows that all the limit points of this set are rational. I think it should be pretty obvious why I'm interested in this! (Even though I have no plans to actually study this, my hands are full as it is.)

Now for the irresponsible speculation:

1. Instead of just doing finite groups, one could generalize to compact groups; one can consider probabilities there as well. How would this affect the set? It would be really nice if this just gave you the closure of the probability set for finite groups! Though Arias de Reyna tells me it's conjectured that the finite group commuting probability set is closed in (0,1], so that would be saying that using general compact groups only gives you 0 in addition. (I mean, it certainly does give you 0 in addition!)

2. I'm reminded of some work of John Wiltshire-Gordon and Gene Kopp. They considered probabilities of randomly chosen elements from a compact group satisfying some general word; the case of commuting probabilities is the case where the word w is aba^(-1)b^(-1). I wonder if the same phenomenon might be seen for other words.

Probably it would be best to first look at words in one variable. Obviously using w=a generates an ω if we stick to finite groups and an ω+1 if we allow general compact groups -- not ω^ω, but still well-ordered. As for a² or a³... well, I don't know, and I don't really intend to work on it as I said above, but it seems vaguely plausible and it's an interesting question!
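
For instance (same disclaimer: my own sketch, nothing from the literature), the one-variable case w=a^k is just the proportion of elements whose k-th power is the identity:

    import Data.Ratio

    -- Probability that a single random element a satisfies a^k = e,
    -- given a finite group's elements, operation, and identity.
    wordProb :: Eq a => [a] -> (a -> a -> a) -> a -> Int -> Rational
    wordProb els op e k =
      fromIntegral (length [ a | a <- els, pow a == e ])
        % fromIntegral (length els)
      where pow a = foldr op e (replicate k a)

    -- With s3 and compose from the previous sketch:
    -- wordProb s3 compose [0,1,2] 2 == 2 % 3
    -- (the identity and the three transpositions square to e).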

So yeah that's basically all I have to say on the matter.

-Harry
sniffnoy: (Kirby)
Hey hey hey here you go!

So yeah, obviously this was mostly written quite a while ago (it's just Chapter 4 of my thesis, after all), but I'm only putting it on arXiv now.

...I'd write more about it, but honestly, if you're still reading my LJ, chances are you've heard a lot of it before. (At least, I think I've written about it here before...) So I'll stop here. Well unless someone goes in the comments and posts "What the hell is this?", then probably I'll explain. :P

-Harry

It's done!

May. 27th, 2014 08:31 pm
sniffnoy: (Kirby)
And so today I turned in the final version of my thesis and got my certificate of completion. If there are any more mistakes in there, I'm not fixing them. At least, not until I try to get Chapters 4 and 5 published (though Jeff tells me Chapter 5 will need to be split before it can be published).

It's going on ProQuest of course but also I figure I may as well just upload it to my website at UMich? (Hm, that should really be updated.) There's no reason it shouldn't be freely available. I'll also finally put the code up, for those of you who want it. A warning -- the code is far from readable; I didn't really have time to go back and make it readable, and since it wasn't officially submitted to the school, I didn't really have to. I did at least take the time to document all the functions, but I didn't take the time to put them in a sensible order. (Juan Arias de Reyna, meanwhile, tells me that he may rewrite the whole thing in Python.) Well -- I'll post an update here once I've done that.

EDIT: OK, website is updated, with the thesis and code. You wanted to see how awful my code is, Mickey? :)

Anyway, hooray! Now I actually have some free time for a bit. Will I actually write here more often? Uh, maybe...

-Harry

The defense

May. 9th, 2014 07:28 pm
sniffnoy: (Kirby)
Today was my defense. It went well. I "broke everybody's legs", as Heidi wished me yesterday. Jeff had me make slides for it a few days ago, so I mostly used the slides rather than the blackboard; fortunately we didn't have much trouble getting the projector set up.

Committee was Jeff, Andreas Blass, Alexander Barvinok, Martin Strauss, and Kevin Compton (from the CS department). Karen Smith showed up as well. Also Julian (my academic older brother) happened to be in town so he was there. And various other people; this wasn't supposed to be a complete list, so let me head off the possibility before it turns into one. And of course a number of my housemates showed up -- Beatrix, Angus, Seth, Noelle... perhaps one or two more I'm forgetting? Not sure that any of them understood it besides Beatrix.

Still, the topic obviously was pretty accessible to those with a bit of math background (though I did make a point of stopping to explain what ω^ω is; that's been a sticking point when I've talked about this before). Which meant that I could actually get into the proofs quite a bit. Most defenses I've gone to, the subject and the statement of the results take so long to explain that there's barely any time to go into how any of it was proved, but I actually got to do quite a bit of that. I was afraid I would go over time, and I did a little, but only by like 5 minutes. Which is probably by about how much I started late anyway.

Kevin (er, by which I mean Kevin Carde, not Kevin Compton) mentioned to me afterward that he really liked the argument for why, when k is an integer, the order type of D∩[0,k) is exactly ω^k and not any larger. (Look at what happens for k-ε and take a limit.) That was a pleasant surprise.

Most unexpected question at the defense (I forget whether this was Compton or Strauss): "What if you allow addition, multiplication, and nim-addition?" My response was that, since 3^k⊕1=3^k-1, well-ordering should fail for the same reason as when you allow subtraction. But up till then, in all the variants of the problem I had considered, it had never occurred to me to consider nim-addition. (Or nim-multiplication, or nim-inversion.) Also Compton asked about ω^ω showing up in other contexts, which gave me a chance to say a bit about how it's possible it might show up with addition-multiplication chains as well... as well as another less speculative generalization which I then forgot to mention, but oh well.

Spent a bunch of time talking to Kevin and Julian and Karen afterward. Mentioned the fact that the closure of the defect set is the set of numbers of the form "a defect plus a whole number"; they were surprised that the latter was a closed set. Since the arguments for these facts were largely unloaded from memory at the time, all I could really say was, "You know, when you put it that way, it does seem weird."

In addition to the usual little math department celebration (Jeff got me sparkling grape juice, since I don't drink), when I got home I found Beatrix had gotten me a cake as well! When Noelle mentioned this via IM to Justine (who has moved out for the summer and is down in Texas), she said [something along the lines of] "Is it an ice cream cake? It should be an ice cream cake." I told Noelle she can tell Justine, I'm glad to hear she cares, but the cake is fine.

EDIT: OK, actually Justine just asked if it was an ice cream cake; she didn't say it should be. Since it was Justine, I imagined that was the implication, but perhaps that wasn't intended.

Anyway, that's done. I mean, aside from writing up the revisions to my thesis and getting those turned in...

-Harry
sniffnoy: (SMPTE)
So! Juan Arias de Reyna and Jan Van de Lune recently posted to arXiv their paper "Algorithms for determining integer complexity". In it, they A. improve the known bounds on time complexity for computing ||n||, and B. get a new bound on ||n|| that works for almost all n.

(Hey, I'm blogging about things adjacent to my work again! Well, a little, anyway. More will have to wait.)

To review -- a big problem in integer complexity is determining the limsup of ||n||/(log n), which I'll denote C_max. Certainly C_max≤3/(log 2), or about 4.328, since ||n||≤3 log_2 n for all n≥1; this bound follows from writing n in base 2. And Josh Zelinsky, in a paper he still needs to write, has lowered this to ||n||≤27 log_1260 n for all n≥1, so C_max≤27/(log 1260), or about 3.782. The highest known value of ||n||/(log n) occurs at n=1439, for a value of 26/(log 1439), or about 3.576. This seems to be a unique occurrence, so one ought to have C_max<26/(log 1439), but the current best upper bounds have not beaten this milestone.
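
To make the base-2 mechanism concrete, here's a minimal sketch (my own illustration) of the witness for that bound: build n by Horner's rule from its binary digits, so a doubling costs two 1s and a doubling-plus-one costs three.

    -- An upper bound on ||n|| from writing n in base 2; one can check
    -- base2Bound n <= 3 log_2 n once n >= 2.
    base2Bound :: Integer -> Integer
    base2Bound 1 = 1
    base2Bound n
      | even n    = base2Bound (n `div` 2) + 2
      | otherwise = base2Bound (n `div` 2) + 3

    -- E.g. base2Bound 1439 == 28, while ||1439|| = 26.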

For comparison, if it is indeed the case that ||2^k||=2k for all k≥1, then one would have C_max≥2/(log 2), which is about 2.885; Josh has previously suggested that perhaps the limsup is exactly this number. (Though Iraids et al suggested that perhaps it's even larger.) However, at present, nobody's even been able to prove that the limsup is any greater than the liminf, which is 3/(log 3), or about 2.731 (and which is also an absolute lower bound). And indeed, various people have suggested that perhaps the limsup simply is the liminf[0]. Josh and I attempted a while ago to show it was at least slightly larger, but that ended up not working out, though I'm of the opinion it's worth re-exploring.

Anyway. So the state of C_max is not good. But we can also define a number I'll call C_avg: We'll define C_avg to be the inf of all C such that ||n||≤C log n for almost all n, i.e., on a set of density 1. (So certainly 3/(log 3)≤C_avg≤C_max.) And it's here that Arias de Reyna and Van de Lune have made an improvement. (But first a bit more history, if you don't mind.)

A number of people have noted that based on writing n in base 2, one can show C_avg≤5/(2 log 2), or about 3.607. (Already this is better than Josh's bound.) Richard Guy and John Isbell took this a step further and tried writing n in base 24, yielding a bound of C_avg≤265/(24 log 24), or about 3.474. (This is even better than 26/(log 1439), which the current C_max bounds are not!) Well, now, based on writing n in base 2936, they've shown that in fact

C_avg ≤ 15903451/(2936 log 2936)

which is about 3.321. Quite an improvement, in my opinion!

(As for the true value of C_avg, who knows. Maybe it and C_max are both equal to 2/(log 2). Well -- if C_max is at most 2/(log 2), then so is C_avg, and if C_max is equal to 3/(log 3), then so is C_avg; and if either is true, then even this new bound on C_avg is quite a ways away from the true value. Still. A substantial improvement.)

So what's going on here? How did they do this? Well, it's the same method as Richard Guy and John Isbell used, just applied with a much higher base with the help of a bit of automation. Let's go into a bit more detail.

Let's define D(b,r), as Arias de Reyna and Van de Lune do, to be the smallest number of ones needed to turn a number x into the number bx+r. That's not a formal definition, but it's not too hard to write one down; you can write down a recursion similar to that for integer complexity, allowing you to compute it algorithmically. For r=0, D(b,r) will simply be ||b||. (And for b=0, D(b,r) will simply be ||r||, though we won't care about that here.) We'll only be considering here D(b,r) for 0≤r<b, though one can consider it for r≥b as well. Note, by the way, that (excluding r=0 or b=0) one always has D(b,r)≥max(||b||,||r||). (Actually, so long as r>0, one has D(b,r)>||b||, and so long as b>0, one has D(b,r)>||r||.)
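
Here's my guess at what such a recursion looks like, as a naive Haskell sketch. Caveat: this encodes my reading of the definition -- an expression built from a single copy of x and some 1s using + and × -- and may not match the paper's formalization exactly. First brute-force ||n||, then D(b,r):

    -- Naive integer complexity ||n||, memoized in a lazy list.
    cpxTable :: [Integer]
    cpxTable = map go [1 ..]
      where
        c k = cpxTable !! fromIntegral (k - 1)
        go 1 = 1
        go n = minimum $
          [ c a + c (n - a) | a <- [1 .. n `div` 2] ] ++
          [ c d + c (n `div` d) | d <- [2 .. n - 1], n `mod` d == 0 ]

    cpx :: Integer -> Integer
    cpx n = cpxTable !! fromIntegral (n - 1)

    -- D(b,r) for 0 <= r < b: cheapest way to build b*x + r from x.
    -- The last operation either adds a constant c (cost ||c||) or
    -- multiplies by a constant d (cost ||d||), which then has to
    -- divide both b and r. Very naive; fine for small b only.
    dbr :: Integer -> Integer -> Integer
    dbr 1 0 = 0
    dbr b r = minimum $
      [ dbr b (r - c) + cpx c | c <- [1 .. r] ] ++
      [ dbr (b `div` d) (r `div` d) + cpx d
      | d <- [2 .. b], b `mod` d == 0, r `mod` d == 0 ]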

With this language, we can make some nice statements. The method of getting upper bounds on Cmax by writing numbers in different bases simply becomes the statement that, for any b≥2,

C_max ≤ max_{0≤r<b} D(b,r)/(log b)

...or does it? It's possible I'm neglecting some subtleties with the initial digit here. Well -- it's ultimately irrelevant, because so far nobody's ever found a base that does better than base 2 for getting an upper bound on Cmax! And in base 2 one does not have to worry about such things.

But what certainly is true, and much more useful, is that

C_avg ≤ avg_{0≤r<b} D(b,r)/(log b);

this is the method of writing things in base b to get upper bounds on C_avg. Well, Arias de Reyna and Van de Lune have done this with b=2936, and gotten a pretty nice bound.
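
Under the dbr sketch above, the right-hand side of this inequality is easy to tabulate for small bases (boundAvg is my name, not theirs):

    -- The average-over-digits bound: (sum of D(b,r), 0 <= r < b)/(b log b).
    boundAvg :: Integer -> Double
    boundAvg b =
      fromIntegral (sum [ dbr b r | r <- [0 .. b - 1] ])
        / (fromIntegral b * log (fromIntegral b))

    -- boundAvg 2 ~ 3.607, and boundAvg 24 should recover Guy and
    -- Isbell's 265/(24 log 24) ~ 3.474. (Don't try b = 2936 with
    -- anything this naive.)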

But -- and now we get to part A -- this isn't the only thing they've done with the values D(b,r)! They present also in their paper an algorithm for computing ||n||, which is apparently due to Martin Fuller. Now, we already knew that you could do better than O(n^2) in terms of the time required to compute ||n||; Srinivas and Shankar presented an algorithm that showed that the exponent 2 could be lowered to log_2 3, or about 1.585.

Arias de Reyna and Van de Lune present a time analysis of Fuller's algorithm, and show that it runs in time O(n^α), where α is about 1.246. Interestingly, their time analysis depends on the numbers D(b,r) -- once again, you pick a base b, do computations involving D(b,r) for 0≤r<b, and get out a bound. In this case, the computation involved is rather more complicated than just taking an average, and doesn't yield a number that can be written down in a nice short exact form, but it still is a computation based on D(b,r) for 0≤r<b. The best base they found for this problem was 66, which yielded the above value of α.

So, pretty neat! A little different from what I'm mostly working on, but, it's not like integer complexity is a widely-studied thing. And it shows that the numbers D(b,r) have more than one purpose, and may be worth further study.

-Harry

[0]Indeed, if instead of the integer complexity one considers the addition chain length l(n), which is similar in a number of ways (which reminds me, I've never really posted here about any of my work regarding addition chains; I should probably get around to that sometime), one has that the liminf and limsup of l(n)/(log n) are both equal to each other, both being equal to 1/(log 2).
sniffnoy: (Dead face)
So I decided today to try to get some work done other than writing. But what? I could work on this hard problem that I've gotten nowhere on... or that hard problem I've gotten nowhere on... oh! I'm currently writing the paper where I explain the computational side of what I've done, so this would be a good time to go back over the code and improve it -- there's a few optimizations I've intended to implement for a long time. Well, one big one, anyway.

Surprisingly, playing around with the program, I found a bug I never noticed before: When run in "all information" mode[0], and input a number which is divisible by 3 a fair number of times, it sometimes just crashes rather than output anything.

I have no idea why this is -- well, OK, I haven't sat down and looked into it, but knowing how my code is structured[3], I already have a guess. Still. Guess I'd better fix that before worrying about how to speed things up...

-Harry

[0]Meaning, give information about ||3^k n|| for all k≥0, instead of just outputting the stabilization length and stable complexity.
[3]At least, I think I remember...
sniffnoy: (Kirby)
Here's a link.

For those of you that are just tuning in: Let's define ||n|| to be the complexity of n, by which in this context we mean the smallest number of 1s needed to write n using any combination of addition and multiplication. (Note that this is the number 1, not the decimal digit 1. Allowing you to write "11" and say you've only used two 1s would just be silly.)

Then we can define the defect of n, denoted δ(n), to be the quantity ||n||-3 log_3 n; and we can then consider the set of all defects δ(n), as n ranges over all natural numbers. Surprisingly, this set is well-ordered, and its order type is ω^ω.
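
(For the curious: assuming a function cpx computing ||n|| -- a naive sketch of one appears in the Arias de Reyna-Van de Lune entry further up this page -- the defect is a one-liner. Names here are mine.)

    -- delta(n) = ||n|| - 3 log_3 n. E.g. defect 1 = 1.0,
    -- defect 2 ~ 0.107, defect 3 = 0.0, defect 4 ~ 0.214.
    defect :: Integer -> Double
    defect n = fromIntegral (cpx n) - 3 * logBase 3 (fromIntegral n)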

...OK, this won't be surprising to you if you've spoken to me anytime in, say, the past several years. But it should be a surprise to almost everyone else. And it's pretty damned neat regardless.

Thanks are of course due to Joshua Zelinsky -- who, after all, defined the object of study of this paper in the first place (defect, that is, not complexity) -- and to Juan Arias de Reyna who not only helped a lot with the editing but also helped organize several of the ideas in the paper in the first place. And other people, but, well, you can check the acknowledgements if you really care about that.

We'll see where I can get this published. In the meantime, this should be quite a bit more readable than the old draft sitting around on my website.

Now I guess it's on to the next paper (for now)... or rather, it already has been for a while...
sniffnoy: (Chu-Chu Zig)
EDIT: Clarity.

Here's another entry from the file.

It's an old idea that addition and multiplication of whole numbers don't get along very well. I attended a talk by my advisor recently on the subject. But one thing that struck me from the examples he used is that the idea of "addition and multiplication don't get along very well" seems to take two distinctly different flavors.

One of his examples was the abc conjecture, and some related ideas; another was Gödel's incompleteness theorem. But the first of these, essentially, says that addition predictably destroys multiplicative structure. While the second says that the interaction of addition and multiplication is so tangled as to be unpredictable.

(His third example was my own well-ordering result regarding the defects of integer complexity, but honestly I'm not sure it even fits into this category at all. The set of defects has some really nice structure! But that gets into stuff (due to a combination of Juan Arias de Reyna and myself) I haven't really talked about here and probably won't get to for a while.)

Anyway, I don't really know where I'm going with this. I think my point just is, "Addition and multiplication don't get along" seems to really be two different ideas actually.

-Harry
sniffnoy: (Dead face)
So. As you probably know, I'm currently writing a paper in which I prove that a certain subset of the real numbers is well-ordered with order type ω^ω. And yes it should have been done like a month ago (or probably earlier). Real life has introduced some delays.

Anyway. The point is, in order to do this, I need to cite various set theory/order theory/point set topology stuff, to handle ordinals embedded in larger totally ordered sets (which here is always the reals). I have a section of the paper where I introduce all the stuff I need from there. I'd held off on actually writing it until pretty recently though. I didn't actually know where I'd cite the stuff from; I figured it wouldn't be too hard to find.

So Jeff suggested I ask Andreas Blass where I might find citations for these things. His response on hearing just what I wanted was, yes, that's certainly easy, but I'm actually not certain you'll find it in the literature. Try the following really old books; they cared a lot about this sort of thing around the turn of the century.

So I got the books Andreas suggested out from the library, and, unfortunately, they mostly didn't help. So, I'm having to write most of the proofs myself.

Anyway. The point of this entry was a particular thing. Say we have an ordinal α, and we let β be the set of all limit ordinals less than α. How can we express β in terms of α? Sounds easy, right? I mean it's basically just "dividing by ω", right?

Well basically yes, but if you actually want the order type on the nose, well, it's surprisingly easy to screw up. Now actually all I really need is that if α<ω^(n+1) (n finite), then β<ω^n, so I don't need to prove a whole formula. (Actually, I guess if you just replaced n+1 with 1+n, this statement would still be true for n infinite.) But today I sat down and figured out just what the correct formula was (because you know I couldn't find this listed anywhere) so I thought I'd record it here for reference.

First off, if α is finite, there are no limit points. Otherwise...

Say α has Cantor normal form ω^(γ_k)·a_(γ_k) + ... + ω^(γ_0)·a_(γ_0), and assume that γ_0=0 (we're going to allow the a_i to be 0). Define α/ω (this may have a standard meaning but I forget, oh well) to mean ω^((γ_k)-1)·a_(γ_k) + ... + ω^((γ_1)-1)·a_(γ_1). (When I write γ-1 for an ordinal γ, I mean subtracting the 1 off the beginning. So for a finite ordinal this is subtracting 1, and for an infinite ordinal it does nothing.)

Then the order type of β is α/ω-1 if a_0=0, and is (α/ω-1)+1 if a_0>0. (Yeah, that notation is kind of crappy -- it looks like (γ-1)+1 should just be γ (and often it is). But we're subtracting 1 off the beginning, not the end, so rather we have 1+(γ-1)=γ, while (γ-1)+1 is only γ if γ is finite. I guess what we need here is some additive equivalent of the \ that gets used for division-on-the-left in certain contexts. Oh well.)
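
Since this is exactly the sort of formula that's easy to screw up, here's a quick sanity-check implementation (my own, and only for ordinals below ω^ω, where the exponents are finite):

    -- Cantor normal form of an ordinal below ω^ω, as (exponent,
    -- coefficient) pairs, exponents strictly decreasing, coefficients
    -- positive; e.g. ω^2·2 + ω·3 + 5 is [(2,2),(1,3),(0,5)].
    type CNF = [(Int, Integer)]

    -- "Dividing by ω": drop the constant term, shift exponents down.
    divOmega :: CNF -> CNF
    divOmega a = [ (e - 1, c) | (e, c) <- a, e >= 1 ]

    -- Subtracting 1 off the beginning: does nothing to an infinite
    -- ordinal, subtracts 1 from a finite one.
    beginSub1 :: CNF -> CNF
    beginSub1 a@((e, _) : _) | e >= 1 = a
    beginSub1 [(0, 1)] = []
    beginSub1 [(0, c)] = [(0, c - 1)]
    beginSub1 [] = []

    -- Ordinary +1 (adding 1 at the end).
    addOne :: CNF -> CNF
    addOne a = case reverse a of
      (0, c) : rest -> reverse ((0, c + 1) : rest)
      rest          -> reverse ((0, 1) : rest)

    -- Order type of the set of limit ordinals below α, per the
    -- formula above.
    limitsBelow :: CNF -> CNF
    limitsBelow a
      | all ((== 0) . fst) a = []                      -- α finite
      | sum [ c | (0, c) <- a ] == 0 = beginSub1 (divOmega a)
      | otherwise = addOne (beginSub1 (divOmega a))

    -- E.g. limitsBelow [(2,2),(1,3),(0,5)] == [(1,2),(0,4)]: below
    -- ω^2·2 + ω·3 + 5 the limit ordinals are the ω·m (order type ω),
    -- then the ω^2 + ω·m (another ω), then ω^2·2 + ω·m for m ≤ 3
    -- (four more), for ω·2+4 in all.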

-Harry
sniffnoy: (Kirby)
Numbers with Integer Complexity Close to the Lower Bound, by me and Joshua Zelinsky, on arXiv.

It's to appear in INTEGERS as well, but of course that will have to wait.

Again, many thanks to Jeff Lagarias, Juan Arias de Reyna, and... oh, hell, just go see the acknowledgements if you want to see everyone who helped on this.

I should go update my UMich website now...

-Harry
sniffnoy: (Kirby)
Good (if expected) news! (Though not expected this soon, at least not by me.)

Josh's and my paper, "Numbers with Integer Complexity Close to the Lower Bound", has been accepted for publication in INTEGERS (assuming we fix two typos). And then do a proofreading pass, which I guess one of us will have to actually do.

I guess it's now a bit different than whatever the last version I uploaded to my UMich website was, but, eh, not going to bother fixing that; when it's actually published I'll just change the link...

-Harry
sniffnoy: (Kirby)
It worked!

Time it took Jānis Iraids to compute ||n|| for n≤10^12, and thus verify ||2^a||=2a for 1≤a≤39, presumably with access to a supercomputer: 3 weeks[0]

Time it took me to verify not only that ||2^a||=2a for 1≤a≤40, but that ||2^a 3^k||=2a+3k for a≤40 and a+k≥1, on my laptop: 12 minutes.

Hooray for better algorithms!

Of course, none of what I've written here is verifiable until I've posted the code, but first I want to reorganize it, add some additional features, and ideally actually make it readable...

(I'm also going to hold off on more computations till then, as one thing I'd want to do is add an "indefinite" mode where you can just cut it off at any point, so I don't have to decide beforehand how high to compute up to...)

-Harry

[0]Source: I asked him on Twitter. I didn't ask about the computer used.
sniffnoy: (Chu-Chu Zig)
What with the conclusion of writing this first paper (gods willing) -- and the end of classes -- I have some brief time to actually do some *thinking* about integer complexity again before Jeff makes me get started on writing the next one[0]. (I believe much of what I am thinking about has already been solved by Juan Arias de Reyna, but I will read his solutions later; first I want to think on this myself some more.) Sorry, no details right now -- well, unless you think you can infer them from the rest of this post!

In earlier drafts (that included material we definitely did not have time to cover in this first paper) I defined a class of ternary families I called "primary ternary families". I am thinking now this was not such a good name. (Also, people keep telling me "ternary family" is a terrible name, and I assume they're probably right, but since right now I'm just talking about naming things for the purposes of my own thinking, I'm going to ignore that right now.)

Rather I am thinking that would be a better name for a slightly more general class of ternary families. (The old meaning is still important, but slightly less so.) But I've used this name long enough that I would have trouble switching it over in my mind. And perhaps a more suggestive term could be used, anyway.

The intuition is that these "primary" ternary families are somehow bulky or substantial. And with the more restricted meaning, they're even more bulky. I like the idea of calling "bulky" what previously I called "primary". But what about the more general class? "Substantial ternary family" is a bit of a mouthful. "Meaty?" No, that's a little too out there. "Solid?" Unfortunately, we ended up using "solid number" to refer to additively irreducible numbers, so that's out. Though maybe we can still change that back to the term we were using previously -- we called such numbers "chunks". I still like this better. Actually, so did Josh, but Jeff had problems with it and we decided it was more important to get the thing done than to bikeshed about the term. But I suspect Jeff slightly misunderstood how we intended to use the term, so perhaps if that is made clear he will change his mind.

Yeah, I think I'll stick with "substantial". (Anyway, these things might well end up as polynomials, and "substantial polynomial" is not so bad to say.)

-Harry

[0]Also some time to do some more coding on the classifier, which is most of what I've been doing. The core of it has been done for a long time, of course, but it still needs to be changed to use exact arithmetic, and I'd also like to add some additional output capabilities. Like maybe, "enter a real number (in a format I can work with), get back the order type of defects up to that real number". I don't know if I'll actually implement that one. Computing order type is straightforward from the existing core tools, of course; it's the reading in a real number in a potentially quite general format I find daunting. :P But I guess I can always just implement the actual function itself, and then say "open the program up with ghci if you want to use these additional features"...
sniffnoy: (SMPTE)
Here's a revised version of Numbers with Integer Complexity Close to the Lower Bound, resubmitted over the previous version posted here. I guess that one's gone from the internet now, since I uploaded this one to the same URL, but that can hardly be called a loss. Mostly just typos fixed, but also a small gap in the proof of the main theorem. Many thanks to Juan Arias de Reyna for going over the paper so thoroughly.

-Harry
sniffnoy: (Kirby)
Well, essentially done. We'll have to see what the referee says the second time around. And of course this is only the first of, what, five papers now?

For now I've just put it up at http://www-personal.umich.edu/~haltman/ogshort.pdf ; I'll update the website's front page shortly. (Edit: That's done now.) Then I'll have to ask Jeff about putting it on arXiv...

So, yeah -- a lot had to be cut out in order to keep it at a reasonable length. In there is Josh's method, computations up to 12δ(2) (did I mention we're denoting the defect of n by δ(n) now?), proof that 2^21 3^k works, and proof that A_k(x)=Θ((log x)^(k+1)).

Gone are the formulae for E_r(k), and long gone is anything related to well-ordering. Those will have to wait. (This is part of why there are so many papers now.)

Now Jeff wants me to get started on writing the well-ordering paper like immediately... oh well...

Addendum: Heh, Jeff has already emailed me with a few typos. Now fixed in this version. We'll see if the referee catches them.

Further addendum: OK, let's not turn this into a "point out the typos" thread. I've found more since then and I've been correcting them as I find them...

-Harry
sniffnoy: (Sonic)
Karlis Podnieks's group has finally gotten around to writing up their results!

There's a lot of scattered stuff in there; I haven't read most of it. For those of you who haven't seen any of this before -- well, and who care about this -- I think Kaspars Balodis's upper bound on ||2^n-1|| is definitely worth noting.

(As for us -- well, the first paper is, at long last, pretty much done. Now Jeff wants me to get started on writing the next one like immediately... oh well...)

-Harry
sniffnoy: (SMPTE)
(There is no "part 2"; the "part 1" was just to signal that more was coming.)

In this entry I will consider Juan Arias de Reyna's conjecture 8, and some variants thereof (some of which are almost certainly false).

Note: To make things easier, I will phrase everything in terms of defects rather than ratios.

Let's start with some definitions. Define a natural number n to be stable if for all k, ||n*3^k||=||n||+3k. Define a stable defect to be the defect of a stable number. (If two numbers have the same defect and one is stable then so is the other, so there is no problem there.)
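
(A quick numerical probe of stability, assuming the naive cpx sketch from further up the page; checking finitely many k of course proves nothing, and this is only feasible for very small n and k anyway:)

    -- Check ||3^k * n|| = ||n|| + 3k for k up to a cutoff.
    looksStable :: Integer -> Int -> Bool
    looksStable n kMax =
      and [ cpx (3 ^ k * n) == cpx n + 3 * fromIntegral k
          | k <- [0 .. kMax] ]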

I will use "D" to mean the set of all defects. (In the paper we will use a curly D.) Ď will indicate the set of all stable defects. (Because it's more convenient to type in HTML than some of the other alternatives.) I will use "cl" for topological closure. "Da" will indicate the set {d(n): n≡a (mod 3), n≠1}, and Ďa will be used similarly. Since all of these are well-ordered sets with order type ωω, I will use Sα to mean the α'th element of S. (Except in one special case which... well, you'll see.) Also, in this entry, N will include 0.

So. Conjecture 8 deals with the self-similarity of the set of defects. We know it has order type ω^ω; conjecture 8 says that left-multiplication by ω roughly corresponds to a numerical increase of 1. Of course, we have to be careful how we formalize this.

Last time I discussed conjecture 8, I actually messed it up slightly. Let me begin with a slight variant / reformulation of that form:

Version 1: For k≥1, lim_{β→ω^k·(α+1)} D_β = D_α + k. Furthermore, lim_{β→ω^k·(α+1)} (D_{a+k})_β = (D_a)_α + k.

Unfortunately, this version is almost certainly false; unstable numbers really wreck it. Which is why what Arias de Reyna actually conjectured is a bit different.

Version 2: For k≥1, lim_{β→ω^k·(α+1)} Ď_β = Ď_α + k. Furthermore, lim_{β→ω^k·(α+1)} (Ď_{a+k})_β = (Ď_a)_α + k.

(His original conjecture is the "furthermore", in the case k=1. The rest is some natural extensions / variants of it.) This seems to work! But it's a bit of an ugly statement. Granted, it can be prettified a bit, in particular in the case k=1. (In fact, the case k=1 implies the general case, so that's all that's needed. Why do I insist on stating it for larger k? Well, you'll see.) For instance, we could write the nicer-looking

Version 2.1: lim_{n→∞} Ď_{ω·α+n} = Ď_α + 1. Furthermore, lim_{n→∞} (Ď_{a+1})_{ω·α+n} = (Ď_a)_α + 1.

This is equivalent, but looks nicer, and is also closer to the form Arias de Reyna originally wrote it in. But let's keep looking! Let's see if we can find some other similar, plausible statements. (Which may or may not be equivalent.)

Why is instability a problem? Well, if r is a defect, r+1 may or may not be a defect. Restricting to stable defects gets rid of this problem by ensuring that r+1 is not under consideration regardless. But how about solving this problem the opposite way? Instead of throwing r+1 out when it appears, let's try throwing it in when it doesn't... by considering cl(D) instead of D. This, too, is a well-ordered set with order type ω^ω. And it should be equal to D+N=Ď+N, though I can't currently prove this. But more importantly right now, it's a closed set, and the order topology on it coincides with the subspace topology, which makes the language a lot nicer. (This is true in general for closed subsets of linear continua.) I do not know whether or not it is equivalent to version 2 above, but looking at the numbers strongly suggests the following:

Version 3: cl(D)_{ω·α} = cl(D)_α + 1.

Now that looks nice, doesn't it? Except it isn't quite right. After all, as written, it implies that 0=1! Rather, this appears to be correct, so long as we 1-index cl(D). So:

Notational convention: Closed well-ordered subsets of R will be taken to be 1-indexed. Other well-ordered subsets of R will be taken to be 0-indexed, as usual. (Trying to 1-index D itself, e.g., seems to be a bad idea.)

If we read version 3 with this notational convention, then it does appear to be a correct and very nice statement. (I mean, look at that self-similarity! Just look at it!) And with this one, it's truly immediately obvious that the k=1 version implies the same for higher k. Also note that this version doesn't require a special exception for k=0. Of course, in the above, I've left out the versions for D_a. So let's include those:

Version 3.1: cl(D)_{ω·α} = cl(D)_α + 1. Furthermore, cl(D_{a+1})_{ω·α} = cl(D_a)_α + 1.

Pretty nice, no? Now that is some self-similarity. (Offhand, cl(D_a) ought to be equal to (D_a+3N)∪(D_{a-1}+3N+1)∪(D_{a-2}+3N+2), as well as being equal to the same thing with D replaced by Ď.)

Now wait, what about cl(Ď) and cl(Ď_a)? These are probably just equal to cl(D) and cl(D_a) and so already covered.

Unfortunately there remains the problem of actually proving any of this. Version 1 is presumably false, of course, but versions 2 and 3 are probably true. Well, I'm not working on it right now -- I have too much else to do, most notably actually writing up the first of these 3 papers -- but I do have an idea for an approach. If it works, this idea will require proving it for all k simultaneously rather than just for k=1, which is why I made sure to state it for higher k above. Also, I suspect that version 2 (Arias de Reyna's original conjecture, pretty much) is the one that should be tackled first; from what little I've done, it seems to be the easier one to get a handle on. I think, however, that if one can prove version 2, one will also be able to derive version 3, as well as the rest of the statements above (at least, the ones that are actually true). This idea, if it works, may also have implications for some generalizations of Arias de Reyna's conjecture 2; unfortunately, it would probably not get us anything close to conjecture 2 itself.

As for what this idea actually is, I will write about that when it is more than just a bit of mindfuzz. Which will not be for a while, because I don't have time to work on it right now. So now you are all roughly up to date on what Josh and I have been doing with integer complexity.

-Harry
sniffnoy: (SMPTE)
About time I got around to writing this. This is going to be a pretty brief overview, I can go into more detail if people want.

(Reminder: Tags now work; one can go here to see only my entries about integer complexity.)

The main thing that is being done now is rewriting. Need to rewrite this draft into something readable. Again, many thanks to Juan Arias de Reyna for his help there. Current plan is to split it into 3 papers. First one will introduce the notion of defect, prove Arias de Reyna's conjecture 1, introduce Josh's method, compute up to 12d(2), prove 2^a 3^b works for a≤21, prove my formulae for E_r(k), and prove that A_r(x)=Θ((log x)^(⌊r⌋+1)). Second will introduce ternary families, reprove that last bit using ternary families, and prove well-ordering of defects, and generally include the relevant bits that got cut for length. The third will cover new material that we've come up with since getting back the referee report, which is what I intend to tell you about here.

(Note that when I said "3 papers", that's excluding Josh's work on upper bounds on ||n||. Which by the way he says he's finally going to get around to writing up!)

The third paper will introduce what was section 10 in the old long draft -- what happens when you take a ternary family, and restrict to those numbers below a fixed defect? Well, sort of -- actual defect is a little unpredictable, so instead we consider cutting off based on a sort of "expected" defect, which is still quite useful. The conclusion of section 10 was that when you do this, you still get ternary families.

What I did not realize at the time was that this was exactly the missing step needed to turn Josh's method into an actual algorithm! I wrote up some Haskell code to run the computations, and it has proved that 2^a 3^b works for a≤37. (Well, probably -- floating point arithmetic was used, and I thought it was arbitrary-precision when it wasn't. So, I'll have to go back and close that gap.) Why stop at 37? Well, 37=28+9, and above 28d(2), one starts having to consider +6s. Well, not exactly -- in reality they don't become relevant till long after that, but I wanted to avoid relying on such special-case reasoning. But in fact, it's not hard to get the algorithm to work beyond +1s; I just haven't bothered to do it yet because A. I initially didn't realize it would work in those cases, so I wrote the +1-only version first and B. I have a ton of other stuff to do so I haven't gotten around to writing it yet. It's a pretty simple modification, though; so when I actually have time, we can start proving this for even higher values of a. Point is, it can do in minutes what took me months by hand. And yes, the algorithm verified that I got my hand computations right... but the table is wrong, because I made some mistakes when copying my results into there.

(In actual fact, the algorithm can't do everything my hand computations could; in fact, the part that caused them to take so long is one of the things it doesn't bother doing. So much of the speedup comes from it not doing things that are irrelevant for its purposes. But what it does do, it does quite well.)

But we can use this result for much more than just computation! Previously, we could only put upper bounds on Ar(x) by building up a larger set that contained it; sure, we could attempt to filter out the things that didn't belong, but who could say what the result would look like? Now we can actually get a rough idea. So at last we can prove, for k a natural number:

A_k(x) ≤ ((k+1)^(k-2)/(k!)^2) (log_3 x)^(k+1) + O_k((log x)^k)

...and my 19-case generalization thereof. Of course, we'd like to have that ≤ be an =, but getting lower bounds would require some sort of "small intersection" theorem. I believe I can prove some special cases of this, actually, but, unfortunately, not the relevant cases.

Unfortunately, if we want to prove that ||n|| is not asymptotic to 3 log_3 n, the above isn't good enough -- it's got an O_k in it, and we need to know the dependence on k. Our attempts to determine the dependence on k have generally concluded that it's not good enough to make things work. But I don't think Josh has taken a detailed look at it yet, and he's rather better at this sort of thing than me. I'm not optimistic, but we'll see.

We can also use this method to make some conclusions about order type, allowing finally a proof of the connection between integral defect and order type. Basically we can prove a lot of my old conjectures from section 11 of the old long draft.

There's still the question of Arias de Reyna's conjecture 8, which still seems to be out of reach; but more on that later.

-Harry
sniffnoy: (Chu-Chu Zig)
A good way to phrase some of the questions about the overall growth rate of integer complexity ||n|| is, what's limsup (||n||/log n)? (We know liminf ||n||/log n = 3/log 3).

So, e.g., we have the obvious bounds that it's at least 3/log 3 (>2.73) and at most 3/log 2 (<4.33). I'd like it if we could show that it's strictly greater than 3/log 3 (show that ||n|| is not asymptotic to 3 log_3 n), but that doesn't look like it's actually going to work. If ||2^n||=2n for all n>0, then it has to be at least 2/log 2 (>2.88). Josh showed that it's at most 27/log 1260 (<3.79), and has suggested based on the numerical data that it ought to be at most 2/log 2 (<2.89), which might make it exactly 2/log 2. (Of course, others have suggested, contrary to what Josh and I have tried to show, that it ought to be exactly 3/log 3 after all; I'll admit to not having stared too closely at the numerical data on this myself, so if you actually care about that you should probably look up what other people have written on the matter...)

Of course, some of these statements are a bit stronger -- Josh showed that ||n||≤27 log_1260 n for all n, not just sufficiently large n, and to say that there are actually infinitely many n satisfying ||n||=2 log_2 n exactly[0] is a bit stronger than just saying that the limsup is at least 2/log 2, but it's pretty close and it's a nice way of summing it up.

Why is this worth noting even though it's obvious? Because somehow Josh and I managed to talk about 2/log 2 being the limsup without ever using that term (i.e. writing out the statement explicitly). A bit embarrassing to accidentally reinvent standard concepts without noticing...

[0]Note that this would actually force ||2^k||=2k for k>0, since 2 log_2 n can only even be an integer if n is a power of 2; and if it holds for some k, it holds for all smaller k.



Something that started bothering me a bit ago is the curvature of space and the asymmetry of non-Euclidean geometry. In physics, lengths are dimensioned quantities, and areas are lengths squared, etc. But in a space of constant non-zero curvature, the scaling symmetry doesn't work, and you get relations based not on squaring but on trigonometric and hyperbolic functions (which don't even work with dimensioned quantities); I assume with varying curvature it can be even worse. How then do the facts that A. physical space isn't perfectly Euclidean and B. physical length is very definitely not dimensionless fit together? Duh, curvature (Edit: more generally the metric tensor or whatever) is a dimensioned quantity, and mediates the conversions. Kind of obvious, but somehow I missed that and this was really bugging me for a while...

(Yes, I would never have made that mistake if I had ever actually learned any differential geometry. But I haven't.)

-Harry
sniffnoy: (SMPTE)
OK. I have finally added tags to every entry dealing with integer complexity (even if only tangentially, so you'll find a lot of other stuff in there too). Or at least I think I have. So for those of you only reading this because of that, you can now see just integer-complexity-related entries here. At least, once it stops returning 500 errors; it's not working currently.

Quite a bit has happened with that, actually -- we've found some new stuff -- but I haven't written about any of it. I intend to write an entry actually stating what's happened with that sometime soon... maybe I'll actually get around to writing that entry about "ways to reduce grinding" too sometime...

-Harry
