sniffnoy: (SMPTE)
Here's the revised, resubmitted version of "Numbers with Integer Complexity Close to the Lower Bound", replacing the previous version posted here. I guess that one's gone from the internet now, since I uploaded this one to the same URL, but that can hardly be called a loss. Mostly it's just typos fixed, but a small gap in the proof of the main theorem has been patched as well. Many thanks to Juan Arias de Reyna for going over the paper so thoroughly.

-Harry
sniffnoy: (Kirby)
Well, essentially done. We'll have to see what the referee says the second time around. And of course this is only the first of, what, five papers now?

For now I've just put it up at http://www-personal.umich.edu/~haltman/ogshort.pdf ; I'll update the website's front page shortly. (Edit: That's done now.) Then I'll have to ask Jeff about putting it on arXiv...

So, yeah -- a lot had to be cut out in order to keep it at a reasonable length. In there is Josh's method, computations up to 12δ(2) (did I mention we're denoting defect of n by δ(n) now?), proof that 2^21 3^k works, and proof that A_k(x) = Θ((log x)^(k+1)).

Gone are the formulae for E_r(k), and long gone is anything related to well-ordering. Those will have to wait. (This is part of why there are so many papers now.)

Now Jeff wants me to get started on writing the well-ordering paper like immediately... oh well...

Addendum: Heh, Jeff has already emailed me with a few typos. Now fixed in this version. We'll see if the referee catches them.

Further addendum: OK, let's not turn this into a "point out the typos" thread. I've found more since then and I've been correcting them as I find them...

-Harry
sniffnoy: (Sonic)
Karlis Podnieks's group has finally gotten around to writing up their results!

There's a lot of scattered stuff in there; I haven't read most of it. For those of you who haven't seen any of this before -- well, and who care about this -- I think Kaspars Balodis's upper bound on ||2^n-1|| is definitely worth noting.

(As for us -- well, the first paper is, at long last, pretty much done. Now Jeff wants me to get started on writing the next one like immediately... oh well...)

-Harry
sniffnoy: (SMPTE)
(There is no "part 2"; the "part 1" was just to signal that more was coming.)

In this entry I will consider Juan Arias de Reyna's conjecture 8, and some variants thereof (some of which are almost certainly false).

Note: To make things easier, I will phrase everything in terms of defects rather than ratios.

Let's start with some definitions. Define a natural number n to be stable if for all k, ||n*3^k||=||n||+3k. Define a stable defect to be the defect of a stable number. (If two numbers have the same defect and one is stable then so is the other, so there is no problem there.)
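(For example, 1 is not stable: ||1·3^1|| = 3, but ||1|| + 3·1 = 4. On the other hand 3 is stable, since ||3·3^k|| = 3(k+1) = ||3|| + 3k.)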

I will use "D" to mean the set of all defects. (In the paper we will use a curly D.) Ď will indicate the set of all stable defects. (Because it's more convenient to type in HTML than some of the other alternatives.) I will use "cl" for topological closure. "Da" will indicate the set {d(n): n≡a (mod 3), n≠1}, and Ďa will be used similarly. Since all of these are well-ordered sets with order type ωω, I will use Sα to mean the α'th element of S. (Except in one special case which... well, you'll see.) Also, in this entry, N will include 0.

So. Conjecture 8 deals with the self-similarity of the set of defects. We know it has order type ω^ω; conjecture 8 says that left-multiplication by ω roughly corresponds to a numerical increase of 1. Of course, we have to be careful how we formalize this.

Last time I discussed conjecture 8, I actually messed it up slightly. Let me begin with a slight variant / reformulation of that form:

Version 1: For k≥1, lim_{β→ω^k(α+1)} D_β = D_α + k. Furthermore, lim_{β→ω^k(α+1)} (D_{a+k})_β = (D_a)_α + k. (The limits here are ordinary limits of real numbers, taken as the ordinal index β increases to ω^k(α+1).)

Unfortunately, this version is almost certainly false; unstable numbers really wreck it. Which is why what Arias de Reyna actually conjectured is a bit different.

Version 2: For k≥1, lim_{β→ω^k(α+1)} Ď_β = Ď_α + k. Furthermore, lim_{β→ω^k(α+1)} (Ď_{a+k})_β = (Ď_a)_α + k.

(His original conjecture is the "furthermore", in the case k=1. The rest is some natural extensions / variants of it.) This seems to work! But it's a bit of an ugly statement. Granted, it can be prettified a bit, in particular in the case k=1. (In fact, the case k=1 implies the general case, so that's all that's needed. Why do I insist on stating it for larger k? Well, you'll see.) For instance, we could write the nicer-looking

Version 2.1: lim_{n→∞} Ď_{ωα+n} = Ď_α + 1. Furthermore, lim_{n→∞} (Ď_{a+1})_{ωα+n} = (Ď_a)_α + 1.

This is equivalent, but looks nicer, and is also closer to the form Arias de Reyna originally wrote it in. But let's keep looking! Let's see if we can find some other similar, plausible statements. (Which may or may not be equivalent.)

Why is instability a problem? Well, if r is a defect, r+1 may or may not be a defect. Restricting to stable defects gets rid of this problem by ensuring that r+1 is not under consideration regardless. But how about solving this problem the opposite way? Instead of throwing r+1 out when it appears, let's try throwing it in when it doesn't... by considering cl(D) instead of D. This, too, is a well-ordered set with order type ω^ω. And it should be equal to D+N=Ď+N, though I can't currently prove this. But more importantly right now, it's a closed set, and the order topology on it coincides with the subspace topology, which makes the language a lot nicer. (This is true in general for closed subsets of linear continua.) I do not know whether or not it is equivalent to version 2 above, but looking at the numbers strongly suggests the following:

Version 3: cl(D)_{ωα} = cl(D)_α + 1.

Now that looks nice, doesn't it? Except it isn't quite right. After all, as written, it implies that 0=1! Rather, this appears to be correct, so long as we 1-index cl(D). So:

Notational convention: Closed well-ordered subsets of R will be taken to be 1-indexed. Other well-ordered subsets of R will be taken to be 0-indexed, as usual. (Trying to 1-index D itself, e.g., seems to be a bad idea.)

If we read version 3 with this notational convention, then it does appear to be a correct and very nice statement. (I mean, look at that self-similarity! Just look at it!) And with this one, it's truly immediately obvious that the k=1 version implies the same for higher k. Also note that this version doesn't require a special exception for k=0. Of course, in the above, I've left out the versions for Da. So let's include those:

Version 3.1: cl(D)_{ωα} = cl(D)_α + 1. Furthermore, cl(D_{a+1})_{ωα} = cl(D_a)_α + 1.

Pretty nice, no? Now that is some self-similarity. (Offhand, cl(D_a) ought to be equal to (D_a+3N) ∪ (D_{a-1}+3N+1) ∪ (D_{a-2}+3N+2), as well as being equal to the same thing with D replaced by Ď.)

Now wait, what about cl(Ď) and cl(Ď_a)? These are probably just equal to cl(D) and cl(D_a) and so already covered.

Unfortunately there remains the problem of actually proving any of this. Version 1 is presumably false, of course, but versions 2 and 3 are probably true. Well, I'm not working on it right now -- I have too much else to do, most notably actually writing up the first of these 3 papers -- but I do have an idea for an approach. If it works, this idea will require proving it for all k simultaneously rather than just for k=1, which is why I made sure to state it for higher k above. Also, I suspect that version 2 (Arias de Reyna's original conjecture, pretty much) is the one that should be tackled first; from what little I've done, it seems to be the easier one to get a handle on. I think, however, that if one can prove version 2, one will also be able to derive version 3, as well as the rest of the statements above (at least, the ones that are actually true). This idea, if it works, may also have implications for some generalizations of Arias de Reyna's conjecture 2; unfortunately, it would probably not get us anything close to conjecture 2 itself.

As for what this idea actually is, I will write about that when it is more than just a bit of mindfuzz. Which will not be for a while, because I don't have time to work on it right now. So now you are all roughly up to date on what Josh and I have been doing with integer complexity.

-Harry
sniffnoy: (SMPTE)
About time I got around to writing this. This is going to be a pretty brief overview, I can go into more detail if people want.

(Reminder: Tags now work; one can go here to see only my entries about integer complexity.)

The main thing that is being done now is rewriting. Need to rewrite this draft into something readable. Again, many thanks to Juan Arias de Reyna for his help there. Current plan is to split it into 3 papers. First one will introduce the notion of defect, prove Arias de Reyna's conjecture 1, introduce Josh's method, compute up to 12d(2), prove 2^a 3^b works for a≤21, prove my formulae for E_r(k), and prove that A_r(x) = Θ((log x)^(⌊r⌋+1)). Second will introduce ternary families, reprove that last bit using ternary families, prove well-ordering of defects, and generally include the relevant bits that got cut for length. The third will cover new material that we've come up with since getting back the referee report, which is what I intend to tell you about here.

(Note that when I said "3 papers", that's excluding Josh's work on upper bounds on ||n||. Which by the way he says he's finally going to get around to writing up!)

The third paper will introduce what was section 10 in the old long draft -- what happens when you take a ternary family, and restrict to those numbers below a fixed defect? Well, sort of -- actual defect is a little unpredictable, so instead we consider cutting off based on a sort of "expected" defect, which is still quite useful. The conclusion of section 10 was that when you do this, you still get ternary families.

What I did not realize at the time was that this was exactly the missing step needed to turn Josh's method into an actual algorithm! I wrote up some Haskell code to run the computations, and it has proved that 2^a 3^b works for a≤37. (Well, probably -- floating point arithmetic was used, and I thought it was arbitrary-precision when it wasn't. So, I'll have to go back and close that gap.) Why stop at 37? Well, 37=28+9, and above 28d(2), one starts having to consider +6s. Well, not exactly -- in reality they don't become relevant till long after that, but I wanted to avoid relying on such special-case reasoning. But in fact, it's not hard to get the algorithm to work beyond +1s; I just haven't bothered to do it yet because A. I initially didn't realize it would work in those cases, so I wrote the +1-only version first and B. I have a ton of other stuff to do so I haven't gotten around to writing it yet. It's a pretty simple modification, though; so when I actually have time, we can start proving this for even higher values of a. Point is, it can do in minutes what took me months by hand. And yes, the algorithm verified that I got my hand computations right... but the table is wrong, because I made some mistakes when copying my results into there.

(In actual fact, the algorithm can't do everything my hand computations could; in fact, the part that caused them to take so long is one of the things it doesn't bother doing. So much of the speedup comes from it not doing things that are irrelevant for its purposes. But what it does do, it does quite well.)

But we can use this result for much more than just computation! Previously, we could only put upper bounds on A_r(x) by building up a larger set that contained it; sure, we could attempt to filter out the things that didn't belong, but who could say what the result would look like? Now we can actually get a rough idea. So at last we can prove, for k a natural number:

A_k(x) ≤ ((k+1)^(k-2)/k!^2) (log_3 x)^(k+1) + O_k((log x)^k)

...and my 19-case generalization thereof. Of course, we'd like to have that ≤ be an =, but getting lower bounds would require some sort of "small intersection" theorem. I believe I can prove some special cases of this, actually, but, unfortunately, not the relevant cases.
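(For concreteness: at k=1 this says A_1(x) ≤ (1/2)(log_3 x)^2 + O(log x), since there (k+1)^(k-2) = 2^(-1) and k!^2 = 1.)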

Unfortunately, if we want to prove that ||n|| is not asymptotic to 3log_3 n, the above isn't good enough -- it's got an O_k in it, and we need to know the dependence on k. Our attempts to determine the dependence on k have generally concluded that it's not good enough to make things work. But I don't think Josh has taken a detailed look at it yet, and he's rather better at this sort of thing than me. I'm not optimistic, but we'll see.

We can also use this method to make some conclusions about order type, finally allowing a proof of the connection between integral defect and order type. Basically we can prove a lot of my old conjectures from section 11 of the old long draft.

There's still the question of Arias de Reyna's conjecture 8, which still seems to be out of reach; but more on that later.

-Harry
sniffnoy: (Chu-Chu Zig)
A good way to phrase some of the questions about the overall growth rate of integer complexity ||n|| is, what's limsup (||n||/log n)? (We know liminf ||n||/log n = 3/log 3).

So, e.g., we have the obvious bounds that it's at least 3/log 3 (>2.73) and at most 3/log 2 (<4.33). I'd like it if we could show that it's strictly greater than 3/log 3 (show that ||n|| is not asymptotic to 3log_3 n), but that doesn't look like it's actually going to work. If ||2^n||=2n for all n>0, then it has to be at least 2/log 2 (>2.88). Josh showed that it's at most 27/log 1260 (<3.79), and has suggested based on the numerical data that it ought to be at most 2/log 2 (<2.89), which might make it exactly 2/log 2. (Of course, others have suggested that, contrary to what Josh and I have tried to show, it ought to be exactly 3/log 3 after all; I'll admit to not having stared too closely at the numerical data on this myself, so if you actually care about that you should probably look up what other people have written on the matter...)
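(To have the decimal values in one place, here's a quick sanity check you can paste into GHCi -- my own check, nothing more, with log the natural log:)

map (\(c, b) -> c / log b) [(3,3), (2,2), (27,1260), (3,2)]
-- [2.7307..., 2.8853..., 3.7821..., 4.3280...]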

Of course, some of these statements are a bit stronger -- Josh showed that ||n|| ≤ 27 log_{1260} n for all n, not just sufficiently large n, and to say that there are actually infinitely many n satisfying ||n|| = 2 log_2 n exactly[0] is a bit stronger than just saying that the limsup is at least 2/log 2, but it's pretty close and it's a nice way of summing it up.

Why is this worth noting even though it's obvious? Because somehow Josh and I managed to talk about 2/log 2 being the limsup without ever using that term (i.e. writing out the statement explicitly). A bit embarrassing to accidentally reinvent standard concepts without noticing...

[0]Note that this would actually force ||2^k||=2k for k>0, since 2 log_2 n can only even be an integer if n is a power of 2; and if it holds for some k, it holds for all smaller k.



Something that started bothering me a bit ago is the curvature of space and the asymmetry of non-Euclidean geometry. In physics, lengths are dimensioned quantities, and areas are lengths squared, etc. But in a space of constant non-zero curvature, the scaling symmetry doesn't work, and you get relations based not on squaring but on trigonometric and hyperbolic functions (which don't even work with dimensioned quantities); I assume with varying curvature it can be even worse. How then do the facts that A. physical space isn't perfectly Euclidean and B. physical length is very definitely not dimensionless fit together? Duh, curvature (Edit: more generally the metric tensor or whatever) is a dimensioned quantity, and mediates the conversions. Kind of obvious, but somehow I missed that and this was really bugging me for a while...

(Yes, I would never have made that mistake if I had ever actually learned any differential geometry. But I haven't.)

-Harry
sniffnoy: (SMPTE)
OK. I have finally added tags to every entry dealing with integer complexity (even if only tangentially, so you'll find a lot of other stuff in there too). Or at least I think I have. So for those of you only reading this because of that, you can now see just integer-complexity-related entries here. At least, once it stops returning 500 errors; it's not working currently.

Quite a bit has happened with that, actually -- we've found some new stuff -- but I haven't written about any of it. I intend to write an entry actually stating what's happened with that sometime soon... maybe I'll actually get around to writing that entry about "ways to reduce grinding" too sometime...

-Harry
sniffnoy: (Dead face)
Yes, there's lots of free and open-source computer algebra systems out there, of which Sage is probably the most-used. I have access to Mathematica and others on the school's computers, but I think it's obvious why I'd prefer something open-source -- a proof should be verifiable, and if you use a CAS computation for a proof, then your CAS's source code is implicitly part of the proof.

You see, I've got this program that I've written in Haskell that needs to do a bunch of comparisons of numbers made from logs. Lots of numbers like 2−3log_3 2 floating around, you see[0]. Now, so far double precision has been more than enough, but as I compute farther that may well cease to be the case. So I downloaded an arbitrary-precision arithmetic library, ERA, which unfortunately is rather slower than just using doubles, but does the job... mostly.

There's actually a serious problem with ERA -- when I say "arbitrary precision", that means I want comparisons to be *correct*. If I ask whether one number is less than another, it should compute until it can tell. Unfortunately that's not what it does, meaning it doesn't do what I care about. 40 digits is better than what you'll get with doubles, and almost certainly good enough, but do I have a guarantee that it's good enough? No? Then I don't have a proof.

And of course there's a more fundamental problem: No matter what your arbitrary precision arithmetic library, there's no way you can verify equality in finite time by computing more digits! Currently in my program I've worked around this by doing such tests via integer arithmetic whenever equality is actually possible... but if I want this thing to accept more general inputs, I'm not going to be able to predict in advance when that could happen and route around it. No, if you want to verify equality, you need to be able to do symbolic computations, and that means a CAS. Well, either that or rolling my own linear-combination-of-logs type and then converting all comparisons to integer arithmetic (since I'm too lazy to learn what the fast algorithms might be). Seriously though, this thing is slow enough already.
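(To make "converting comparisons to integer arithmetic" concrete, here's a minimal sketch of the sort of thing I mean -- hypothetical names, and emphatically not the code from my actual program. Scaling by log 3 turns a number like 2−3log_3 2 into the integer combination 2·log 3 − 3·log 2, and comparing such a combination to zero is just comparing two integer products:)

-- A linear combination Σ c·log b, as (base, coefficient) pairs;
-- e.g. 2·log 3 − 3·log 2 is [(3,2),(2,-3)].
type LogComb = [(Integer, Integer)]

-- Σ c·log b vs. 0 is the same as Π_{c>0} b^c vs. Π_{c<0} b^(-c):
-- an exact comparison, in finite time, using only integer arithmetic.
cmpZero :: LogComb -> Ordering
cmpZero xs = compare (side (> 0) id) (side (< 0) negate)
  where side p f = product [b ^ f c | (b, c) <- xs, p c]

-- cmpZero [(3,2),(2,-3)] == GT, i.e. d(2) = 2 − 3log_3 2 > 0.

(A plain integer k is just k·log_3 3, so comparisons against integers -- and hence floors and ceilings -- reduce to the same trick; whether it's all fast enough is of course the question.)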

So why not just use Sage? Well, there's a good chance I will, but... Sage is based on Python. Now I wrote this program in Haskell. No, rewriting it in another language doesn't in general bother me, but there's a reason I picked Haskell. You know what I like about Haskell? It's really easy to define nice, simple, algebraic datatypes. You know, like if you wanted to define linked lists -- they're built in so you don't need to, of course, but if you wanted to, it would just be something like

data List a = Nil | Cons a (List a)

...and boom, you've got linked lists, and you can start defining functions on them.
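(And then, e.g., a length function is just a couple of pattern-match clauses:)

len :: List a -> Integer
len Nil = 0
len (Cons _ xs) = 1 + len xs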

Python... well, it's been a long time since I looked at Python, but as I recall, it goes the object-oriented route when it comes to defining new datatypes. And, well, ew. I want immutable datatypes I can define simply and operate on algebraically. Does it even support union types? I doubt it. I mean, you can work without union types, but I'm thinking in terms of Haskell which makes them simple and convenient, and to me they just seem the right way to do things, rather than having members which might be null or whatever the equivalent is. Not to mention Python is dynamically typed, so you can't even tell whether you've written something meaningful until you run it.

Now I don't need much, I just need the ability to compare numbers with logarithms in them (and take their floors and ceilings), and get the comparison correct, guaranteed, in finite time. You shouldn't, strictly speaking, need a whole CAS for that; but it's the sort of thing that you don't tend to find outside of one. I don't think I see a Haskell package that can do it (though I suppose I could have missed it). I don't want to take the time to learn how to do it myself, and I don't want to convert everything to integer arithmetic, because I'm afraid that would be slow -- well, unless that *is* how a CAS would do it, in which case I assume it really is the best way.

In short, I like Haskell because it's so algebraic, it's a lot like just writing math, and I would like a CAS that also has that property -- not just when you're using its built-in data types, but when working with your own weird tree type which you've implemented as a union type and you want it to do compile-time type checking so you can be sure you've written something meaningful.

...I guess I mostly just want a CAS based on Haskell, when you get down to it. I guess ML would be fine too. :P Anyway, I don't suppose there are any open-source CAS'es out there that do what I want? Anyone know one that would be close, at any rate? Or am I just going to have to end up either using Sage or rolling my own slow comparisons?

-Harry

[0]I'll write about just what it does later. I'm sure the constant involved has tipped you off that it's related to integer complexity.
sniffnoy: (Chu-Chu Zig)
To make up for recently posting a stupid rant here, I intend to post an actually intelligent entry[0] (or series, this might get long) sometime in the next few days. Well, if I have time, what with a paper to rewrite, an exam to grade, a different exam to prepare for...

Which reminds me that I ought to start using the tagging system which has existed on LJ for literally years now. While I mostly use this space as a dumb personal blog, it seems there's a few people actually following the math on here! So I should start tagging those (in particular the integer complexity ones) so people don't have to wade through crap to find them.

So what the hell's the purpose of this post? Why do you care about this? Well, mostly, you don't, actually. But you see, this post is tagged with the label "posts about tags", thereby breaking the pattern of there not being tags here! So now I will have to get this done at some point or the inconsistency will gnaw at me.

(At one point years ago I actually attempted to categorize and tag everything on here. I couldn't come up with a good system and I eventually removed them all. Now that I understand how to actually use tags, I won't be doing that again.)

[0]Well, as intelligent as I can be on a topic I don't really know very well, namely, game design.
sniffnoy: (Dead face)
Having finally had time to read the referee report (though not yet the paper rewrite the referee so generously provided), I can now resolve the problem listed in the entry below. It was us who screwed up, but in fact not very badly. More precisely, it was me who screwed up. The referee claimed that basically everything we did with the notion of integral defect was false, you see, and I was very confused as to how we could have gotten so much wrong. The answer is that we didn't; it's just that we defined integral defect incorrectly in the paper. Recall we defined integral defect D(n) to be the difference f(n)-L(n), where f and L are functions I don't care to define right now because that's not relevant to the story. Well, guess what? When typing up the paper, I got the definition of L mixed up. (In a loose sense, I used floors instead of ceilings.)

I suppose it's still possible that we actually messed up somewhere, but I suppose that will have to be determined later...

(Also: The referee report said there was a number missing from our giant table. This scared me, because if I had actually screwed up my computations... well, you remember how long those took. Fortunately, it turns out that the number was indeed present in my original computations file, and I had just left out the relevant line when turning it into the table. Actually it wasn't just one number, it was a one-parameter family that was missing, but that was the only member of the family small enough to allow him to detect the omission.)

-Harry
sniffnoy: (Kirby)
So we finally got the referee report back on our integer complexity paper! It's going to require a lot of revision, apparently. And, well, I have to say, the referee really went above and beyond the call of duty on this one... he kind of, uh, rewrote our paper for us. Even introduced a whole new thing about "backsteps" I haven't had time to look at.

Thing is he says we're wrong about quite a lot of things where it should be very easy to tell whether we're wrong or not. Meaning either we screwed up really badly, or he screwed up really badly. And it should be easy to tell which, but I've had so little time to look at this that I haven't even sat down and determined this yet! Perhaps Josh has...

-Harry
sniffnoy: (Kirby)
OK. Here is the long version of my third draft of Josh's and my paper on integers with complexity close to the lower bound... and here is a much shorter version which hopefully should actually be publishable, because it's, y'know, not 28 pages long. (Only 17.) I am hoping that my labelling it "Part I" isn't too hubristic and that the excised parts can actually be scrounged together into a second paper. Well, first we'll see what people think of this one. Or we'll just go ahead and submit it to INTEGERS since we only have a little over a week to do so if we want it in their special John Selfridge memorial issue.

Of course there's still the issue of that 7-page long table that isn't included in the draft proper...

-Harry
sniffnoy: (Chu-Chu Zig)
OK, I've put together a second draft of my and Josh's results on integers with complexity close to the lower bound. Josh's upper bound work is getting split off into a separate paper since it's really a separate idea, and also this one is really long.

I've been told it's hard to get things published that are longer than 20 pages, so this may have to be shortened further, but I'm not sure how. (Well, OK, if that actually turns out to be necessary, the first thing would probably be to cut section 10. :P ) See, it's shorter because some sections got cut or significantly shortened; but it's not much shorter, because one of the problems with the first draft was that it wasn't really readable if you didn't already know something about the problem (which of course hardly anybody does), so I had to spend a bunch of space decompressing what I originally wrote.

Well, we'll see what people think of this one.



So due to me having to come back to New Jersey (and then go up to Boston) for Elana's graduation, I didn't really have the chance to go visit Chicago while school was still going. (It still is going there, but YKWIM. Leaving for New Jersey on Saturday, staying there 2 1/2 weeks, in Boston area for Elana's graduation weekend of the 28th).

So that's disappointing. Colin is in Chicago - teaching elementary/middle school, apparently - and has said I should come visit him and Aviva sometime, so I guess I'll do that, see who's around. (Are Jack and Sarah staying on for another year? I.e. can I go see Hannah and Sasha? :) ) But I have to wonder how often I'll have reason to go there after that - next year Winston and Youlian won't even be there... (I mean, I doubt Youlian will be there during the summer either, but YKWIM.) Ah well.

-Harry

Resolved!

May. 1st, 2011 11:18 pm
sniffnoy: (Kirby)
So the thing that has caused all this trouble in the integer complexity stuff recently has been pinning down the stopping point: What is the supremum of all r for which our method allows us to classify all numbers with defect less than r?

I've been saying for quite some time that it's 19-3log_3(73), or approximately 7.28. See, we know how to handle +1s, but +6s (and +8s and +9s and +12s) are a problem. Often we can rewrite them in another form so as not to be a problem, though - but for the infinite family of numbers 73(3^n+1)+6, we don't know how to do that. And this appears to yield defects with a supremum of 19-3log_3(73). Finitely many we can handle, sure, because any individual number can have its defect calculated by computer. But an infinite family is another matter.

The first snag in demonstrating this was that our main lemma wasn't quite good enough for this - I wanted to say, for an infinite family S, we don't have to worry about S+6 until we hit r+5, where r is the sup of the defects of S. But what the main lemma actually gives us is r+3+d(2), rather lower. Getting it up to r+5 required actually examining the structure of these infinite families some more.

But then there was the second snag: Even individual numbers cause problems! Because you see, every time I'd been talking about individual numbers, I should really have been talking about individual numbers times powers of 3. We can compute the defect of n by computer, sure, but not n3^k for all k. And to get past this I had to tweak and specialize some old lemmas to handle +6s until...

...well, until we could handle +6s. And now we can. All this time trying to show we can get up to 73(3^n+1)+6, now I've shown how to go past it. It's not a stopping point at all. The method no longer has an obvious stopping point; whatever comes up, we can (it would appear) tweak the lemmas to handle it. This is all a non-issue.

Well, I've got a whole lot of text to delete now. That'll bring the page count down some...

-Harry
sniffnoy: (Chu-Chu Zig)
Oh hey it's another "Harry hasn't posted in a long time so here's a bunch of stuff that's happened to him even though most of it isn't that interesting" entry.

Now here was something scary: I fainted in the shower Tuesday. Never had something like that happen before. I was taking a shower and I started to feel a bit lightheaded. After a while I figured, hey, maybe I should stick my head out of the shower for a bit. That didn't really help very much. So I thought, maybe I should turn the water off for a bit. That didn't do too much either. And so after a bit I finally realized, crap, I should get out of here! I grabbed my towel and threw it around myself (didn't bother to put my bathrobe on), and just as I was stepping out of the shower I collapsed on the floor. Woke up a few seconds later, rather surprised to find myself on the floor outside the shower, and also pretty rattled. Put my towel back on, stumbled out the door and back to my room (fortunately right across from the shower room), collapsed on the bed (still all wet and with towel and shower shoes). I stayed there for like 5 minutes before deciding that I should probably go back and finish washing my hair... (though on reentering the shower room the first thing I did was to sit down on the chair that happened to be there for a few minutes). Nothing went wrong that time. (And no, I didn't hurt myself, I just hit my leg some.)

I am beginning to notice that the house has a lack of good quick single-player video games. We should have gotten the 360 instead of the PS3 so we could have Geometry Wars. :P (That's what I was saying at the time...) We have Mr. Driller, but... unrelatedly, I also seem to have gotten much worse at Kongai (which is I guess what happens when you don't play regularly); I'm only SR 32 now. (Meanwhile, when the hell is R4 going to start?)

Now that my computations are complete I've been rewriting my draft of our integer complexity paper to actually be readable. Unfortunately we kind of hit a snag in the "limitations of the method" section, the section I was doing all those computations for... there's a few things we forgot to account for... so I spent some days frantically patching that up, then got back to rewriting, and now I notice another snag! The old snag I don't feel like explaining right now, but the new one is, I had said, well, when we have a specific concrete number n, and we want to find out if d(n+6) is less than some upper bound, this is easy, because n+6 is just some number and we know how to compute d. Problem: It's not just d(n+6) we have to worry about, it's d(3^k(n+6)) for arbitrary k. That's infinitely many numbers. Pretty sure I have an idea how to patch this one (hooray, an ad-hoc refinement of the main lemma!), but blech... this is turning out to be much less simple than it was supposed to be.

(I have no idea what we're going to do about the conjectures section. Note we're already splitting off Josh's upper bound work as a separate paper because it really is a separate idea, but this paper is all essentially based on one idea - the main lemma that Josh came up with and its consequences - so I have no idea how we'd split it if editors insisted on that. Even without the conjectures section (which is a terrible mess and probably needs to be redone largely from scratch) it's already 22 pages! With it, it's 33 pages...)

Funnily enough, the patch for the first snag required either: 1. Doing a bunch of tedious case-checking, or 2. Proving a weaker version of an old conjecture about ternary families I made. I was totally stuck on it before, but as soon as I needed it to avert some case checking... :P (Although it's possible I didn't think of the weaker statement earlier - the actual original statement remains beyond what I can handle.) Actually this kind of changes the structure of the paper; previously we had all the concrete stuff and then all the abstract stuff, now I'm going to need to use some of the abstract stuff to prove some of the concrete stuff...

So apparently Kevin Poenisch, from PROMYS '04 is living in Truth House next year. We had a barbeque on Friday with some of the new people and he recognized me... I didn't recognize him. He still does math, it seems. Ran into ADan right about then too. "Didn't you explicitly tell me that the next time you had a vacation, you weren't just going to come back to Ann Arbor?" Well, seems he did anyway...

This computer has been having a number of problems. I assume installing Linux probably voids the warranty, right? :-/ Among them: Well there was that overheating incident... I figure there must also be something wrong with the wireless card or something because every now and then the computer will just refuse to connect[0] to any wireless network and the problem won't go away until I restart the computer. (And even then it often recurs quickly after happening once.) Sound - regardless of the program - often just *skips* on this computer - is this due to a lack of processing power? Of course I'm not getting the computer's full processing power right now due to having accidentally installed 32-bit Linux instead of 64-bit. :P But that can't be it, anyway - it's not like I'm seeing massive (or any) CPU spikes when that happens, so that idea is just totally wrong. Also the computer often fails to recognize that it is or is not plugged in, and if I fail to notice this, you get a case of the computer failing to hibernate when it runs out of batteries. (Oh, have I complained here yet about the design of this thing's keyboard, by any chance?) This bothers me...

I had no idea Game of Thrones would be such a big deal here. I expected it would mostly just be those of us who've read the books watching. Meanwhile the first two episodes have been played like 3 or 4 times each... heh, I forgot about it and made the mistake of signing up for dinner clean tomorrow; I may just start early so they don't overlap... I mean, not that we won't be recording it anyway... I hope I don't have to keep lending out my copies now, already Kayla and Dan have started reading it, because I am going to want to reread them to reorient myself for when ADwD comes out in July...

Board games may be coming back in the house? Last year, second semester, we used to have game nights on Sundays, but at the time I was scribe for the ICC board and so I was always really tired on Sunday nights... not to mention that was the semester when I was shut away in my room most of the time. And then most of the people who played left the house after last year. (I have to say, the new people we got this year are just in general not as cool as the old people we lost after last year. We'll see who we get next year.) But the other day Hannah and Lenora were talking about how we should play board games sometime and - well, I was pretty certain they were not much of gamers and had more of something like Trivial Pursuit in mind - but I was like "hey I have a whole bunch that I don't typically bring out because I'm bad at starting things" and they ask what and I say, come see and I'm like, you know Scotland Yard? And Lenny had played before and I said, we should play some time, and then decided to add, hey, we could play right now (actually I wanted to work on patching up that first snag, but I figured if I didn't start it now it was unlikely to start later) and we found some people (Doug, Angus, Bowyer) and actually managed to start a game. Which did not run perpetually, unlike last time I played (that was at PROMYS, and Watson was one of the detectives...). I was Mr. X, and - well, actually we played two games, because the first game I made a terrible blunder (failed to notice a bus route) and it was over very quickly. Then we played a second game with me as Mr. X, which I managed to win after they all clustered to corner me and I escaped by ferry, leaving them clustered far away from me. Of note: The special "Mr. X" hat my edition came with is important, because if Bowyer is a detective he *will* look at where you're looking on the board to figure out where you are.

I just may have to start playing Magic again, actually. Dunno how it started but suddenly everyone's breaking out their old (or new) Magic decks. Of course I have no cards because, well, I have no idea where my old ones went... oh well. Could go to GYGO for the New Phyrexia prerelease. They've moved, they're on State Street now, much more convenient. :) (Though much smaller...) Kristin is actually working there now.

Well, first things first. Gotta pass topology qual. Heh, I still haven't bothered to register for classes. Ooh, next semester Stembridge is teaching a course all about the combinatorics of Coxeter groups and root systems...

(So much time this summer. Already I figure I should learn how to cook. Another thought: I was thinking about how I don't know enough number theory - maybe a good idea would be to just start reading through Ireland & Rosen...)

-Harry

[0]Naturally I am omitting some detail here.
sniffnoy: (SMPTE)
(OK, this will probably be followable by all of no-one. Oh well; I can explain later...)

So I've drawn up the rewrite tables (all this stuff to be uploaded later when I am not so tired), and at last we can conclude: d(73)+5 is indeed the best candidate for a stopping point, as I hypothesized months ago. (Many thanks once again to Jānis Iraids (and of course Karlis Podnieks as well) for the wealth of numerical data they supplied. Seriously, he computed up to a trillion!) Of course, we can't prove d(73)+5 is the stopping point with our current methods - if we could, it wouldn't be! But I'd say the evidence is pretty suggestive.

One actual mistake that turned up - what we really want, as evidence that 73(3^n+1)+6 is the stopping point, is not just finding an n such that this has complexity 19+3n (which Jānis computed holds for n∈{15,16,17,20,21}; 21 was the largest checked), but finding one which cannot be written most efficiently as a +1 or a product. Unfortunately, of those 5 cases, only n=21 actually exhibits this. (I had to check this by manually plugging in numbers - Jānis wrote his program to prefer additions over multiplications, the reverse of what we did and the reverse of what we need for this. Fortunately, 73(3^n+1)+6 factors as 2 times a prime, so this was a lot less tedious than it sounds!) In any case, finding one that exhibits it is a lot better than not finding any!

Now back to rewriting the draft...

-Harry
sniffnoy: (SMPTE)
Table for 22. Now let's never, ever, ever, do that again. Perhaps sometime I will figure out how to turn the method into an actual algorithm[0] and a computer can do it. But if anyone suggests I go to 23 by hand I'll, uh, I'll... I'll make them do it... and *then* I'll rip off their head and devour their innards.

And yes, as a side effect of having done this we now know that ||2^a 3^b||=2a+3b for a≤31 (a,b not both zero). If anyone cared.

Next up, the table of rewrites... I don't *think* that should take longer than a week, at the very longest... and then to actually rewriting the draft to be more readable...

[0]Actually, a tiny bit of thought has yielded some progress on this, namely, that most of the parts I thought would be hard to turn into an algorithm, aren't really needed. There's therefore just 2 sticky points and I can't claim to have yet seriously thought about either of them, or asked anyone, so, this may be doable. But first to check the whole "73(3^n+1)+6" thing and to rewrite the draft of what we already have.
sniffnoy: (Chu-Chu Zig)
Oh man... finally done with the semester. Now to get some real work done. And to have a much better next semester... have I mentioned, I'm realizing I don't really like algebraic geometry? It's like algebra made annoying.

Maybe time to actually write here some of the things I'd been intending? First I'd have to remember them. All I really remember offhand that I wanted to write is a rant about students not using proper notation. :P

I think I'll write for LW a quick introduction to how we actually use infinities in math and how we actually go about measuring infinite sets. Yeah, I'm sure a number of people there are finitists of one stripe or another. Still I see people making all sorts of confusions whenever infinite things get involved, would be nice to have a basic reference for that.

Definitely time to get the next draft of the integer complexity paper written. Jeff Lagarias and Mike Zieve both said they'd look at it but I have no idea if they've had time to yet. Josh hasn't had time to write up his improved upper bound proof, but in the meantime I can clean up the existing one. I can also (blech) go ahead and compute A_{22d(2)}, so that we can verify rigorously that d(73)+5 is indeed the first point that causes problems for our method...

So who's around now? Mickey is, up in New York, obviously... I expect ADan must be as well. I never mentioned, he actually was in Ann Arbor a few weeks ago, didn't have much of a chance to see him, I had an enormous amount of work at the time.

I brought Flash Duel with me, of course. :) (I hear Tom Vasel gave Yomi a 10/10? Maybe I should get that as well! Unfortunately he appears to have only done an audio, not text, review. Man, it looks like Sirlin is making inroads *fast* in the boardgaming world... not that I actually follow boardgames much anymore (hm, I still haven't played Dominion: Prosperity... I'm sure Nic or Ari must have it by now, right? Hey, Dave and Ari might be around here too! And a good deal closer than New York...), but I still read sirlin.net all the time...)

-Harry
sniffnoy: (Kirby)
Behold!

So now we have an actually more polished draft of our integer complexity paper. It's still obviously incomplete; for instance, all of Josh's upper bound stuff is still missing, aside from the most basic statements! But hey, it's a draft.

Note I didn't include the big table of results from the computation, because even in its short form it runs 4 pages long; that can be found separately here. Unfortunately its short form isn't very enlightening, I don't think, but oh well.

There's one claim in there that we haven't actually fully checked - that there are no other obstacles to the method before 73(3^n+1)+6. Josh and I have at any rate checked all of what seemed to be likely candidates; a full check will be very tedious (to make it work formally, I'll probably have to compute A_{22d(2)}) so I don't think I'll be getting around to that for a while.

And yeah, the conjectures section kind of ballooned out of control and will probably have to be rewritten, maybe even split up.

...oh, wait, you want to know what the hell we've actually done? OK! Here's the quick summary off the top of my head.

Recall: We let ||n|| denote the smallest number of ones needed to write n using addition and multiplication. E.g., ||7||=6 because 7=(1+1+1)(1+1)+1, but it can't be written using only 5 ones. Also recall for n>1, 3log_3(n)≤||n||≤3log_2(n). ||3^n||=3n (for n>0), but it's not known whether ||2^n||=2n for n>0 (or of course whether more generally ||2^n 3^m||=2n+3m for n, m not both zero). Note in general that though the number 3 is relatively well-behaved as far as integer complexity goes, it's still not true in general that ||3n||=||n||+3. We will also define the "defect" of n, d(n)=||n||-3log_3(n).
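(If you want to play along at home, here's a naive little Haskell sketch of my own for computing ||n|| and d(n) for small n -- emphatically not the algorithm from the paper, just brute force over all sums and factorizations:)

import Data.Array

-- t ! m == ||m||: take the best over every sum m = a + (m-a)
-- and every factorization m = e * (m/e). Slow but correct.
cplxTable :: Int -> Array Int Int
cplxTable n = t
  where
    t = listArray (1, n) (map go [1 .. n])
    go 1 = 1
    go m = minimum
        ([t ! a + t ! (m - a) | a <- [1 .. m `div` 2]] ++
         [t ! e + t ! (m `div` e) | e <- [2 .. isqrt m], m `mod` e == 0])
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)

-- Defect: d(m) = ||m|| - 3 log_3 m.
defect :: Array Int Int -> Int -> Double
defect t m = fromIntegral (t ! m) - 3 * logBase 3 (fromIntegral m)

-- e.g. cplxTable 10 ! 7 == 6 (matching 7 = (1+1+1)(1+1)+1),
-- and defect (cplxTable 10) 2 comes out to about 0.107.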

So: New upper bound - Josh has improved the obvious upper bound of 3log_2(n) to K log_2(n), where K = 70/log_2(102236160) = 2.63085..., and we may later be able to get it down to 27/log_2(1260) = 2.6215... .

Classification of numbers of low defect - Define d(n)=||n||-3log_3(n); we classify numbers with d(n)<21d(2)=2.2514..., and use this to show that ||2^n 3^m||=2n+3m for n≤30 and any m (n, m not both zero). We also use this to derive formulae for the r-th highest number writable with k ones, which work so long as k is sufficiently large relative to r.
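(For the record, d(2) = 2 − 3log_3 2 ≈ 0.10721, and 21·0.10721... ≈ 2.2514, which is where that cutoff comes from.)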

Proof of some conjectures of Arias - Juan Arias de Reyna conjectured (essentially) that the set of defects is well-ordered, with order type ω^ω; we prove this fact. He also conjectured that for any n, there exists a K such that ||n3^k||=3(k-K)+||n3^K|| for any k≥K; we prove this as well.

An approach to proving ||n|| is not asymptotic to 3log_3(n) - Well, of course, showing ||2^n||=2n would prove this as well, but that seems to be really hard, so we've found another approach. Let A_k be the set of all n with d(n)<k, and let A_k(x) denote the number of elements of A_k that are less than or equal to x. Then I conjecture that for k a fixed whole number, A_k(x) is asymptotic to (k+1)^(k-2)/k!^2 * (log_3 x)^(k+1). Indeed, I prove that it is Θ((log x)^(k+1)), so the problem is just pinning down the constant. And Josh shows that if we assume a slightly more uniform version of this - one which leaves us quite a bit of wiggle room, in fact - then ||n|| is not asymptotic to 3log_3(n).

Anyway, I'm probably leaving stuff out, so maybe you should just go read it!

(I've also made my "homepage" on umich.edu a tiny bit nicer. :) )

-Harry
sniffnoy: (Sonic)
So today I got an email from one Jānis Iraids (affiliated with this project, which you may recall has come up here before) who has computed the values of ||n|| for all n≤10^12. That's a little over 3 orders of magnitude better than what Arias and Van de Lune computed to. According to the website, this took him 2 and a half weeks[0]. Naturally, he now has quite a few conjectures to check! Website says that already he's checked that ||2^n||=2n for all 0<n≤39.

But what he told me in the email was that he had also - as I had asked Podnieks about some time before - checked the complexities of the numbers 73(3^n+1)+6. And indeed, let it be known: The upper bound ||73(3^n+1)+6||≤3n+19 is tight. Equality is achieved when n=15, 16, 17, 20, or 21, but *not* when n=18 or 19 (a little bit of irregularity that acts as further evidence that this is indeed an obstacle to our method as I hypothesized)!

Admittedly, I never actually fully checked that there couldn't be an earlier obstacle - I was only looking at likely-looking candidates in the first place. Perhaps we should actually go back and do that now, now that we've got this candidate pinned down more concretely.

Man, with this we can probably cut whole paragraphs out of the paper. :) (Although if we do check all those candidates, perhaps we'll have to add in some stuff about how we did so, maybe a table of how to rewrite things...)

This is so cool. Hooray for terabytes of data! (Not to mention, I'm assuming this will all be put up somewhere public, which will allow us to avoid the annoyance of citing the not-publicly-available preprint of Arias and Van de Lune...)

Now for something unrelated! Here's something neat (and, keeping with the theme, numerical) I saw on MathOverflow:

Let χ be the quadratic Dirichlet character mod 7. Then the identity
L(2,χ) = (24/(7√7)) ∫_{π/3}^{π/2} log|(tan t + √7)/(tan t − √7)| dt
holds to 20,000 decimal places. But if it's actually true, well, nobody has a proof. Apparently it comes up in quantum physics somehow. (Note, by the way, that arctan(√7) does lie between π/3 and π/2). Though I must say I have to wonder why it comes up with specifically those numbers; seems like the kind of thing that, if true, would be proven by first generalizing...
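(You don't need 20,000 digits to convince yourself something is going on, by the way. A crude double-precision check -- my own sketch, a truncated L-series and the midpoint rule, so only good to a few digits -- already shows the two sides agreeing at around 1.15:)

chi :: Int -> Double  -- the quadratic character mod 7
chi n | r == 0           = 0
      | r `elem` [1,2,4] = 1   -- nonzero squares mod 7
      | otherwise        = -1
  where r = n `mod` 7

lhs :: Double  -- L(2,χ) = Σ χ(n)/n², truncated
lhs = sum [chi n / fromIntegral (n * n) | n <- [1 .. 100000]]

rhs :: Double  -- the integral, by the midpoint rule
rhs = (24 / (7 * sqrt 7)) * h * sum (map f mids)
  where
    (a, b) = (pi / 3, pi / 2)
    m = 1000000 :: Int
    h = (b - a) / fromIntegral m
    mids = [a + (fromIntegral i + 0.5) * h | i <- [0 .. m - 1]]
    f t = log (abs ((tan t + sqrt 7) / (tan t - sqrt 7)))

-- lhs and rhs both come out to about 1.15.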

-Harry

[0]On rereading this sentence, I suddenly got the mental image of a guy computing these by hand. Oh well, I'll leave it as it is anyway.
