sniffnoy: (SMPTE)
So some time ago I decided I no longer liked that old write-up I did of my proof of formulae for the r-th highest number writable with k ones (so long as k is sufficiently large relative to r). I mean, the proof was fine, but the context was out of date; a lot of the things I bothered to point out seemed silly, for instance, because I didn't have the concepts down very well when I wrote it. And I'm not even using that same notation anymore!

So a few days ago I went and edited it, cut out all that stuff so it could serve as sort of an "appendix" to the paper. Then I realized, well, I want to put this up where people can see it, but nobody will be able to read it in that form. So today I went and put back in some introductory stuff and other parts I had edited out. (Just the relevant parts, of course.) So now that's back up on my little website, here.

I didn't rewrite the proof itself at all, so if that was hard to understand before, it still will be. Still, I think this makes the rest of it a lot nicer.

UPDATE 10/29: The errors Josh points out below have been fixed now.

-Harry
sniffnoy: (Kirby)
Finally! A first draft of most of our integer complexity paper!

No, you can't see it yet.

-Harry
sniffnoy: (Sonic)
OK. Presented without explanation, here is my most recent fix and generalization for the conjecture I previously erroneously stated here and here.

Let ||n|| denote the complexity of n, the smallest number of 1s needed to write n using addition and multiplication. For r≥0, define A_r to be the set of all n such that ||n||-3log_3(n)≤r. (Note: I actually define A_r with strict inequality, but my notation for nonstrict is to put a bar over the A, and I don't want to figure out how to do that right now.) Let A_r(x) denote the number of elements of A_r that are at most x.
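(Just to pin the definitions down, here's a quick illustrative Python sketch - nothing clever, just the obvious brute force, with function names made up for this post - computing ||n|| by the standard dynamic program over sums and factorizations, and then A_r(x) by direct counting:)

import math

def complexities(N):
    """Integer complexity ||n|| for 1 <= n <= N: a shortest expression's
    outermost operation is a sum or a product, so minimize over both."""
    c = [0] * (N + 1)
    c[1] = 1
    for n in range(2, N + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

def count_A_r(r, x, comp):
    """A_r(x): the number of n <= x with defect ||n|| - 3*log_3(n) <= r."""
    return sum(1 for n in range(1, int(x) + 1)
               if comp[n] - 3 * math.log(n, 3) <= r)

comp = complexities(1000)
print(count_A_r(1.0, 1000, comp))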

Let r=k+s, with k an integer and 0≤s<1. Then I conjecture that A_r(x) is asymptotic to C·(k+1)^(k-2)/(k!)^2·(log_3 x)^(k+1), where C is determined as follows:

For 0≤s<2-3log_3(2), C=C0=1.
For 2-3log_3(2)≤s<4-6log_3(2), C=C1=C0+k+1.
For 4-6log_3(2)≤s<6-9log_3(2), C=C2=C1+(k+2 choose 2).
For 6-9log_3(2)≤s<8-12log_3(2), C=C3=C2+(k+3 choose 3).
For 8-12log_3(2)≤s<10-15log_3(2), C=C4=C3+(k+4 choose 4).
For 10-15log_3(2)≤s<5-3log_3(5), C=C5=C4+(k+5 choose 5).
For 5-3log_3(5)≤s<12-18log_3(2), C=C6=C5+k+1.
For 12-18log_3(2)≤s<6-3log_3(7), C=C7=C6+(k+6 choose 6).
For 6-3log_3(7)≤s<7-3log_3(10), C=C8=C7+k+1.
For 7-3log_3(10)≤s<14-21log_3(2), C=C9=C8+(k+1)^2.
For 14-21log_3(2)≤s<8-3log_3(14), C=C10=C9+(k+7 choose 7).
For 8-3log_3(14)≤s<9-3log_3(20), C=C11=C10+(k+1)^2.
For 9-3log_3(20)≤s<16-24log_3(2), C=C12=C11+(k+1)(k+2 choose 2).
For 16-24log_3(2)≤s<10-3log_3(28), C=C13=C12+(k+8 choose 8).
For 10-3log_3(28)≤s<11-3log_3(40), C=C14=C13+(k+1)(k+2 choose 2).
For 11-3log_3(40)≤s<9-3log_3(19), C=C15=C14+(k+1)(k+3 choose 3).
For 9-3log_3(19)≤s<18-27log_3(2), C=C16=C15+k+1.
For 18-27log_3(2)≤s<8-3log_3(13), C=C17=C16+(k+9 choose 9).
For 8-3log_3(13)≤s<1, C=C16+(N-2)(k+1), where N=⌊-log_3(3^((1-s)/3)-1)⌋.

Now is this version, which should finally be correct, good enough to deduce (if it's true, which we don't know how to prove) that ||n|| is not asymptotic to 3log_3(n)? I don't know, Josh'll figure that out...

(Of course we only need the simple version, when r=k is an integer, to determine that, but finally we have a version we can actually *state* in generality, so I wanted to...)

EDIT: D'oh, it's actually possible to do these sums non-horribly. Oh well, I'm not going to do that here, because that would mean more binomial coefficients, which I don't have a nice way of writing here...

-Harry
sniffnoy: (Chu-Chu Zig)
So I went and talked to Lagarias. Basically he told me two things:

1. I should start writing up our work immediately.

2. I should go look up the connection to P-vs-NP for Blum-Shub-Smale machines.

As for the latter, I'm not so certain that there really is such a connection? What Lagarias described was a bit different - straight line computations - which, well, are computations, so once you make a number you can use it again freely. Allows you to make much larger numbers - more like 2^2^n instead of 3^(n/3). To that, he said, well, it's some sort of complexity measure of some sort of computation, so you should at least note that. :P In any case, well, I suppose I should actually go look it up.
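(To illustrate the difference in scale - a toy sketch of my own, not anything Lagarias showed me: a formula built out of 1s with addition and multiplication tops out around 3^(n/3) with n ones, while a straight-line computation can keep reusing its previous result.)

def repeated_squaring_value(n_steps):
    # one addition gives 2; each squaring step reuses the previous result,
    # doubling the exponent, so n_steps steps give 2^(2^n_steps)
    x = 1 + 1
    for _ in range(n_steps):
        x = x * x
    return x

assert repeated_squaring_value(5) == 2**32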

As for the former - well, he also gave me an additional reason to write it up quickly: He's going on sabbatical soon, and no, he's not going to be willing to look at it then. So I've written up a... not really an actual draft, more like a sketch. Lots of stuff is omitted; notation is used inconsistently (is A_k defined with a strict or nonstrict inequality? Does N include 0 or not?); I haven't included any actual references, just "Selfridge showed such-and-such", "Rawsthorne showed such-and-such", "Arias conjectured such-and-such"; but I think it should be good enough for a "give me the basic ideas" document.

No, I'm not going to post it on the web or anything. Like I said... it's really sketchy. Note, for instance, that Josh told me Glenn and Rohrlich said they'd take a look at what we had - but I don't think we're really under any time constraint when it comes to showing it to them. So may as well actually flesh it out a bit first.

But hey! I have something written! That's progress!

(Yeah, I couldn't make the "small intersection" idea work. I'll have to send Josh my thoughts on it, see if he can make any more sense of it...)

-Harry
sniffnoy: (Chu-Chu Zig)
1. I still haven't gotten around to sitting down and working out whether "small intersection" makes sense! I gotta do that, like, tomorrow. Or the day after or so. Meanwhile Josh has already written up a draft of all his upper bound stuff. It's a huge amount to check, so of course I don't intend to, mostly. Meanwhile I'm acting as something of a proofreader. :P I'm glad my enormous computations are ones that can be safely left out of the paper itself...

(Which reminds me - it would be nice if we could refer to that ||a^n-1|| result those Lithuanian guys discovered. I have to wonder what the appropriate way of giving them credit is, if we refer to it? I don't want to somehow "scoop" them, if that idea makes sense, and if they had planned to write about it.)

Lagarias is back now; I guess I should go talk to him at some point? I'm hoping he didn't find my earlier emails totally creepy... I'm bad at this...

2. I ran into Ginny on the street! And she's living in Ann Arbor now! Working at the hospital. That was unexpected!

3. Man, having an office with just 4 people is going to be dull. Am I the only one who likes having a big common office? How are we going to get people for Dominion now? Maybe I really should buy Flash Duel. :)

4. I brought the PowerJoy Voyager with me back to Ann Arbor. Let's hope it confuses some people. :)

5. Unrelatedly, allow me to state the obvious and note that here are some good ways to quickly change a subject: "Unrelatedly"; "Tangentially"; "That reminds me". (I guess maybe not everyone does this, but when I say "that reminds me", I typically just mean, "that has triggered my brain to think about this", and not that there's any clear relation between it and what was just said.) Further statement of the obvious: Illusion of transparency is a bitch.

6. It is not so easy to find a good copy of the SSBB soundtrack on the internet. (Though I did find something including a number of songs that were cut from the game, which is quite interesting.) Which reminds me, I'm starting to like Josh's idea that we should call USB drives "data crystals" to make the present sound more science-fictiony.

7. Unrelatedly, here is a silly conversation with Erick from yesterday.
( Cut because it's kind of incongruous )
(After that, the conversation turned to the Film Board of Canada...)

-Harry
sniffnoy: (Chu-Chu Zig)
(So does posting so much. I guess I'll just have to go silent for a while after this. :) )

...with the tale of Harry and Josh and the Quest for the Unspecified Constant!

SO! When we last left our heroes, they were attempting to prove that neither numbers of the form (2*3^m+1)3^n+1, nor those of the form (4*3^m+1)3^n+1, could be large powers of 2; and had just gotten ahold of this ancient scroll, er, paper, from 1979 on writing numbers to two different bases simultaneously. Contained therein was a proof that (among many other things - see the paper for generality!) the number 2^m, when written to base 3, has at least (approximately; I'm going to be loose here) (log m)/(log log m + C) nonzero digits... where C was some constant, well-known to the ancients but lost to us, that the author hadn't bothered to specify.

Well, that's not great, but it didn't seem *too* bad, as long as the constant isn't too large. So I guess I have to comb through the paper to see just what that is.

The trouble is, the paper was written in a weird, cryptic way that didn't make it at all clear where the constant even came from. Most of the proof was by contradiction. Eventually I puzzled it out: The proof involved choosing an arbitrary constant, c_1, and then splitting into two cases based on whether or not some constraint based on c_1 was satisfied. If it was, you were done, with C approximately equal to c_1. The "if not" case was the rest of the proof; it showed that that case was impossible. But how, when c_1 was arbitrary? Well, it wasn't really arbitrary; it involved defining a bunch more constants, c_2 through c_10, and at one point it used the assumption that c_1>c_7+c_8 - because c_1 was chosen arbitrarily, and none of the other constants depended on it, so we can certainly choose it to be at least that large. So *really* c_1 is not arbitrary, but rather is c_7+c_8+ε.

OK. So now it's just a matter of determining c_7 and c_8, which of course means determining c_2 through c_6 as well... (c_9 and c_10 were irrelevant). Well, I go through all that, and put it all together (though I misread one thing, which turned out to be irrelevant), and it didn't look terrible. It was approximately 1+2C_3, where C_3 (not to be confused with c_3, of course!) was some *again* unspecified constant, pulled in from another ancient text, er, paper.

So, OK. If C_3 is, like, 2, then that's about 5, which isn't great, but is still probably doable, right? As long as C_3 is small.

Go to the library, track it down, open it up... fortunately this one was rather more straightforward about the constant. It was 48600.

...about then I realized that we didn't have to prove that numbers of the forms above couldn't be large powers of 2 in the first place, so a lot of time was wasted but we got the result we wanted. So yay.

(I should note, the proof was based on the theory of linear forms in logarithms; what I had to go to the library to track down was one of Baker's old papers where he set out explicit lower bounds on these. I also took out a more recent book by Baker on transcendental number theory, from about 1990, since it was right near by, in the hope that it would contain a better bound, but no. However, I realize now that I could have saved some time there if I had just looked at Wikipedia - it lists a better, more recent bound from just three years after that book (and links to the paper, too). Unfortunately, it still involves a constant which is in this case 1174136684544*log(6).)

-Harry
sniffnoy: (SMPTE)
OK, guys, time to finally write this. I've written repeatedly about the ridiculously tedious computation I did to classify n with d(n)<21d(2); I've mentioned I started writing a program to do it automatically for d(n)<18d(2), but I never finished it; I've said how, despite this computation being *almost* algorithmic, I haven't been able to figure out how to automate the process in general. And I've said I would post here a full statement of just what it was that I was trying to automate and ask if anyone has any ideas how this can be fully algorithmized. So... time to actually do that.
( Long explanatory background )
( So what's the problem, anyway? )
So... we've got a data representation problem. I'll admit I haven't banged my head against this as much as I probably could have, but still, I don't think I can figure this out myself. Any ideas, guys?

(I *think* I've pretty much covered everything, but perhaps not. Do ask if anything's unclear...)

-Harry
sniffnoy: (Kirby)
So I don't know if I'd mentioned this before. Suppose you take n, and you know that the only powers of 2 with defect less than nd(2) are those that are less than 2^n. Then you can conclude that ||2^a*3^b||=2a+3b for all a≤n+9. (Barring a=b=0, as always.)

Now I'd classified all numbers of defect less than 21d(2), so we should be able to prove this for a≤30, right?

Thing is - well, it's easy to check an individual number is not a power of 2, but how about an infinite family? (Just that it contains no *large* powers of 2, of course.) Say it's numbers of the form c*3^m+1 (c fixed). Well, one thing you can do is to note that c has to be 5 or 7 mod 8, which eliminates lots of things right off the bat. But then for what remains, you can note, if c*3^m+1=2^k, well, 2 is a generator mod every power of 3, so 2*3^(m-1) | k, in particular 2*3^(m-1)≤k, and so you've got exponential≥super-exponential, so you can get some pretty strict bounds on a and m (so for given c, you only need check finitely many m, and vice versa).
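(Here's a rough Python sketch, purely my own illustration, of the finite check this trick gives for a single fixed c - it leans only on the order-of-2 observation above; the mod-8 condition is just a pre-filter and is left out here:)

def is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

def powers_of_two_in_family(c):
    """All m >= 0 with c*3^m + 1 a power of 2, for fixed c. If c*3^m + 1 = 2^k
    with m >= 1, then 2^k ≡ 1 (mod 3^m), and since 2 generates the units mod 3^m,
    2*3^(m-1) divides k; so c*3^m + 1 >= 2^(2*3^(m-1)), which fails for all
    sufficiently large m."""
    hits = []
    m = 0
    while True:
        n = c * 3**m + 1
        if is_power_of_two(n):
            hits.append(m)
        # once 2^(2*3^(m-1)) exceeds c*3^m + 1, it does so for every larger m too
        if m >= 1 and 2**(2 * 3**(m - 1)) > n:
            break
        m += 1
    return hits

print(powers_of_two_in_family(7))   # [0, 2]: 7+1 = 8 and 7*9+1 = 64
print(powers_of_two_in_family(5))   # [1]: 5*3+1 = 16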

OK, that was simple. But once you get to 19d(2), you have to deal with a more general sort of infinite family, (3^n+1)3^m+1. Well, that's not a problem, that doesn't work mod 8. Or even mod 2, for that matter. OK, let's go to 20d(2). Now you have to deal with (2*3^n+1)3^m+1. (Also 2(3^n+1)3^m+1, but mod 2 takes care of that.) This is a bit more problematic - mod 8 doesn't cause a problem, and indeed, this *can* equal a small power of 2 (n=1, m=2 gets you 64), so no sort of modular arithmetic will suffice on its own. OK, how about the above trick? But here c=2*3^n+1 can be arbitrarily large, so that no longer suffices.

Hm. OK, let's set that aside for now. Go on to 21d(2). Aside from stuff that doesn't work mod 2, we also have (4*3^n+1)3^m+1. Same problem as before. Now here we don't know any examples where this is a power of 2, so maybe some sort of modular arithmetic can prove this one? It's certainly not obvious.

So here we should have had this true for a≤30, but instead only had it true for a≤28. Josh comes up with some more modular arithmetic constraints, and then I do a computer search of some more, but they don't resolve the problem. I feel like there has to be some approach we're missing.

I think, you know what would be convenient? If there were some lower bound on the number of nonzero digits that had to appear in ternary expansions of powers of 2. That would provide a nice quick solution. Josh says he thinks he's seen something like this in Unsolved Problems in Number Theory, but it turns out to not be quite relevant[0]. (There's a conjecture that every sufficiently large power of 2 must contain a 0 when written in ternary, but that's kind of the opposite of what we're looking for!) Searching the internet turns up a preprint of Lagarias actually titled "Ternary expansions of powers of 2", but again it doesn't talk about this problem. (Apparently Erdös conjectured that every sufficiently large power of 2 contains a 2 when written in ternary, and it concerns a generalization of this.) Couldn't find anything else quite relevant.
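(For what it's worth, the quantity in question is trivial to compute; here's a short Python sketch of mine counting the nonzero ternary digits of 2^m, i.e., the thing such a lower bound would have to control:)

def nonzero_ternary_digits(n):
    # count the nonzero digits in the base-3 expansion of n
    count = 0
    while n:
        if n % 3:
            count += 1
        n //= 3
    return count

print([nonzero_ternary_digits(2**m) for m in range(1, 16)])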

So I decided to ask on MathOverflow. I thought perhaps I would generalize the question, to nonzero digits of powers of a, written to base b (assuming no power of a is a power of b), but decided against it, on the basis that it seemed these sorts of problems were hard even in specific cases. Well, I mentioned the generalization in the question, but the actual question itself was just about the specific case.

Well, within a day, two people pointed me to this paper, which does indeed prove it in generality! In quite a bit more generality than I had even thought of, really. (To recover what I wanted, just plug in, uh, what I called "a" for a, and 0 for α.)

The bound's kind of small, so we may have to check quite a few cases, but that should be easy, and hey! It's a bound! It should be doable! And it also means that if we wanted to continue further, past 21d(2), this could, in principle, handle whatever cases arose (though possibly with exponential difficulty).

So, yay.

...assuming we can figure out how to compute that constant, anyway. The paper doesn't seem to make that immediately clear, and with the small numbers we're working with, it'll probably be relevant...

EDIT 2 hours later: Yeah, we're now seriously considering the possibility that this is going to be too small to actually be able to use effectively. Still, unless we come up with some other ideas, we have to check it out...

-Harry

[0]Of course, it must seem a bit odd to be discouraged that what you're looking for doesn't appear in Unsolved Problems in Number Theory. After all, there's a good chance that just means the problem is solved.
sniffnoy: (SMPTE)
This is where things start to look complicated. Let's see if they actually are.

Once again, wherever Arias has written βω, I'm going to assume he meant ωβ. (Do some people use the reverse convention for ordinal multiplication, or something?)

Google Translate seems to be having a little more trouble with these ones, but I think with some common sense I can figure out what's meant.

Note - here I'm just transcribing what Arias wrote (except for the ordinal multiplication switch), so that means that for now we're back to his a,b,c, not the ones I used last time.

Well, no, one more thing stands out as "obviously a mistake", so I'll fix that too - in the last line of these conjectures, he talks about a_γ and a_β in all three. I'm going to assume that was supposed to vary appropriately. I'm also going to multiply by 1/3 in the last case, as it doesn't seem to make sense otherwise.

Conjecture 9 (fixed): The numbers of the sequence b_{ωβ+n} that converge to a_β=b/3^a (with ||b||=3a) are the numbers of the sequences p(q3^j+1)/3^(a+j), where b=pq and ||p(q3^j+1)||=3a+3j+1, and those terms of the sporadic succession 2^(3j+2)/3^(2j+1) which are contained between sup_{γ<β} a_γ and a_β.

Conjecture 10 (fixed): The numbers of the sequence c_{ωβ+n} that converge to b_β=b/3^a (with ||b||=3a+1) are the numbers of the sequences p(q3^j+1)/3^(a+j), where b=pq and ||p(q3^j+1)||=3a+3j+2, and those terms of the sporadic succession 2^(3j+1)/3^(2j) which are contained between sup_{γ<β} b_γ and b_β.

Conjecture 11 (fixed): The numbers of the sequence a_{ωβ+n} that converge to c_β/3=b/3^a (with ||b||=3a-1) are the numbers of the sequences p(q3^j+1)/3^(a+j), where b=pq and ||p(q3^j+1)||=3a+3j, and those terms of the sporadic succession 2^(3j)/3^(2j) which are contained between (1/3)sup_{γ<β} c_γ and c_β/3.

...hm. So part of this is just a formalization of what I was saying yesterday, and part of it appears to be, well, wrong. I could reformulate these in terms I would prefer, but I'm not going to, because I get what these are saying so I don't think that's really necessary.

For the rest of this, I'm just going to consider #9, as the rest should be analogous, but I don't want to think about how to modify appropriately.

Let's start by summing up: He's saying that the convergents to b/g(||b||) (times the appropriate fraction) discussed previously are some sporadic stuff, plus precisely what you get from the infinite families p(q*3^j+1), where pq=b, and (additional requirement). Now, we're going to immediately need an additional requirement, that ||b||=||p||+||q||, i.e., that this representation is minimal; otherwise, stuff won't line up. But I think his next requirement might cover this - he requires that ||p(q3^j+1)||=3j+||b||+1. Which looks stronger offhand; is it? A quick check says "I can't tell". Well, whatever. So far this is basically a formalization of the correspondence I mentioned before.

Now before we continue it's worth noting that while I do expect this correspondence to hold up down in the regime of +1s, I see little reason to expect it to continue to hold once +6s and such become relevant. Maybe conjecture 8 is still true, but it's pretty doubtful. I was talking to Josh yesterday and the conclusion seemed to be that this would be a good problem to not work on. If it's true, we don't have the tools to prove it; and if it's false, a disproof would require absurd amounts of computation (barring the automation that I still haven't talked about here), if we even had the tools at all.

Anyway, back to 9. 9 says the relevant numbers are those infinite families, plus some sporadic stuff. What sporadic stuff? Powers of 2. Well... that part just isn't right, except at low defects. *searches through files to find a counterexample* ...OK, let's take 32, with its corresponding sequences 32*3^k+1, etc. ||32||=10, add 1, that's 2 mod 3. Now, the range condition on those sporadic things - if you think about it, and apply conjecture 8, it's just saying, "stuff in between this collection of infinite families and the previous one of the same complexity mod 3". In this case, the latter would be 4*3^k+1, etc. So let's look between those two at stuff with complexity congruent to 2 mod 3, and check whether it all either is a power of 2, or actually falls into one of 32*3^k+1, etc.

First up is 320, but that's 32(3*3+1). Next is 35, which is... which is... none of those. It's not even, and it's certainly not 32*3^k+1, but there it is anyway. So that part of conjectures 9, 10, 11, is false. (Technically, this is only a counterexample to one of them; but IINM, 35, 70, and 133 together make counterexamples to all three. And there should be plenty more.)

As for the rest? Well, that's what I described above... although, seeing as the conjectures as written are trying to write down *all* the numbers in that range, removing the second part, well, offhand I'm not sure how you'd formalize the result. I guess just say that those numbers approach it, and everything else is bounded away from it?

So, conclusion:
1,3,4,5,6,7 are resolved (either proved or salvaged).
2 is unresolved.
The parts of 9, 10, 11 dealing with what the sporadic stuff is are false. The rest can be folded into 8.
8 is unresolved, but too hard to handle without additional tools.

So the point of all this is that it just leaves #2 as something to possibly work on. Now enough of this.

-Harry
sniffnoy: (SMPTE)
...you know what? I don't need super-detailed ordering info below, say, 18d(2). So let's start with what looks reasonable, and worry about the details later.

I'm going to stick with my assumption from last time, that he meant ωβ rather than βω, because the latter is kind of silly if, say, β=2. Also, I'm just going to say a,b,c instead of a',b',c', because I can redefine things if I want to. :)

So! Table time![0]
 β     a_β     b_β     c_β   lim(a_{ωβ+n})   lim(b_{ωβ+n})   lim(c_{ωβ+n})
 ---------------------------------------------------------------------------
 0       1       1       1        2/3             3/4             2/3
 1     8/9     8/9     8/9       16/27            2/3            16/27
 2   64/81     5/6     5/6        5/9            16/27            5/9
 3     7/9   64/81   64/81      128/243           7/12           128/243
 4   20/27     7/9     7/9       14/27            5/9            14/27
 5   19/27   41/54   20/27       40/81            ???             ???

OK, so far it works. But, had I been paying attention beforehand and thinking, I would have realized, of *course* it works at these low values of β. Why? Because, if the n'th number appearing in the sequence for, say, 0 mod 3, is m, then the n'th collection of infinite families appearing in the sequence for 1 mod 3 will be m*3^k+1 and similar. And similarly for 1 mod 3 and for 2 mod 3.

So, this is good reason to suspect that it works, but it's less clear that this will work when β is no longer finite. I guess the question now is, can we formalize this in a more general setting? Have we developed enough tools for this? Maybe, but I don't know.

Status: I guess I've now officially begun thinking about this! I guess I'll hold off on 9, 10, and 11 until I've actually thought about this some.

(Actually, while making the table, I realized the reason why of course it works after 2 or 3 rows, and then got a nasty shock when I made row 5 and found that it didn't work! I was very confused about this, as I was certain I had very good reason that it should work. Fortunately, I eventually realized the problem was having swapped rows 2 and 3, because I was copying them from somewhere where I had listed the 2 mod 3 values before the 1 mod 3 values.)

-Harry

[0]Reminder: Counting the leftmost column as column 0, we have that column 4 is supposed to be 2/3 of column 3, column 5 is supposed to be 3/4 of column 1, and column 6 is supposed to be 2/3 of column 2.
sniffnoy: (SMPTE)
OK! Let's do this. Or, rather, let me comment on our lack of having done this.

Arias precedes these last four conjectures with the comment, "What follows is more tentative and is based on only a few cases." Which makes them more interesting!

Conjecture 8: For all ordinals β<ξ - well, we know what ξ is, so we can say, for all ordinals β<ω^ω -

lim_{n→∞} a_{βω+n} = c_β/3 ;
lim_{n→∞} b_{βω+n} = a_β ;
lim_{n→∞} c_{βω+n} = b_β

(Reminder: He's using a_α for the 0 mod 3 sequence, b_α for the 4 mod 3 sequence, c_α for the 2 mod 3 sequence. Also reminder: So far as I am concerned right now, the number 1 is not 4 mod 3, but rather is a special case to be excluded.)

(He notes afterward that this conjecture is the basis for his claim that ωξ should equal ξ - a rather different route than we arrived at it! We observed that it looked like defects less than n should have order type ω^n, so you just get ω^ω as the limit of those.)

Thought: Did he mean ωβ, instead of βω? That would seem to me to make a lot more sense.

What I think he intended conjecture 8 to be: For all β<ω^ω,

lim_{n→∞} a_{ωβ+n} = c_β/3 ;
lim_{n→∞} b_{ωβ+n} = a_β ;
lim_{n→∞} c_{ωβ+n} = b_β

Now this is interesting! But before I comment on the status of this - even though you can guess what that'll be - let's reformulate it. You'll notice it's not fully symmetric - it's got two 1s and a 1/3. Do I intend to make it symmetric? No. But two 1s and a 1/3 looks off to me - the familiar pattern that turns up in integer complexity is not 1, 1, 1/3, but rather 2/3, 2/3, 3/4. (Or 3/2, 3/2, 4/3, if you prefer.)[0] Now, as I am writing this, I have not actually *done* the reformulation I intend, so I don't know if that's what'll come out of it, but I'm betting it will.

What is the reformulation I intend? Well, like I said earlier, I don't like Arias's choice of denominator. So, allow me to define a'_α, b'_α, and c'_α, which will of course be exactly the same except using a denominator of g(||n||) instead of 3^⌊||n||/3⌋. Much more natural, isn't it? So a'_α=a_α, but b'_α=(3/4)b_α and c'_α=c_α/2. So, doing that, we get:

Reformulated version of above: For all β<ω^ω,

lim_{n→∞} a'_{ωβ+n} = (2/3)c'_β ;
lim_{n→∞} b'_{ωβ+n} = (3/4)a'_β ;
lim_{n→∞} c'_{ωβ+n} = (2/3)b'_β

...yup, that's 2/3, 2/3, 3/4, just as expected.
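(Where the conversion factors come from, spelled out - just my own check, using the standard values for the largest number writable with a given number of ones, g(3a)=3^a, g(3a+1)=4*3^(a-1), g(3a+2)=2*3^a:

For ||n||=3a+1:  b'_α = n/g(3a+1) = n/(4*3^(a-1)) = (3/4)*(n/3^a) = (3/4)b_α
For ||n||=3a+2:  c'_α = n/g(3a+2) = n/(2*3^a) = (1/2)*(n/3^a) = (1/2)c_α

so, e.g., lim_{n→∞} c'_{ωβ+n} = (1/2)lim_{n→∞} c_{ωβ+n} = (1/2)b_β = (1/2)*(4/3)b'_β = (2/3)b'_β, matching the last line above.)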

Hm, now I'm curious - what would we get if we put it in terms of denominator 3^(||n||/3)? Probably nothing nice, but let's do it anyway - after all, we have formulated pretty much everything in these terms so far, so it may make things easier. We'll have a''_α=a_α, b''_α=3^(-2/3)b_α, c''_α=3^(-1/3)c_α. Put this way, the conjecture becomes:

Reformulated (again) conjecture 8: For all β<ω^ω,

lim_{n→∞} a''_{ωβ+n} = 3^(-2/3)c''_β ;
lim_{n→∞} b''_{ωβ+n} = 3^(-2/3)a''_β ;
lim_{n→∞} c''_{ωβ+n} = 3^(1/3)b''_β

...aw, I was hoping it would come out symmetric, all factors of 3^(-1/3). Evidently not. I suppose I could figure out how to do that, but there's really no way that's worth my time. Interestingly the one here that's different is the 2-4 boundary, rather than the 4-0 boundary. (Well, Arias had the 0-2 boundary as the one that's different, but that's because he formulated it badly. :) )

In any case, what's the status? Well... we haven't worked on this. At all, to my knowledge. But hey, we have some tables - I kept track of ordering up to 16d(2) before I decided that was too tedious - so let's check if they satisfy this! Hopefully I bothered to write down ||n|| mod 3...

...crap, no, I didn't. Well, that will make this rather tedious. OK, first things first, I'll make a version of the table that includes that, then check it, and then come back and report on the results, though I can, at least, tell you off the top of my head that it's true for β=0 (and I doubt Arias would have conjectured it otherwise...)

-Harry

[0]These correspond to the 0-2 boundary, the 2-4 boundary, and the 0-4 boundary, respectively, but since I guess things really go 0-4-2, I guess that should be 2/3, 3/4, 2/3. Well, it's a cycle, so it doesn't matter.
sniffnoy: (SMPTE)
Why am I doing this, by the way? Because I found myself unable to sit down and think about any of these conjectures, because, well, they were in Spanish, and I wasn't sure what we had already done on them! So, I'm writing that up so I can actually get to work on them.

Again, I'm just going to apply Google Translate, because these don't appear to be that complicated.

Thought on notation: I started using the notation "L(n)" for min_{m≥n} ||m|| because, well, it's logarithmic. (Earlier I was using ugly stuff like f-hat or f-twiddle.) Perhaps a better name for what I currently call "g" would be "E", for exponential? (Recall: By g(k) I mean the largest number writable with k ones.) Well, I've been using "g" so far in this series, so I'll continue with that.

Also note: The rest of Arias's conjectures all deal with his sequences a_α, b_α, c_α[0]. But these sequences as he defined them don't actually exist! Fortunately, it seems pretty clear that even though we've redefined these sequences, these are the "same" sequences he was thinking of. Well, mostly, anyway - you'll see what I mean in a moment. So, all his further conjectures about these sequences will be taken as being about our equivalents...

5, 6, and 7 are really just one conjecture, split in three. First, he makes a definition: Let A be the set of all natural numbers n such that ||3^j n||=3j+||n|| for all j.

Then, he makes conjectures 5, 6, and 7: The sequence a_α consists of the elements of the set {n/3^(||n||/3) : n∈A, ||n||≡0 (mod 3)} sorted in decreasing order. The sequence b_α consists of the elements of the set {n/3^((||n||-1)/3) : n∈A, ||n||≡1 (mod 3)} sorted in decreasing order. The sequence c_α consists of the elements of the set {n/3^((||n||-2)/3) : n∈A, ||n||≡2 (mod 3)} sorted in decreasing order.

Status: Well, um... gee. We *redefined* these three sequences to be those, except letting n range over all of N, instead of just in A. So, by our definition, this is false as written but easily salvaged (just change A to N). But, perhaps you could use this to argue that we *aren't* thinking of exactly the same sequences as Arias after all, and that we *should* have restricted to A, making it *true*.

Alternatively, you could note that this is another instance of Arias noting that ||3n||=||n||+3 isn't always true, but failing to realize the implications, and therefore we can't really meaningfully speculate on which definition he "would have meant" had he realized this, and it's best to just note the truth and not worry about exactly what the mistake might have been. After all, if the sequences existed as he defined them, then they *would* meet our definition. They'd meet this definition too, of course, because then we'd be in a world where ||3n||=||n||+3, and 1=0 etc, but they would quite straightforwardly meet our definition.

So, uh... yeah. I don't intend to change our definitions. Excluding numbers not in A would mean excluding possible things you can do with certain numbers of threes, which doesn't seem right, to my mind.

Well, that was pretty trivial (given what we know so far). So far I've gone through 1-7, and only #2 is something we haven't solved. I'll come back later and write up 8; I'm pretty sure it, and the remaining three after it, are things we haven't considered.

-Harry

[0]Question: Does anyone know a nice, short, word for a transfinite sequence? Whether indexed by all ordinals or just some set of ordinals? I've never heard one and I get the idea there isn't one. Because "sequence" typically suggests an N-indexed sequence. Well, in any case, the context makes it clear.
sniffnoy: (SMPTE)
In truth, I don't think the fact that this is in Spanish would be a real problem, were it not for the fact that he was looking at the problem slightly differently from us.

Since these two conjectures are ones we've salvaged and proved, this is basically just going to be a review of stuff we've already done. But it may make explicit some stuff I've left implicit before.

Note on notation: While I'm using Arias's ||n|| notation, I'm still going to use 'g' and 'L' to mean the things I have before... Arias uses these to mean rather different things. (What I call 'L', Arias calls 'g', and what he calls 'L' is something we haven't had any use for. I don't think he gave a name to what we call 'g'?)

...actually, now that I run this through Google Translate (annoyance: accented chars don't copy/paste correctly from the PDF, need to enter those manually), and can read the motivation for the conjectures, the mistakes make a little more sense. ("The mistakes" that I've seen so far being basically that even though he noted that ||3n|| and ||n||+3 aren't always equal, he didn't really take this to heart, and conjectured several things that would require this to be true.)

He was looking at the highest numbers writable with n 1's, and the largest few *do* obey this law, as I showed[0]. Interesting to see the different perspectives we originally came at the problem from. When I was doing my original "enormous tree" approach, I honestly didn't even think to look at the actual data, since I already had the method...

Anyway! Let's get on with this. I'm going to consider these two together. These are coming mostly straight from Google Translate.

Conjecture 3. There are three transfinite sequences of rational numbers (a_α)_(α<ξ), (b_α)_(α<ξ), (c_α)_(α<ξ), such that the (greater) numbers of complexity 3n (respectively 3n+1, 3n+2) are the (first) natural numbers contained in the sequence (3^n a_α) (resp. (3^n b_α), (3^n c_α)).
ξ is an infinite countable ordinal such that ωξ = ξ.

Conjecture 4. The three sequences are decreasing. The denominators of each term of a_α, b_α, or c_α are powers of 3.

Status: OK. So. Before I point out why it's false as written, I'd like to reformulate conjecture 3 a little. As he wrote it, it bugs me a little. First thing I'd like to do is say, we should replace "3n+1" with "3n+4". In this context, {0,2,4} is a much more natural set of representatives for Z/(3) than {0,1,2}. This excludes the number 1 from consideration - which it should. 1 is a special case. Similarly, it bugs me that he used 3^n, 3^n, 3^n for the multipliers, rather than 3^n, 4*3^(n-1) [sticking with his use of 3n+1 instead of 3n+4 here], 2*3^n.

The second thing I'd like to do is, as long as I'm going to point out why this conjecture is false, let's turn it into a bolder conjecture!

Hypothetical bolder version of conjecture 3: There is a transfinite sequence of rational numbers (a_α)_(α<ξ) such that the (greater) numbers of complexity n are the (first) natural numbers contained in the sequence (g(n)a_α). Etc.

Of course, I'm going to guess that the reason that he didn't make this hypothetical bolder version, is that it's false even for the low defects Arias was looking at. (Though back when I was programming this, due to a bug in my program, it took me quite a while to notice this!) Why? Suppose it were true. Then consider the number 82. ||82||=13, and g(13)=4*3^3=108. So the term in the sequence giving you 82 would be 82/108=41/54. But 54=2*3^3=g(11), so 41=g(11)*(41/54) would show up in the sequence for g(11), meaning 41 would be writable with 11 ones. Which it isn't; ||41||=12. The problem is that ||2*41|| isn't equal to ||41||+2. (Note - this is not the minimal counterexample; that would be 2*23=46. But it is the counterexample of minimal defect; 23 and 46 have way higher defect than what Arias was looking at.)

Splitting it into three sequences avoids this problem... but unfortunately there's no way to avoid the problem that ||3n|| isn't always equal to ||n||+3. As written, the conjecture is false. Suppose it were true and consider 321. ||321||=18, 18=3*6, so 321/3^6=107/3^5 would lie in the sequence of aα's. But then 107=3^5*(107/3^5) would have to be writable with 15 ones - which, again, it isn't; ||107||=16. In general, if true, this conjecture would imply that ||3n||=||n||+3 for all n. As I said above - he may have noted that this point is false, but he didn't really take the lesson to heart, evidently[3].
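(Incidentally, the specific values used in the last two paragraphs - ||82||, ||41||, ||107||, ||321||, and the 46=2*23 example - are easy to double-check with the obvious brute-force dynamic program; a quick illustrative Python sketch, nothing official:)

def complexities(N):
    # ||n|| for 1 <= n <= N: a shortest expression's outermost operation
    # is a sum or a product, so minimize over both
    c = [0] * (N + 1)
    c[1] = 1
    for n in range(2, N + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

c = complexities(400)
# the values quoted above: ||82||=13, ||41||=12, ||107||=16, ||321||=18;
# also ||23||=11 and ||46||=12 (not 13)
print(c[82], c[41], c[107], c[321], c[23], c[46])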

Before we get to salvaging this, let's actually look at conjecture 4. The second part of it, about the denominators being powers of 3, I don't understand why he put that there, because that's true by definition. (Though it would be modified to "power of 3, two times power of 3, or four times power of 3" if we made the modifications above.) Anything whose denominator wasn't a power of 3 could be safely excluded from the sequence. So, I'm just going to ignore that.

But OK, how can we salvage this? The whole idea of this sequence where you can multiply by a power of 3 (or perhaps a 2*3^n or 4*3^n) through to find what numbers have a given complexity, is simply wrong. Get rid of that and you're just left saying that a countable set can be enumerated by an ordinal, which doesn't say anything at all. Fortunately, we can turn it nontrivial again by combining it with #4: The requirement that the sequence be decreasing. Then what you're saying is, this set (or these sets) are reverse well-ordered. Though, I see more than one possible way to salvage this... let me list three of them.

Salvage of 3 and 4, version 1: The sets {n/g(||n||) : ||n||≡0 (mod 3)}, {n/g(||n||) : ||n||≡2 (mod 3)}, and {n/g(||n||) : ||n||≡1 (mod 3), n≠1}, are reverse well-ordered.

(OK, excluding n=1 doesn't actually do anything there, it works perfectly well with 1 included, I'm just excluding it out of principle. :) )

This is closest to Arias's version, in that it's split in three; note that changing his 3^⌊||n||/3⌋ to a g(||n||) doesn't affect anything, as it's just a constant factor applied to each of the three sets. Indeed, instead of changing to g(||n||), we could also just get rid of the floors there and use 3^(||n||/3). Same statement.

But hey, since ||2n||≠||n||+2 is no longer really a problem for us, why split in three at all? Let's merge it all back into one! The resulting statement will be equivalent - well-ordering is preserved under subsets and under finite unions.

Salvage of 3 and 4, version 2: The set {n/g(||n||) : n∈N} is reverse well-ordered.

But like I said, instead of g(||n||) in the denominator in the first version, we could also put 3^(||n||/3)... let's merge those sequences, instead. It's equivalent, yes, but let's do it anyway.

Salvage of 3 and 4, version 3: The set {n/3^(||n||/3) : n∈N} is reverse well-ordered.

Well, these statements are all equivalent, and as we recently proved, they're all true.

I could include a version 4, based on 3^⌊||n||/3⌋ like Arias did, but I see no reason to do so. Why include version 2? Because it's the most natural way to state it, I think, considering the problem. But then why include version 3? Ah, well, that would be because that's how we actually proved it. :) We've been talking about the set of defects being well-ordered, but if you haven't noticed, you take d(n)=||n||-3log_3(n), you raise 3^(1/3) to that power and you take the reciprocal (reversing the order), and the notions are equivalent. Fundamentally, 3^(n/3) is multiplicative, whereas g(n) isn't, and that makes the former a lot easier to work with.
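(Spelled out, since the identity is a one-liner - this is just restating the above:

n/3^(||n||/3) = 3^(log_3(n) - ||n||/3) = 3^(-d(n)/3) = (3^(1/3))^(-d(n)),

so comparing the values n/3^(||n||/3) in reverse order is the same as comparing the defects d(n) in the usual order.)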

Somewhere along the way here I started ignoring the question of just what the (obviously countable) ordinal in question is, though. Arias called it ξ and conjectured that it satisfied ωξ=ξ. This is true as well - in fact, the order type of all five of the sets I've listed above is exactly equal to ω^ω. (Though equivalence here is not as obvious, and SFAICT relies on actually knowing some facts about these sets.)

Finally, to recover at least some of what Arias originally conjectured, we can note that if we stick to the split-in-three-sets version, his idea is in fact true if we stick to indices less than ω². (That's not the absolute limit here, but I don't really think being more specific than that is worth my time, and this way it's true regardless of which of the three sets we're considering.)

ADDENDUM next day for clarity: That is to say, using the three-sequence formulation, at indices less than ω², if you multiply through the numbers you get will in fact have the appropriate complexity. And they will be in decreasing order, seeing as how you took a decreasing sequence and multiplied through by a constant.

Later ones will come... later. Then we'll be getting into ones we really haven't considered. I think you can see why I'm stopping here for now.

-Harry

[0](Actually, thought: My computations show this is true for n with D(n)≤2, but because the boundary between D(n)=2 and D(n)=3 doesn't depend on what ||n|| is mod 3, that means we can rewrite that statement more simply, as follows: If n>g(||n||)/3, then ||3n||=||n||+3. (And thus ||3^k*n||=||n||+3k.))
[3]This *is* the minimal counterexample; the counterexample of minimal defect would be 683*3=2049.
sniffnoy: (SMPTE)
OK. In this post I'm going to write Arias's conjectures in English, and state what we know about them. For most of them the status will of course be, "We haven't even considered this question." Note that I don't actually know Spanish, but these all seem pretty clear anyway, as they're almost entirely in math notation; Conjecture 3, the one that wasn't, I asked Hunter, he knows some Spanish, he confirmed it said what it looked like. (Well, his commentary on the conjectures is of course in Spanish, but I'm going to just ignore that.)

Unrelatedly, I'm starting to like Arias's notation of ||n|| better than Guy's notation f(n), because I often want to call other functions f. That shouldn't matter here, but I think I'll switch to that for this.

Conjecture 1. For every n, there exists a such that ||3^j n||=3(j-a)+||3^a n|| for every j≥a.

Status: This is true, we proved it quite a while ago.

Conjecture 2. For all natural numbers p and q, there exists a such that ||p(q3^j+1)||=3j+1+||p||+||q|| for every j≥a.

Status: As stated, this is easily false, for two different reasons. The first is that it runs into the problem that ||3n|| is not always ||n||+3. For instance, ||107||=16, but ||321||=18. So if we pick q=107, we'll find that for j≥1, ||p(107*3^j+1)||≤3(j-1)+18+1+||p||=3j+15+1+||p||<3j+16+1+||p||. The second reason is that it has a problem if p=1. In that case, including ||p|| in the sum is clearly a mistake. Fortunately, these are both easily salvaged. (Including ||q|| if q=1 could also be considered a mistake, I suppose, but it's a submistake of the ||3n|| vs ||n||+3 mistake.)

Salvage of conjecture 2: For all natural numbers p and q, p>1, there exists an a such that ||p(q3^j+1)||=3(j-a)+1+||p||+||3^a q|| for all j≥a.

Obvious generalization of salvage: For all natural numbers p and q, there exist a,b such that ||p(q3^j+1)3^k||=3(j-a)+3(k-b)+1+||3^b p||+||3^a q|| for all j≥a and k≥b.

(Note that this generalization does *not* follow immediately from conjecture 1 (theorem 1?) + salvage, because the former does not give any bounds on how far out you have to go! And neither does our proof of it. So there's an additional uniformity condition here.)

Status: We know many special cases of this to be true from my computations, but as for the general case, we haven't really worked on it. I have an idea or two but I don't really expect them to work. I also have an idea or two for how to generalize this further - however, I also have computations showing the most obvious generalization is false! Probably salvageable, of course, but I'm guessing we should work on this before worrying about that...

...and, taking a look at these again, I'm realizing, whoops, these are a *lot* less clear than I'd realized. 3 I understand because I asked Hunter about it; 4 is all in Spanish and not so clear; 5-6-7 (really all one thing) aren't so clear either. 8 is entirely clear, but 9-10-11 again I'm not so sure of either.

OK, that was silly! I'm going to stop here for now, then, because I don't want to do 3 without doing the subsequent ones.

-Harry
sniffnoy: (Kirby)
I mean, if cardinals are red, I guess ordinals can be blue, right?

Blue Kirby: Someone thought to buy Smash 64 for the Virtual Console for the house's Wii. I think Dan was a bit surprised I was so good at it. :D I realize I'm not by the standards of old Tufts House - I haven't played in quite a while, I keep failing to smash, I never learned to Z-cancel - but I'm certainly good against the people here. Much better than at Brawl, that's for certain. Need to play more, remember more. Took me a bit to remember how to play Falcon.

Obvious-but-funny-anyway-observation: The controls have been modified to match those of Melee. Hence Z-cancelling would now be done with the L button. I don't expect anyone to start referring to it as "L-cancelling", though.

Blue Water Fangs: I was down at GYGO and saw they had the Xbox version of Metal Slug 3 for cheap, so I picked that up. Hooray, Metal Slug 3 with a non-ridiculous number of continues. Heh - this TVTropes page refers to the Xbox version as a "porting disaster" because of this. Bah! If you can't beat a level in five lives, you aren't beating it properly, I say!

Well-ordering: Let's talk about integer complexity. I finally finished classifying n with d(n)≤21d(2), which means I can classify n with D(n)≤1, and... it's terrible. I decided to not even go through and finish it; it may be easy, but it's still not worth it. {n: D(n)=0} has a nice form, so I guess that's worth noting, but {n: D(n)≤1} really doesn't, so... why bother? (OK, I guess I can't say this for *certain* as I didn't finish, but it seems pretty clear.) Well, at least the statement that D(3n)=1 ⇒ D(n)=1 is kind of nice.

But! Stopping work on that means I can actually get back to work on interesting stuff. A few days ago I had the idea to, y'know, actually formalize the notion of certain types of infinite families of integers we'd been using implicitly the whole damn time, and, well, it's been very helpful. For instance, we've been able to finally prove that the set of defects is well-ordered! (It has order type ω^ω.) I was hoping to prove that the set of defects less than n had order type ω^n, but the best I have so far is that it's at least ω^n, and less than ω^(n+1). Still - well-ordering! And this is basically the LHF[0]! I definitely think we can milk this concept for a bit more.

Maybe we should contact Arias again and tell him we've proved well-ordering? Then look over his paper more carefully and see if we can prove any more of his conjectures. :D (Well, that we should do anyway...)

-Harry

[0]OK, I suppose nobody who wasn't in Tufts House the past few years is going to recognize that acronym[3]. I don't think I've mentioned Nathan's strange acronym-slang anywhere here previously, have I? Well, this one stands for "low-hanging fruit".
[3]Actually, it does show up on acronymfinder.com, so I suppose there are indeed other people out there using them. But I've never seen it elsewhere personally.
sniffnoy: (Chu-Chu Zig)
I finally (after how many months?) got around to going down to GYGO and picking up a replacement A/V cable for my Dreamcast. (Unrelatedly, there actually seem to be people playing board games down there during the summer - and now I actually have free time! Maybe I should go see about that.) The very next day Doug decided to do a cleanup of all the cables lying around the TV, and, guess what, we found the old A/V cable for it, sitting on top of the television. So, uh, now we have two Dreamcast A/V cables. In case, y'know, someone else brings a Dreamcast, but doesn't have an A/V cable, and we want to use both at once. Y'know. Just in case.

So I don't think I mentioned that we now have a copy of Metal Slug Anthology for the Wii, which means, yes, Metal Slug 3. But unlike the Xbox Metal Slug 3, this is emulating the arcade versions of the games... and the options are rather restricted. The only options for how many continues you get are "limited" and "unlimited". Let's say I play Metal Slug 3, on easy, with "limited" continues. It gives me 30 of the things. Maybe that goes down if I set the difficulty higher, but, well, I don't want to do that. Now for what it's worth, unlike when playing the Xbox version, these continues are only 3 lives, not 5. But also unlike it, you don't have to restart the level when you use one. Which is much more important than the number of them - of course I could beat the game, given just 25 lives! If I still really knew the game, of course - I don't remember it so well. With 60, though, it should just be trivial. There's like, no tension. It just isn't fun. I did see a copy of the Xbox Metal Slug 3 at GYGO - maybe I should go see how much it costs...

In the "news I feel compelled to report even though I have a hard time caring about it" department, Sega is delaying Sonic 4. Considering how for many years Sega's business plan seems to have been "Make shitty games that are rushed to market", this is good news. But I still have a hard time caring about it. They've also announced a strange-looking game called "Sonic Colors", which... eh, go look it up. It's undeniably strange.

OK! Now we get to the original main point of this post. Let's talk about integer complexity. After a semester of having no time, followed by several weeks of procrastinating, I finally sat down and completed the classification of numbers n with d(n)≤20d(2). I'm going to do one more iteration of this, to get to 21d(2), because that gets me to 2+2d(2) and hence allows me to classify all numbers n with D(n)≤1. Will it be anywhere near as nice as my classification of n such that D(n)=0? Probably not. Will it be terribly tedious to do? Yes. But it's so close... earlier I was considering going to 22d(2), because along the way we should find the lowest-defect instance (other than the trivial example of 3) of a number n with n=3k, f(n)≠f(k)+3, but that would just be too tedious. We can demonstrate that without computing out the whole thing, if we can just get to 21d(2) - in fact, we probably have enough to demonstrate that already! And no more than a bit of trivia anyway. (For the record, the number in question is 2049.)

So what am I going to do then? Well, for one thing, I could start thinking more seriously about how to automate this process, so we can truly push it as far as it can go. Every time I've thought about this before it's looked like one giant programming clusterfuck with no obvious solution. If I intend to do this, I'm not going to be able to do it alone; when I get to that point, expect me to post here a call for help with problem details, exactly what process I'm trying to automate, why it seems hard, etc.

But I should probably also at that point start writing things up. Josh wants to move into the writing-things-up phase; he's apparently written up a draft of his 2.639log_2(n) upper bound (that's how far he's gotten it). Though I suppose there's not much to write up - writing up the theorems involved will just require going back over Josh's and my old interpersonal communications and extracting the relevant parts, fixing the errors, and adding our more recent developments, describing the procedure I used for hand computation, and then appending the big table. Maybe make a note on the older "enormous tree" method that you'll recall I wrote up earlier, but which we don't intend to put in our final version due to our newer methods mostly obsoleting it. Hm, I guess I'd need to write up a base case or two as well, to get the method started - we used the results of the "enormous tree" method as base cases here, but if we're not going to use that we'll have to prove the first few inductively.

But we're not out of ideas yet! As I said, I do want to at least try looking into automating this process, and meanwhile Josh has a new idea for bringing his upper bound all the way down to (5/2)log_2(n). We'll have to see whether it works out.

OK! Now for the strange part! The other day Josh IMs me with the question, "Do you know who this is and what language it is in?", linking to this site. It discusses integer complexity, and mentions me by name, linking to what stuff I had put up on the web, and to my LJ - strangely, it does not mention Josh by name, despite mentioning old upper bound results of his. (Hm, I really should get around to tagging the integer complexity entries - if I've actually got guys looking through here to find out about it, they must be pretty annoyed!)

Josh notices that the domain name ends in ".lv" and hence it's presumably Latvian. OK, that's one mystery solved. Now - who are these people? I comment how fragmented this seems - we're working on this in English, there's the stuff Arias has written on it in Spanish, and now these guys in Latvian... (which reminds me, I never did sit down and try to read much of Arias's paper. Later I should also go back and take a better look through the literature, now that I have a better idea as to how I might find some of it.)

Eventually, Josh makes the suggestion - could this be some Latvian PROMYS-like program? And I have to say this seems pretty likely. A lot of their ideas seemed decidedly, uh, implausible. They did have one result we hadn't seen elsewhere, though: A neat upper bound on f(2^n-1), showing that the amount by which it exceeds 2n is at most logarithmic in n. Though, they hadn't extracted the maximum strength possible from their proof. Neither had they thought to generalize it to f(a^n-1) (the proof does generalize, though the bound gets a little weaker). I won't state what it is here in case they are by some chance reading this, as it's pretty straightforward and they should work it out for themselves. :) (If we want to write about this, I do have to wonder what the proper way to credit them would be...)

Also, in case they are reading this:
Hello! My collaborator's name is Joshua Zelinsky; he is responsible for those general upper bounds you cite! He is also responsible for many of the ideas that I am currently using to classify numbers of low defect! Also, your upper bound on f(2^n-1) can be strengthened, and also generalized to f(a^n-1) (in a somewhat weaker form!).

I don't find it very likely that they'd actually still be reading this, but, may as well. For good measure, I should probably translate it into Latvian, too! Of course, I don't know Latvian, and I'm not sure I know anyone who actually does, so as a poor substitute I will run the above paragraph through Google Translate. Not that they were incapable of doing that themselves, but... ah, hell, I admit it, I don't have any real justification for this other than my own amusement. So, here is the above paragraph, in Google's version of Latvian:

Hello! Mans līdzstrādnieks vārds ir Joshua Zelinsky, viņš ir atbildīgs par šo vispārējo maksimālo robežas jums citēt! Viņš ir atbildīgs arī par daudzām idejām, kas es esmu pašlaik izmanto, lai klasificētu skaitu ar zemu defektu! Arī jūsu augšējā robeža attiecībā uz f (2 ^ n-1) var stiprināt, kā arī vispārējas līdz f (a ^ n-1), (jo nedaudz vājākā formā!).

Hm. Running it back through Latvian -> English, it calls Josh my "assistant" rather than "collaborator". Yikes. Oh well...

-Harry
sniffnoy: (Kirby)
Time required: 45 hours and 15 minutes.
Disk space used: 15944324947 bytes, or about 14.8493 gigabytes.
Conclusions drawn: ...hell if I know, it just finished!

Addendum, 2 hours later: OK, I think we can forget about the database idea... I think I've got most of what I wanted from it.
sniffnoy: (Golden Apple)
Slight edit: Fixed mistake in "Only a few minutes ago..." paragraph.
Later edit: Oops, I misremembered how the program works - it does print things out as it calculates them, rather than writing it all to disk afterward, which is why the rate of growth has slowed.

So lately I've been thinking about, this method I've been using to classify numbers of small defect, at what point does it break down? And I think I have a pretty good idea, though checking it fully would be tedious. (I also don't intend to actually carry the computation that far unless I can find some way of automating it, because it looks like it's going to get considerably worse than I earlier realized.)

But I wanted to know more. A few months back - well, more than a few months back at this point - Josh had emailed Richard Guy to ask about the status of what was known about integer complexity; in his reply, Guy mentioned that there was a counterexample to the conjecture that f(p)=f(p-1)+1 for p prime, and listed it: Take p=353942783, then f(p)=63=f(p-1). His source for this was "The question 'How many 1's are needed?' revisited" by J. Arias de Reyna and J. van de Lune, which could be found... nowhere. It was a preprint, and it doesn't seem to be on arXiv or the authors' webpages. Google doesn't turn it up either.

But I wanted more information about this counterexample, as I figured it could help determine to what extent I was right about at what point the procedure broke down. So Wednesday morning I emailed Richard Guy and asked him where I could find this paper.

But meanwhile, there was another check on my idea that I wanted to make, one which also required more data than we currently had. So, why not. Let's set the computer on it. Now, how far do I want to go?

Well, let's think big, why not! Let's go all the way up to 2 to the goddamned 29th.

Now you may notice something: 2^29 is bigger than that counterexample! So what was the point of emailing Richard Guy and asking him where I could find more information about that counterexample, if I was just going to go ahead and compute it anyway? I'm wondering that myself.

I set the thing to go at 10 AM, just before leaving for algebraic topology. So the computation has now been running for over 24 hours (surprisingly, without slowing the computer down that much; I mean it's slow, but still quite usable). At some point I began to regret writing the program to output everything at once at the end, rather than outputting things as it found them, so that if I grew impatient I could have at least cut off the computation and seen what I had gotten. But I didn't, so I just had to wait it out.

I don't remember when it was that I noticed the file I had set it to write to had become nonempty - i.e. the computations were done and it was just time to write it all to disk. But it sure wasn't recently. So, the file it's writing? As of now it's 9.9 gigabytes and counting. Yes, gigabytes. Makes you think I should have figured out beforehand how big it was going to get, huh?

Only a few minutes ago did I actually think to pull out the calculator and compute just how big this thing will be when it's done. I don't have an exact number, of course, but here's my upper bound: 25 gigabytes. That's right. And this thing has been writing to disk - or more accurately, I'm sure, copying things from one place on the disk to another, as this program must use enough memory to fill my physical memory several times over - for quite a large portion of its runtime. And when it's done, I'll have a 25 gigabyte file clogging up my hard drive. And I won't care about most of it, and it'll take a long time to grep it for the parts I do care about. Shit, I only have 30GB free (and that's as of now); this file could take up as much as half my remaining disk space. Sure, 15GB is enough that I'm unlikely to be running into problems anytime soon, but seeing as I've somehow managed to use 75GB so far, the day will come when I'll need to actually clear out my hard drive, and, well, deleting this file will *be* clearing out my hard drive. I won't even need to do anything else. It'll get overwritten *fast*, its corpse providing food and shelter for smaller, nimbler files for years to come.
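(Purely for scale - and the output format here is my guess, not anything stated above - if the program writes something for every n up to 2^29, the 25GB cap works out to around 46 bytes of text per number:)

N = 2 ** 29            # numbers computed, assuming one record per n
cap = 25 * 10 ** 9     # the 25 GB upper bound, taken as 25*10^9 bytes
print(N)               # 536870912
print(cap / N)         # ~46.6 bytes per number, on average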

(Man, remember when a 1GB hard drive was a big deal? That was what my family's first computer had. I'll be honest, I'm really not too clear on how I've used up that much disk space in the first place. Of course, copying over everything from the old computer sure helped...)

Anyway, hopefully I do get a copy of that paper, as more information always helps, right? They may well have done more analysis, or prioritized different things in their computation. But yeah... I think after computing up to 2^24, I said I wasn't going any further. Guess I was wrong about that, but I *won't* be going past 2^29. Why? Because I don't think I can go very far past it before my computer doesn't even have enough memory to run it in the first place, making it very much not worth it.
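(To put rough numbers on the memory point - and this is pure back-of-envelope guesswork on my part, not a description of how the actual program stores things - even a flat table with one small integer per n gets big fast around 2^29:)

for N in (29, 30, 31):
    entries = 2 ** N
    for bytes_per_entry in (1, 4, 8):
        gib = entries * bytes_per_entry / 2 ** 30
        print(f"2^{N}: {gib:.1f} GiB at {bytes_per_entry} byte(s) per entry")
# 2^29 is 0.5 GiB even at a byte per entry, 4 GiB at 8 bytes; doubling N
# once or twice more quickly outruns a typical machine's RAM.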

So, this was pretty dumb. Oh well. Numbers are good.

-Harry
sniffnoy: (Chu-Chu Zig)
OOPS! Remember that conjecture from earlier? Well, it's false, but not because something unexpected came up; just because we were not being careful in formulating it in the first place. Instead of number of partitions of n, what we want is number of rooted trees on n+1 nodes. Is this still good enough to prove f not asymptotic to 3log_3, if it's true? Josh says probably.
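For concreteness, here's a quick sketch (mine, not anything from our write-ups) of computing the number of rooted trees on n nodes - OEIS A000081, the count the corrected conjecture refers to - via the standard recurrence a(n+1) = (1/n) * sum_{k=1..n} (sum_{d|k} d*a(d)) * a(n-k+1), with a(1)=1. This is just the counting function, not the conjecture itself.

def rooted_trees(nmax):
    a = [0] * (nmax + 1)
    a[1] = 1
    for n in range(1, nmax):
        total = 0
        for k in range(1, n + 1):
            s = sum(d * a[d] for d in range(1, k + 1) if k % d == 0)
            total += s * a[n - k + 1]
        a[n + 1] = total // n
    return a

print(rooted_trees(8)[1:])  # [1, 1, 2, 4, 9, 20, 48, 115]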

-Harry
sniffnoy: (Chu-Chu Zig)
Yuck, yuck, yuck.

I'd introduced several speedups into my computation and so recently I made it as far as my program would have gone (up to 18d(2)) had I fixed it. But, the natural stopping points are still quite a ways away. Run 1 more step and I've gotten to 2; run 3 more steps and I've gotten to 2+2d(2) and classified everything with D≤1. Those are nice benchmarks, but it feels silly to stop when we *have* identified a place where the method probably breaks down, and that's not it. (Yay brains. If we haven't identified a place where we're forced to stop, then stopping becomes arbitrary; if we have, then we'd better keep on going till we make it there, hadn't we?)

The natural stopping point we've identified is when numbers of the form 19(3^n+1)+6 become relevant. Yes, yes, we can still probably approach it arbitrarily closely, but A. I'll worry about that when we come to it, B. that's an arbitrary decision, and C. OK, I lied about A., and in fact I do think I have a good way of picking just how "arbitrarily" close I want to go. Maybe. Not sure. I guess I'll have to see when I come to it. OK, I guess A. was true after all.

Basically, there are two steps here: we want to know what numbers have defect less than k. So, first we generate candidates via the theorems we've proved; then we go ahead and compute their complexity, so we can tell which ones actually have defect less than k. (The actual process of computation is a little different, but...) The first step, of course, can't break down; we've proved it works. But we could get families of numbers whose complexities we don't know how to compute. In particular, we don't see any way to compute it for 19(3^n+1)+6, so: stopping point.
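To illustrate just the second, filtering step - using a brute-force complexity computation as a stand-in for the candidate-generating theorems, which I'm not reproducing here - here's a small sketch: compute f(n), take the defect f(n) - 3*log_3(n), and keep what falls below the threshold.

import math

def complexity_up_to(N):
    f = [0, 1] + [0] * (N - 1)
    for n in range(2, N + 1):
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

def low_defect(N, threshold):
    f = complexity_up_to(N)
    return [(n, f[n] - 3 * math.log(n, 3)) for n in range(1, N + 1)
            if f[n] - 3 * math.log(n, 3) < threshold]

# At threshold 0.2 only defect 0 (the powers of 3) and defect
# d(2) = 2 - 3*log_3(2) ~ 0.107 (the numbers 2*3^k) survive.
for n, delta in low_defect(200, 0.2):
    print(n, round(delta, 4))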

Actually, originally I was thinking the stopping point would be (3^n+1)(3^m+1)+6, but then Josh showed how we could actually handle that case. Meanwhile I had failed to notice that the (earlier!) 19(3^n+1)+6 *does* seem to give us actual problems. For instance, if our current methods sufficed to handle it, then I'm pretty sure it would have to have complexity of the form k+3n for n sufficiently large; but I've computed it up to about n=12 or so and f(19(3^n+1)+6)-3n still hasn't stabilized - considering "sufficiently large" shouldn't be more than 4 or so, I'd call that pretty good evidence.

It is of course possible that I've missed an even earlier stopping point... I didn't feel like checking all the possibilities, just the ones that seemed likely. We'll see when we get there.

If, of course, we get there. Because this computation is taking forever.

Remember I was so excited when Josh proved we could actually use step sizes bigger than d(2)? How that proved that conjecture we had? And then, when it came to concrete computation, I didn't actually use it?

That was because I figured that the speedups I had introduced into my calculation recently meant the step size didn't much matter. Previously I was recomputing a lot each step; now I'm precomputing heavily. So more steps no longer means more recomputation, hence fewer steps doesn't mean increased speed, right?

Well, now I'm thinking that's not so. See, larger steps mean fewer infinite families broken up across step boundaries, which means less time re-consolidating them for my summary chart afterward - and fewer unnecessary special cases appearing in the first place. So a rethinking of things might be in order.

Now in theory we can use any step size less than 1, but my computations have stricter requirements than that; step sizes have to be less than 1/2, and just how much less is limited by how far we've already gone. (So yes, in theory I could keep increasing it as we went on, but remember, changing step sizes is damned inconvenient.) So far we know enough to use a step size of up to, by my previous calculations, about 0.38 or so. (In fact, I think we could probably go larger; there are some things I failed to take advantage of when computing that.) So I figured, well, let's go with 1/3, that's a nice, round number. And so last night I sat down to start over with step size 1/3, and after a bit I realized: Yuck. This isn't going to be nice.

The problem, I realized, was that using a step size that's not an integer multiple of d(2) invalidates one of the computational tricks I'd been using. This trick isn't necessary, but it certainly makes things easier. So, perhaps 3d(2) instead? Or perhaps take another look at that supposed upper bound and see if I can use 4d(2)?

Actually, now that I think about it, using a step size that's just an integer multiple of d(2) would be really convenient - it would just mean taking what I've already done and consolidating every 3 steps! (Or 4, if I can manage that.) Yay, writing stuff down is helpful. (I just came up with this while writing this.) OK. New plan: Reevaluate how large a step size you can use and pick the largest integer multiple of d(2) that's less than that. Potential danger: Once you switch to 3d(2), it'll be hard to switch to 4d(2) as 3 doesn't divide 4. (5d(2) is more than 1/2 and so is just out of the question.) Perhaps, if I don't have enough information to switch immediately to using 4d(2), I should just keep computing with d(2) until I do. Depends on how much it requires, though, I guess. Meanwhile I suppose I could immediately switch to 2d(2).
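For what it's worth, here's the step-size arithmetic spelled out, reading d(2) as the defect of 2, i.e. d(2) = 2 - 3*log_3(2) - that reading is my assumption about the notation:

import math

d2 = 2 - 3 * math.log(2, 3)
print(d2)       # ~0.1072
print(3 * d2)   # ~0.3216 -- safely under the ~0.38 bound
print(4 * d2)   # ~0.4288 -- under 1/2, but over ~0.38, hence the re-check
print(5 * d2)   # ~0.5360 -- over 1/2, so indeed out of the question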

Meanwhile, I have to think that the *correct* solution is to automate! But figuring that out just looks to be a mess. Or, you know, not do this by myself. But passing it back and forth sounds like another way to take longer. Oh well; if I can use 4d(2), that's pretty good, I figure. Using that would move reaching 19(3^n+1)+6 from the 38th step to the 10th.

ADDENDUM: After a bit of examination, it turns out I *do* know enough to be able to use 4d(2) after all! Yay!

LATER ADDENDUM: ...unfortunately using higher step sizes will make checking for exceptions harder. Crap. Not sure what I should prioritize here.

-Harry
