sniffnoy: (SMPTE)
Since I don't think I've stated it before explicitly here, here's the final form for what is really, I think, the main thing I've proved:

Let f(n) be the smallest number of ones needed to write n using addition and multiplication.

A number n is greater than all numbers of strictly lower f, if and only if it can be written as either 2^a 3^k for some a,k with a≤10, or it can be written as 2^a(2^b 3^m+1)3^k for some a,b,m,k with a+b≤2.
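
(For concreteness, here's a quick brute-force Haskell sketch of f and of these "record" numbers - names mine, not the actual program, and far slower, but enough to check small cases:)

    -- Quick brute-force sketch, not the actual program: fs !! (n-1) is f(n),
    -- the least number of 1s needed to write n using + and *.
    fs :: [Int]
    fs = map f [1 ..]
      where
        f 1 = 1
        f n = minimum
            (  [ fs !! (a - 1) + fs !! (n - a - 1) | a <- [1 .. n `div` 2] ]
            ++ [ fs !! (d - 1) + fs !! (n `div` d - 1)
               | d <- [2 .. n - 1], d * d <= n, n `mod` d == 0 ] )

    -- Largest number writable with exactly k ones (memoized the same way).
    maxVals :: [Integer]
    maxVals = map m [1 ..]
      where
        m 1 = 1
        m k = maximum
            (  [ maxVals !! (a - 1) + maxVals !! (k - a - 1) | a <- [1 .. k `div` 2] ]
            ++ [ maxVals !! (a - 1) * maxVals !! (k - a - 1) | a <- [1 .. k `div` 2] ] )

    maxVal :: Int -> Integer
    maxVal 0 = 0
    maxVal k = maxVals !! (k - 1)

    -- Numbers greater than every number of strictly lower f.
    records :: Int -> [Int]
    records nMax = [ n | n <- [1 .. nMax]
                       , fromIntegral n > maxVal (fs !! (n - 1) - 1) ]

The maxVal comparison works because maxVal is increasing, so the largest number of complexity at most k-1 is exactly maxVal (k-1).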
sniffnoy: (Kirby)
OK. My initial writeup of this is *finally* done. I hope it's clear enough. Well, I'll show it to Josh before anything else. Interesting thing: When compiling it, TeX gave me several "Underfull \vbox" messages - but not all the badnesses were 10000! Take a look:

Underfull \vbox (badness 10000) has occurred while \output is active [6]
Underfull \vbox (badness 10000) has occurred while \output is active [7]
Underfull \vbox (badness 2213) has occurred while \output is active [8]
Underfull \vbox (badness 5316) has occurred while \output is active [9]
Underfull \vbox (badness 1546) has occurred while \output is active [10]
Underfull \vbox (badness 10000) has occurred while \output is active [11]

Is 10000 just what it caps badness at, or something?

(Oh, and that modification to the code I made? Made absolutely no difference in the output at all.)

-Harry
sniffnoy: (Chu-Chu Zig)
...so when writing up the description of the algorithm, I of course referred to my code to remember just how it worked. And, guess what - writing it up and comparing it to what I actually wrote, it looks like there's an error in the code. It shouldn't affect anything - it deals with addition - but I'd better go back and recompute things again, to be sure.

...actually, thinking about it, it definitely doesn't affect anything. Still, it'll make the proof easier if I fix it.

-Harry
sniffnoy: (Chu-Chu Zig)
Still writing up my result. First I wrote up the easy parts; then I spent a bunch of time rearranging the easy parts to make more sense; then I spent a bunch of time editing that so that it flows better. Now I'm writing the hard parts. I'd say there are three hard parts, of which I am almost done with the second (describing how I've computed g_r(k) without the abstraction).

It occurs to me that I really haven't checked the articles in Guy's email to Josh about this - since Guy didn't mention anything that seemed relevant, I assumed there wasn't anything, but I should really check... and now that I've changed my point of view on what I've actually proved, maybe there was. It would be pretty annoying if someone else had proved the same thing but never used it to show that f(2^18 3^m)=36+3m, and so it wasn't mentioned... I should also check the references on Sloane; it has some sequences related to f in it. (And you can be sure that once this is done, I'm going to submit this sequence of numbers. By which I mean, the numbers n such that f(m)<f(n)⇒m<n, which are given by the two forms I've listed earlier. Because that sequence is not in there. Which I guess does suggest nobody has done this before. :) )

Lie algebras is cool, alpha[0] topology is boring so far, teaching 105 is hard; but you probably guessed all that.

Regarding topology: Yeah, I have to take alpha topology. When it was QR signup time, I figured of course the algebra, I'll try my hand at the topology, and I won't even bother with the analysis. Well, algebra was no trouble. Topology... it's Saturday morning, I sit down with the first half of the test, realize I don't know half this stuff, and decide "screw it, I'm going back to sleep".

Regarding 105: My second lesson was a real disaster because I didn't prepare properly, but that hasn't happened since. My boardwork still really needs work. You know what else I'm terrible at? All the non-teaching parts of the job, or the parts that require explaining things other than mathematics (course policies, etc.). To the students it's crucial, but to me it's quite secondary, and I tend to forget about it when not explicitly reminded. Now on Fridays they have me down in the MathLab helping kids individually, and there, of course, I feel much more at home. None of my students have shown up to my office hours yet, though, and I'm inclined to think that's a bad thing.

...enough math for now.

So Aubrey[3] left for Australia for some solar car competition this Monday, leading to a game night on Sunday. Played Shadows over Camelot. If you haven't heard of it, it's a co-op game, with the possibility that one player is a traitor. The way that's determined is, there are 8 loyalty cards, and each player gets one; one of the loyalty cards reads "traitor". So with the maximum of 7 people, which we had, there's probably a traitor. Well, guess what. We had no traitor, and we had 7 people, and we still only barely pulled off a win... I guess I see why I'd seen people saying on BGG[4] that they often play without one. I guess co-op games have to be hard by nature, huh? In the same way that one-player games have to be hard. It's not really something I'm used to outside of video games. Oh, and I should note, we also didn't realize that the rulebook said that you shouldn't discuss the specific contents of cards in your hand/secret cards that you'd seen, so we had another advantage that way. I think that's a bit silly; what's the point of limiting player communication? I suppose the point is to actually make it a co-op game, with different roles. Basically because people can communicate so much, in a board game, that's pretty hard, and I'm inclined to think it's not really something very well-suited to the boardgame format at all. Video games can accomplish this pretty easily, by contrast. In any case, I would think better than telling the players not to communicate is to, you know, give them some incentive not to, such as by having a traitor... or multiple, if one is not enough... (mind you, I have no objection to rules against people revealing their stuff to prove what they say, just to rules limiting what people can say.)

Still, on the whole, Truth House is kind of boring so far compared to Tufts House, i.e. people don't seem to be around as much. I'm guessing the fact that the lounge isn't the center of the house like in Tufts House contributes to that... we also have very few videogames in the lounge, which is kind of sensible, as there have been some break-ins... but right now we just have the house Wii (which is bolted to this frame which is... yeah) with pretty much just Brawl for it, and my Dreamcast (as who's going to steal a Dreamcast?). Still, I've seen other games being played; just nobody is willing to leave them down there.

So I found the third game store, the one I saw on the way in; it too isn't too far away. And it's pretty cool - it has old used video games for lots of old systems; there were Atari 2600 games there. No Dreamcast though. :-/ It's called "Get Your Game On". And, get this, they're hosting a Zendikar prerelease. Maybe I should go? Maybe this is a time for me to get back into Magic? I'll admit, though, that I haven't actually computed how much money I'll have - this first month I've been really just living off of what I have stashed away without too much thought; soon I'll have to actually start figuring out just how much disposable income I actually have. But hell, maybe it's worth going to this anyway. Of course, knowing me, there's a good chance the correct answer is "No, you shouldn't go buying cards, because that would require finding people to play against, which would probably largely mean going to organized events, and you're too lazy for that."

I swear, there was something else I wanted to mention, the same thing I forgot about last time - I know this because I remembered it briefly; but I've forgotten it again.

I haven't started playing Kongai again. The internet connection here is really pretty inconsistent. I guess Kyle[5] has appointed a network steward now, no? Not that he'd likely know immediately just what's going on, as I think he's a first-year here too, but oh well. It's certainly the sort of thing he should be trying to fix...

Of course, why do I think I'll have so much time? I'm teaching 105, and largely I've had time so far because only now have I had to begin actually grading things, not to mention my own homework which has basically just begun...

-Harry

[0]Alpha sequences: At Michigan there are basic sequences in algebra, topology, analysis, for grad students. Unlike at Chicago, these really are pretty basic, and certainly not mandatory... but you do have to take them if you don't pass the appropriate qualifying review. Well, or you could just take the QR again later, but if you didn't pass the QR, you probably need to take them... or at least if you did as badly as I did... see above.
[3]Mentioned him briefly before - another math PhD student, who lived in Truth House, and plays a lot of boardgames, and does parkour, and is generally pretty awesome, but will not be here this semester.
[4]I really rarely visit BGG anymore...
[5]The work manager, responsible for assigning people's jobs. This is the same Kyle who actually plays in Brawl tournaments. There is another Kyle, who is, IINM, the network steward I was just speaking of.
sniffnoy: (SMPTE)
What I've really done is classify which numbers that can be made with k+1 1s are higher than any number that can be made with k 1s.
sniffnoy: (Kirby)
...barring any errors in the program, anyway. And some of its output was a bit suspicious. Regardless, I'm pretty certain of it by this point, and failing all else I can always check it all by hand, which would be a bunch of case checking, but not nearly as much as it would have been previously. I really doubt any mistakes it may have made are ones that would affect the final result.

So, here we go: Let f be the function Josh and I have been working on; let k∈N. Then if k is not of either of the following forms:
A.) 2^m 3^n for m≤10
B.) 2^a(2^b 3^n+1)3^m for a+b≤2
Then the following holds:
1.) If ⌈3log_3 k+(5-6log_3 2)⌉≡0 (mod 3), then f(k) is at least this quantity.
2.) If ⌈3log_3 k+3(1-log_3 2)⌉≡0 or 1 (mod 3), then f(k) is at least this quantity.
3.) f(k)≥3log_3 k+1.

(Note that 1, which gives the largest bound, is only a tiny improvement over 3, with a difference of only about 0.21 - and of course this whole exercise is such a tiny improvement on the standard bound in the first place! - but it can matter. For instance, say we want to prove f(2^18 3^n)=36+3n; the last inequality isn't going to cut it. Note that as far as 2^a 3^b goes, these bounds are only good enough for a≤18, not 19; 19 was a mistake.)
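
(In case it's useful, here's a little Haskell helper - names mine, nothing from the actual write-up - that just evaluates the three bounds numerically for a given k, before any check of the exceptional forms:)

    -- Helper of my own (hypothetical names): for a given k, list which of
    -- the three bounds above apply and what each one says f(k) is at least.
    lowerBounds :: Integer -> [(Int, Double)]
    lowerBounds k =
        [ (1, fromIntegral b1) | b1 `mod` 3 == 0 ] ++
        [ (2, fromIntegral b2) | b2 `mod` 3 == 0 || b2 `mod` 3 == 1 ] ++
        [ (3, 3 * log3 x + 1) ]
      where
        log3 = logBase 3
        x    = fromIntegral k :: Double
        b1   = ceiling (3 * log3 x + (5 - 6 * log3 2)) :: Int
        b2   = ceiling (3 * log3 x + 3 * (1 - log3 2)) :: Int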

EDIT 9/12: There was a typo in the lists below; instead of "4*3^n+1" I wrote "4*3^n+4".

Furthermore, not all the exceptions listed above necessarily are exceptions; the completely reduced lists of numbers for which 1, 2, and 3 fail (where the condition is in fact met, obviously) are:
1.) 16*3^m, 128*3^m, 1024*3^m, (4*3^n+4)*3^m for n≥1, (4*3^n+2)*3^m for n≥1, (4*3^n+1)*3^m.
2.) 2*3^m, 8*3^m, 16*3^m, 64*3^m, 128*3^m, 512*3^m, 1024*3^m, (2*3^n+2)*3^m for n≥1, 40*3^m, (2*3^n+1)*3^m for n≥1, 14*3^m, 38*3^m, 5*3^m, 13*3^m.
3.) 2^n*3^m for n≤9 (except when k=1), (3^n+1)*3^m, 20*3^m, 40*3^m, 7*3^m, 19*3^m, 14*3^m, 38*3^m, 5*3^m, 13*3^m.

And that's all, folks!

...OK, OK, at some point I'm going to have to start writing up a proof. But really! There it is!

Yaaaaaay!

-Harry
sniffnoy: (Chu-Chu Zig)
Well, I will, but not as soon as I thought. I said I had made a big error, that I wrote "2/3" where it should have been "3/4" in one of the cases - but I mean, my numerical data had pointed towards 2/3 in all cases. So, of course, I had to go back and investigate my program and see where things were going wrong. Turns out, there's a pretty basic mistake in the algorithm that... well, suffice it to say that I'm going to have to modify one of my data structures some, and, well, yeah, back to programming nonobvious stuff. (Basically, looking at products of things, I used "is product of less things" when I should have used "is smaller in numerical value". I still don't intend to use the latter, because that would complicate things immensely, but, yeah, this can be fixed.) And then I have to investigate another, similar, mistake I made, and see how much has to be changed there. It's annoying, because that one's going to be harder to fix, but it's also a case that shouldn't actually come up. Still, I think it'll be easier to fix the program than to directly prove that that case doesn't come up.
sniffnoy: (Dead face)
...did I say my proof worked for a≤19? I meant 18, apparently. Did I say 3log_3(3/2)? I meant 3log_3(4/3) (minus the exceptions - though the better bound is still often true). Oops. (Still have yet to prove *this*, mind you, but I'm pretty certain.)

-Harry
sniffnoy: (Dead face)
So, does it generalize like I said? Oh, it generalizes alright. What this method ultimately tells me is that, pretty much, for all n except those of certain exceptional forms, f(n)>3log_3((3/2)n). In other words, I took the standard lower bound, and added a goddamned constant. Not even a large constant - 3log_3(3/2) is about 1.107. If you want to be more discriminating, you're occasionally instead adding 2+3log_3(3/4), or 4+3log_3(3/8), which are about 1.214 and 1.322, respectively. So yeah, this is just enough of an improvement on the standard lower bound to prove f(2^a3^b)=2a+3b for a≤19. But man, this is way suckier than I earlier realized. For a few days I thought I might have gotten a handle on the general case of 2^n, but that's basically the opposite of what is true.
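
(Quick sanity check of those constants - just arithmetic, nothing deep:)

    -- Double-checking the constants quoted above.
    deltas :: [Double]
    deltas = [ 3 * logBase 3 (3/2)        -- ~ 1.107
             , 2 + 3 * logBase 3 (3/4)    -- ~ 1.214
             , 4 + 3 * logBase 3 (3/8)    -- ~ 1.322
             ]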

-Harry
sniffnoy: (Chu-Chu Zig)
So I've been thinking of what I've been doing as working on the 2^a3^b problem, but in fact powers of 2 don't seem to enter much into it. I think I should be able to take this stuff (once I've properly proved the regularities in g_r(k) - I mostly know how, but I haven't done all of it, and actually I still haven't checked that they're good enough) and prove something of the form "For all numbers n not of the following forms, [list of forms], f(n)≥[something bigger than 3log_3(n)]". Depends how good the numbers are, though.

-Harry
sniffnoy: (Chu-Chu Zig)
Good news: I have found quite a bit of unexpected regularity in my data! Something to investigate, certainly.

Bad news: If these patterns hold up, they make it harder to prove the conjecture, not easier. (Note: Earlier version said if these patterns hold up, these numbers are useless for a≥20. But now I realize that may not be the case.) (Edit again: No, wait, crap, it is.)

However, I still have yet to go and try to actually use these numbers to prove that it works, say, for a≤19, so perhaps I should do that first. (Later addition: Yeah, they do prove that. Well, now I have my answer to "how far can I push this horrible proof": 19, assuming the patterns hold up.)

-Harry
sniffnoy: (Kirby)
And, according to the program, for 3 not dividing k, k≥16, g_5(k) = ... (7/9)g(k) !

OK, I haven't technically verified that absolutely everything is working properly, but, it is giving me actual results for r>2, and that's new. And I have checked a lot. And all its low results match up with my hand computations.

Man, each of these computations took me, like, hours before. Computers are so great. Now to start making a table of results! Later I'll see what these actually allow me to prove.

(Basically, so far, there's never been a constant term; and what I want is for that to remain the case, and for the size of the linear term to get small faster than the number of 2s in its numerator gets large. I don't think the second part should be a problem, but I suspect that eventually a constant term will appear, and then I won't be able to make use of it (or at least not in the same way).)

-Harry
sniffnoy: (Dead face)
Why do I even have that recursion in the first place? It's to account for the case where beta gives me an output that's of the same value, but structurally different... but beta can't do that! Except in the trivial case. Which raises the big question: Why the hell am I calling it in the trivial case? Skip that one! Get on with searching the actual cases! And eliminate the dumb recursion to handle an unnecessary case!

...this will probably simplify things a lot, but it won't be enough to fix the function fully. Oh well. It's a good start.
sniffnoy: (Chu-Chu Zig)
...yeah, somehow I don't think I'm ever going to write down most of the things I said I was going to. Perhaps if I'm bored (which I assuredly will be) I'll get around to some of it. (Today I had a headache and spent much of the day napping.) I can't even transcribe the T-shirt for you, because I don't have one. I forgot to take one at first, and later, whoops, the order was screwed up and they're out of mediums. So, they'll be sending one here, to the house.

I guess now I finally have time to get back to work on the (2^a)(3^b) problem. By which I mean, getting that program to work so I can see how far I can push the method I've been using. Today's progress: Removing the "FIX" tags from things I've already fixed, and writing a summary in the comments of what the *current* problem is! Whee. I think I may want to do some reorganization...

-Harry
sniffnoy: (SMPTE)
It's funny how the errors that should be incredibly obvious can fail to be so until you see an example. Of course "as long as the first element of the list is too high, run the function again" will never terminate if the first element of the list never gets removed!

...unfortunately, it's not a case of "just remove the first element of the list, then"; the entire recursion is wrong and will have to be redone...
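
(Schematically, the offending pattern was something like this - a made-up minimal illustration, not the actual code:)

    -- Made-up illustration, not the actual code: if the head is "too high"
    -- we recurse on the *same* list, so nothing ever shrinks and this loops
    -- forever.
    prune :: Int -> [Int] -> [Int]
    prune limit xs@(x:_)
      | x > limit = prune limit xs
    prune _ xs = xs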

-Harry

(Also, in neat non-math things: This week I learned how to work a 16mm projector!)
sniffnoy: (Chu-Chu Zig)
I know there was more stuff I intended to write, but I'm forgetting lots of it right now. So this is pretty random stuff, but I guess that's pretty usual.

I said I thought the program was pretty close to done, but I'm not so sure. A lot of the "debugging" I've done is not so much debugging but rather rewriting things so as to not be totally wrong. It's like, imagine I wrote a list length function, and defined it by len [] = 0, len (x:xs)=len xs - and not because I forgot to add one, but because I truly misthought things and concluded that was how you write length. That's the sort of errors I've been finding. Right now the program can't compute anything but the trivial r=1 case; earlier I made a fix that allowed it to compute r=2, 3|k, but then I made another fix and it broke that. :P At any rate I'm a lot closer than I was before, though whether or not that's actually "close" remains to be seen. Unfortunately I've not had much time to work on it each day.
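
(To spell out that toy length example - this is just the illustration from above with its fix noted, not real code from the program:)

    len :: [a] -> Int
    len []     = 0
    len (x:xs) = len xs    -- the misthought version; a correct one would say 1 + len xs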

Lucas pointed out to me that the axiom of infinity is over-specific: All you need is the existence of a Dedekind-infinite set. Then, take on it a non-surjective injection, pick an element that's not hit, and take the intersection of all subsets containing it and closed under this function, and you've got a perfectly good model of ω. And then, using axiom of replacement, you can show the existence of our ordinary ω. I wonder why set theorists do it that way? I guess it's a convenience thing - you can represent the whole numbers any way you want, but that's an especially convenient way. Much like Church numerals in lambda calculus.
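
(In symbols - my rendering, not Lucas's: given a Dedekind-infinite set D, an injection f : D → D that isn't surjective, and an element x_0 outside its image, the model of ω is)

    \[
      \omega_{f,x_0} \;=\; \bigcap \{\, A \subseteq D : x_0 \in A \text{ and } f[A] \subseteq A \,\},
    \]

with x_0 playing the role of 0 and f (restricted to this set) playing the role of successor.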

Funny thing - I had been trying to prove the infinite version of Arrow's Theorem - in the infinite case it doesn't need to come from a dictator, just from an ultrafilter - since seeing it as an exercise somewhere, but failing horribly. Then Henry Cohn decided to do a combinatorics problem set that was just a 4-part proof of the finite version; parts 2 through 4 only applied to the finite case, but part 1 turned out to be just the hint I needed. Yay!

Didn't nap today; guess that's fixed.

Have not written anything about reunion. Perhaps will at some point.

-Harry
sniffnoy: (Chu-Chu Zig)
Since I haven't exactly been updating this.

Nazerke, unfortunately, had to return to Kazakhstan this Tuesday, and so has to miss the final two weeks. We're giving her a copy of last year's yearbook, so she can do the rest of the psets. Because she had to leave, her lab group presented on Monday; it was sums of squares, with the result that about half the first-years (i.e. those in lab) have now seen presented how to do sum of squares. (Heidi deliberately didn't go, because she had been working on the problem herself a lot and didn't want to see it, but that's the only case of such I know of.) (Also, there was no "How about 28?" this year. :) )

So Glenn came by on Tuesday, and, thankfully, was, as director of PROMYS, actually able to get in. Unfortunately, I was napping at the time, so I didn't get to see what happened. However, he is going to talk to the housing office about the ridiculous recent rules. On that note, here's a bizarre incident recently: A guard told me to put my shoes on. Why? Well, the floor is dirty, he says. Naturally I told him that's my own business, no such rule has existed or been enforced in the past, and that I'd do so if he could point to where in the rulebook it said to do such a thing, and at that he let it go. (Of course, if in reality it turned out there were such a rule, I'd complain about it.) But what strikes me as really nonsensical about this is that *lots* of people go around without shoes on. Remember how Lucas always went around barefoot back in '04?

Apparently Henry Cohn is going to be proving the Bruck-Ryser Theorem on Friday? I have to go see this! (Well, I mean, I normally go to Combo, but I couldn't today because it was at the same time as first-year lab presentations, and partitions was presenting.)

Funny thing from Combo on Wednesday:

Henry presents the usual proof of Ramsey's Theorem. (Quotes obviously quite approximate.)
Henry: Now, Ramsey died at age 26, which is really sad, because aside from the general human reaction that it's always a tragedy when someone dies so young, this is a really neat proof and I'd really like to see what else he could have come up with.
Me: Actually, Ramsey didn't come up with this proof; Erdos and Szekeres did. Ramsey's original proof was entirely nonconstructive and gave no upper bound at all.
Henry: Oh. Well, then I guess it's a good thing Erdos didn't die young!

I think my program may be pretty close to working; I've worked a lot of bugs out of one of the hard functions, but I haven't had the opportunity to work on it/test it much. Also, apparently Josh was asking Glenn about the problem (the upper bound stuff Josh is doing, I mean, of course), and he suggested Josh talk to Steve Miller; as it happened, Steve Miller gave a talk here on Tuesday. He didn't have anything offhand Josh hadn't already thought of, but maybe he'll think about it. (Also: Once I get my program working, maybe I should explore Josh's averaging idea he's talked about but hasn't done much with...)

Nakul is doing a small counselor seminar on combinatorial commutative algebra. Unfortunately I missed the first one where he proved a lot of the stuff he used today, but today he used this algebra and the Birkhoff-von Neumann Theorem to prove that the number of n-by-n nonnegative integral matrices with row and column sums all equal to r is polynomial in r for r sufficiently large. Neat stuff.
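
(For fun, a throwaway brute-force check of that count in the 3-by-3 case - my own sketch, nothing to do with Nakul's proof:)

    -- Throwaway sketch (mine, not Nakul's): count 3x3 nonnegative integer
    -- matrices with every row and column sum equal to r, by brute force.
    rowsSummingTo :: Int -> [[Int]]
    rowsSummingTo r = [ [a, b, r - a - b] | a <- [0 .. r], b <- [0 .. r - a] ]

    magicCount :: Int -> Int
    magicCount r = length
      [ () | x <- rowsSummingTo r, y <- rowsSummingTo r, z <- rowsSummingTo r
           , and (zipWith3 (\a b c -> a + b + c == r) x y z) ]

For small r the values do grow like a fixed polynomial, which is at least consistent with the theorem.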

My sleep schedule has become a bit messed up after staying up too late on Monday or Tuesday, and now I've been taking naps after dinner, which means that I'm kind of failing at doing my job right now[0]; fortunately the weekend should reset that. Though I'm one of the people in charge of animation night this year, so I can't go napping on Friday!

I did mean to give a minicourse on maxflow-mincut, but due to scheduling problems it's just going to be one of the ones in the marathon. Oh well. Also, I'm doing continued fractions review with Fergie?

Maybe I'll actually write about the talent show after all this is done... people were anticipating that Lucas or Ila's slave time would be used tonight, but it didn't happen. Hm.

-Harry

[0]At night, that is - not in the morning/afternoon. But the kids are much more active at night (though I'm of course still there for late night). And I'm not behind on grading at all.
sniffnoy: (Chu-Chu Zig)
Well, initial coding is done. First test: r=1, k=0 mod 3. Simplest possible case, should hardly have to do any computation at all. Actual result: The program uses up a bunch of processor and doesn't return an answer. Yay. Now I'm going to have to learn how to debug Haskell.
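
(One thing I'll probably end up leaning on - a generic illustration, not my actual code - is Debug.Trace, which lets you print from inside pure functions while you figure out where the thing is spinning:)

    import Debug.Trace (trace)

    -- Generic illustration: wrap a suspicious pure function in trace to see
    -- its arguments each time it's called.
    suspicious :: Int -> Int -> Int
    suspicious r k =
      trace ("suspicious r=" ++ show r ++ " k=" ++ show k) $
        r + k   -- stand-in body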

-Harry
sniffnoy: (SMPTE)
Just finished the initial coding of the first of the two hard parts of the program, and I have to say, while writing this in Haskell has made things a lot cleaner and clearer, I have very little idea how I'll actually go about debugging this...

(Talent show entry forthcoming, at some point.)

-Harry
sniffnoy: (Chu-Chu Zig)
After much tedious computation, much of which accomplished little and the latest of which finally extended the result from a≤14 to a≤16, I think I finally see how to automate these computations. It's going to be a pain to program, though. I think I'm going to want to remember how to use programming languages other than C...

-Harry
