<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dw="https://www.dreamwidth.org">
  <id>tag:dreamwidth.org,2017-04-20:3148436</id>
  <title>Chronicles of Harry</title>
  <subtitle>Happenings and thoughts - simple as that</subtitle>
  <author>
    <name>sniffnoy</name>
  </author>
  <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/"/>
  <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom"/>
  <updated>2026-04-13T07:06:01Z</updated>
  <dw:journal username="sniffnoy" type="personal"/>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:593756</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/593756.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=593756"/>
    <title>A breakthrough on integer complexity by Konyagin and Oganesyan!</title>
    <published>2026-04-13T07:04:28Z</published>
    <updated>2026-04-13T07:06:01Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">So &lt;span style='white-space: nowrap;'&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/profile'&gt;&lt;img src='https://www.dreamwidth.org/img/silk/identity/user.png' alt='[personal profile] ' width='17' height='17' style='vertical-align: text-bottom; border: 0; padding-right: 1px;' /&gt;&lt;/a&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/'&gt;&lt;b&gt;joshuazelinsky&lt;/b&gt;&lt;/a&gt;&lt;/span&gt; recently sent me a link to &lt;a href="https://arxiv.org/abs/2603.20876"&gt;this paper&lt;/a&gt; and wow!&lt;br /&gt;&lt;br /&gt;This paper contains just two theorems but both are huge advances in integer complexity; one on the upper bound, one on the lower bound.&lt;br /&gt;&lt;br /&gt;First the upper bound.  Let's review -- what upper bounds are known on integer complexity?  If one wants a bound that works for all n, well, there's the naive bound ||n||&amp;le;3log&lt;sub&gt;2&lt;/sub&gt;n, and then there's Josh's &lt;a href="https://arxiv.org/abs/2211.02995"&gt;improvements&lt;/a&gt; on that, and that's it.  The empirical maximum of ||n||/(log n) occurs at n=1439, but these bounds aren't good enough to prove that that value is indeed the maximum; they're substantial overestimates.  What if you just want bounds that work for all but finitely many n, bounds on the lim sup?  Sorry, we don't have any of those that aren't bounds for all n.&lt;br /&gt;&lt;br /&gt;But we do know of better results if you just want bounds that work for almost all n, bounds on what in &lt;a href="https://sniffnoy.dreamwidth.org/559919.html"&gt;this post&lt;/a&gt; I called lim sup ap ||n||/(log n), which are obtained by &lt;a href="https://arxiv.org/abs/1706.08424"&gt;the averaging method&lt;/a&gt;; these bounds are good enough to break the 1439 barrier, but of course they don't bound the actual lim sup.  
And we know of even better bounds if your idea of "almost all" only requires logarithmic density 1, rather than natural density 1; this is what I've now been denoting lim sup ap* ||n||/(log n), and the bounds come from &lt;a href="https://arxiv.org/abs/1511.07842"&gt;Steinerberger and Shriver's method&lt;/a&gt;.  The current best bounds for both of these categories come, to my knowledge, from &lt;a href="https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ISAAC.2022.29"&gt;Kazuyuki Amano's work&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;Well.  Konyagin and Oganesyan claim that they have shown that the averaging method actually yields an upper bound on lim sup ||n||/(log n)!  Indeed, more than that -- you should just go click through to their paper and read their inequality.  They have well and truly broken the 1439 barrier, proving that only finitely many numbers can have a complexity that high.  Although, of course, their actual inequality comes with an error term, and having asked them, they say the error term is bad enough that their theorem doesn't currently prove that 1439 is the actual maximum.  But wow!  This is a massive improvement on the state of the art.&lt;br /&gt;&lt;br /&gt;But wait, their next theorem is equally impressive.  For a long time, nobody's been able to get a nontrivial &lt;i&gt;lower&lt;/i&gt; bound on lim sup ||n||/(log n) either.  By "nontrivial" here I mean "better than the known liminf, 3/(log 3)".  It was an open question whether ||n|| was asymptotic to 3log&lt;sub&gt;3&lt;/sub&gt;n, and some people thought it was, even though this would mean that ||2&lt;sup&gt;k&lt;/sup&gt;||=2k would have to fail for large k!  Well, Konyagin and Oganesyan say they've now done it -- they've proven a lower bound on lim sup ||n||/(log n) that is larger than the trivial one, showing that ||n|| is &lt;i&gt;not&lt;/i&gt; asymptotic to 3log&lt;sub&gt;3&lt;/sub&gt;n after all.&lt;br /&gt;&lt;br /&gt;Except, they're actually claiming something much stronger.  
Getting a lower bound on the lim sup would mean showing that infinitely many n have ||n||/(log n) above the bound.  They say they've shown that in fact, &lt;i&gt;almost all&lt;/i&gt; n do.&lt;br /&gt;&lt;br /&gt;So this isn't just a lower bound on the lim sup -- it's a lower bound on the lim inf ap!  That's basically as good as you could do!&lt;br /&gt;&lt;br /&gt;Their proof here is actually based on my work with &lt;span style='white-space: nowrap;'&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/profile'&gt;&lt;img src='https://www.dreamwidth.org/img/silk/identity/user.png' alt='[personal profile] ' width='17' height='17' style='vertical-align: text-bottom; border: 0; padding-right: 1px;' /&gt;&lt;/a&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/'&gt;&lt;b&gt;joshuazelinsky&lt;/b&gt;&lt;/a&gt;&lt;/span&gt; on the defect.  At a high level, their approach is to first use our iterative classification theorem -- yes, the original one, they're not using low-defect polynomials -- to establish upper bounds on how many leaders below x have defect in the range (k-1)&amp;sigma; to k&amp;sigma;, where &amp;sigma; is the variable they're using to denote their step size (they pick &amp;sigma;=0.48), with these upper bounds, importantly, being uniform in both x and k.  (This is the hard part.)  Once they've done that, they apply this to count how many numbers n below a bound x have ||n||&amp;lt;(3+&amp;gamma;)log&lt;sub&gt;3&lt;/sub&gt;n, where &amp;gamma; is a number they've picked for this to work (they pick &amp;gamma;=0.06), and compute that, oh look, it's o(x).  Therefore, almost all numbers have ||n||/(log n) above this bound.  
Tada!&lt;br /&gt;&lt;br /&gt;Josh and I actually worked on a similar idea many years ago (ours didn't require picking a step size below 1; we were looking at defects in between the integers k-1 and k, and of course were working based on low-defect polynomials), although I'm unsure if Josh's method would have been good enough to show that it worked for almost all n rather than just infinitely many; if it was, we didn't realize it.  But ultimately it didn't work out because we couldn't prove those uniform bounds we needed.  But Konyagin and Oganesyan say they've done it!&lt;br /&gt;&lt;br /&gt;I do have to wonder about the choice of &amp;sigma;.  I would expect larger values of &amp;sigma; to yield better results, so it's surprising to me that they picked it so far below what it could have been.  I have asked them about this, however, and they are of the opinion that larger &amp;sigma; would probably not yield much better results.  Still, we'll see if anyone manages to do any better with their ideas.&lt;br /&gt;&lt;br /&gt;(It's possible they picked &amp;sigma;&amp;lt;&amp;frac12; so that they could start with B&lt;sub&gt;&amp;sigma;&lt;/sub&gt; and B&lt;sub&gt;2&amp;sigma;&lt;/sub&gt; both already known, using Josh's and my work classifying numbers with defect less than 1.  Of course, my algorithms can be used to compute all numbers with defect less than 2, but maybe they didn't know about this.  I've since sent them the output of such a calculation, just in case they can make use of it.  Like I said, their opinion was that larger &amp;sigma; wouldn't be much better, but we'll see.)&lt;br /&gt;&lt;br /&gt;Now the question becomes, is it all correct?  Unfortunately, their arguments are quite analysis-heavy, and so I am not the best person to evaluate them.  So right now my answer can only be "I don't know".  
But I'm hopeful!&lt;br /&gt;&lt;br /&gt;-Harry&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=593756" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:585926</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/585926.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=585926"/>
    <title>Ascension 20 woooo</title>
    <published>2024-12-23T22:53:00Z</published>
    <updated>2024-12-23T22:53:00Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">So I haven't posted in a while but woooo I beat Slay the Spire on Ascension 20 (as Watcher, no heart). :D&lt;br /&gt;&lt;br /&gt;I, uh, did not expect I would be able to do that (without lots of going and looking up strategy, I mean, as opposed to figuring things out myself).  I was going to content myself with just getting all my characters &lt;i&gt;to&lt;/i&gt; Ascension 20.  But now I guess I've got to beat Ascension 20 with all of them!  That still seems daunting, though... (currently, I have Ironclad at 18, Defect at 19, and Silent at 20).&lt;br /&gt;&lt;br /&gt;When I beat Ascension 19 as Watcher I had very little health left and was like how will I ever beat 20??  Well uh part of the answer was having Fossilized Helix to tank some big hits near the end!  (And then having enough block to not waste it on small hits, due to Kunai + Duality.)&lt;br /&gt;&lt;br /&gt;(Beating Ascension 19 as Silent, uh, that was an Apotheosis run, so. :P )&lt;br /&gt;&lt;br /&gt;If I &lt;i&gt;do&lt;/i&gt; beat Ascension 20 as every character I think I'm just stopping for now -- I'm not going for heart ascensions, no thanks.  That's too much.  Maybe after another yearlong break from the game. :P&lt;br /&gt;&lt;br /&gt;(Slay the Spire 2?  Yeah that looks cool but uh I've still got the first one to play... also who knows how long it will take to emerge from early access...)&lt;br /&gt;&lt;br /&gt;Meanwhile other things going on!  I'm finally looking for work again.  
Well -- I've been procrastinating on this, because it's a pain, but it's a thing I need to do!&lt;br /&gt;&lt;br /&gt;Andreas Weiermann wanted me to come out to Belgium again sometime in 2025 but I don't think it's going to work out, a new job is hardly about to let me take a month off like Truffle did...&lt;br /&gt;&lt;br /&gt;Speaking of math, a guy named John Campbell wrote to me recently to suggest a new variant of complexity: What's the smallest number of 1's you need to make a *multiple* of n?  I don't know that there's much to do with this (and usually it will equal the ordinary complexity, though not always; 1499 is a counterexample), but it's kind of neat.  I guess it satisfies f(mn)&amp;le;f(m)+f(n).  And it's computable, although I certainly don't know of any &lt;i&gt;good&lt;/i&gt; way to compute it... maybe somebody will find one?&lt;br /&gt;&lt;br /&gt;Uh, I had a large belated birthday party a few weeks ago!  I announced it well in advance in the hopes that some of the always-busy people would show up... it was partly successful at this.  Linda showed up so I finally got to show her that look I still have your drawings!  But Liz Goetz did not so I did not get to show her that look I still have your sign.  Oh well.&lt;br /&gt;&lt;br /&gt;We're down to just 163 books to give away, though...&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=585926" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:569160</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/569160.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=569160"/>
    <title>Josh finally posted it!</title>
    <published>2022-11-10T06:49:17Z</published>
    <updated>2022-11-10T06:49:29Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">Over a decade later, &lt;span style='white-space: nowrap;'&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/profile'&gt;&lt;img src='https://www.dreamwidth.org/img/silk/identity/user.png' alt='[personal profile] ' width='17' height='17' style='vertical-align: text-bottom; border: 0; padding-right: 1px;' /&gt;&lt;/a&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/'&gt;&lt;b&gt;joshuazelinsky&lt;/b&gt;&lt;/a&gt;&lt;/span&gt; finally &lt;a href="https://arxiv.org/abs/2211.02995"&gt;posted to arXiv&lt;/a&gt; his paper with his upper bound on integer complexity!&lt;br /&gt;&lt;br /&gt;...the exposition isn't the best because he decided it was more important to just get it up there than to spend more time revising it.  But hey!  It's up!  Go, know about it!  Use it and cite it, it's up there! :)&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=569160" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:567007</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/567007.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=567007"/>
    <title>Shriver and another type of approximate limit</title>
    <published>2022-07-24T19:09:59Z</published>
    <updated>2022-07-24T19:10:15Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">So I was only recently made aware of &lt;a href="https://arxiv.org/abs/1511.07842"&gt;this paper&lt;/a&gt; by Christopher Shriver.  In it he takes Stefan Steinerberger's method, and shows that actually it &lt;i&gt;can&lt;/i&gt; be used to get bounds on ||n||/(log n) that hold for almost all n... as long as you measure "almost all" using &lt;i&gt;logarithmic&lt;/i&gt; density rather than natural density.&lt;br /&gt;&lt;br /&gt;This means we need a name for the new type of &lt;a href="https://sniffnoy.dreamwidth.org/559919.html"&gt;approximate liminf's and limsup's&lt;/a&gt; that arise from this density! :)  Perhaps liminfap*? limsupap*? limap*? :)&lt;br /&gt;&lt;br /&gt;...of course the thing is that there are other densities beyond natural and logarithmic, so potentially one needs a more general notation.  But those are like the most common, so, eh, I'm OK singling those two out. :)&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=567007" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:562543</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/562543.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=562543"/>
    <title>8 years later, we finally wrote this up properly</title>
    <published>2021-11-02T01:54:28Z</published>
    <updated>2021-11-02T01:59:33Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>3</dw:reply-count>
    <content type="html">&lt;a href="https://arxiv.org/abs/2111.00671"&gt;Here it is&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;For now, I'm going to skip a long explanation of the contents of this paper.  Perhaps I'll come back and do it later, but I'm guessing a lot of the people reading this already know a fair bit about it.&lt;br /&gt;&lt;br /&gt;But goddamn what a relief it is to finally have this done.  I mean it's not &lt;i&gt;done&lt;/i&gt;, of course, I'm sure there will be changes necessary for publication.  But it's up.  It's out there.&lt;br /&gt;&lt;br /&gt;We weren't sure whether to include section 5, and in particular the "off-by-one" theorem, but, ultimately, we included it.  Otherwise it would have had to be a separate paper, which just makes for more headache, you know?  (There is other stuff that got cut for length -- by which I mean, never actually written -- that will have to be a separate paper, but, that was never realistically making it in here.)&lt;br /&gt;&lt;br /&gt;I actually haven't said as much on the internet about the contents of this paper as I have about previous paper... really, a lot of the stuff regarding integer complexity that the ideas in here led to, I just haven't really talked about so much.  Well, I should remedy that.  Maybe in the coming days or weeks I'll go back, and post about some of that stuff, that Arias de Reyna and I have coming down the pipeline in the wake of this.  I mean, I imagine it'll be a while before those get written up properly, but I'd like to talk about it some.&lt;br /&gt;&lt;br /&gt;But like the thing about this paper, that I've wanted to get this out there so bad is that like... I think this is kind of the best paper I've written, y'know?  And it might well be the best one I write for a long time.  Not in terms of the strength of the results (we've got stronger ones coming), or the quality of the exposition, but in terms of, tying things together. 
This paper just really ties everything together really nicely -- answering simultaneously what look like two unrelated conjectures -- and solves problems rather than raises them.  It does raise new problems, of course, but ones that don't seem as essential as the ones it solves.  And it finally resolves &lt;i&gt;all&lt;/i&gt; of Arias de Reyna's old conjectures!&lt;br /&gt;&lt;br /&gt;So, that's it.  It's up.  And I'm going to stop here now.  Maybe more on this later.&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=562543" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:559919</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/559919.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=559919"/>
    <title>lim sup ap, lim inf ap, lim ap</title>
    <published>2021-05-19T03:49:34Z</published>
    <updated>2021-05-19T03:50:44Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
<content type="html">So, upper bounds on integer complexity.  We know that (for n&amp;gt;1) we have ||n||&amp;le;3log&lt;sub&gt;2&lt;/sub&gt;n, and &lt;span style='white-space: nowrap;'&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/profile'&gt;&lt;img src='https://www.dreamwidth.org/img/silk/identity/user.png' alt='[personal profile] ' width='17' height='17' style='vertical-align: text-bottom; border: 0; padding-right: 1px;' /&gt;&lt;/a&gt;&lt;a href='https://joshuazelinsky.dreamwidth.org/'&gt;&lt;b&gt;joshuazelinsky&lt;/b&gt;&lt;/a&gt;&lt;/span&gt; has managed to improve this although he still hasn't published it.&lt;br /&gt;&lt;br /&gt;So there are several questions here.  The first is, what is the maximum value of ||n||/(log n)?  This appears to occur when n=1439, and is about 3.58; however, it never recurs; no other number yields a value that high, so that's probably not what we're really looking for, is it?&lt;br /&gt;&lt;br /&gt;A better question would be, what is the limsup of ||n||/(log n)?  This is of course a hard question to which we don't know the answer.  We know the liminf is 3/(log 3) [about 2.73]; some people have speculated that perhaps this could be the limsup as well, with ||n|| ~ 3log&lt;sub&gt;3&lt;/sub&gt;n.  But if ||2^k||=2k for k&amp;ge;1, then this can't happen, as the limsup must be at least 2/(log 2) [about 2.88].&lt;br /&gt;&lt;br /&gt;Josh has previously speculated that the correct value might in fact be equal to 2/(log 2).  But really it's hard to get a good read on it; recently I tried graphing it for some pretty large n and I gotta say if I had to guess based on that alone I'd say it's more like 3.2 or so.  But who knows?  Some other calculations I did recently, which I won't go into here, seem compatible with Josh's idea that it's 2/(log 2)... or maybe it's just 3.  
We can't really say.&lt;br /&gt;&lt;br /&gt;But famously we can do better if we only want to find c such that ||n||&amp;le;c log n for &lt;i&gt;almost all&lt;/i&gt; n, in the sense that the set of exceptions has natural density 0.  So one might ask, what is the infimum of such c?  The best known bound on this currently is due to &lt;a href="https://arxiv.org/pdf/1706.08424.pdf"&gt;Cordwell et al&lt;/a&gt;, and it's about 3.29, so, better than the conjectured value for the maximum!&lt;br /&gt;&lt;br /&gt;But here's another question one might ask -- what does one &lt;i&gt;call&lt;/i&gt; this value?  This may sound like a silly question, but, y'know, this is a useful concept, and it deserves to have a name.&lt;br /&gt;&lt;br /&gt;So, more formally: Say (a&lt;sub&gt;n&lt;/sub&gt;) is a real-valued sequence.  We want to take the infimum (alternatively, supremum) of all c such that {n&amp;isin;&lt;b&gt;N&lt;/b&gt;: a&lt;sub&gt;n&lt;/sub&gt;&amp;ge;c} (respectively, &amp;le;c) has natural density 0.  So it's &lt;i&gt;like&lt;/i&gt; a limsup (respectively, liminf), but based on natural density.  Notably, it can be lower than the limsup (respectively, higher than the liminf), and the two could agree, producing a limit of sorts, even if no actual limit exists.&lt;br /&gt;&lt;br /&gt;Fortunately, there seems to be a pre-existing name for, if not this concept, then a very similar one!  It turns out that there's a notion of "approximate limsup" and "approximate liminf" (and, when they agree, "approximate limit"); these are denoted "lim sup ap", "lim inf ap", and "lim ap".  Now, these are defined in a somewhat different setting, where you're looking at a function f:&lt;b&gt;R&lt;/b&gt;&lt;sup&gt;d&lt;/sup&gt;&amp;rarr;&lt;b&gt;R&lt;/b&gt; and taking a limit as you approach some point x&lt;sub&gt;0&lt;/sub&gt;, and is based on Lebesgue measure... 
but the definition is clearly analogous.&lt;br /&gt;&lt;br /&gt;So, I'm choosing to reuse that term for this definition.  So, Cordwell et al provide our best upper bounds on limsupap ||n||/(log n). :)&lt;br /&gt;&lt;br /&gt;Because the thing is that once you name a concept, it makes it easier to think about, and ask more questions about it.  Like: What about the liminf?  What is liminfap ||n||/(log n)?&lt;br /&gt;&lt;br /&gt;I only first thought to ask this question a few days ago.  And I realize that I don't really have any idea!  It seems kind of like it ought to be 3/(log 3), same as the liminf, but, well, why should it be?  I don't see any reason it couldn't be higher!  Maybe there is an easy proof, but if so I'm missing it.  If it truly is higher, that'd be pretty crazy!  Or imagine if the limit didn't exist, but the approximate limit did...&lt;br /&gt;&lt;br /&gt;Anyway yeah.  Yay for having good names for things.&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=559919" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:543117</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/543117.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=543117"/>
    <title>What is the *second*-highest number we can make with k x's?</title>
    <published>2018-04-08T06:21:55Z</published>
    <updated>2018-04-08T06:21:55Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">So, two entries ago, I discussed some variants on the Mahler-Popken problem.  I'd like here to return to the original Mahler-Popken problem and discuss a different variant.  Namely: What is the &lt;i&gt;second&lt;/i&gt;-largest number we can make with k x's?&lt;br /&gt;&lt;br /&gt;First, though, we should talk about the answer to the original problem in more detail.  As before, I'm going to focus on what happens when k is sufficiently large, and how large k needs to be may depend on x.&lt;br /&gt;&lt;br /&gt;So, as mentioned in the previous entry, to make the largest possible number from k x's, we want to group them into groups of m and then multiply them together.  How large is m?&lt;br /&gt;&lt;br /&gt;Well, m is the m that maximizes (mx)&lt;sup&gt;1/m&lt;/sup&gt;.  This is approximately e/x; indeed, e/x would be the maximum if we allowed m to vary continuously, but since that's not possible, m will be either the floor or the ceiling of this.  The value of m changes over at the values m&lt;sup&gt;m-1&lt;/sup&gt;/(m-1)&lt;sup&gt;m&lt;/sup&gt;.  Or, more specifically, m&lt;sup&gt;m-1&lt;/sup&gt;/(m-1)&lt;sup&gt;m&lt;/sup&gt; is the largest value of x where m is the group size, and (m+1)&lt;sup&gt;m&lt;/sup&gt;/m&lt;sup&gt;m+1&lt;/sup&gt; is the smallest.  (Throughout, I'm going to think of x as decreasing from &amp;infin; towards 0.)  As previously mentioned, at the changeover point (m+1)&lt;sup&gt;m&lt;/sup&gt;/m&lt;sup&gt;m+1&lt;/sup&gt;, both groups of size m and of size m+1 work.&lt;br /&gt;&lt;br /&gt;So.  When k is divisible by m, we group things into groups of m, yielding (mx)&lt;sup&gt;k/m&lt;/sup&gt;.  
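&lt;br /&gt;&lt;br /&gt;(These claims are easy to sanity-check numerically, by the way, since the maximum satisfies an obvious recursion over the top-level split.  A quick Python sketch -- my own illustration, not anything from Mahler and Popken:)&lt;br /&gt;&lt;br /&gt;

```python
# f[j] = the largest number expressible with j copies of x using + and *.
# Since + and * are increasing in each argument (for positive x), it
# suffices to combine the maxima of the two halves of the top-level split.
def mahler_popken_max(x, k):
    f = [None, x]
    for j in range(2, k + 1):
        best = 0.0
        for a in range(1, j // 2 + 1):
            best = max(best, f[a] + f[j - a], f[a] * f[j - a])
        f.append(best)
    return f[k]

# x = 1 lies between the changeover points 4**3/3**4 and 3**2/2**3, so m = 3:
# with k = 12 we expect four groups of (1+1+1), i.e. 3**4 = 81.
print(mahler_popken_max(1.0, 12))    # 81.0
```

&lt;br /&gt;&lt;br /&gt;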
When x is on a boundary, x=(m+1)&lt;sup&gt;m&lt;/sup&gt;/m&lt;sup&gt;m+1&lt;/sup&gt;, then for any k sufficiently large we can make k using groups of m and m+1, yielding ((m+1)/m)&lt;sup&gt;k&lt;/sup&gt;; divisibility conditions don't even come into it.&lt;br /&gt;&lt;br /&gt;But what about when m is well-defined but k is not divisible by m?  Let's say k is r mod m, 0&amp;lt;r&amp;lt;m.  In this case, we'll need to use some groups of size either m+1 or m-1 to make the congruence come out right.  But which one?&lt;br /&gt;&lt;br /&gt;This is where it gets really neat.  Mahler and Popken found that, if x is larger, you want to use groups of m-1; and if x is smaller, you want to use groups of m+1.  The changeover point, where both give the same result, is at&lt;br /&gt;&lt;br /&gt;m&lt;sup&gt;m-1&lt;/sup&gt;/(m-1)&lt;sup&gt;m&lt;/sup&gt; ((m&amp;sup2;-1)/m&amp;sup2;)&lt;sup&gt;r&lt;/sup&gt;&lt;br /&gt;&lt;br /&gt;Let's denote this number by x(m,r).  There's several really neat things about this:&lt;br /&gt;&lt;br /&gt;1. Let's note that occurrence of m&lt;sup&gt;m-1&lt;/sup&gt;/(m-1)&lt;sup&gt;m&lt;/sup&gt;.  We've already seen that quantity before, as the value where m itself changes over, from m-1 to m.  And here it's what we get if we were to plug in r=0 -- which doesn't make sense, but if you plug it in anyway, that's what you get.  So, the changeover point from m-1 to m is x(m,0).&lt;br /&gt;&lt;br /&gt;2. For a fixed m, x(m,r) is monotonically decreasing in r.&lt;br /&gt;&lt;br /&gt;So as x decreases, the congruence classes change over from using (m-1)'s to using (m+1)'s, one at a time, in order.  At first, when we first hit x(m,0), when we first start using this specific value of m, all the congruence classes (other than 0, of course) are using (m-1)'s.  Then we hit x(m,1), and now, the r=1 congruence class changes over.  Then we hit x(m,2), and the r=2 congruence class changes over.  Etc.  Until... 
well, obviously, after x(m,m-1), all the congruence classes have changed over and are using (m+1)'s (except, once again, 0, obviously).  But what happens if we plug in r=m?  What's x(m,m)?&lt;br /&gt;&lt;br /&gt;3. If we plug in r=m, we get (m+1)&lt;sup&gt;m&lt;/sup&gt;/m&lt;sup&gt;m+1&lt;/sup&gt; -- the point where we change over to the next m!  That is to say, x(m,m)=x(m+1,0).&lt;br /&gt;&lt;br /&gt;In addition, for fixed m, x(m,r) is exponential in r.  Like, literally exactly exponential, not approximately.  Meaning that... like, say we've already divided up the positive real line based on the value of m we get, right?  With the dividing points being the numbers x(m,0), for m&amp;ge;1.  Let's call those the major dividing points.  Now we want to divide it up further based not just on the value of m, but on the more detailed eventual behavior, based on which congruence classes are using (m-1)'s or (m+1)'s.  Well now we're putting additional dividing points -- let's call them "minor dividing points" -- at the values x(m,r).&lt;br /&gt;&lt;br /&gt;Then for any two adjacent major dividing points, the minor dividing points between them are &lt;i&gt;evenly spaced&lt;/i&gt; if we use a log scale!&lt;br /&gt;&lt;br /&gt;I think that's really neat! :D&lt;br /&gt;&lt;br /&gt;But OK, what about the second-largest?&lt;br /&gt;&lt;br /&gt;Well, unfortunately, the answer there is considerably more complicated, and I'm not going to detail it all here.  I'll tell you the answer when r=0.  If x&amp;ge;x(m,1), you want to use m groups of size m-1 (in addition to your groups of size m).  If x&amp;le;x(m,m-1), you want to use m groups of size m+1.  When x is in between, you want to use one group of size m+1 and one group of size m-1.  (This is assuming m&amp;ge;2.  If m=1, then you make the second-highest by throwing in a group of size 2.)&lt;br /&gt;&lt;br /&gt;It's nicely symmetric.  Unfortunately the general answer is not.  
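&lt;br /&gt;&lt;br /&gt;(This, too, can be checked numerically if you track the &lt;i&gt;two&lt;/i&gt; largest values at every size: since + and * are strictly increasing in each argument, the second-largest overall must be built from top-two pieces.  Another quick Python sketch of mine, just for experimenting:)&lt;br /&gt;&lt;br /&gt;

```python
# t[j] = (largest, second-largest distinct) value expressible with j x's.
def top_two(x, k):
    t = [None, (x, None)]
    for j in range(2, k + 1):
        vals = set()
        for a in range(1, j // 2 + 1):   # combine top-two values of each half
            for u in t[a]:
                for v in t[j - a]:
                    if u is not None and v is not None:
                        vals.add(u + v)
                        vals.add(u * v)
        ordered = sorted(vals, reverse=True)
        second = ordered[1] if len(ordered) > 1 else None
        t.append((ordered[0], second))
    return t[k]

# With three 3's: largest is 3*3*3 = 27, runner-up is 3*(3+3) = 18.
print(top_two(3.0, 3))    # (27.0, 18.0)
```

&lt;br /&gt;&lt;br /&gt;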
Here's another example where I can tell you the full answer: Say x is on a boundary, x=x(m+1,0), so you can make the largest out of groups of size m and m+1.  To make the second-largest, you'll also want to throw in a group of size m+2.&lt;br /&gt;&lt;br /&gt;(These are actually the only two cases I need to know about for the original motivation; the hard case, when m is well-defined but does not divide k, is not actually necessary for this.  But I did it anyway. :P )&lt;br /&gt;&lt;br /&gt;Anyway, like I said, I'm not going to detail the hard case here.  The hard case actually needs to be divided into several subcases, depending on whether r=1, r=m-1, both (i.e. r=1 and m=2), or (the usual case) neither.  I'll just say that it can involve such things as:&lt;br /&gt;&lt;ul&gt;&lt;li&gt;Using 2m-r groups of size m-1&lt;/li&gt;&lt;li&gt;Using m-r+1 groups of size m-1, and a group of size m+1&lt;/li&gt;&lt;li&gt;Using r groups of size m+1 when this is merely the second-largest&lt;/li&gt;&lt;li&gt;Using m-r groups of size m-1 when this is merely the second-largest&lt;/li&gt;&lt;li&gt;Using r-2 groups of size m+1, and a group of size m+2&lt;/li&gt;&lt;li&gt;Using m+r groups of size m+1&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;I'll also note that not all the changeover points are at x(m,r) for integral 0&amp;lt;r&amp;lt;m now.  Now, in some cases, there can be changeover points at x(m,&amp;frac12;) or x(m,m-&amp;frac12;) -- meaning, yes, we can get irrational changeover points!  We can also get some changeover points which are rational but can't nicely be expressed in the form x(m,r), not unless we're willing to allow r to be irrational.&lt;br /&gt;&lt;br /&gt;There's also the issue of discontinuity.  M&lt;sub&gt;k&lt;/sub&gt;(x) is necessarily continuous in x -- after all, it's the maximum of finitely many polynomials in x.  
Or, you know, on a graph, in order for it to switch over from one polynomial in x to another, well, the two potential maxima have to meet, and at the point where they meet both are equal, and there's no discontinuity.&lt;br /&gt;&lt;br /&gt;But that doesn't apply for the second-largest.  Because there are two ways that the second-largest could switch.  One is that the current second swaps with the current third.  Then yeah, they meet, they're equal, no discontinuity.&lt;br /&gt;&lt;br /&gt;But the other way is that it swaps with the current first, becoming the largest.  Then at the point where the two meet, both are equal... meaning they're both the largest.  Neither is second-largest.  Instead, the next-best candidate, which to either side is third, is for an instant exposed to the world as second.  You have a discontinuity.&lt;br /&gt;&lt;br /&gt;So, above I talked about "changeover points", but I should be clear that it's now not always a smooth changeover -- sometimes it is, but when you're considering k that's r mod m, and the changeover point under consideration is x(m,r) itself, that point will have its own special behavior that doesn't match either what's above or what's below it.&lt;br /&gt;&lt;br /&gt;And there's one case I want to point out that's particularly weird, and that's what happens when m=2, r=1, and you look at the discontinuity at x=x(2,1)=3/2.  Because this case doesn't fall under one of the possibilities I listed above.  It can't even be described as group-and-multiply!&lt;br /&gt;&lt;br /&gt;Because, you see, while mostly you want to use groups of 2, what you want to do with your remaining 3 x's isn't to make, say, one group of 3 (for a value of 3x), or three groups of 1 (for a value of x&amp;sup3;), but rather, to use them to make x&amp;sup2;+x.  Which, again, then gets multiplied by a bunch of groups of 2.&lt;br /&gt;&lt;br /&gt;Isn't that crazy??&lt;br /&gt;&lt;br /&gt;Like, I can prove all this.  
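In fact, the x=3/2 phenomenon is easy to check by brute force for small odd k (a sketch in Python; the helper name values is mine, not from the post): enumerate every value buildable from k copies of x with addition and multiplication, and look at the top two.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def values(k, x):
    """All values buildable from exactly k copies of x using + and *."""
    if k == 1:
        return frozenset({x})
    out = set()
    for i in range(1, k // 2 + 1):
        for a in values(i, x):
            for b in values(k - i, x):
                out.update({a + b, a * b})
    return frozenset(out)

x = Fraction(3, 2)       # the boundary point x(2,1)
for k in (3, 5, 7):      # k in the residue class r=1 mod m=2
    top = sorted(values(k, x), reverse=True)
    # Runner-up: x^2 + x, times (k-3)/2 groups of two (each worth 2x = 3).
    assert top[1] == (x*x + x) * (2*x) ** ((k - 3) // 2)
```

(For k=3 this says the runner-up among 9/2, 15/4, 27/8 is x&amp;sup2;+x = 15/4, and the assertion checks that multiplying on groups of 2 preserves that for larger odd k.)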
I can definitely prove that this happens when x=3/2 and not for any other x.  But I have no good heuristic explanation of it.&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=543117" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
  <entry>
    <id>tag:dreamwidth.org,2017-04-20:3148436:542688</id>
    <link rel="alternate" type="text/html" href="https://sniffnoy.dreamwidth.org/542688.html"/>
    <link rel="self" type="text/xml" href="https://sniffnoy.dreamwidth.org/data/atom/?itemid=542688"/>
    <title>A conjecture about Mahler-Popken for exponentiation</title>
    <published>2018-04-05T07:50:49Z</published>
    <updated>2019-09-14T04:09:37Z</updated>
    <category term="integer complexity"/>
    <dw:security>public</dw:security>
    <dw:reply-count>0</dw:reply-count>
    <content type="html">So I want to discuss here some variants of the Mahler-Popken problem.&lt;br /&gt;&lt;br /&gt;The original Mahler-Popken problem is this: Pick a real number x&amp;gt;0.  What is the largest number we can make with k x's using addition and multiplication?  I want to consider the analogous problem for some other models of computation.&lt;br /&gt;&lt;br /&gt;We'll vary things in two ways: Firstly, we can allow addition only, addition and multiplication, or addition, multiplication, and exponentiation.  Secondly, we can either use a &lt;i&gt;formula model&lt;/i&gt;, where we have k x's available to us to combine with our given operations; or a &lt;i&gt;circuit model&lt;/i&gt;, where we start with x and have k steps available to us, and at each step we can combine any two things we've made with one of our given operations.&lt;br /&gt;&lt;br /&gt;(The circuit-model problems are easy, as there are no tradeoffs involved, but I want to include them anyway.)&lt;br /&gt;&lt;br /&gt;One more note: I may sometimes restrict to k sufficiently large, where how large k has to be depends on x.&lt;br /&gt;&lt;br /&gt;So.  Let's start with the circuit model.  If we have only addition, the problem is trivial: We double at every step, ending up with 2&lt;sup&gt;k&lt;/sup&gt;x as our answer.  The particular value of x didn't really play any role here.&lt;br /&gt;&lt;br /&gt;If we have addition and multiplication, then, if x&amp;ge;2, obviously we want to square at each step.  But if x&amp;lt;2 -- or more generally, letting y be our largest number made so far, if y&amp;lt;2 -- then doubling is better than squaring.  So, again, this is easy -- we double until we get a number at least 2, and then we start squaring.&lt;br /&gt;&lt;br /&gt;If we also allow exponentiation, then (again using the notation "y" from above) of course if y&amp;ge;2 then we want to take y&lt;sup&gt;y&lt;/sup&gt;.  But what if y&amp;lt;2?  Then doubling is best... 
until y dips below a number I'll call C&lt;sub&gt;1&lt;/sub&gt;, approximately 0.346, the solution in (0,1) to the equation x&lt;sup&gt;x&lt;/sup&gt;=2x.  Then taking y&lt;sup&gt;y&lt;/sup&gt; is better again.  So, OK -- starting from y=x, we repeatedly take y&lt;sup&gt;y&lt;/sup&gt; until y&amp;ge;C&lt;sub&gt;1&lt;/sub&gt;, then double until y&amp;ge;2, then repeatedly take y&lt;sup&gt;y&lt;/sup&gt;.  (Note that if we start with y&amp;lt;C&lt;sub&gt;1&lt;/sub&gt;, it's impossible to skip that middle segment, since we'll necessarily have y&lt;sup&gt;y&lt;/sup&gt;&amp;lt;1.)&lt;br /&gt;&lt;br /&gt;But something funny happens here that doesn't happen in the addition-multiplication case.  With just plus and times, it could take an arbitrary number of doublings before you hit 2, right?  Basically we have infinitely many cases, with transition points at x=2&lt;sup&gt;1-k&lt;/sup&gt;.  But here, x&lt;sup&gt;x&lt;/sup&gt; is always at least e&lt;sup&gt;-1/e&lt;/sup&gt;, which is about 0.692, and in particular greater than &amp;frac12; (and also greater than C&lt;sub&gt;1&lt;/sub&gt;).&lt;br /&gt;&lt;br /&gt;So in fact we only get finitely many cases:&lt;br /&gt;&lt;ul&gt;&lt;li&gt;If x&amp;ge;2, just start doing y&lt;sup&gt;y&lt;/sup&gt;.&lt;/li&gt;&lt;li&gt;If 1&amp;le;x&amp;le;2, double once, then start doing y&lt;sup&gt;y&lt;/sup&gt;.&lt;/li&gt;&lt;li&gt;If &amp;frac12;&amp;le;x&amp;le;1, double twice, then start doing y&lt;sup&gt;y&lt;/sup&gt;.&lt;/li&gt;&lt;li&gt;If C&lt;sub&gt;1&lt;/sub&gt;&amp;le;x&amp;le;&amp;frac12;, double three times, then start doing y&lt;sup&gt;y&lt;/sup&gt;.&lt;/li&gt;&lt;li&gt;If x&amp;le;C&lt;sub&gt;1&lt;/sub&gt;, do y&lt;sup&gt;y&lt;/sup&gt;, then double twice, then start doing y&lt;sup&gt;y&lt;/sup&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;But, OK, circuits are easy.  
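(The case list above can be replayed mechanically. Here's a sketch that greedily keeps whichever of 2y and y^y is larger at each circuit step, and records the choices; squaring is omitted since it never beats both of those, and the name circuit_ops is mine, not from the post.)

```python
def circuit_ops(x, steps=4):
    """Greedy circuit strategy with +, *, ^ available: at each step keep
    the better of doubling y and raising y to the y.  Squaring is skipped
    because it never beats both of the other two options."""
    y, ops = x, []
    for _ in range(steps):
        if y ** y >= 2 * y:
            y, op = y ** y, 'y^y'
        else:
            y, op = 2 * y, 'dbl'
        ops.append(op)
    return ops

# x = 0.75 is in the "double twice, then start doing y^y" case:
assert circuit_ops(0.75)[:3] == ['dbl', 'dbl', 'y^y']
# x = 0.2 is below C1: one y^y (jumping up to 0.2^0.2, about 0.725),
# then two doublings, then y^y from there on:
assert circuit_ops(0.2) == ['y^y', 'dbl', 'dbl', 'y^y']
```

(Only a few steps are simulated, since after two or three y^y steps the values overflow a float anyway; that's enough to see which of the five cases a given x falls into.)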
What about formulas?&lt;br /&gt;&lt;br /&gt;Obviously, addition-only is pretty silly; you can only make kx.&lt;br /&gt;&lt;br /&gt;With addition and multiplication, we have the original Mahler-Popken problem.  Now, Mahler and Popken came up with a very interesting solution to this problem, which I'll go into detail on in a subsequent entry.  But for now I want to just give it in pretty broad strokes.  Basically, given x, you can find a group size, m, such that the thing to do is to add up the x's in groups of approximately m, and then multiply these groups together.  If k is not divisible by m, you will need some groups of size m-1 or m+1 as well.  Importantly, which of these you should use depends only on x and the congruence class of k modulo m.&lt;br /&gt;&lt;br /&gt;(It's also possible to have x right on a boundary point, so that both m and m+1 work as group sizes.  In this case one doesn't need to worry about divisibility conditions.  But, again, I'm avoiding going too deep into this here.  However, there's a reason, which you'll see in a moment, why I want to point out the existence of these boundary cases.)&lt;br /&gt;&lt;br /&gt;Regardless, the point is, if M&lt;sub&gt;k&lt;/sub&gt;(x) is the largest number we can make with k x's, one has, for k sufficiently large,&lt;br /&gt;&lt;br /&gt;M&lt;sub&gt;k&lt;/sub&gt;(x) = mx &amp;sdot; M&lt;sub&gt;k-m&lt;/sub&gt;(x)&lt;br /&gt;&lt;br /&gt;Or, if we want to be more abstract, that&lt;br /&gt;&lt;br /&gt;M&lt;sub&gt;k&lt;/sub&gt;(x) = M&lt;sub&gt;m&lt;/sub&gt;(x)M&lt;sub&gt;k-m&lt;/sub&gt;(x).&lt;br /&gt;&lt;br /&gt;So.  
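(A quick sanity check on that recurrence, at one sample point. At x=1 the optimal group size is m=3 -- the classic fact that to maximize a product of parts of k you use threes -- so M_k(1) should satisfy M_k = 3 * M_{k-3} once k is large enough. The brute-force function M below is my own sketch, not anything from the post.)

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def M(k, x):
    """Largest value buildable from exactly k copies of x using + and *.
    Since + and * are monotone, only the best value on each side of the
    top-level split matters."""
    if k == 1:
        return x
    candidates = []
    for i in range(1, k // 2 + 1):
        a, b = M(i, x), M(k - i, x)
        candidates.extend([a + b, a * b])
    return max(candidates)

one = Fraction(1)
assert M(9, one) == 27                # (1+1+1)(1+1+1)(1+1+1)
# The recurrence M_k(x) = M_m(x) M_{k-m}(x), with m=3, holds from k=5 on:
assert all(M(k, one) == 3 * M(k - 3, one) for k in range(5, 25))
```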
Here is my conjecture for the Mahler-Popken problem for addition, multiplication, and exponentiation, using M'&lt;sub&gt;k&lt;/sub&gt;(x) to denote the largest number we can make this way:&lt;br /&gt;&lt;br /&gt;For any x there exists an m such that for sufficiently large k we have&lt;br /&gt;&lt;br /&gt;M'&lt;sub&gt;k&lt;/sub&gt;(x) = M'&lt;sub&gt;m&lt;/sub&gt;(x)&lt;sup&gt;M'&lt;sub&gt;k-m&lt;/sub&gt;(x)&lt;/sup&gt;.&lt;br /&gt;&lt;br /&gt;Moreover, m is the smallest m such that M'&lt;sub&gt;m&lt;/sub&gt;(x)&amp;gt;1.&lt;br /&gt;&lt;br /&gt;Now this looks similar to the previous case, the Mahler-Popken problem, right?  But that second part means it's actually quite different.&lt;br /&gt;&lt;br /&gt;Firstly, as we saw with exponentiation earlier, there are once again only finitely many cases (in a sense -- more on this in a bit).  In the original Mahler-Popken problem, as x&amp;rarr;0, m&amp;rarr;&amp;infin;.  But here, again, for any x, we have x&lt;sup&gt;x&lt;/sup&gt;&amp;ge;&amp;frac12;, and so m can never exceed 4, no matter how small x gets.  
More specifically, we get the following cases:&lt;br /&gt;&lt;br /&gt;&lt;ul&gt;&lt;li&gt;If x&amp;gt;1, then m=1 and so M'&lt;sub&gt;m&lt;/sub&gt;(x) = x.&lt;/li&gt;&lt;li&gt;If &amp;frac12;&amp;lt;x&amp;le;1, then m=2 and M'&lt;sub&gt;m&lt;/sub&gt;(x) = 2x.&lt;/li&gt;&lt;li&gt;If C&lt;sub&gt;1&lt;/sub&gt;&amp;le;x&amp;le;&amp;frac12;, then m=3 and M'&lt;sub&gt;m&lt;/sub&gt;(x) = 3x.&lt;/li&gt;&lt;li&gt;If C&lt;sub&gt;2&lt;/sub&gt;&amp;lt;x&amp;le;C&lt;sub&gt;1&lt;/sub&gt;, then m=3 and M'&lt;sub&gt;m&lt;/sub&gt;(x) = x&lt;sup&gt;x&lt;/sup&gt;+x.&lt;/li&gt;&lt;li&gt;If x&amp;le;C&lt;sub&gt;2&lt;/sub&gt;, then m=4 and M'&lt;sub&gt;m&lt;/sub&gt;(x) = 2x&lt;sup&gt;x&lt;/sup&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;&lt;br /&gt;Here C&lt;sub&gt;1&lt;/sub&gt; is, as above, the solution in (0,1) to x&lt;sup&gt;x&lt;/sup&gt;=2x, and C&lt;sub&gt;2&lt;/sub&gt; is the solution in (0,1) to x&lt;sup&gt;x&lt;/sup&gt;+x=1.  C&lt;sub&gt;1&lt;/sub&gt; is, as previously mentioned, about 0.346, and C&lt;sub&gt;2&lt;/sub&gt; is about 0.303.&lt;br /&gt;&lt;br /&gt;So, that's different.  And it seems to be nicer, right?  But there's something else that's very different here.  Notice that, with the exception of C&lt;sub&gt;1&lt;/sub&gt;, these boundary points are all &lt;i&gt;sharp&lt;/i&gt; boundaries.  It's not like in the Mahler-Popken problem, where on one side of the boundary you use m, on the other side you use m+1, and at the boundary point both work and you can use either.  Here, there's an abrupt change of behavior when you hit the boundary point, assuming you're approaching it from above.&lt;br /&gt;&lt;br /&gt;This might seem to be impossible -- after all, it's easy to see that for any k, M'&lt;sub&gt;k&lt;/sub&gt;(x) must be continuous in x.  Doesn't this contradict that?&lt;br /&gt;&lt;br /&gt;Well, not quite.  Remember I only said "for sufficiently large k".  
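(A parenthetical before going on: the case table above can be spot-checked by brute force, one sample x per row, using the approximations C1 = 0.346 and C2 = 0.303 from above. This is a float-based sketch of my own; exp_values and M_prime are my names, not the post's.)

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def exp_values(k, x):
    """All values buildable from exactly k copies of x using +, *, and ^.
    Note the full split range: exponentiation is not commutative."""
    if k == 1:
        return frozenset({x})
    out = set()
    for i in range(1, k):
        for a in exp_values(i, x):
            for b in exp_values(k - i, x):
                out.update({a + b, a * b, a ** b})
    return frozenset(out)

def M_prime(k, x):
    """M'_k(x): the largest value buildable with k copies of x."""
    return max(exp_values(k, x))

# One sample point per nontrivial row of the table:
assert math.isclose(M_prime(2, 0.75), 2 * 0.75)           # m=2: M'_2 = 2x
assert math.isclose(M_prime(3, 0.40), 3 * 0.40)           # m=3: M'_3 = 3x
assert math.isclose(M_prime(3, 0.32), 0.32**0.32 + 0.32)  # m=3: M'_3 = x^x + x
assert math.isclose(M_prime(4, 0.25), 2 * 0.25**0.25)     # m=4: M'_4 = 2x^x
```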
There's no contradiction here, so long as, when you approach one of these sharp boundaries from above, how large k has to be before this kicks in keeps getting higher and higher.  Or, in other words, so long as the initial irregularities keep getting worse and worse.  So while I said above you only get "finitely many cases", that's only true so long as you only look at the eventual behavior and not at all at the initial irregularities that are going to end up preserved at the top of that chain of exponentials, like the groups of m+1 or m-1 in the Mahler-Popken problem.&lt;br /&gt;&lt;br /&gt;I haven't discussed the solution to the Mahler-Popken problem in detail here, but there's nothing like that there.  There, things only get bad, you only get "infinitely many cases", as x approaches 0 and m approaches infinity; you never get infinitely many cases when m is fixed.&lt;br /&gt;&lt;br /&gt;Of course, my conjecture could be wrong.  But, I think it is correct, and that is what it implies.  In some ways it's quite nice.  In other ways, not so.  (Having tried it experimentally, I can tell you that those initial irregularities that get preserved at the top can get quite bad indeed.)&lt;br /&gt;&lt;br /&gt;I have not attempted to define defects for this problem, but to me this suggests that if one were to do so, they would probably not be well-ordered.&lt;br /&gt;&lt;br /&gt;Next time: More on the solution to the original Mahler-Popken problem, and a different variant than any of these!&lt;br /&gt;&lt;br /&gt;&lt;img src="https://www.dreamwidth.org/tools/commentcount?user=sniffnoy&amp;ditemid=542688" width="30" height="12" alt="comment count unavailable" style="vertical-align: middle;"/&gt; comments</content>
  </entry>
</feed>
