So joshuazelinsky recently sent me a link to this paper and wow!
This paper contains just two theorems but both are huge advances in integer complexity; one on the upper bound, one on the lower bound.
First, the upper bound. Let's review -- what upper bounds are known on integer complexity? If one wants a bound that works for all n, well, there's the naive bound ||n|| ≤ 3 log₂ n, and then there are Josh's improvements on that, and that's it. The empirical maximum of ||n||/(log n) occurs at n=1439, but these bounds aren't good enough to prove that that value is indeed the maximum; they're substantial overestimates. What if you just want bounds that work for all but finitely many n, i.e., bounds on the lim sup? Sorry, we don't have any of those that aren't also bounds for all n.
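(None of this is from the paper, but integer complexity is easy to experiment with yourself. Here's a minimal dynamic program that computes ||n|| and finds the record-setter for ||n||/(log n) below 2000 -- it's the n=1439 mentioned above, and you can see how far its ratio sits below the naive 3/(log 2) bound:)

```python
import math

# Integer complexity ||n||: the fewest 1s needed to build n from 1 using
# only + and *.  Every optimal expression is a sum or a product of two
# smaller optimally-built pieces, so a simple dynamic program works.
N = 2000
f = [0, 1] + [float("inf")] * (N - 1)    # f[n] will hold ||n||; f[0] unused
for n in range(2, N + 1):
    for a in range(1, n // 2 + 1):       # try n = a + (n - a)
        f[n] = min(f[n], f[a] + f[n - a])
    d = 2
    while d * d <= n:                    # try n = d * (n // d)
        if n % d == 0:
            f[n] = min(f[n], f[d] + f[n // d])
        d += 1

best = max(range(2, N + 1), key=lambda n: f[n] / math.log(n))
print(best, f[best], f[best] / math.log(best))   # → 1439 26 3.575...
print(3 / math.log(3), 3 / math.log(2))          # ≈ 2.7307 (lim inf), ≈ 4.3281 (naive bound)
```

So ||1439|| = 26 and 26/log 1439 ≈ 3.575, comfortably between the trivial lower bound 3/(log 3) ≈ 2.731 and the naive upper bound 3/(log 2) ≈ 4.328 -- which is exactly why the known bounds can't certify 1439 as the maximum.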
But we do know of better results if you just want bounds that work for almost all n, bounds on what in this post I called lim sup ap ||n||/(log n), which are obtained by the averaging method; these bounds are good enough to break the 1439 barrier, but of course they don't bound the actual lim sup. And we know of even better bounds if your idea of "almost all" only requires logarithmic density 1, rather than natural density 1; this is what I've now been denoting lim sup ap* ||n||/(log n), and the bounds come from Steinerberger and Shriver's method. The current best bounds for both of these categories come, to my knowledge, from Kazuyuki Amano's work.
Well. Konyagin and Oganesyan claim that they have shown that the averaging method actually yields an upper bound on lim sup ||n||/(log n)! Indeed, more than that -- you should just go click through to their paper and read their inequality. They have well and truly broken the 1439 barrier, proving that only finitely many numbers can have a complexity that high. Although, of course, their actual inequality comes with an error term, and having asked them, they say the error term is bad enough that their theorem doesn't currently prove that 1439 is the actual maximum. But wow! This is a massive improvement on the state of the art.
But wait, their next theorem is equally impressive. For a long time, nobody had been able to get a nontrivial lower bound on lim sup ||n||/(log n) either. By "nontrivial" here I mean "better than the known lim inf, 3/(log 3)". It was an open question whether ||n|| was asymptotic to 3 log₃ n, and some people thought it was, even though this would mean that ||2^k|| = 2k would have to fail for large k (since 2k exceeds 3 log₃ 2^k by a constant factor of about 1.057)! Well, Konyagin and Oganesyan say they've now done it -- they've proven a lower bound on lim sup ||n||/(log n) that is larger than the trivial one, showing that ||n|| is not asymptotic to 3 log₃ n after all.
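(The equality ||2^k|| = 2k is at least easy to verify by brute force for small k -- this says nothing about the asymptotics, of course, but here's a quick check, along with the trivial lower bound ||n|| ≥ 3 log₃ n that gives the known lim inf:)

```python
import math

# Brute-force check that ||2^k|| = 2k for small k, alongside the trivial
# lower bound ||n|| >= 3 log_3 n that gives the lim inf of 3/(log 3).
N = 1 << 10                              # go up to 2^10
f = [0, 1] + [float("inf")] * (N - 1)    # f[n] = ||n||, by dynamic programming
for n in range(2, N + 1):
    for a in range(1, n // 2 + 1):       # n = a + (n - a)
        f[n] = min(f[n], f[a] + f[n - a])
    d = 2
    while d * d <= n:                    # n = d * (n // d)
        if n % d == 0:
            f[n] = min(f[n], f[d] + f[n // d])
        d += 1

for k in range(1, 11):
    assert f[2 ** k] == 2 * k            # (1+1) multiplied k times: 2k ones
# small tolerance since math.log(n, 3) is inexact at powers of 3
assert min(f[n] - 3 * math.log(n, 3) for n in range(2, N + 1)) > -1e-9
print("||2^k|| = 2k verified for k <= 10")
```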
Except, they're actually claiming something much stronger. Getting a lower bound on the lim sup would mean showing that infinitely many n have ||n||/(log n) above the bound. They say they've shown that in fact, almost all n do.
So this isn't just a lower bound on the lim sup -- it's a lower bound on the lim inf ap! That's basically as good as you could do!
Their proof here is actually based on my work with joshuazelinsky on the defect. At a high level, their approach is to first use our iterative classification theorem -- yes, the original one, they're not using low-defect polynomials -- to establish upper bounds on how many leaders below x have defect in the range (k-1)σ to kσ, where σ is the variable they're using to denote their step size (they pick σ=0.48), with these upper bounds, importantly, being uniform in both x and k. (This is the hard part.) Once they've done that, they apply this to count how many numbers n below a bound x have ||n|| < (3+γ) log₃ n, where γ is a number they've picked for this to work (they pick γ=0.06), and compute that, oh look, it's o(x). Therefore, almost all numbers have ||n||/(log n) above this bound. Tada!
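(To make the counting step concrete, here's a toy version of that final count -- mine, not theirs -- using their γ=0.06 at a small cutoff. Even this far from the asymptotic regime, the n with ||n|| < (3+γ) log₃ n are visibly sparse; they're basically the powers of 3 plus a few other 3-smooth numbers 2^a·3^b:)

```python
import math

# Count how many n <= N satisfy ||n|| < (3 + gamma) log_3 n with gamma = 0.06.
# The theorem says this count is o(N); even at small N such n are already rare.
N, gamma = 3000, 0.06
f = [0, 1] + [float("inf")] * (N - 1)    # f[n] = ||n||, by dynamic programming
for n in range(2, N + 1):
    for a in range(1, n // 2 + 1):       # n = a + (n - a)
        f[n] = min(f[n], f[a] + f[n - a])
    d = 2
    while d * d <= n:                    # n = d * (n // d)
        if n % d == 0:
            f[n] = min(f[n], f[d] + f[n // d])
        d += 1

cheap = [n for n in range(2, N + 1) if f[n] < (3 + gamma) * math.log(n, 3)]
print(len(cheap), len(cheap) / N)        # a tiny fraction of [2, N]
print(cheap[:10])                        # powers of 3 and a few 3-smooth friends
```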
Josh and I actually worked on a similar idea many years ago (ours didn't require picking a step size below 1; we were looking at defects in between the integers k-1 and k, and of course were working based on low-defect polynomials), although I'm unsure if Josh's method would have been good enough to show that it worked for almost all n rather than just infinitely many; if it was, we didn't realize it. But ultimately it didn't work out because we couldn't prove those uniform bounds we needed. But Konyagin and Oganesyan say they've done it!
I do have to wonder about the choice of σ. I would expect larger values of σ to yield better results, so it's surprising to me that they picked it so far below what it could have been. I have asked them about this, however, and they are of the opinion that larger σ would probably not yield much better results. Still, we'll see if anyone manages to do any better with their ideas.
(It's possible they picked σ<½ so that they could start with B_σ and B_2σ both already known, using Josh's and my work classifying numbers with defect less than 1. Of course, my algorithms can be used to compute all numbers with defect less than 2, but maybe they didn't know about this. I've since sent them the output of such a calculation, just in case they can make use of it. Like I said, their opinion was that larger σ wouldn't be much better, but we'll see.)
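Here, I'm taking B_r to mean the set of n with defect δ(n) = ||n|| - 3 log₃ n below r (that's my reading of their notation, not a quote from the paper). For small cutoffs you can just compute these sets directly -- below 1000, B_0.48 turns out to be exactly the 3-smooth numbers 2^a·3^b with a ≤ 4:

```python
import math

# The sets B_r: numbers whose defect delta(n) = ||n|| - 3 log_3 n is below r.
# Here r = 0.48 (their sigma) and r = 0.96 (2 sigma), restricted to n <= 1000.
N = 1000
f = [0, 1] + [float("inf")] * (N - 1)    # f[n] = ||n||, by dynamic programming
for n in range(2, N + 1):
    for a in range(1, n // 2 + 1):       # n = a + (n - a)
        f[n] = min(f[n], f[a] + f[n - a])
    d = 2
    while d * d <= n:                    # n = d * (n // d)
        if n % d == 0:
            f[n] = min(f[n], f[d] + f[n // d])
        d += 1

def defect(n):
    return f[n] - 3 * math.log(n, 3)

B = {r: [n for n in range(2, N + 1) if defect(n) < r] for r in (0.48, 0.96)}
print(B[0.48])       # 2^a * 3^b with a <= 4: each such n has defect 0.107a
print(len(B[0.96]))  # the wider set also picks up e.g. 5*3^b and 7*3^b
```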
Now the question becomes, is it all correct? Unfortunately, their arguments are quite analysis-heavy, and so I am not the best person to evaluate them. So right now my answer can only be "I don't know". But I'm hopeful!
-Harry