Dec. 2nd, 2009

sniffnoy: (Chu-Chu Zig)
Actually, now I do think I have a way of improving on Josh's proof - don't look at the *minimal* nonzero defect, just an acceptably small one - one small enough that the stuff below it has a small multiplicative closure. Such as, say, 1/2. (That way everything below it is of the form 2^a*3^b.) Actually, anything less than 1 should do, but of course there's a tradeoff between growth rate and starting value. Of course, since 1/2 - or anything not below the minimal nonzero defect - gets you powers of 2, it can't tell us anything about 2^a*3^b; but anything below the minimal nonzero defect is so impractical I'm pretty sure it can't either.

I still have to check all this, but I'm pretty sure this should work.

-Harry
sniffnoy: (Dead face)
So, more on integer complexity. Let's recall I'm using f(n) to denote the complexity of n, and g(k) to denote the highest number of complexity k. We have f(n) ≥ 3log_3(n); the smallest differences occur when n is of the form g(k) for some k, i.e. it's of the form 2^a*3^b with a≤2, and after those the smallest differences occur when n is what I called a "canopy number": those of the form 2^a*3^k with a≤10 or 2^a(2^b*3^m+1)3^k with a+b≤2.
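For small n all of this can be computed directly. Here's a minimal sketch using the standard dynamic program for integer complexity (the code and names are illustrative, not from the argument here):

```python
import math

def complexities(N):
    """f[n] = integer complexity of n: the fewest 1s needed to write n
    using only 1s, +, *, and parentheses (standard dynamic program)."""
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        # the top node of an optimal expression is a sum or a product,
        # so try every additive and every multiplicative split
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

f = complexities(100)

def defect(n):
    # the quantity under discussion: f(n) - 3 log_3(n), always >= 0
    return f[n] - 3 * math.log(n, 3)
```

The minimal nonzero defect mentioned below, 2 - 3log_3(2) ≈ 0.107, shows up here as defect(2).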

So let's define A_k={n∈N: f(n)-3log_3(n)≤k} (we may also want to always throw in 1 even if k<1), with A_k(x) denoting the number of elements of A_k that are at most x; and let's recall that Josh proved that A_k(x)=O((log x)^K) for some K depending on k. (Actually, have I stated that before? Well, he proved it a while ago. In particular, it follows that f(n)-3log_3(n) is unbounded, which was the original motivation.)
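Concretely, the counting function can be tabulated by brute force for small x (an illustrative sketch, using the standard dynamic program for f):

```python
import math

def complexities(N):
    # fewest 1s to build n from 1s, +, and * (standard dynamic program)
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

def A(k, x, f):
    """The counting function A_k(x) = #{1 <= n <= x : f(n) - 3 log_3(n) <= k}."""
    return sum(1 for n in range(1, x + 1)
               if f[n] - 3 * math.log(n, 3) <= k + 1e-9)
```

For instance, A(0, 200, f) picks out exactly the powers of 3 up to 200 (note n=1 has defect 1, so it is excluded at k=0).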

Notice that if we define L(x)=min k such that g(k)≥x, this stays within a bounded distance of 3log_3(x), so the statement is equivalent with L(x) in place of 3log_3(x). Or instead we might use, say, ⌈3log_3(x)⌉, or l(x), defined by l(x)=max k such that g(k)≤x. (Note l(x)=L(x)-1 unless x=g(k) for some k, in which case l(x)=L(x)=k.)
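These too can be checked by brute force for small values (a sketch of mine: g, L, and l computed from the same standard dynamic program, with the cutoff chosen so the small values of g come out right):

```python
def complexities(N):
    # fewest 1s to build n from 1s, +, and * (standard dynamic program)
    f = [0] * (N + 1)
    f[1] = 1
    for n in range(2, N + 1):
        best = min(f[a] + f[n - a] for a in range(1, n // 2 + 1))
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
        f[n] = best
    return f

N = 400
f = complexities(N)

# g[k] = largest n <= N of complexity exactly k; trustworthy only for
# k <= 12 here, since g(12) = 81 and everything of complexity <= 12 is <= 81 < N
g = {}
for n in range(1, N + 1):
    g[f[n]] = max(g.get(f[n], 0), n)

def L(x):
    """min k such that g(k) >= x (valid for x <= g(12) = 81 at this cutoff)."""
    return min(k for k in range(1, 13) if g[k] >= x)

def l(x):
    """max k such that g(k) <= x (valid for x <= g(12) = 81 at this cutoff)."""
    return max(k for k in range(1, 13) if g[k] <= x)
```

For example, L(10) = 7 and l(10) = 6 (since g(6) = 9 < 10 ≤ 12 = g(7)), while at x = g(7) = 12 both equal 7, matching the parenthetical above.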

Regardless, I'm going to think of this difference, between f and whichever function-that's-close-to-3log_3 we're using, as a "defect".

Now, here's the thing about Josh's proof: when I defined A_k, note that I never required k to be an integer, and if we're going to be using actual logs, then, well, that would be kind of a silly requirement. And so Josh's proof works by inducting on k, increasing it 1/10 at a time. (Actually 2-3log_3(2)-ε, but it's easier to just say "1/10".)

This is perfectly good if all you want to do is prove that A_k(x) is polylogarithmic, but if you want good bounds on what the exponent is, it stinks. We start with an exponent of 1 for A_0 as our base case; this gets us an exponent of 2^(10k) for A_k. An exponent of 1024 for A_1. The actual minimal exponent for A_1 is 2 (this follows from what I did earlier).

Why 1/10? Because 2-3log_3(2) is the minimal nonzero defect that can occur. So I had an idea: if we use a different function in place of 3log_3, one that gives us a larger minimal nonzero defect - say, 1, because the function will always be an integer - we could get much better bounds. Indeed, I even found a way to bring K down from exponential in k all the way to linear in k - but this specifically used the fact that we were dealing with integers. And not only that, but the actual concrete methods that led to these bounds could, I thought, potentially prove the 2^a*3^b conjecture for higher a - for a short time I thought I had (nearly) proved it for a≤29.

But apparently there were some major holes in what I'd done.

You see, while the bad thing about Josh's proof is its small minimal nonzero defect, the good thing - that makes it work, unlike my attempts at variants of it - is just what happens at zero defect. You don't want there to be many things of zero defect - rather, you don't want there to be many elements in the multiplicative closure of the set of zero-defect numbers. If we're just using 3log_3, we get powers of 3, which are already closed under multiplication. Pretty good.

But suppose we use L instead - then you get all the canopy numbers. Because of that pesky +1, these are far from closed under multiplication. I'm not sure exactly how best to estimate how many products of canopy numbers are at most x, but I don't think it's polylogarithmic, and assuming that's correct, it makes it useless for this. Using the stricter ⌈3log_3⌉ doesn't help any.

Ah, but what if instead I use l? For a while, I thought this fixed it - then the numbers of defect 0 are the g(k), which, while not closed under multiplication, yield just the numbers of the form 2^a*3^b when you close them up - logarithm squared many, which is perfectly fine. But the problem is that, unlike the functions above, l doesn't satisfy l(ab)≤l(a)+l(b), which is, SFAICT, quite necessary for the induction.
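As a sanity check on "logarithm squared": that closure is the set of 3-smooth numbers 2^a*3^b, which can be counted directly (a quick illustrative sketch, not part of the argument):

```python
def count_3smooth(x):
    """Count the numbers 2^a * 3^b <= x, i.e. the multiplicative
    closure of the g(k)."""
    total = 0
    p3 = 1
    while p3 <= x:       # p3 runs over powers of 3
        p = p3
        while p <= x:    # p runs over p3 * 2^a
            total += 1
            p *= 2
        p3 *= 3
    return total
```

Up to x this is about (log x)^2 / (2 log 2 log 3): roughly log_3(x) choices of b, each allowing roughly log_2(x/3^b) choices of a.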

So we don't have 2^a*3^b for a≤29, we don't have that you can just use K=2k or 3k-1 or even just 2^k or 2^(k+1)-1; what we have that works is just Josh's original proof, which gets you K=2^(10k). (Unless I'm wrong about products of canopy numbers.)

...well, at least this'll make writing everything up quite a bit easier.

-Harry
