Apr. 5th, 2018

So I want to discuss here some variants of the Mahler-Popken problem.

The original Mahler-Popken problem is this: Pick a real number x>0. What is the largest number we can make with k x's using addition and multiplication? I want to consider the analogous problem for some other models of computation.

We'll vary things in two ways: Firstly, we can allow addition only; addition and multiplication; or addition, multiplication, and exponentiation. Secondly, we can either use a formula model, where we have k x's available to us to combine with our given operations; or a circuit model, where we start with x and have k steps available to us, and at each step we can combine any two things we've made with one of our given operations.

(The circuit-model problems are easy, as there are no tradeoffs involved, but I want to include them anyway.)

One more note: I may sometimes restrict to k sufficiently large, where how large k has to be depends on x.

So. Let's start with the circuit model. If we have only addition, the problem is trivial: We double at every step, ending up with 2^k x as our answer. The particular value of x didn't really play any role here.

If we have addition and multiplication, then, if x≥2, obviously we want to square at each step. But if x<2 -- or more generally, letting y be our largest number made so far, if y<2 -- then doubling is better than squaring. So, again, this is easy -- we double until we get a number at least 2, and then we start squaring.

If we also allow exponentiation, then (again using the notation "y" from above) of course if y≥2 then we want to take y^y. But what if y<2? Then doubling is best... until y dips below a number I'll call C_1, approximately 0.346, the solution in (0,1) to the equation x^x=2x. Below that, taking y^y is better again. So, OK -- starting from y=x, we repeatedly take y^y until y≥C_1, then double until y≥2, then repeatedly take y^y. (Note that if we start with y<C_1, it's impossible to skip that middle segment, since we'll necessarily have y^y<1.)

But something funny happens here that doesn't happen in the addition-multiplication case. With just plus and times, it could take an arbitrary number of doublings before you hit 2, right? Basically we have infinitely many cases, with transition points at x=2^(1-k). But here, x^x is always at least e^(-1/e), which is about 0.692, and in particular greater than ½ (and also greater than C_1).

So in fact we only get finitely many cases:

  • If x≥2, just start doing y^y.
  • If 1≤x≤2, double once, then start doing y^y.
  • If ½≤x≤1, double twice, then start doing y^y.
  • If C_1≤x≤½, double three times, then start doing y^y.
  • If x≤C_1, do y^y, then double twice, then start doing y^y.
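
To make this concrete, here's a minimal Python sketch of my own (not anything canonical) of the greedy procedure; as noted above, there are no tradeoffs in the circuit model, so greedy is all you need. Running it with x below C_1 reproduces the pattern in the last case:

```python
def circuit_steps(x, k):
    """Greedy circuit model with +, *, ^: at each of k steps, replace y
    by the best of doubling, squaring, and y^y. (Per the discussion
    above, greedy is optimal here.)"""
    y, ops = x, []
    for _ in range(k):
        y, op = max((y + y, "double"), (y * y, "square"), (y ** y, "y^y"))
        ops.append(op)
    return y, ops

print(circuit_steps(0.3, 4))
# -> (~17.4, ['y^y', 'double', 'double', 'y^y']), matching the x <= C_1 case
```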


But, OK, circuits are easy. What about formulas?

Obviously, addition-only is pretty silly; you can only make kx.

With addition and multiplication, we have the original Mahler-Popken problem. Now, Mahler and Popken came up with a very interesting solution to this problem, which I'll go into detail on in a subsequent entry. But for now I want to just give it in pretty broad strokes. Basically, given x, you can find a group size, m, such that the thing to do is to add up the x's in groups of approximately m, and then multiply these groups together. If k is not divisible by m, you will need some groups of size m-1 or m+1 as well. Importantly, which of these you should use depends only on x and the congruence class of k modulo m.

(It's also possible for x to sit right on a boundary point, so that both m and m+1 work as group sizes. In this case one doesn't need to worry about divisibility conditions. But, again, I'm avoiding going too deep into this here. However, there's a reason, which you'll see in a moment, why I want to point out the existence of these boundary cases.)

Regardless, the point is, if M_k(x) is the largest number we can make with k x's, one has, for k sufficiently large,

M_k(x) = mx ⋅ M_{k-m}(x)

Or, if we want to be more abstract, that

M_k(x) = M_m(x) ⋅ M_{k-m}(x).
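
As a quick sanity check on this recursion, here's a small brute-force dynamic program of my own (not Mahler and Popken's actual method). Since + and × are increasing in each argument, it's enough to track only the largest value formable from each number of copies:

```python
def M(x, k):
    """Brute-force M_k(x): best[j] is the largest value formable from
    exactly j copies of x using + and *. Both operations are increasing
    in each argument, so tracking only maxima is safe."""
    best = [None, x]
    for j in range(2, k + 1):
        best.append(max(max(best[i] + best[j - i], best[i] * best[j - i])
                        for i in range(1, j // 2 + 1)))
    return best[k]

# For x = 1 the optimal groups have size 3, so m = 3 and M_k(1) = 3^(k/3)
# when 3 divides k; the recursion then reads M_15(1) = 3 * M_12(1).
print(M(1, 12), M(1, 15))  # -> 81 243
```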

So. Here is my conjecture for the Mahler-Popken problem for addition, multiplication, and exponentiation, using M'_k(x) to denote the largest number we can make this way:

For any x there exists an m such that for sufficiently large k we have

M'_k(x) = M'_m(x)^{M'_{k-m}(x)}.

Moreover, m is the smallest m such that M'_m(x)>1.

Now this looks similar to the previous case, the Mahler-Popken problem, right? But that second part means it's actually quite different.

Firstly, as we saw with exponentiation earlier, there are once again only finitely many cases (in a sense -- more on this in a bit). In the original Mahler-Popken problem, as x→0, m→∞. But here, again, for any x, we have x^x≥½, and so m can never exceed 4, no matter how small x gets. More specifically, we get the following cases:


  • If x>1, then m=1 and so M'_m(x) = x.
  • If ½<x≤1, then m=2 and M'_m(x) = 2x.
  • If C_1≤x≤½, then m=3 and M'_m(x) = 3x.
  • If C_2<x≤C_1, then m=3 and M'_m(x) = x^x+x.
  • If x≤C_2, then m=4 and M'_m(x) = 2x^x.


Here C_1 is, as above, the solution in (0,1) to x^x=2x, and C_2 is the solution in (0,1) to x^x+x=1. C_1 is, as previously mentioned, about 0.346, and C_2 is about 0.303.
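
If you want to pin these constants down yourself, a plain bisection does the job. Here's a little sketch of mine (the bracketing interval (0.1, 0.5) is just chosen by eye to contain each root), together with the case list above packaged as a function:

```python
def bisect(f, lo, hi, iters=60):
    """Bisection for a root of f, assuming a single sign change on [lo, hi]."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

C1 = bisect(lambda t: t**t - 2*t, 0.1, 0.5)    # solves x^x = 2x
C2 = bisect(lambda t: t**t + t - 1, 0.1, 0.5)  # solves x^x + x = 1

def m_and_base(x):
    """The conjectured m and M'_m(x), straight from the case list above."""
    if x > 1:
        return 1, x
    if x > 0.5:
        return 2, 2 * x
    if x >= C1:
        return 3, 3 * x
    if x > C2:
        return 3, x**x + x
    return 4, 2 * x**x

print(C1, C2)            # roughly 0.346 and 0.303, as stated above
print(m_and_base(0.25))  # below C_2: (4, 2 * 0.25**0.25) = (4, ~1.414)
```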

So, that's different. And it seems to be nicer, right? But there's something else that's very different here. Notice that, with the exception of C_1, these boundary points are all sharp boundaries. It's not like in the Mahler-Popken problem, where on one side of the boundary you use m, on the other side you use m+1, and at the boundary point itself both work and you can use either. Here, there's an abrupt change of behavior when you hit the boundary point, assuming you're approaching it from above.

This might seem to be impossible -- after all, it's easy to see that for any k, M'_k(x) must be continuous in x. Doesn't this contradict that?

Well, not quite. Remember, I only said "for sufficiently large k". There's no contradiction here, so long as, when you approach one of these sharp boundaries from above, how large k has to be before this behavior kicks in keeps getting higher and higher. Or, in other words, so long as the initial irregularities keep getting worse and worse. So while I said above that you only get "finitely many cases", that's only true so long as you look only at the eventual behavior and not at the initial irregularities that are going to end up preserved at the top of that chain of exponentials, like the groups of m+1 or m-1 in the Mahler-Popken problem.

I haven't discussed the solution to the Mahler-Popken problem in detail here, but there's nothing like that there. There, things only go bad -- you only get "infinitely many cases" -- as x approaches 0 and m approaches infinity; you never get infinitely many cases when m is fixed.

Of course, my conjecture could be wrong. But I think it is correct, and that this is indeed what it implies. In some ways that's quite nice; in other ways, not so much. (Having tried it experimentally, I can tell you that those initial irregularities that get preserved at the top can get quite bad indeed.)
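
For anyone who wants to see those irregularities for themselves, here's the sort of brute-force experiment I have in mind (a sketch of my own, and certainly not efficient). With exponentiation in the mix, tracking only the maximum for each size is no longer obviously safe, since a base below 1 makes a^b decreasing in b; so this tracks every achievable value, which limits it to quite small k:

```python
def all_values(x, k):
    """vals[j] = every value formable from exactly j copies of x using
    +, *, and ^. Exponential in k, so only usable for small k; powers
    too large for a float are recorded as infinity."""
    vals = {1: {x}}
    for j in range(2, k + 1):
        s = set()
        for i in range(1, j):
            for a in vals[i]:
                for b in vals[j - i]:
                    s.add(a + b)
                    s.add(a * b)
                    try:
                        s.add(a ** b)
                    except OverflowError:
                        s.add(float("inf"))
        vals[j] = s
    return vals

vals = all_values(0.4, 4)
print([max(vals[j]) for j in range(1, 5)])
# -> approximately [0.4, 0.8, 1.2, 1.6]; the first value above 1 appears
# at m = 3, matching the C_1 <= x <= 1/2 case above.
```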

I have not attempted to define defects for this problem, but to me this suggests that if one were to do so, they would probably not be well-ordered.

Next time: More on the solution to the original Mahler-Popken problem, and a different variant than any of these!
