Tresolution
Sep. 23rd, 2023 12:13 am
OK, I think I'm finally actually done with the tree stuff for now. I mean, I'm not done, I still have questions. But I think I have reached a point where I can finally rest and get back to other things! I have finally figured out a conjecture for what's going on for higher c. I guess I'll tell you about that here.
(I haven't yet proven it -- maybe it's actually easy, idk, but I am dead tired right now. I do have something of a heuristic explanation for why it should work. It also seems to explain every other phenomenon I've noticed about these trees, except for some ones that are easy to prove by other means.)
(And no I'm not putting this on the tree page. Too complicated and too far away from the original point.)
Anyway! I should actually state this rule. So the way I'm thinking of things, right, is that, we've got our tree, it's 0-rooted, it uses an additive constant of c>=1 (so R(n)=n+L(n)+c), and given n we want to predict L(n) without having to compute the whole tree.
Now, if we define v(n) = S(n+c-1)+3-2c, where S(n) is the Zeckendorf left-shift operation, the idea is that L(n) "ought" to equal v(n), and so what I'm really trying to predict is the difference L-v. (Here, the v function is essentially a sort of stabilized L. So I'm looking at how the real thing differs from the stabilized thing.)
As an example of what this looks like for small c -- for c=1, this difference is always 0. Same for c=2. For c=3, it's 0 unless n=Fk-3, in which case it's (-1)^k. From there on it gets more complicated.
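In case it helps, here's a quick Python sketch of these definitions as I've stated them: the greedy Zeckendorf decomposition (with F2=1, F3=2, F4=3, ...), each F_i bumped up to F_{i+1}:

```python
def S(n):
    """Zeckendorf left shift: write n as a sum of nonconsecutive
    Fibonacci numbers (greedily) and replace each F_i by F_{i+1}."""
    fibs = [1, 2]                     # F_2, F_3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    out = 0
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:
            n -= fibs[i]
            out += fibs[i + 1]        # bump F_i up to F_{i+1}
    return out

def v(n, c):
    """The stabilized-L baseline: v(n) = S(n+c-1) + 3 - 2c."""
    return S(n + c - 1) + 3 - 2 * c
```

For c=3 this gives v(0)=0 and v(2)=4; combined with the F_k-3 pattern above (0=F4-3, 2=F5-3), that pins down L(0)=1 and L(2)=3.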
Oh, one more definition: I'll use T(n) to mean the *dual* Zeckendorf right-shift operation. (Left shift is well-defined for all pseudo-Zeckendorf representations, so it doesn't matter whether you use ordinary Zeckendorf or dual, but this isn't true for right-shift! Here it matters.) This is also equal to ⌊n/φ⌋.
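The ⌊n/φ⌋ form is the convenient one to compute. A float-free sketch, using the identity ⌊nφ⌋ = (n + ⌊√(5n²)⌋)/2 for n ≥ 0 so that everything stays in exact integer arithmetic:

```python
from math import isqrt

def T(n):
    """Dual Zeckendorf right shift, equal to floor(n/phi).
    Since n/phi = n*phi - n and floor(n*phi) = (n + isqrt(5*n*n)) // 2,
    this needs no floating point at all."""
    return (n + isqrt(5 * n * n)) // 2 - n
```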
So: Our initial condition is that, for n<T(c-1), L(n)=n+1. Of course this is true rather further out; L(n)=n+1 for all n<c. But we only need it this far out for our initial condition.
(This initial condition is frankly a bit too complicated for my liking, but, I don't see any way to simplify it. Oh well.)
Then, the rule is that for n≥T(c-1), we have
L(n)-v(n) =
|{m<n: L(m)<v(m), S(m+c-1)-c+2+L(m)-v(m)<=n<=S(m+c-1)-c+1}| -
|{m<n: L(m)>v(m), S(m+c-1)-c+2<=n<=S(m+c-1)-c+1+L(m)-v(m)}|
OK, that probably warrants some explaining. (There's also something misleading about the expression above. I wrote it as a difference, but in fact, one of those two sets will always be empty. You either have all positive contributions or all negative contributions, never cancellation.)
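Since I haven't proven it, here's how I'd check it numerically. One caveat: the sketch below assumes the greedy construction of the tree (process nodes in increasing label order; L(n) is the smallest label not yet in the tree, and R(n)=n+L(n)+c), which this post doesn't restate; it does reproduce the c=3 pattern above, so I believe it's the right reading.

```python
from math import isqrt

def S(n):
    # Zeckendorf left shift: greedy decomposition, each F_i -> F_{i+1}
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    out = 0
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:
            n -= fibs[i]
            out += fibs[i + 1]
    return out

def T(n):
    return (n + isqrt(5 * n * n)) // 2 - n   # floor(n/phi), exact

def v(n, c):
    return S(n + c - 1) + 3 - 2 * c

def build_L(c, N):
    """Assumed construction: left child of n is the smallest label
    not yet in the tree; right child is R(n) = n + L(n) + c."""
    used, L, free = {0}, {}, 1
    for n in range(N):
        while free in used:
            free += 1
        L[n] = free
        used |= {free, n + free + c}
    return L

def rule_diff(n, c, d):
    """Conjectured L(n) - v(n) for n >= T(c-1), from earlier diffs d."""
    pos = sum(1 for m in range(n) if d[m] < 0
              and S(m + c - 1) - c + 2 + d[m] <= n <= S(m + c - 1) - c + 1)
    neg = sum(1 for m in range(n) if d[m] > 0
              and S(m + c - 1) - c + 2 <= n <= S(m + c - 1) - c + 1 + d[m])
    return pos - neg

for c in (3, 4, 5, 8):
    N = 300
    L = build_L(c, N)
    d = {m: L[m] - v(m, c) for m in range(N)}
    assert all(L[m] == m + 1 for m in range(T(c - 1)))            # initial condition
    assert all(d[n] == rule_diff(n, c, d) for n in range(T(c - 1), N))
```

With this tree, c=3 gives nonzero differences {0: +1, 2: -1, 5: +1, 10: -1, 18: +1} below n=30, which is exactly the (-1)^k at F_k-3 pattern (10=F7-3, 18=F8-3).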
Anyway, the way I'm thinking about it is this. We start with a bunch of positive exceptions (n such that L(n)>v(n)) in our initial condition, with appropriate multiplicities (values of |L(n)-v(n)|).
Then, a positive exception at n results in a negative exception at S(n+c-1)+2-c. If n has multiplicity k, it results in negative exceptions at S(n+c-1)+2-c, S(n+c-1)+3-c, ..., S(n+c-1)+1-c+k (that is, up through S(n+c-1)+1-c+L(n)-v(n)). And if some m ends up being an exception due to multiple different sources, well, those different sources add up; since each one contributes -1, you just get the negative of the number of them.
Similarly, a negative exception at n results in a positive exception at S(n+c-1)+1-c. And if n has multiplicity k, this time it extends downwards, resulting in positive exceptions at S(n+c-1)+1-c, S(n+c-1)-c, ..., S(n+c-1)+2-c+L(n)-v(n).
OK, actually, that's not quite the way I'm thinking about it. All of this gets much nicer if we adjust all the numbers by adding c-1 to all of them. (So the numbering now starts at c-1 instead of 0, and our additive constant is effectively 1.) So now, positive exceptions generate negative exceptions at S(n)+1, and negative exceptions generate positive exceptions at S(n).
Well, that's still not quite the way I'm thinking about it. Remember I mentioned there's never any cancellation? I think of that initial condition as being an initial "run", a contiguous segment where L(n)-v(n) maintains the same sign or is 0. It's a positive run. This run then generates the next run, which is a negative run; which then in turn generates the next run, which is positive, etc. Runs never overlap; each one comes purely from the previous one.
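The no-cancellation claim can be spot-checked numerically too (same caveat as elsewhere: this assumes the greedy tree construction, with L(n) the smallest unused label and R(n)=n+L(n)+c, which this post doesn't restate). The sketch counts the two sets from the rule separately and confirms at most one is ever nonempty:

```python
from math import isqrt

def S(n):
    # Zeckendorf left shift: greedy decomposition, each F_i -> F_{i+1}
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    out = 0
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= n:
            n -= fibs[i]
            out += fibs[i + 1]
    return out

def T(n):
    return (n + isqrt(5 * n * n)) // 2 - n   # floor(n/phi)

def v(n, c):
    return S(n + c - 1) + 3 - 2 * c

def build_L(c, N):
    # assumed greedy tree: smallest unused label as left child
    used, L, free = {0}, {}, 1
    for n in range(N):
        while free in used:
            free += 1
        L[n] = free
        used |= {free, n + free + c}
    return L

def no_cancellation(c, N):
    """True iff for every n at most one of the rule's two sets is
    nonempty: all positive contributions or all negative ones."""
    L = build_L(c, N)
    d = {m: L[m] - v(m, c) for m in range(N)}
    for n in range(T(c - 1), N):
        pos = sum(1 for m in range(n) if d[m] < 0
                  and S(m + c - 1) - c + 2 + d[m] <= n <= S(m + c - 1) - c + 1)
        neg = sum(1 for m in range(n) if d[m] > 0
                  and S(m + c - 1) - c + 2 <= n <= S(m + c - 1) - c + 1 + d[m])
        if pos and neg:
            return False
    return True

assert all(no_cancellation(c, 200) for c in (3, 4, 5, 8))
```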
Anyway, this rule explains a lot of things, like: why is it that for each c there are only finitely many n such that |L(n)-v(n)|>1? And why is it that the behavior of L(Fk-c) depends on whether the smallest Fibonacci number greater than or equal to c has even or odd index? I won't go into that here, but it answers those questions! Although, as mentioned, I haven't actually proven this yet.
There is sort of an intuitive explanation of why this rule should be true. Our initial condition is forced -- all those numbers are positive exceptions because the baseline v value for them is *negative* (or 0, or just too low due to the other positive exceptions in the initial condition) and so they get forced upwards.
Then, well, each positive exception generates a negative exception after it. That is, if some n has L(n) bumped up by 1, then R(n) is also bumped up by 1, which means it's bumped *out* of the spot it would normally occupy; so the later m such that v(m)=R(n) instead takes L(m)=R(n)-1, since that's been left free.
Correspondingly, each negative exception generates a later positive exception; if some L(n) is bumped down by 1, then R(n) is also bumped down by 1, which then of course means that if v(m)=R(n) then L(m) must be bumped *up* by 1.
And if |L(n)-v(n)|>1, well, then, this causes a cascade of sorts. And if multiple of these affect the same number it adds up I guess? Yeah idk -- actually proving it this way sounds annoying. But this is the heuristic explanation of why this rule should work.
Anyway it seems to check out and it explains everything else I'm seeing! Now I'd better get back to other things...