Quick thoughts on Buridan's principle
May. 3rd, 2020 08:43 pm
So I recently came across this paper (note the link to download is on the right!); see also this earlier one.
I bring it up because this is something I'd thought about before but had never thought to, like, formalize, so I'm glad someone else has written about it!
OK, so, to summarize: The world is full of discrete decision systems, right? Where you have a system and it's supposed to make an all-or-nothing decision between two alternatives, fall into one of two attractors. Or a finite number, doesn't matter.
Problem: As best we can tell, the universe evolves in a continuous manner! So there's no way to continuously map it to just a finite number of options (more than one, anyway). So, there must be some starting configuration that will *not* fall into either possibility.
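(If you want to see roughly how that argument goes -- this is just my sketch of it, under the simplifying assumption that the relevant initial condition is a single real number, not Lamport's exact formulation:)

```latex
% My sketch of the continuity argument (not Lamport's exact formulation).
% Assume the initial condition is a single real number $x \in [0,1]$, the state
% at time $t$ is a continuous function $F_t(x)$ of it, and "decided A" / "decided B"
% mean landing in disjoint open regions $U_A, U_B$ of state space. Fix any deadline $t$ and let
\[
  S_A = \{\, x \in [0,1] : F_t(x) \in U_A \,\}, \qquad
  S_B = \{\, x \in [0,1] : F_t(x) \in U_B \,\}.
\]
% Both sets are open (preimages of open sets under a continuous map) and disjoint.
% If each is nonempty, they cannot together cover the connected interval $[0,1]$,
% so some $x^*$ lies in neither: started from $x^*$, no decision has been made by
% time $t$. And $t$ was arbitrary, so no fixed deadline works for every start.
```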
From this Lamport derives Buridan's Principle: the time for a decision system to make a discrete decision *cannot* be bounded in advance, and sometimes it *will* simply fail to make a decision in time. Note that while he uses Buridan's Ass as a motivating example, there's nothing saying where the problematic starting configuration has to be; it doesn't have to be the exact midpoint between the two bales. And notice that the example he's really most concerned with is computer chips! Which are supposed to be digital, but are fundamentally running on electricity, whose analog nature cannot be entirely avoided.
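Here's a toy numerical version of that, which is my own illustration rather than anything from the paper: take the textbook double-well system dx/dt = x - x^3, read x reaching +1 as "left bale" and -1 as "right bale", and watch the time-to-commit blow up as the starting point approaches the boundary at 0.

```python
# Toy illustration (mine, not from Lamport's paper): a double-well "decision
# system" dx/dt = x - x**3. Starting points with x0 > 0 flow to +1 ("left"),
# x0 < 0 flow to -1 ("right"), and the time to commit grows without bound
# as x0 approaches the boundary at 0.

def decision_time(x0, threshold=0.9, dt=1e-3, t_max=100.0):
    """Integrate dx/dt = x - x^3 from x0 with forward Euler; return the time
    until |x| >= threshold, or None if still undecided at t_max."""
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) >= threshold:
            return t
        x += (x - x**3) * dt
        t += dt
    return None  # no decision reached by the deadline

for x0 in (0.5, 1e-1, 1e-3, 1e-6, 1e-12, 1e-45):
    t = decision_time(x0)
    print(f"x0 = {x0:.0e}  ->  decision time: {t if t is None else round(t, 2)}")
```

The boundary happens to sit at 0 here because this toy model is symmetric, but per the above there's no requirement that it sit anywhere so obvious.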
Now if you're Buridan's Ass you can just be like, well, I'll choose arbitrarily, or choose randomly. But while that helps in that situation, it doesn't help overall, because now you just have 3 discrete outcomes, and there'll be some point at which you get stuck choosing between picking the bale on the left and flipping a coin.
That said, there are some things I think he doesn't cover well. He does talk about adding noise, but he doesn't talk about *true* randomness. Now to be clear -- adding true randomness should not make a difference! It's kind of just a general principle that adding true classical randomness can't get you around this sort of impossibility result.
Nonetheless, it definitely makes the argument harder -- because sure, the individual discrete outcomes can't vary continuously, but a probability distribution sure can! So maybe there's no exceptional point where a decision isn't made, but rather the probability distribution just varies continuously from (1,0) on one side to (0,1) on the other. Kind of like spontaneous symmetry breaking -- there the symmetry breaks when you look at one world but is maintained when you look at the whole ensemble; here it's continuity instead of symmetry, I guess?
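To put a number on that picture (again just my own toy model, a noisy version of the double-well above): estimate the probability of ending up on the +1 side as a function of the starting point. The ensemble-level curve really does slide smoothly from about 0 to about 1, with no obvious jump at the boundary.

```python
# Toy illustration of the "continuous probability distribution" picture: the
# same double-well, now with Gaussian noise, simulated with Euler-Maruyama.
# Each run still ends up near +1 or -1, but the *fraction* of runs choosing +1
# varies smoothly with the starting point instead of jumping at x0 = 0.
import numpy as np

rng = np.random.default_rng(0)

def p_plus(x0, sigma=0.3, dt=1e-2, t_max=20.0, runs=2000):
    """Estimate P(x ends up on the +1 side) for dx = (x - x^3) dt + sigma dW."""
    x = np.full(runs, x0, dtype=float)
    for _ in range(int(t_max / dt)):
        x += (x - x**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal(runs)
    return float(np.mean(x > 0.0))

for x0 in (-0.4, -0.2, -0.05, 0.0, 0.05, 0.2, 0.4):
    print(f"x0 = {x0:+.2f}  ->  P(+1 side) ~ {p_plus(x0):.2f}")
```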
But I'm hopeful someone can formalize a reason why this is impossible; I'd figure it must basically be because evolution of probabilistic systems is linear, that it has to evolve like the convex combination of how the possibilities evolve, not just in some arbitrary continuous manner. Also obviously his treatment of quantum mechanics is pretty slapdash. But I'm hopeful that if someone can formalize a proof for classical randomness, then maybe also for quantum amplitudes (although those famously *do* change a lot of things), especially if it's just fundamentally based on linearity, as of course quantum mechanics also respects that.
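Just to pin down what I mean by that linearity (my own phrasing of the hoped-for setup, not a proof of anything):

```latex
% What "linearity" means here (my phrasing of the hoped-for setup, not a proof).
% If the noise has law $\mu$ and each noise realization $\omega$ drives a
% deterministic, continuous-in-$x$ evolution $F_t^{\omega}$, then the state
% distribution at time $t$ started from $x$ is the mixture
\[
  \rho_t(x) \;=\; \int \delta_{F_t^{\omega}(x)} \, d\mu(\omega),
\]
% i.e. a convex combination of deterministic evolutions, not an arbitrary
% continuous family of distributions. The hope is that this extra structure
% is what rules out the "probabilities interpolate smoothly and every run
% decides in time" loophole above.
```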
Anyway, I don't seriously intend to keep thinking about this; these are just some quick thoughts...