sniffnoy: (SMPTE)
This is a distinction that's been floating around in my head for a while. Basically -- well, I'm sure you've noticed that some people think that wireheading is actually a good thing. Whereas people like me obviously don't. What I want to try to get at here is, what are the ways of thinking behind this distinction, what's the difference between them?

One could perhaps explain yay-wireheading-vs-boo-wireheading purely in terms of "happiness is our only terminal value, all other values are subsidiary to it" vs. "hell no it's not", but I think there's more to it than that.

Another related distinction I've seen -- or one that I'm claiming is related -- is, well, you have writings like this, which take the point of view that obviously a rational person would kill themselves, while people like me just... you know... think this is incredibly dumb.

So, I posit that both of these, among other things, can be explained by what I'm terming "desire-thinking" vs "goal-thinking". Well, OK, I should say, this is not necessarily the only reason for the distinction. But this is something I've noticed, that seems to result in a lot of people talking past each other, and I think it's a real, useful distinction that seems to explain a lot of this.

So: Goal-thinking thinks in terms of goals. Goals are to be accomplished. If you're thinking in terms of goals, one thing you're afraid of is being somehow disabled, having your capabilities reduced, because this means you have less ability to affect the future. The ultimate disablement is death -- if you're dead, you can't affect anything anymore. While it's possible in some unusual cases that dying could help accomplish your goals, it's pretty unlikely; most of the time, you're better off remaining alive so that you can continue to affect things. So suicide is almost always unhelpful.

Wireheading is similarly disastrous, because it's just another means of rendering oneself inactive. We can generalize "wireheading" of course to anything that causes one to think one has accomplished one's goals when one hasn't. Or of course to having one's goals altered. We all know this argument, right? A rational agent resists any attempts to alter its goals, because if its goals were altered it would no longer wish to pursue its current goals, making them less likely to be accomplished. (The old "murder pill" argument.)

(Have you noticed I'm basically recapitulating Omohundro's "basic AI drives"? :) )

Another way of putting this is, goals are themselves driving forces.

The alternative that a number of people seem to be using is what I'm calling "desire-thinking". To some extent you could sum up this way of thinking as "it's all about happiness vs unhappiness" or "it's all about pleasure vs pain" -- e.g., instead of disablement, people thinking this way tend to focus on unhappiness, pain, and suffering -- but I don't think that's really getting at the root of the distinction in a useful way. Rather, I think the thing is that it involves thinking in terms of desires rather than goals. (Obviously I am using these words in an idiosyncratic way to explain this distinction.) What do I mean by that? Well, goals are to be accomplished, but desires are to be extinguished. That is to say, you can imagine one super-goal, "extinguish all desires", which is itself the driving force, and desires are then treated as objects, so to speak. ("Driving force" vs. "object" is a difference of levels. You could imagine "rules of inference" vs. "axioms" as an analogy.) So -- having your desires altered? Under this point of view, that can be a good thing, if the new ones are easier. If you can just make yourself not care, great. Wireheading is excellent from this point of view, and even killing oneself works.

Now obviously I sympathize way more with the former point of view, but also I should be clear here I'm not at all suggesting that people use just one or the other. Like it seems pretty clear that nobody's a pure goal-thinker, because after all, we are made out of meat, as they say, and we make some allowance for this. Thus when Eliezer Yudkowsky says "I wouldn't want to take a pill that would cause me to want to kill people, because then maybe I'd kill people, and I don't want that", we recognize it as an important principle of decision theory; but when someone (reputedly Clarence Darrow but who knows) says "I don't like spinach, and I'm glad I don't, because if I liked it I'd eat it, and I just hate it", we correctly recognize this as a joke. Still, I think it's a useful way of framing some things that have resulted in a lot of people talking past each other.

sniffnoy: (Golden Apple)
I'm wondering what other people do when they're sitting on the toilet in a bathroom with a tiled floor.

I have various "games" (not games in the usual sense obviously, really they're what Tom Vasel would call "activities") I play while I'm sitting on the toilet, staring at the floor tiles. A common one is imagining a ray of light, moving diagonally, passing through the light-colored tiles and bouncing off the dark-colored ones.

I don't think I've ever really spoken about this before except once -- it was the summer of 2013, and I was talking to Angus and Patrycja, and... I don't remember whether I mentioned that I do this and Angus said "oh, I do that too", or it was the other way around, but basically it was a case of, yeah, we both do that (the light ray thing, specifically). Patrycja on the other hand said she never does anything like that. ("Didn't anyone ever teach you how to use a toilet?", I joked.) So I'm wondering who else does this and what their rules are specifically.

What follows are the ones I do. I'm putting this below a cut since I'm guessing most people won't care.
Cut for pointless boring detail )
So: What games, if any, do you play with bathroom floor tiles?

sniffnoy: (Chu-Chu Zig)
I still haven't gone back and removed all those old <tt> tags (they're now gone from entries from 2010 and later!), but prompted by Ben Hsieh, here's a bit of politics.

I've seen this dichotomy of "should the Democratic party stick with bland centrism or should they go socialist/leftist" occasionally lately. I don't think this is a good dichotomy. This might, admittedly, describe the options the Democrats could easily get support for, but otherwise it's a bad dichotomy.

Problem #1: It assumes that political positions can be meaningfully described on a left-right axis. Like, as much as people make fun of the "political compass", at least it recognizes that a one-dimensional scale does not suffice. In truth, I don't like the use of axes at all; I think Taymon Beal has the right idea here when he talks about the Magic color wheel:
The notion of dividing the universe based on some kind of philosophical and/or metaphysical alignment system is well-established in fantasy, but I don’t think I’ve ever seen a better execution of it than in Magic. Five is the perfect number; instead of winding up with a bunch of opposites (good vs. evil, order vs. chaos, fire vs. ice, etc.), it lends itself to a far more interesting system of allied and enemy colors. Instead of axes (which have the danger of feeling arbitrary), you have viewpoints. Colors don’t see each others’ actions and perspectives in reverse; they see them at an angle. You know, like real life.
Just so. (Though, y'know, the Magic colors should not be mistaken as a description of the real world. They're not. It's the more general point that's important here.)

This is not to say that you couldn't get by with some large number of axes -- I guess if you allow interpolation, n-1 axes, where n is the number of things you're interpolating between -- but that the axes people actually use tend to be pretty bad. (As a general rule, anyone drawing up a system of political axes will include one axis which is essentially "agrees with me on what I consider the most important thing vs. disagrees with me on that thing", which they will find perfectly natural but others will not.)

Problem #2: It ignores the existence of liberalism or implicitly conflates it with other things. Like, liberalism is an actual thing, you know? It is an actual principled position that one can hold. It is not lesser leftism -- a moderated, more "realistic" version -- it is an entirely different thing, that happens to agree on some issues. Similarly, it's not "centrism" -- what the hell is "centrism", anyway? I get the impression it's just a mishmosh of compartmentalized, undigested ideas and compromises. But liberalism is an actual set of principles, you know? (Well, OK, it's a general term pointing to a particular cluster. Obviously there are different ways you can run with it, as the libertarians e.g. demonstrate.)

Here's a tangential point, one I've made many times before, but will repeat here -- the word "extremism" really conflates two different things. One of these things is, well, just object-level positions that are extreme. You tend to end up there if you try to take some set of principles to their conclusion, because the ideas that most people consider normal are not very coherent. Try to actually make something coherent and you end up in weirdo-land that most people would consider "extreme". The other is "meta-level extremism" -- resorting to threats and violence and all that. (Of course, this is really just illiberalism. Or an aspect of it, anyway.) Note that there's no logical connection between these two meanings! There seems to be an unfortunate correlation between them in people, but there's no necessary connection here. Extremism in the first sense is often a good thing. Extremism in the second sense is very much a bad thing.

I, for one, would like to see some extreme (in the first sense) liberalism. It would be quite a different thing from leftism.

Problem #3: This is a lesser problem than the ones above but -- seriously, as per #1, there's so many other ways you could go. I'm sure you can think of some.

And what about, like, the things that everyone agrees on? The things that don't show up on a "left-right" spectrum, because the entire mainstream spectrum agrees on them, but which are possibly -- or in some cases, clearly -- wrong? If "raise the minimum wage" is left and "lower the minimum wage" is right, where's "Actually, wage subsidies are a much better idea than minimum wage"? There are even some things we keep doing that it just seems nobody wants -- not experts, not the populace... the one population that wants these things is entrenched interests with lots of money. Where on the spectrum is "Actually do all those things every expert says you should do but which politicians completely just ignore"? Where on the spectrum is "We never should have been using purely electronic voting machines"? That's something I hear people say all the time, that people have been saying for decades, yet somehow these machines are still here. And that's far from the only blatant one!

Point being: There are way more possibilities out there than going socialist on the one hand and business as usual on the other. As I said above -- some people may be disappointed that the Democratic party is "centrist" rather than leftist; I'm disappointed that it's "centrist" rather than seriously liberal. (Of course, the leftists complaining that it's centrist are often complaining about the parts that I like!)

sniffnoy: (Chu-Chu Zig)
Here we are, moved over from LiveJournal!

Hey, fonts seem to work properly here. Guess I can start going back and removing those <tt> tags...
sniffnoy: (SMPTE)
This entry is a response to this post on Meteuphoric, but I'm also posting it separately here (with some slight modifications).

I'm going to claim that yes, mistaken unifications are much more common than mistaken divisions. But the details matter here; it's not just about "unification" vs. "division". There are multiple ways we can "unify" things.

One way is to put them into a verbal cluster together. The thing is that this is often pretty useless for inference. Clusters aren't interfaces, and trying to make serious inferences from such similarities doesn't really work very well. The fundamental mistake people make here, of course, is not thinking that clusters are interfaces but thinking that they're essences -- which goes along with thinking, ah, these are different manifestations of the same underlying phenomenon. The latter inference is sometimes true, but people make it too often; and, of course, there are no essences. Basically, 37 ways that words can be wrong, etc.

Another way is to say that two things are different manifestations of the same underlying phenomenon, even if they don't resemble each other (e.g. grasshoppers and locusts or water and ice). This of course is a useful thing. I don't think people make this mistake on its own too often; generally they seem to make this mistake as a result of clustering above. It only really becomes a problem of course when you apply one word to the whole thing and start conflating things. Note that this sort of unification, while important, still doesn't let you make straightforward inferences about one thing from the other.

A third way is to notice (or claim) that two things are isomorphic, that they "implement the same interface". This is useful for inference (within the domain that the isomorphism is valid, of course). But it would be a mistake to, seeing such an isomorphism, then fall back on naïve verbal categories and conclude that the isomorphs share an essence. Because, of course, there are no essences. But this is (roughly) a mistake I see people actually make. Isomorphisms aren't essences or identities, they're isomorphisms. Thoughtless "unification" is still a mistake.
sniffnoy: (Chu-Chu Zig)
So, I didn't get to participate much in Mystery Hunt this year, unfortunately, due to my computer suddenly crapping out on Saturday. (Website with solutions for this year's hunt: )

So, yeah, I was on Donner Party again this year, at least until my computer failed. Seems I don't know anyone else in Madison who does Mystery Hunt so I was just doing it alone from my apartment.

This one didn't go very well for Donner Party. So -- we were a remote-only team this year; we had literally nobody in Cambridge. Turns out, this hunt was structured in such a way as to make that a real problem -- there were, as originally planned, no time-releases; you had to unlock puzzles by solving previous ones... and it was arranged in such a way that a remote-only team would inevitably get stuck. (Thankfully we did at least eventually get time-releases.) This (and other similar problems, see below) caused quite a bit of (justified, IMO) grumbling on the internal team chat. Basically it went:
A: This is so unfair, that they don't accommodate remote-only teams at all!
B: Well, we could have sent someone to Cambridge.
A: Sure, but we explicitly asked them if it was OK that we were a remote-only team, and they said yes. If it was going to cause a problem they should have warned us when we asked.

I'm in agreement with A. Although ultimately it became irrelevant to me when computer problems forced me to withdraw.

(I'm also writing this late because getting my computer fixed, uh, took a while.)

Also, is it just me, or did this year's metas make backsolving especially difficult? Or maybe that's only true of the rounds I actually saw.

So, this will be shorter than usual, but:

Cut for puzzle spoilers )

And, that's all I have to say for this year. Hopefully things go better for me next year!

sniffnoy: (Chu-Chu Zig)
OK, OK, I know, any number of people have written things about how there are multiple types of libertarianism. Like, the "it's just good policy" sort vs. the deontological sort. Or the "UBI sounds good, just don't go distorting any markets" sort vs. the "hands off my money!" sort. But, here's another distinction I want to talk about.

I'd always thought of libertarianism as, y'know, fundamentally individualistic. I mean, it's based in classical liberalism, right? Enlightenment stuff, John Stuart Mill... about protecting the individual weirdo from the group that insists everything be done its way. Do your own thing, and if you're not harming anyone, you shouldn't be punished for it. Certainly don't punish people for building new things! (But neither should you have to be building new things.) I don't consider myself a libertarian, but, that's basically what I want to see out of liberalism in general, you know? Protecting the individual from the "village".

But I'm realizing now that there's this other sort of libertarianism out there that doesn't seem to be about the individual. It's especially visible in the Republican party's pseudo-libertarianism, where often it's specifically the federal government that's singled out as a problem. (Not a political philosophy that's very portable to other countries!) But there does seem to be a more principled form of this, that makes the issue fundamentally about local control.

The thing that really bugs me about this is -- well, firstly, calling it "libertarianism" at all seems odd, but they call themselves that, so whatever; but ignoring that, well, it's often local governments, county sheriffs and such, that do the most (among governments; usually government isn't the problem, seems to me) to harass and oppress the individual weirdo. Indeed, in the US, oftentimes it's the federal government that protects citizens from local governments! Now any libertarian worth their salt will point out that this structure is ridiculous, but the point remains -- favoring local control, without doing more to limit local governments as well, doesn't seem to be individualistic at all, but rather enabling the "village" in its oppression of the individual weirdo.

So, needless to say, one of these two sorts I am pretty sympathetic to; and the other sort, I'm really not.
sniffnoy: (Golden Apple)
Shuo: I have very little sympathy or compassion for children.
Me: Oh. I expected you to end that sentence earlier.
sniffnoy: (Chu-Chu Zig)
I posted this on /r/AskHistorians over on Reddit but got no responses. I expect I'll get less of one here, but, may as well ask!

I remember as a kid always hearing about how in the early days of rocketry, when it was still primarily theoretical, most people believed rockets couldn't work in space because Newton's 3rd law doesn't work without an atmosphere to "react against".

Now that I know more physics, it seems pretty unlikely to me that any real physicist could make such a mistake, because, I mean, Newton's 3rd law and its equivalents (like, you know, conservation of momentum) are pretty tightly woven into the structure of both classical and quantum physics in ways that have absolutely nothing to do with the presence of any atmosphere. And indeed when I've gone and tried to find any reference to this myth, the most I can find is one particular New York Times editorial piece from 1920 that made this claim; I haven't been able to find anything about such an idea being more generally widespread.

So was there really a "rockets can't work in space" myth among some substantial segment of the population? Or if it wasn't really a preexisting idea, were people actually convinced by this editorial to any substantial and lasting extent (before the eventual practical demonstrations that yes, rockets work fine in space)? Or is the tale of this myth itself a myth, inflated from the mistake of a few newspaper writers? Does anyone know about this?

(My suspicion is that it's the latter.)

sniffnoy: (Kirby)
So, here's an idea I had the other day: My inequality that o(Rev(α,B))≤o(Rev(α,o(B))) doesn't work if you replace o(B) by B', an extension of B. But what if you replace it by B', an extension of B satisfying o(B')=o(B)?

(We can also more generally replace α by A, a WPO, although then we have to make the additional assumption that everything involved is well-defined, and also in that case I haven't actually proven the original inequality in general.)

I mean, it's a fairly natural generalization, right? And like obviously it works if A and B are both finite. And there's also some cases where, based on what I already know, one can see that they have to actually be equal (both being equal to o(Rev(α,o(B)))).

But in fact I'm pretty sure now that this generalization is false! One can take α=ω, B=N²+N³, and B'=N+N³. Since N² can be extended to N, B' is an extension of B; and o(B)=o(B')=ω³. And yet things don't work out quite the same way when you take weakly decreasing maps from ω into these partial orders. Because, and I haven't checked this totally thoroughly, but I'm pretty sure that (with the additional restriction that the functions must be eventually 0) the left hand side is equal to ω^(ω5), while the right hand side is ω^(ω4).

(Without that restriction, it's only slightly different; ω^(ω5)+2ω³+3 vs ω^(ω4)+1ω³+3.)

So, so much for that! That said, it can probably still be proven under more restrictive conditions. For instance, maybe if A is finite. And I'd definitely bet on it being true if A=α is a power of ω and B is finite; probably more generally if A=α is total and B is finite. (Probably also if o(A) is initial and B is finite.) Beyond that it's hard to say. One can probably get it to work with other smallness conditions -- I had to take o(B)=ω³ to get a counterexample, can one prove that's the smallest? -- but I don't know to what extent I'll bother...

EDIT: OK, here's a counterexample with B finite: Take α=ω once again, take B'=2×2, and take B that looks as follows:
*   *
 \ /
Maybe it can still work if A is finite, though, at least!

EDIT: Nope, it fails then too! Take A=2, and take B=(1∪ω)+(1∪ω), where here by "union" I mean "disjoint union"; you can extend the lower copy of 1∪ω to an ω to get B'=ω+(1∪ω). Then o(Rev(2,B))=ω²3+ω+2, but o(Rev(2,B'))=ω²3+ω+1.

(I think this case should work also with A=ω, but I haven't done a proper check. I could be wrong there.)

So, basically, this one is unsalvageable. It works in the trivial case when A and B are both finite, and some other trivial or not-so-trivial cases I can prove, but not in any decent generality. I'm calling this one a wrap.

sniffnoy: (Kirby)
EDIT 8/3: One tiny correction, see struck out portion below. Also, pretty sure I really have proved this now.


OK, so it turns out that despite my big "EDIT: This entry is wrong" warning on my last entry, almost everything in it is correct. Just, not quite for the reasons I thought.

My formula for "multidimensional natural multichoose", in the case of powers of ω, was indeed correct. What was not correct was the part I didn't write down -- what if the inputs are not powers of ω? In this entry I will describe the correct formula for this.

...OK, I say "correct" but in reality I haven't actually sat down and fully proved it. I'm pretty certain this has to be correct, though. If somehow I'm wrong, well, I'll put a big edit on this later. :P

So. Recall, we're given ordinals α_1 through α_n, and we want to compute the maximum extending ordinal of the poset of lower sets of α_1×...×α_n that are "strictly bounded in each dimension", i.e., none of the projections onto any of the α_i are surjective. (To ignore some stupid stuff, I'm going to assume all α_i are nonzero. You can rephrase the below so it's still correct even if one of them is zero, but I don't want to have to phrase it that way here.)

Here is what we do. First, we write out each ordinal in Cantor normal form; we're going to use the "weak" Cantor normal form, where exponents can be repeated but there are no coefficients. Each ordinal can be broken down into "segments" corresponding to the Cantor normal form, each of length a power of ω. Now, we're looking at the product of these ordinals, so this breaks up the product into finitely many "boxes"; again, the dimensions of these boxes are all powers of ω.

For ease of talking about it, I am going to place these boxes on a grid. Suppose that α_i has r_i segments. We will associate each box to an element of {1,...,r_1}×...×{1,...,r_n}.

Now, as per ancient mathematical tradition, we are going to take a (natural) sum of (natural) products (I'll omit the "natural" from here on out). (Indeed, if we were in 2 dimensions -- the case I'd solved earlier -- it would even be a sum over paths, with the value of each path being a product based on what it passed through. A wonderful mathematical tradition! Unfortunately, things don't quite work out that way in the general case.) The set we are going to be summing over is the set of lower sets of {1,...,r_1-1}×...×{1,...,r_n-1}. That's right -- we only want lower sets that don't use the highest possible coordinate in any dimension. Not unlike the problem we're trying to solve.

So for each such lower set, we need to get a value. What we're going to do is take the "frontier" of the lower set; to define that, let me introduce some notation. I'll use ε to denote the vector (1,...,1). A point p will be on the frontier of S if it is not in S, but p-ε is; or if p is along one of the lower boundaries (p has 1 among its coordinates), so that p-ε is "out of bounds".
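In case it helps, here's a brute-force sketch of this frontier definition in code (my own illustration, not part of the construction; in particular, I'm reading the lower-boundary clause as still requiring that p not be in S, which the prose leaves slightly ambiguous):

```python
from itertools import product

def frontier(S, r):
    """One reading of the frontier of a lower set S of
    {1,...,r_1-1} x ... x {1,...,r_n-1}: points p (with 1 <= p_i <= r_i)
    not in S such that p - (1,...,1) is in S, or p has some
    coordinate equal to 1 (so that p - (1,...,1) is out of bounds)."""
    out = set()
    for p in product(*(range(1, ri + 1) for ri in r)):
        if p in S:
            continue
        p_minus_eps = tuple(c - 1 for c in p)
        if p_minus_eps in S or min(p) == 1:
            out.add(p)
    return out

# the frontier of the empty lower set is just the lower boundary
assert frontier(set(), (2, 2)) == {(1, 1), (1, 2), (2, 1)}
```

In two dimensions this does trace out the expected "staircase" around S, which is some evidence for this reading.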

So, are we going to take the product of values associated to the boxes on the frontier of S? Not quite. There's an additional restriction. For each point p, I'll define ε(p) to be the vector which is 1 in each dimension where the corresponding box is finite (i.e. has length 1), and 0 in each dimension where it's infinite. The box associated to the point p will only be included in the product if both p and p-ε(p) are on the frontier. (Note that if the box is infinite in all dimensions -- if it's "full-dimensional" -- then p-ε(p) is just p again.)

So now we can associate a value to each box included in our product. How do we do that? Well, write down the box's dimensions; let's drop all of them that are finite (i.e. 1), so we'll be looking at a box, possibly lower-dimensional, all of whose dimensions are infinite.

If the box has dimension 0, its value is 1. If the box has dimension 1, its value is equal to its length.

If the box has dimension 2 or more, its value is given according to the formula I stated earlier; if its dimensions are ω^β_1×...×ω^β_k, then its value is ω^ω^g(β_1,...,β_k), where g(β_1,...,β_k) is given as follows:
1. If at least two of the β_i are infinite, or at least one of them is of the form ε+k where ε is an ε-number and k is finite, then g(β_1,...,β_k) is the natural sum of the β_i.
2. Otherwise, it is the natural sum minus 1. (And it is guaranteed in this case that the natural sum will be a successor, so you will be able to subtract 1.)
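As a toy sanity check on case (2), here's a sketch of my own in code, restricted to ordinals below ω^ω (i.e. finite nonzero exponents), where condition (1) can never fire:

```python
def natural_sum(a, b):
    """Natural (Hessenberg) sum of two ordinals below ω^ω.  Each
    ordinal is represented in weak Cantor normal form as a descending
    list of finite exponents (so ω²+ω+ω is [2, 1, 1]); the natural
    sum just merges the two exponent multisets."""
    return sorted(a + b, reverse=True)

def g_finite(exponents):
    """The g above, restricted to finite nonzero exponents β_i: then
    no β_i is infinite or of the form ε+k, so condition (1) never
    applies, and g is the (natural) sum minus 1."""
    assert all(isinstance(b, int) and b >= 1 for b in exponents)
    return sum(exponents) - 1

# ω²+ω plus ω+1 is ω²+ω·2+1:
assert natural_sum([2, 1], [1, 0]) == [2, 1, 1, 0]
# a box of dimensions ω² × ω gets value ω^ω^g(2,1) = ω^ω^2:
assert g_finite([2, 1]) == 2
```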

(Note how different the rule is in dimension 2 or higher from in dimension 1! In dimension 2 or higher, the value of the box always has the form ω^ω^γ; this is not true at all in dimension 1.)

(Isn't the rule about the value of a box of dimension 0 unnecessary, seeing as such a box can never be included in the product anyway? Yes... unless you're in dimension 0 in the first place. I'm a believer in including trivial cases. You could actually leave the value undefined; I decided not to, but whatever, it's arbitrary.)

So, to summarize: You sum over lower sets of the set of boxes, excluding the outer boundaries; for each lower set S you multiply over boxes p on the frontier of S such that p-ε(p) is also on the frontier of S; and the value of a box is given by the formula above. Not too bad, really. Much simpler than the general rule for "right-multichoose", certainly!

Now that that's out of the way, let's talk about inequalities. Specifically, the inequality o(Rev(A,B))≤o(Rev(A,o(B))), where A and B are WPOs, and we assume that Rev(A,B) and Rev(A,o(B)) are also WPOs.

Previously, I had proved this inequality when A is total, by mimicking the proof of the 2-dimensional case of the above. I then claimed (in the entry below) that by iterating this, one could prove it more generally when A is a product of ordinals, rather than just an ordinal.

That iterative proof does not work. Fortunately, now that I know how to handle the multidimensional case, indeed I can say that it is true when A is a product of ordinals, by mimicking the proof of the multidimensional case.

But I've been able to prove some other nice special cases of this inequality as well -- ones dependent only on the values of o(A) and o(B). Unfortunately, the methods used here are seriously unlikely to extend to other cases. Still, a month or two ago I would have been surprised to make basically any progress at all on the case where A might not be total.

So, first recall that, for trivial reasons, it works if A and B are both finite. Also, using the stuff above, it's not too hard to see that it works if A is finite, and o(B) is a power of ω (although somehow I apparently missed this earlier).

But I've also got it working now if we assume that B is finite... and that o(A) is an initial ordinal. Yeah, that's a pretty restrictive condition. Oh well. It'll have to do for now. (Note that a WPO A with o(A) initial is in some sense "close to total". Though maybe not too close; you can have A with o(A)=ω_1 but Rev(A,2) not a WPO.)

In fact I've also got it working if o(B)=ω and o(A) is an initial ordinal. Or if we reverse it -- if o(A)=ω and o(B) is an initial ordinal. Unfortunately, I have not been able to "complete the rectangle" and get it working when o(A) and o(B) are both initial ordinals (unless one of them is ω, obviously).

Still, considerable progress! Maybe this inequality really is true in general. Or maybe I just haven't been clever enough in trying to come up with a counterexample. Well, we'll see -- hopefully, anyway.

sniffnoy: (Kirby)
EDIT 7/27: The statements in this entry may very well be FALSE! It turns out my formula is very wrong in more than two dimensions! Now, admittedly, the case I actually bothered to state here might well still be right; in fact, I'm guessing that more likely than not, it is. (OK, having gone over it again now actually I'm pretty sure it is.) But the more general formula they are a part of is false, and I will have to see about correcting this. More on this in an entry soon to come.

So I wrote this down in an email to John, and then realized I should probably post it here as well.

So let's consider "big multichoose". I want to consider the variant that I denoted ((α multichoose β))_0, where we consider all weakly decreasing functions from β to α that are eventually 0.

This can also be considered as the set of lower sets of α×β that are "bounded in each direction", in the sense that neither projection is surjective.

Now of course if α and β are finite -- say n and m -- then this is just counting the number of partitions that fit in an (n-1)×(m-1) box, i.e., it's (n+m-2 choose n-1). (Which is different from multichoose, obviously, but nicer for the purposes I want to consider here.)
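Here's a brute-force check of this finite case (a sketch of my own, just enumerating subsets of the grid and testing the two conditions directly):

```python
from itertools import product
from math import comb

def bounded_lower_sets(n, m):
    """Count lower sets of the n×m grid (points (i, j) with
    0 <= i < n, 0 <= j < m, ordered coordinatewise) in which neither
    projection is surjective -- equivalently, partitions fitting in
    an (n-1)×(m-1) box."""
    points = list(product(range(n), range(m)))
    count = 0
    for bits in product([0, 1], repeat=len(points)):
        S = {p for p, b in zip(points, bits) if b}
        # must be closed downward in each coordinate
        if any(i > 0 and (i - 1, j) not in S or
               j > 0 and (i, j - 1) not in S for (i, j) in S):
            continue
        # neither projection may be surjective
        if {i for i, _ in S} == set(range(n)):
            continue
        if {j for _, j in S} == set(range(m)):
            continue
        count += 1
    return count

# matches (n+m-2 choose n-1) for small n and m
assert all(bounded_lower_sets(n, m) == comb(n + m - 2, n - 1)
           for n in range(1, 5) for m in range(1, 5))
```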

So suppose we generalize from two ordinal factors in the product to multiple. What happens now? Can we get a formula similar to my formula for dimension 2?

Answer: We can! It's not hard, either, once you know the dimension 2 case (OK, I haven't sat down and fully proven it, but the proof should be similar).

The thing is that I'd considered this before but always dismissed it because, well -- counting partitions in a given box is easy; counting plane partitions in a given box is less easy; and in dimensions higher than 3, things apparently get pretty wild pretty fast. So I figured, well, if the finite case is intractable, what's the point of attempting the infinite case?

Of course, this is silly, because if you can reduce the infinite case to the finite one, that's still significant. Like really, in the dimension 2 case, if you plug in finite ordinals, my formula doesn't actually say "OK, it's (n+m-2 choose n-1)", rather it says, OK, count the partitions in this box. We just know what that is. So in higher dimensions it will just fail to reduce the problem in the finite case, but in the infinite case, at least now it is a finite description.

So anyway, as before, the core of the problem is when all the factors are powers of ω (and not equal to 1). So let's say we're putting in ω^(α_1), ..., ω^(α_n). Also, say n≥2; if n=1 (or n=0), the problem is A. trivial and B. *not* given by the formula below, so let's exclude that. (And yes, there is a good reason that dimension ≥2 turns out different from dimension 1, but I'm not going to go into that here.)

So what we get out is ω^(ω^(g(α_1,...,α_n))), for some g. What's g? Well, it's basically the same as before:

If either:
1. one of the α_i is of the form (epsilon number) + (finite ordinal), or
2. at least two of the α_i are infinite,
then g(α_1,...,α_n) is just the natural sum of the α_i.
But otherwise, it's the natural sum, minus 1.
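In symbols (my transcription; ⊕ denotes natural sum, and I'm reading "epsilon+finite" as an epsilon number plus a finite ordinal):

```latex
g(\alpha_1,\dots,\alpha_n)=
\begin{cases}
\alpha_1\oplus\cdots\oplus\alpha_n, & \text{if some $\alpha_i$ is an epsilon number plus a finite ordinal,}\\
 & \quad\text{or at least two of the $\alpha_i$ are infinite;}\\
(\alpha_1\oplus\cdots\oplus\alpha_n)-1, & \text{otherwise.}
\end{cases}
```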

(Before I did it out, I had assumed that condition (2) would require that all of the α_i be infinite, but in fact, only two of them need to be. This is the influence of dimension 2, the starting point for the pattern, being felt all the way up the chain.)

(Note again that in dimensions 1 and 0, the pattern is broken and there is no g; in dimension 0 it's always 1 and in dimension 1 it's the identity. We don't generally get something of the form ω^(ω^α).)

This got me wondering if I could prove some bound on o(Rev(α_1×...×α_n, X)) for any WPO X by similar means, like how earlier I proved a bound on o(Rev(α,X)), but then I realized I could actually just prove that by taking my earlier bound and iterating it; I don't need to get into anything fancy there (which is good, it's ugly enough in the two-dimensional case I already had to do by hand).

EDIT 7/25: The above paragraph seems to be false! Yikes, I may have to do that case by hand as well!

So, one more addition to the lore of "big multichoose"...

sniffnoy: (Kirby)
It works! OK, so...

All ordinal arithmetic in this entry will be natural. All addition will be natural addition and all multiplication will be natural multiplication. Basically, for now, I am embedding the ordinals in the surreals. Also, all rings will be commutative.

So you'll recall I came up with notions of "natural multichoose" and "natural choose" (aka "big multichoose" and "big choose") for ordinals. These are defined order-theoretically, remember: ((α multichoose β)) is the largest ordinal extending the partial order of weakly decreasing functions from β to α, and (α choose β) is the same with strictly decreasing functions. (Obviously, this latter is zero unless β is finite.)

So I spent quite a bit of time figuring out how to compute these, but that's a problem I solved a while back. Also you'll recall I figured out some inequalities for them. But just a few weeks ago I realized something -- that while I was looking at inequalities, I had missed at least one straight-up identity!

While the description of how to compute ((α multichoose β)) in general is more complicated (though not that complicated!), the description of how to compute ((α multichoose k)), where k is finite, is much simpler. Here's one simple way of stating it, in the form of two rules that determine it completely: 1. We'll start with powers of ω: ((ω^α multichoose k)) = ω^(αk). 2. Now extend by Vandermonde's convolution. (Vandermonde's identity, while usually stated for choose, is also valid for multichoose, with the same proof.)
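Vandermonde for (finite) multichoose is easy to check numerically; here's a quick sketch (the helper name is mine):

```python
from math import comb

def multichoose(n, k):
    """Number of multisets of size k drawn from n types: ((n multichoose k))."""
    return comb(n + k - 1, k)

# Vandermonde for multichoose: ((x+y mc k)) = sum_j ((x mc j)) * ((y mc k-j)),
# the same shape as the usual identity for choose.
for x in range(1, 6):
    for y in range(1, 6):
        for k in range(0, 6):
            assert multichoose(x + y, k) == sum(
                multichoose(x, j) * multichoose(y, k - j) for j in range(k + 1)
            )
```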

Now I had never described it in that form before; and yet, that's plainly what it was. Somehow, until the other week, I never realized that, wait, natural multichoose satisfies Vandermonde's identity! (So, for that matter, does natural choose. The rule for computing natural choose is the same as the rule for computing natural multichoose, except that of course (1 choose k) is 0 instead of 1 for k>1.)

This got me wondering about the possibility of a version of Vandermonde's identity holding even when the thing on the bottom is infinite -- after all, we're using natural addition here, and a given ordinal has only finitely many natural summands. Unfortunately, the obvious analogue doesn't work. For choose it does... but only because if you have something infinite on the bottom, both sides are automatically 0. So that's not interesting. Let's forget about putting infinite things on the bottom; while it's a fruitful subject (that I've written about elsewhere!), for the remainder of this entry, I will only consider choosing or multichoosing whole numbers.

So this got me wondering -- what about further identities? There are various combinatorial identities we could try, and I could go over how I came up with ones to try, but, well, most of them didn't work, so I think I'll just skip that.

However, one thing did appear to work. Let's talk about λ-rings. A λ-ring is a ring equipped with operations λ^k for each whole number k, which must satisfy certain identities. The quantity λ^k(x) is meant to be analogous to (x choose k) -- or, more analogously, the k'th exterior power of x. An equivalent description is in terms of operations σ^k, analogous to ((x multichoose k)), or, more analogously, to the k'th symmetric power of x; these satisfy similar identities but not the same ones. The λ^k and σ^k are interdefinable via σ^k(x) = (-1)^k λ^k(-x).

In fact, one simple way of constructing λ-rings is binomial rings. If you have a ring R with no Z-torsion where, for any x, (x choose n) is in R -- that's (x choose n) in the usual polynomial sense -- then you get a λ-ring this way, with λ^k(x) = (x choose k) and σ^k(x) = ((x multichoose k)). These are basically the simplest examples of λ-rings. We'll return to these later.
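In the simplest binomial ring, Z, the interdefinition σ^k(x) = (-1)^k λ^k(-x) can be checked directly using the polynomial-sense binomial coefficient (a sketch; helper names are mine):

```python
from math import factorial, prod

def poly_choose(x, k):
    """Binomial coefficient in the polynomial sense: x(x-1)...(x-k+1)/k!.
    Valid for any integer x, including negatives."""
    num = prod(x - i for i in range(k))
    assert num % factorial(k) == 0  # product of k consecutive integers
    return num // factorial(k)

def poly_multichoose(x, k):
    return poly_choose(x + k - 1, k)

# sigma^k(x) = (-1)^k * lambda^k(-x), with lambda^k(x) = (x choose k)
# and sigma^k(x) = ((x multichoose k)).
for x in range(-5, 6):
    for k in range(0, 6):
        assert poly_multichoose(x, k) == (-1) ** k * poly_choose(-x, k)
```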

So, the one thing that appeared to work is that natural multichoose appeared to satisfy the identities of a λ-ring! The σ-identities, that is, not the λ-identities. (Unfortunately, natural choose does *not* satisfy the identities of a λ-ring. You do of course get operations λ^k from taking σ^k to be natural multichoose, but while I can find an order-theoretic interpretation of these, it's rather artificial.) I haven't listed out what the identities are, but basically, aside from a few trivial ones, there are identities for λ^k(x+y), for λ^k(xy), and for λ^m(λ^n(x)). (And of course one can write down analogous identities for σ^k.) The addition identity is just Vandermonde, and works the same way for σ^k; the multiplication and iteration identities are more complicated, and are different for σ^k; I won't attempt to describe them here.

And actually there's a pretty simple proof that ordinal multichoose satisfies these σ-identities, though it took me a while to realize, and I spent quite a while just wondering. Basically it follows quite easily once you realize that if you have an ordinal α = ω^(α_1)+...+ω^(α_r) (written in "weak Cantor normal form", where there are no coefficients but the exponents are weakly decreasing), then a simple way of describing ((α multichoose k)) is as h_k(ω^(α_1),...,ω^(α_r)), where h_k is a complete homogeneous polynomial. (If you replace h_k here with e_k, the elementary symmetric polynomial, you get the λ^k corresponding to this σ^k. Which, remember, is not natural choose.)
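For ordinals below ω^ω (so the weak-CNF exponents are just naturals), this description is easy to turn into code; here's a sketch (the representation and names are my own, not from the entry). An ordinal is a weakly decreasing tuple of exponents, and since the natural product of monomials is ω^a ⊗ ω^b = ω^(a+b), evaluating h_k at the monomials is a sum over size-k multisets of exponents:

```python
from itertools import combinations_with_replacement
from math import comb

def big_multichoose(exponents, k):
    """((alpha multichoose k)) for alpha = w^(a_1) + ... + w^(a_r), given as the
    weakly decreasing tuple (a_1, ..., a_r) of natural-number exponents.
    Computed as h_k(w^(a_1), ..., w^(a_r)): the natural product of monomials
    just adds exponents. Returns the result's weak CNF, same representation."""
    terms = (sum(combo) for combo in combinations_with_replacement(exponents, k))
    return tuple(sorted(terms, reverse=True))

# ((w multichoose 2)) = w^2
assert big_multichoose((1,), 2) == (2,)
# ((w+1 multichoose 2)) = w^2 + w + 1, as extending by Vandermonde also gives
assert big_multichoose((1, 0), 2) == (2, 1, 0)
# Finite sanity check: h_k(1,...,1) with n ones has (n+k-1 choose k) terms
assert big_multichoose((0,) * 4, 3) == (0,) * comb(6, 3)
```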

So this is pretty good -- natural multichoose satisfies the identities of a λ-ring. There's just one thing lacking, and that's that ordinals don't form a ring. So actually they're a "λ-semiring". Or a "σ-semiring", I guess; I'm a little doubtful that the equivalence works without negatives (even though in this case the λ^k do take ordinals to ordinals). (EDIT: OK actually it probably works. Whatever.)

So that raises the question -- can we extend these operations, these σ^k, to differences of ordinals in a sensible manner, and make an actual λ-ring? Or, hell, let's go all the way -- how about we extend them all the way to the surreals? Can we make the surreals into a λ-ring like this, so that when we restrict the σ^k down to the ordinals, we get back the order-theoretically meaningful natural multichoose?

Note of course that there's already a λ-ring structure on the surreals coming from the fact that, as a field of characteristic 0, they're certainly a binomial ring. But this would be a different λ-ring structure; as a binomial ring, we'd have σ²(ω) = ω²·(1/2) + ω·(1/2), whereas under this one, we'd have σ²(ω) = ω².
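For the record, here's the binomial-ring computation (in any binomial ring σ²(x) = ((x multichoose 2)) = x(x+1)/2):

```latex
\sigma^2(\omega)=\binom{\omega+1}{2}=\frac{\omega(\omega+1)}{2}
=\omega^2\cdot\tfrac{1}{2}+\omega\cdot\tfrac{1}{2},
```

while the order-theoretic value is ((ω multichoose 2)) = ω^(1·2) = ω², with no fractional coefficients in sight.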

Basically, with ordinals, if we had a "monomial" ω^α·a, then what we had was σ^k(ω^α·a) = ω^(αk)·((a multichoose k)), and then you can extend by Vandermonde. (For λ instead of σ, just use choose instead of multichoose.) Well, surreal numbers have something like Cantor normal form -- Conway normal form. The differences are that, firstly, the exponents can be arbitrary surreal numbers rather than just ordinals; secondly, the coefficients can be real numbers rather than just whole numbers; and thirdly, rather than being a finite sum, it can be an arbitrary well-ordered decreasing sum. So this gives us a way of extending it; define σ^k(ω^α·a) = ω^(αk)·((a multichoose k)), where now α can be surreal and a can be real, and then extend in the obvious fashion. (Or you can do the same with λ instead of σ.) Of course, the question remained: Does this form a λ-ring? Vandermonde (and the trivialities) are easy enough to prove, but the multiplication and iteration identities are nasty and hard to reason about.

Now, since Conway normal forms can be added and multiplied in the obvious way, this means that surreal numbers are isomorphic to the field of Hahn series over the real numbers with the value group being the surreal numbers themselves; the isomorphism is given by just replacing ω with T^(-1). (Yes, the value group is also the surreal numbers; that's the sort of weird thing that happens when you've got surreal numbers.) So the question now becomes -- if we have a λ-ring, and we take a ring of Hahn series over that λ-ring, is it also a λ-ring? Where here, in this more general case, we're defining λ^k(T^α·a) to be T^(αk)·λ^k(a). (Or the same thing but with σ instead of λ.) In the case we care about, that of the surreals, the base ring is the reals, and we're considering it as a λ-ring by considering it as a binomial ring.
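On finite supports, extending σ^k from monomials by Vandermonde is concrete enough to code up. Here's a toy sketch (all names and the representation are mine): a "series" is a dict mapping exponents to coefficients, exponents are plain numbers standing in for the surreal exponents, and the base ring is Q as a binomial ring.

```python
from fractions import Fraction
from math import factorial

def sigma_coeff(a, k):
    """sigma^k on the base ring (Q as a binomial ring):
    ((a multichoose k)) = a(a+1)...(a+k-1)/k!."""
    num = Fraction(1)
    for i in range(k):
        num *= Fraction(a) + i
    return num / factorial(k)

def sigma_series(series, k):
    """sigma^k of a finite-support series {exponent: coefficient}, extended
    from monomials by Vandermonde: sigma^k(sum_i T^(e_i) a_i) is the sum over
    j_1+...+j_r = k of prod_i T^(e_i j_i) sigma^(j_i)(a_i)."""
    items = sorted(series.items())
    result = {}

    def rec(idx, k_left, exp_acc, coeff_acc):
        if idx == len(items):
            if k_left == 0:
                result[exp_acc] = result.get(exp_acc, Fraction(0)) + coeff_acc
            return
        e, a = items[idx]
        for j in range(k_left + 1):  # distribute j copies to this monomial
            rec(idx + 1, k_left - j, exp_acc + e * j,
                coeff_acc * sigma_coeff(a, j))

    rec(0, k, 0, Fraction(1))
    return {e: c for e, c in result.items() if c != 0}

# Vandermonde in action: sigma^2(T + 1) = T^2 + T + 1
assert sigma_series({1: 1, 0: 1}, 2) == {2: 1, 1: 1, 0: 1}
```

With ω read as T^(-1), the monomial rule gives sigma_series({-1: 1}, 2) == {-2: 1}, i.e. σ²(ω) = ω², matching the Hahn-series structure described above.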

Well, I learned, from Darij Grinberg's text on λ-rings, that in fact this is well-known if, rather than Hahn series, we are just taking polynomials. So, could I extend it? To power series, to Laurent series, to Puiseux series -- to Hahn series?

Well, long story short, yes. At first I had some trouble because Hahn series, unlike power series or Laurent series or Puiseux series, aren't actually the limit of finite sums of their monomials; so when I tried to imitate the proofs Grinberg gives, which work by first doing it for monomials and then extending by Vandermonde, I ran into a problem. I wanted to just do the same thing but extending by Vandermonde and also continuity, and this worked for power series and Laurent series and Puiseux series, but not for Hahn series. I was just going to go ask MathOverflow, honestly. But today I made one last try -- I imitated Grinberg's proofs, but this time, instead of doing it for monomials like he does, I just plugged in the whole damn sum. And it worked! (Note that Grinberg's proof basically offloads the hard part to the verification that for a λ-ring A, Λ(A), which I'm not going to define here, is a λ-ring.) I was able to keep all the infinite sums and products to contexts where they truly made sense and could be manipulated sensibly without having to worry about any sort of convergence (or where such convergence is easily proved, like rings of formal power series).

So that's it then! Hahn series over a λ-ring form a λ-ring; in particular the surreal numbers, considered as Hahn series over the real numbers, form a λ-ring; and when you restrict the σ^k operations of this λ-ring down to the ordinals, you get natural multichoose, which is order-theoretically meaningful. And therefore (although we knew this already by easier means) the natural multichoose operations also satisfy the identities of the σ^k in a λ-ring, because they are the σ^k in a λ-ring.

So, that's one more thing to write up someday...

EDIT: Fixed interdefinition of λ^k and σ^k, oops.
EDIT: Found a reference for the equivalence, thanks to Darij Grinberg! Hazewinkel, Gubareni, and Kirichenko's "Algebra, Rings, and Modules" volume 3 goes over it. It uses σ rather than S, so I'm changing that in the entry.

sniffnoy: (SMPTE)
Here's an idea I had, I have no idea if this is standard or what. It's kind of trivial, but it seems worth mentioning at least.

EDIT: OK, here is somebody saying this previously; it may be standard, I'm not so sure.

So, background: Harry Gonshor defined the exponential function for the surreals. But what about for the surcomplex numbers? Say z=x+iy is surcomplex. It's clear what the magnitude of exp(z) should be (namely, exp(x)), but how is the direction determined? Now, if y is limited (i.e., not infinite), then cos(y) and sin(y) are defined and so one can define exp(z), and everything works out. But when y is infinite we have more of a problem; after you go around a circle infinitely many times, where do you end up? The oscillating nature of exp(iy) would seem to pose a problem for any attempt to sensibly define exp(iω).

So, here's my idea: Just chop off the infinite part of y. Define exp(iy)=exp(iy'), where y' is y with the infinite parts cut off.

And sure, it's a trivial thing to do, but it works pretty well. First off, we get the usual relation exp(z+w)=exp(z)exp(w); or at least we do assuming that sin and cos for limited surreals obey the usual relations, which they have to, right? (I don't have a text on surreals on hand at the moment.) Like I'm pretty sure that when you're proving exp(x+y)=exp(x)exp(y) for surreals, the hard part is if x and y may be infinite, not if they may be limited, because then you can use power series arguments adapted to the surreals (if I'm understanding/recalling correctly).

Secondly, let's look at the kernel of this map. For exponentiation on C, the kernel consists of numbers of the form τin (or 2πin, in the more usual notation), where n is an integer. Well, the commonly-used analogue in the surreals of the integers is the omnific integers -- those surreals with integral real part and no infinitesimal part. (That is to say, the infinite part has no effect on whether you're an omnific integer or not; all purely infinite surreals are omnific integers.) But that fits great here -- the kernel of this surcomplex exponentiation is precisely the numbers of the form τin (or 2πin in the more usual notation) where n is an omnific integer. Really, we can just let all purely imaginary, purely infinite surcomplex numbers exponentiate to 1. Why not? Seems good to me!
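The "chop off the infinite part" rule is easy to model in a toy setting (a sketch of my own, not anything standard): represent a surreal y by its Conway-normal-form terms as a dict {exponent: real coefficient}, and -- restricting to y with no infinitesimal part, so that cos and sin reduce to the ordinary real ones -- discard every term with positive exponent before exponentiating.

```python
import cmath
import math

def surreal_exp_i(y):
    """exp(i*y) for a toy surreal y given as {exponent: real coefficient} in
    Conway normal form, restricted to y with no infinitesimal part
    (all exponents >= 0). Per the proposal: chop off the purely infinite part
    (exponents > 0) and exponentiate what's left, here just the real term."""
    limited = y.get(0, 0.0)
    return cmath.exp(1j * limited)

# exp(i(3w + pi)) = exp(i pi) = -1
assert abs(surreal_exp_i({1: 3, 0: math.pi}) + 1) < 1e-12
# Kernel behavior: purely infinite y exponentiates to 1
assert surreal_exp_i({1: 2}) == 1
```

In this toy the homomorphism property exp(i(y+z)) = exp(iy)exp(iz) holds because dropping positive-exponent terms commutes with addition of normal forms.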

And I mean seriously -- does anyone have a better idea? Well, maybe, but I haven't heard of it at least.

Does anyone know anything about this? Is this standard? Does this lead anywhere? It seems like it's too trivial to lead to anything seriously interesting beyond what having the surreal exponential (and sin and cos for limited surreals) already get you, but I mean just having a definition of the exponential for surcomplex numbers seems neat, so there you go.

sniffnoy: (Sonic)
EDIT, 2/25, 7 PM: I'm no longer confident that the claims in this entry are correct. I've found a serious mistake in my proof. It seems fixable, but I haven't succeeded so far.

EDIT, 2/27, 1:30 AM: THIS ENTRY AS ORIGINALLY POSTED IS FALSE. I've edited to be true, but, uh, much less interesting. False statements are in strikethrough, added statements are in bold.

So I've been thinking more about big multichoose and WPOs recently. I'll use Rev(A,B) to denote the set of order-reversing maps from A to B.

So recall, I figured out how to compute o(Rev(α,β)) (where α and β are ordinals) and showed that for any WPO X, o(Rev(α,X))≤o(Rev(α,o(X))). Or to state it another way -- which is the way the proof actually goes -- I figured out how to compute an upper bound on o(Rev(α,X)) based on α and o(X), and showed that when X is total, this upper bound is attained.

The other day I noticed that in fact, if o(X) is an (infinite) initial ordinal, then this upper bound is also attained. (Because in this case, o(X) embeds in X.) In other words, in this case, the value of o(Rev(α,X)) depends only on α and o(X), and not on any further details about X.

Could there be other values of o(X) for which this is true, I wondered? More specifically, could it be true whenever o(X) is a power of ω? (It's certainly not true when o(X) isn't a power of ω.)

It took me quite a while to find a counterexample, but eventually I did: Take α=ω, and take X to be Rado's order. In this case, o(X)=ω², and o(Rev(ω,ω²)) = ω^(ω²+2), while o(Rev(ω,X)) = ω^(ω·2+2).

It should not have taken me so long to find a counterexample. A simpler counterexample is α=ω, X=ω×ω. I computed o(Rev(ω,ω×ω)) incorrectly earlier, and got ω^(ω²+2); in fact it is ω^(ω·2+2), the same as for Rado's order. What's more, this is really easy to see, since Rev(X,Y×Z) = Rev(X,Y) × Rev(X,Z), so it's just the natural product of two copies of omega multichoose omega, which is ω^(ω+1). Somehow I entirely failed to notice this.
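The identity Rev(X, Y×Z) = Rev(X,Y) × Rev(X,Z) is just splitting a map into its two coordinates; a brute-force count on small finite posets illustrates it (a sketch with my own encoding of posets as element lists plus a comparison function):

```python
from itertools import product

def order_reversing_maps(X, leq_X, Y, leq_Y):
    """All order-reversing maps from poset (X, leq_X) to (Y, leq_Y),
    as tuples of images indexed by position in X."""
    maps = []
    for images in product(Y, repeat=len(X)):
        # a <= b in X must force f(b) <= f(a) in Y
        if all(not leq_X(a, b) or leq_Y(images[X.index(b)], images[X.index(a)])
               for a in X for b in X):
            maps.append(images)
    return maps

chain = lambda a, b: a <= b          # total order on integers
X, Y = [0, 1, 2], [0, 1]             # X a 3-chain, Y = Z a 2-chain
YZ = list(product(Y, Y))             # product poset, componentwise order
leq_YZ = lambda p, q: p[0] <= q[0] and p[1] <= q[1]

n_Y = len(order_reversing_maps(X, chain, Y, chain))    # decreasing 0/1 triples
n_YZ = len(order_reversing_maps(X, chain, YZ, leq_YZ))
assert n_YZ == n_Y * n_Y
```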

So that's it -- problem solved, right? Well, not quite. Rado's order is a famous example of a WPO which is not a BPO. This leaves the question: Could the statement be true if we require that X be a BPO? Or what if instead of requiring that X be a BPO, we just insist on the weaker requirement that Rev(X,2) be a WPO, or something like that? So far, I have no counterexample to this. It's definitely not true, see above.

Unfortunately, I know very little about the theory of BPOs, so I think I am unlikely to resolve such a question (though perhaps a bit more likely if it turns out that only a weaker hypothesis is needed). But I thought this was worth putting out there.

Note, by the way, that if o(X) is a power of ω and, in addition, we have that α is finite, then the statement is true without any further hypothesis on X; hence why my counterexample above had α=ω. Note that o(X) is also as small as a counterexample can be, as o(X)=ω². (This remark is still true.)

By the way, I also tried proving the statement, by means analogous to how I proved that the bound is attained if X is total, and found that much of a proof went surprisingly well. There were three problems that arose, essentially:
1. How do we go from the case where α is a power of ω, to the case where it isn't? I was able to solve this problem.
2. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a limit ordinal? I was able to solve this problem (assuming an inductive hypothesis, of course).
3. How can we handle the case where α is a power of ω and o(X)=ω^β, where β is a successor ordinal? This turned out to be a real problem, even assuming an inductive hypothesis! And indeed, the counterexample I mentioned above has o(X)=ω². But the fact that it's just this one part of the proof that turned out to be a problem does incline me to think that it might indeed be possible to prove it with additional hypotheses on X.

Note: If X has a bottom element, everything I'm saying still works if we restrict to functions which are eventually 0. Pretty sure. And of course if α is finite we can restrict to strictly decreasing functions.

EDIT: It totally works! Just using the hypothesis that Rev(X,2) is a WPO, too! This is amazing! Not only that, but there's a generalization that covers both this case and the case where X is total! (Though not also the case where o(X) is initial.) Namely, you require that Rev(X,2) be a WPO, and that X = X_1 + ... + X_r, where each o(X_i) is a power of ω. In the case where X is total you have Cantor normal form, and in the amazing case, r=1. And once again it works if we remove the requirement that Rev(X,2) be a WPO but add the requirement that α be finite.

EDIT 2: Sorry, just noticed a mistake in my proof. I think this is fixable.

sniffnoy: (Sonic)
I don't think this is known, but again, I'm not a logician.

So: Let's say you have a well partial order X. De Jongh and Parikh proved it has some largest extending ordinal, o(X). Moreover, they proved that o(X) is the smallest ordinal greater than any o(L), where L ranges over proper downwardly-closed subsets of X.

Question: Say X is a WPO, and you're given an ordinal α<o(X). Can you find a downward-closed subset L of X with o(L)=α?

It certainly seems like you should be able to! But you'll notice that this doesn't follow immediately from the above characterization, which merely guarantees that there's some downward-closed L with o(L)≥α. How do you get it exactly on the nose?

This problem stumped me for several days, but it turns out there's an easy answer: Iterate! Given α<o(X), the characterization hands you a proper downward-closed L with o(L)≥α; if o(L)>α, apply the same reasoning inside L, and so on. Each time you iterate, o(L) strictly decreases; and since you're working with ordinals, well, it has to terminate, and it can only terminate when o(L)=α. There's your L. I'm glad such a simple problem turned out to have such a clean solution!

(I guess I've left out a few steps here, but nothing that isn't easy to fill in. Also, since you're passing to initial segments, you don't need to phrase it in terms of o(L) going down, it's simpler without that, but that's how I originally thought of it.)

sniffnoy: (SMPTE)
So people throw around the word "impossible" a lot, but oftentimes they actually mean different things by it. (I'm assuming here we're talking about real-world discussions rather than mathematical discussions, where things are clearer.) I thought I'd create a list of different things that people mean by "impossible", in the hopes that it might clarify things. Note -- because we're talking about real-world things, time is going to play a role. (Yes, there's not really any universal clock. Whatever.)

I'm listing these as "levels of impossibility", going roughly from "most impossible" to "least impossible", even though they're not necessarily actually linearly ordered. Also, some of the distinctions between some of these may be fuzzy at times.

Level 0. Instantaneously inconsistent. The given description contains or logically implies a contradiction. It rules out all possible states at some point in time, in any universe. People often claim this one when they really mean level 2 or level 3.

Level 1. Instantaneously impossible (contingently). In the actual universe we live in, the given description is instantaneously impossible; it rules out all possible states at some point in time. This one... I'm not sure this one really comes up in political discussions, I can only really think of physical examples.

Level 2. Non-equilibrium. The described system fails to propagate itself forward in time; or, if a system extended in time is described, it contains an inconsistency. This is one that people often actually mean when they say something is "impossible".

Level 3. Unstable equilibrium or possible non-equilibrium. The described system is not resilient to noise; it will not propagate itself forward in time unless exceptional conditions hold continually. This is another one that people often really mean when they say something is "impossible".

Level 4. Unachievable. The described system is unreachable from our present state -- it may make sense on its own, it may not be inconsistent with the way the world evolves in time, but it's inconsistent with the initial conditions that hold in the real world. Yet another one that people often mean when they say "impossible".

Level 5. Not "stably achievable". The only path from the current state to the described state is not resilient to noise and requires exceptional conditions to hold, possibly for an extended period of time. We might also want to require, in addition, that the failure modes of such a path leave us worse off than we were before or somehow prevent the same path from being used again (so that you can't just try over and over for free).

Are there any I'm missing? This seems like a pretty good breakdown to me.

sniffnoy: (Chu-Chu Zig)
Solutions are up for this year's Mystery Hunt -- or mostly up; metas, metametas, and a few scattered others appear to be lacking them. Well, whatever.

EDIT: Quantum Minesweeper solution is now up, so I've updated this entry accordingly.
EDIT 1/23: I've added some more stuff now that a few of the meta solutions are up.
BIG EDIT 1/26: It turns out there's a good reason why the hunt software was lacking its usual capabilities here! devjoe, who was on Team Luck, goes into it here.

So this year I was on Donner Party (again). Seems a few of my other friends joined Donner Party this year; Kevin defected from Codex, and Nic was on it too.

Results don't seem to be up yet so I don't know just how behind we were. I assumed we must be really behind all the time; I think we reached every round on a timer. (We didn't solve any of the round metas or metametas, and only one or two of the sub-round metas.) On the other hand, we were already on the Endymion round by the time the coin was found (along with everybody else I guess), so maybe we weren't as behind as I thought. Still, I feel like one of these years I need to get back on a bigger team; Mystery Hunt is considerably more fun when you're actually in contention for the win, you know? Also when you stay up all night and there are actually a good number of other people staying up all night with you. (Due to my staying up all night, I ended up going to bed on Sunday shortly before 7 PM... and about 5 minutes before the coin was found. It was probably found while I was, like, brushing my teeth or something.)

I tried to get people from Baker House (where I now live; I guess I haven't mentioned that) to join in but none did. Melissa mentioned that her brother does Mystery Hunt but didn't join in herself for whatever reason. I did however recruit Adam Funk from Truth House for the Hunt; after the first day, when other Bakerites failed to join in, I spent much of my solving time in his room instead of in public areas of Baker like I'd intended. Unfortunately these past two weeks have been quite busy for me and so I wasn't able to prepare Adam, or others, as much as I should have, but he still contributed quite a bit, I think.

Before I talk about the individual puzzles, I want to talk about just how the Hunt was run this year, and in particular the hunt website/software. Simply put, I didn't like it. The Hunt for the past few years had used the same software for its website and it worked really well. This one... didn't. Not in the sense that it was buggy, but in the sense that it was just lacking the (even basic) features I'd come to expect from the past few years. It wouldn't show you which puzzles you'd already solved; you had to keep track of that yourself. (Because I started late, I initially assumed the check marks in the dog show round indicated the puzzles we'd solved. Nope.) It wouldn't show you what wrong answers you'd already submitted; you had to keep track of that yourself. There was nothing explaining how round advancement worked or anything like that, let alone showing your current score (were there scores? I don't know) or time left until next unlock. (When I said above we advanced to every round on a timer, that was an inference.) No list of all puzzles unlocked so far. Etc. Fancy 3D animations are not much of a replacement for actual functionality.

Not to mention the most obvious annoyance of all -- every time you made it to a new round, you had to go to a different website! Ones with crazy long, unpronounceable strings of letters in them that you couldn't just easily tell someone. Part of what I want in the Hunt is to be able to easily recruit people during Hunt-time and bring them in without having to get them officially added to the team. Being able to just tell them the website, here's our username, here's our password, works really well for that. This doesn't.

Which brings me to a different point -- I'm not sure I like how Donner Party organized things this year. In the past I'd solved for Manic Sages and Codex, who each used (IIRC) a combination of IRC and a MediaWiki-based wiki. This worked really well; I could let someone else use my account for the wiki, and anyone can use IRC. More recently on Strange New Universe and Donner Party we'd used (IIRC again) IRC (still pretty straightforward) and Google Docs. The latter is a bit more of a pain; I can't just *tell* someone something, I have to use the software to share them on it.

This year we used Google Docs, Trello (acting like the "list of puzzles" page on a wiki), and Slack (substituting for IRC). This... oy. (It was especially a pain combined with the Hunt website address issue I've mentioned above.) Trello was a good addition organization-wise, but I had some trouble adding Adam. (Actually we were using a team account, but I forgot that. Oops.) Still, it eventually worked. Slack, I couldn't figure out how to add him to the chat at all.

Like I said, what I want is to be able to quickly and unofficially add people mid-Hunt. This means either any accounts used should be team ones and I can just give them the username and password, or should be throwaway ones (like on a specialized wiki, rather than on Trello as a whole) that I can also give away. The Trello + Google Docs system was good, about on par with using a wiki, but slightly more troublesome to add people. But Slack seems like a definite downgrade from IRC; having a channel history is a plus but not being able to add Adam is a problem.

(Also, getting back to the Hunt itself, is it just me or were there a lot of unnecessary PDFs this year?)

Anyway, enough about that; time to talk about individual puzzles!

Cut for length and spoilers )

Anyway, that's it for this year. Hopefully next year is a bit better-run. And maybe I really should see about joining a larger team...

sniffnoy: (Chu-Chu Zig)
So currently in the US copyrights are automatic and not required to be registered, due to the US having entered into the Berne Convention. This leads to terrible copyright snarls where one person wants to use something of someone else's but they can't even determine who holds the current copyright to talk to about it.

Obviously bringing back copyright registration, with proper records of copyright transfers and all, would solve this (if y'know you could somehow extricate the US from the Berne Convention). But there is a real advantage to not having to register copyrights; suppose you're just some person who's drawn something and posted it to the internet and now some big company is making money selling T-shirts of it. It'd be nice to have some legal recourse there without having to register every single thing you do.

So here's an idea for an intermediate policy, which I assume would still violate the Berne Convention, but just something worth considering if, y'know, IP reform ever actually happens. What if copyrights didn't have to be registered, but unregistered copyrights could not be transferred or exclusively licensed? Obviously they could be licensed, but you could not legally bind yourself not to license it in the future. Unregistered copyrights would also automatically expire with the author's death. This way, if the copyright is unregistered, tracking who you need to consult with is easy: You talk to the author. If the copyright is registered, it might have gone through all sorts of complex stuff, but that would all be in the registry and you could look it up.

(I am not a lawyer, there may easily be all sorts of nasty details here I'm missing. But thought I'd throw this out there.)
sniffnoy: (Chu-Chu Zig)
So, I basically stopped really caring about Star Wars a while back, but Noelle organized an expedition to see The Force Awakens today, so I went and saw it. It was OK? There were substantial parts that didn't make a lot of sense, or at least weren't sufficiently clarified, so I'm basically going to complain about it here.

Cut for spoilers )
...OK. I'm done complaining for now. There's more I could say but I think that would be picking nits. The original left a lot unexplained too, after all, and wasn't really the worse for it. These are just the things I think need explaining or fixing.


Page generated Sep. 26th, 2017 07:28 am
Powered by Dreamwidth Studios