Jan. 30th, 2013

sniffnoy: (Chu-Chu Zig)
Or, here's some assorted math I've been doing with John lately. Have to make up for that dumb last entry. Skip to the bit after the horizontal rule if you want to know what the title's about.

EDIT next day: Added the last paragraph.
EDIT February 3: Added a bit more about the diamond lemma.

So not too long ago John was asking -- say we have a finite-dimensional vector space V and some subspace W of End(V). How can we tell whether, for all v in V, there is some w in W such that w fixes v? ("W can fix every vector.") Like, what are sufficient conditions, necessary conditions...? Obviously containing the identity is sufficient, but what else can we say?

Now in the original context John was considering, W was actually constructed some specific way, and in fact it was closed under multiplication. Well, adding that condition kind of changes everything, doesn't it? So these are really two separate problems.

Also: From here on I'm just going to assume V = Fⁿ, where F is our base field. The added abstraction of not doing so doesn't really gain us anything here, and I want to talk about this in terms of matrices.

Let's consider the first one first. Here's an easy case: If W has dimension 1, it can only have this property by consisting of all scalar matrices. What if W has codimension 1?

Codimension 1 already exhibits several different sorts of behavior: Obviously, W could contain the identity, and thus fix every vector. Or W could fail to fix some vector: Suppose W consists of all matrices with a 0 in the upper left; then there's no way it can fix the vector (1,0,...,0). Or W could fix every vector without containing the identity; for instance, suppose W is the set of traceless matrices. Then given any v, you can pick a linear transformation that fixes v, and on some complement of span(v), has trace -1.
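
(Here's that last construction as a quick numpy sketch, in case you want it concrete -- the code and the function name are mine: complete v to a basis, act as 1 on v and as -1 on one complementary basis vector, and change basis back.)

    import numpy as np

    def traceless_fixer(v):
        """Return a traceless matrix T with T @ v = v (needs len(v) >= 2, v nonzero)."""
        n = len(v)
        # Complete v to a basis by greedily adding standard basis vectors.
        basis = [v]
        for e in np.eye(n):
            if len(basis) == n:
                break
            if np.linalg.matrix_rank(np.array(basis + [e])) == len(basis) + 1:
                basis.append(e)
        P = np.array(basis).T          # columns: v first, then the complement
        D = np.zeros((n, n))
        D[0, 0] = 1                    # eigenvalue 1 on v...
        D[1, 1] = -1                   # ...and trace -1 on the complement
        return P @ D @ np.linalg.inv(P)

    v = np.array([2.0, 3.0, 5.0])
    T = traceless_fixer(v)
    print(np.allclose(T @ v, v), np.isclose(np.trace(T), 0))   # True True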

Anyway, in the codimension 1 case, we can describe W by a nonzero matrix A -- i.e., B is in W iff A·B=0, where by A·B I mean "dotting" A and B as if they were vectors, multiplying corresponding entries and adding them up. So can we describe whether or not W can fix every vector in terms of properties of A?

Indeed we can! Initially I did a bunch of tedious computations in the 2x2 case, and found that W could fix every vector iff A either had trace 0 (because this is equivalent to saying I∈W) or nonzero determinant (because...? I had no idea; that was just what I computed). But I had no idea if this generalized.

Anyway the other day I sat down and thought about it some more and concluded that it did generalize, but that that was the wrong generalization. The correct statement is, W can fix every vector iff A either has trace 0, or has rank greater than 1. For a proof, well, see the MathOverflow question. (Actually seeing that will tell you a lot of the other things I'm about to say, but oh well.) I expect there should be a better proof, but, eh, it's what I came up with.
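
For whatever reassurance it's worth, the criterion can at least be checked exhaustively in the smallest case. Here's a little Python script (mine; note it works over F₂, where I haven't actually thought about whether the MathOverflow argument goes through, so this is an independent check) comparing "W can fix every vector" against "trace 0 or rank greater than 1" for every nonzero 2x2 matrix A:

    from itertools import product

    def dot(A, B):        # entrywise "dot product" of 2x2 matrices, over F_2
        return sum(x * y for x, y in zip(A, B)) % 2

    def rank(A):          # rank of (a,b,c,d), read as [[a,b],[c,d]], over F_2
        a, b, c, d = A
        if (a * d - b * c) % 2:
            return 2
        return 1 if any(A) else 0

    def fixes_all(W):     # does some B in W fix each nonzero v in F_2^2?
        return all(any((B[0]*v0 + B[1]*v1) % 2 == v0 and
                       (B[2]*v0 + B[3]*v1) % 2 == v1 for B in W)
                   for v0, v1 in [(0, 1), (1, 0), (1, 1)])

    matrices = list(product(range(2), repeat=4))
    for A in matrices:
        if any(A):        # A nonzero, so W = {B : A.B = 0} has codimension 1
            W = [B for B in matrices if dot(A, B) == 0]
            assert fixes_all(W) == ((A[0] + A[3]) % 2 == 0 or rank(A) == 2)
    print("criterion verified for all nonzero A over F_2")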

Anyway, let's consider the case where W is now assumed to be closed under multiplication. Julian quickly pointed out that if W contains any invertible element, W contains the identity (because Cayley-Hamilton theorem). So if W fails to fix some vector, it must consist entirely of singular matrices. This seemed like it ought to put a dimension restriction on W -- it seemed like such a subspace ought to have dimension at most n²-n, but we couldn't prove it.
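
To spell out that Cayley-Hamilton step: if w is invertible, the constant term of its characteristic polynomial is ±det(w), which is nonzero, so rearranging p(w)=0 writes I as a linear combination of w, w², ..., wⁿ -- and all of those lie in W, W being a subspace closed under multiplication. A quick numerical illustration (the example matrix is mine; any invertible one would do):

    import numpy as np

    w = np.array([[2.0, 1.0],
                  [1.0, 1.0]])        # invertible: det = 1
    n = w.shape[0]
    c = np.poly(w)                    # char poly coeffs: c[0] = 1, ..., c[n] = (-1)^n det(w)
    assert abs(c[n]) > 1e-12          # invertible <=> nonzero constant term
    # p(w) = 0 rearranges to I = -(1/c[n]) * (c[0] w^n + c[1] w^(n-1) + ... + c[n-1] w):
    I = -sum(c[k] * np.linalg.matrix_power(w, n - k) for k in range(n)) / c[n]
    print(np.allclose(I, np.eye(n)))  # True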

Fortunately, this sort of thing -- subspaces of matrices of bounded rank -- has been well studied. A bit of searching turned up this question on math.stackexchange, which also linked to this paper. So yes -- a subspace of matrices, each of rank at most r, does have dimension at most nr.
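
The bound is also easy to sanity-check in the tiniest case (Python again, mine): over F₂ with n=2 and r=1, it says no 3-dimensional subspace consists entirely of singular matrices, and the 3-dimensional subspaces are exactly the kernels {B : A·B = 0} of nonzero A.

    from itertools import product

    matrices = list(product(range(2), repeat=4))   # (a,b,c,d) <-> [[a,b],[c,d]] over F_2

    def dot(A, B):
        return sum(x * y for x, y in zip(A, B)) % 2

    def invertible(B):
        return (B[0] * B[3] - B[1] * B[2]) % 2 == 1

    # Every 3-dimensional subspace {B : A.B = 0}, A nonzero, should contain
    # a rank-2 (i.e., invertible) matrix:
    for A in matrices:
        if any(A):
            assert any(invertible(B) for B in matrices if dot(A, B) == 0)
    print("no 3-dimensional subspace of singular matrices over F_2, as promised")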

And from the paper, much more is true! Applying the results with r=n-1, we get that if dim(W)>n²-2n+2, and all elements of W are singular, then either A. there is some nonzero vector which is in the kernels of all the elements of W, or B. there is some proper subspace containing all the images of elements of W. Either way, W fails to fix some vector. Thus, if W is closed under multiplication and dim(W)>n²-2n+2, the only way it can fix every vector is if it contains the identity. Rather different from the case without multiplicative closure.

The paper also has results about if dim(W)=n²-2n+2, but I didn't see how to apply those here. In any case, one ought to be able to do much better -- we only used multiplicative closure to go from "contains an invertible element" to "contains the identity", when presumably it can be used for more than that.


----------------------------------------

So somehow I was showing John a particular open problem involving the combinatorics of words (specifically, problem 15 here -- there are some neat problems there, btw) and he said, this is an awful problem. This is the sort of awful thing Miklos would work on. He would work on awful stuff like the free idempotent monoid on n generators.

The free idempotent monoid on n generators, huh? So, like, squarefree words, I thought. So for n=2 this is finite, as there are only 7 squarefree words. But for n=3, there are arbitrarily long (indeed, infinite -- yes, that's the same by König's Lemma) squarefree words. So I said I would probably bet that for n=3, this monoid is infinite. John wasn't so sure.
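
Here's a quick Python sketch of both halves of that, if you want to play along (the squarefree-checker is mine, and the morphism is the standard squarefree one of Thue -- or at least the one I remember, so the code double-checks squarefreeness rather than taking my word for it):

    from itertools import product

    def squarefree(w):
        """True iff w contains no factor of the form xx with x nonempty."""
        return not any(w[i:i+l] == w[i+l:i+2*l]
                       for l in range(1, len(w) // 2 + 1)
                       for i in range(len(w) - 2*l + 1))

    # Over two letters, enumerating up to length 5 already shows there are
    # only 7 squarefree words (counting the empty word):
    two = [''.join(p) for k in range(6) for p in product('ab', repeat=k)
           if squarefree(''.join(p))]
    print(len(two), two)    # 7 ['', 'a', 'b', 'ab', 'ba', 'aba', 'bab']

    # Over three letters, iterating a -> abc, b -> ac, c -> b from 'a'
    # gives longer and longer squarefree words:
    m = {'a': 'abc', 'b': 'ac', 'c': 'b'}
    w = 'a'
    for _ in range(6):
        w = ''.join(m[x] for x in w)
    print(len(w), squarefree(w))    # 96 True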

And so I sat down to prove it. I wanted to show that any word reduced to a unique squarefree word; obviously, this is a case for a diamond lemma argument. Of course there are several cases based on how the reduction sites overlap. Most cases worked, but two were problematic. One I couldn't prove worked, and the other, well, just clearly didn't work. (ADDENDUM February 3: For what little it's worth, the other case does in fact work.)

Specifically, consider the word ababcbabc. Reducing the duplicated ab, we get abcbabc. But reducing the duplicated babc, we get ababc, which then reduces to abc. So despite being distinct squarefree words, abc is equivalent to abcbabc; there are weird hidden relations you wouldn't have expected, and the diamond property just fails. (Also, note that 'a', 'b', and 'c' could have been replaced with arbitrary words here.)
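
In executable form (a dumb brute-force search, mine): collect every squarefree word reachable from ababcbabc by repeatedly deleting squares, and you get two distinct ones.

    def reductions(w):
        """All words obtained from w by one reduction xx -> x."""
        n = len(w)
        return {w[:i+l] + w[i+2*l:]
                for l in range(1, n // 2 + 1)
                for i in range(n - 2*l + 1)
                if w[i:i+l] == w[i+l:i+2*l]}

    def normal_forms(w):
        """All fully-reduced (i.e. squarefree) words reachable from w."""
        r = reductions(w)
        if not r:
            return {w}
        return set().union(*(normal_forms(x) for x in r))

    print(normal_forms('ababcbabc'))   # {'abc', 'abcbabc'} -- two normal forms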

And so I sent John an email pointing out this fact, with the subject line: "Your monoid is disgusting, and here's why."

John replied by pointing me to the MathWorld entry on the subject, because, well, people have studied this monoid before. And apparently there are enough of these weird hidden relations that regardless of n, despite the infinitude of squarefree words, the monoid is finite. (So, I would have lost that bet.) And there's a formula! Why is there a formula? (Along with the link, John left just the comment, "Good heavens.") I guess there probably is still some normal form for it, which is a stricter condition than just being squarefree, and you can enumerate these? Well, I don't really intend on getting out the book to check...
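
For the record, here's the formula as I've seen it quoted (transcribed from memory, so caveat lector), along with the first few values -- note the n=2 value is that same 7:

    from math import comb

    def free_band_size(n):
        """Number of elements of the free idempotent monoid on n generators,
        per the formula: sum over k of C(n,k) * prod_{i=1}^{k} (k-i+1)^(2^i)."""
        total = 0
        for k in range(n + 1):
            term = comb(n, k)
            for i in range(1, k + 1):
                term *= (k - i + 1) ** (2 ** i)
            total += term
        return total

    print([free_band_size(n) for n in range(5)])   # [1, 2, 7, 160, 332381]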

Actually, when I was a fourth-year at Chicago, first semester, I remember there was some problem on an algebra problem set where I decided that the thing to do was to show that for any n, sufficiently long words can't be squarefree. Of course, this is false, so it didn't work. (I figured, it was true for 2, so...) Unfortunately, I don't remember what the actual problem was, so I have no idea what a correct approach might have been. I have to wonder, though, whether proving that the free idempotent monoid is finite would have done what I wanted! Whether or not that would actually be within reach is another question...

-Harry
