

A noncommutative moduli space

Supernatural numbers also appear in noncommutative geometry via James Glimm’s characterisation of a class of simple $C^*$-algebras, the UHF-algebras.

A uniformly hyperfinite (or UHF) algebra $A$ is a $C^*$-algebra that can be written as the closure, in the norm topology, of an increasing union of finite-dimensional full matrix algebras

$M_{c_1}(\mathbb{C}) \subset M_{c_2}(\mathbb{C}) \subset \cdots \subset A$

Such embeddings are only possible if the matrix-sizes divide each other, that is $c_1 | c_2 | c_3 | \cdots$, and we can assign to $A$ the supernatural number $s = \mathrm{lcm}_i\, c_i$, in which the exponent of a prime $p$ is the supremum of its exponents in the $c_i$, and denote $A=A(s)$.
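Probably the most familiar example is the CAR-algebra, obtained from the chain of block-diagonal embeddings

$M_2(\mathbb{C}) \subset M_4(\mathbb{C}) \subset M_8(\mathbb{C}) \subset \cdots \qquad a \mapsto \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}$

Here $c_i = 2^i$, so the associated supernatural number is $s = 2^{\infty}$ and the CAR-algebra is $A(2^{\infty})$.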

In his paper On a certain class of operator algebras, Glimm proved that two UHF-algebras $A(s)$ and $B(t)$ are isomorphic as $C^*$-algebras if and only if $s=t$. That is, the supernatural numbers $\mathbb{S}$ are precisely the isomorphism classes of UHF-algebras.

An important invariant, the Grothendieck group $K_0$ of $A(s)$, can be described as the additive subgroup $\mathbb{Q}(s)$ of $\mathbb{Q}$ generated by all fractions of the form $\frac{1}{n}$ where $n$ is a positive integer dividing $s$.
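For the CAR-algebra this gives the dyadic rationals

$K_0(A(2^{\infty})) \simeq \mathbb{Q}(2^{\infty}) = \mathbb{Z}[\tfrac{1}{2}] = \{ \tfrac{a}{2^k} \mid a \in \mathbb{Z},~k \geq 0 \}$

and, at the other extreme, taking $s = \prod_p p^{\infty}$ (every prime occurring with infinite exponent) gives $\mathbb{Q}(s) = \mathbb{Q}$, the $K_0$ of the so-called universal UHF-algebra.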

A “noncommutative space” is a Morita class of $C^*$-algebras, so we want to know when two UHF-algebras $A(s)$ and $B(t)$ are Morita-equivalent. This turns out to be the case precisely when there are positive integers $n$ and $m$ such that $n \cdot s = m \cdot t$, or equivalently, when the $K_0$’s $\mathbb{Q}(s)$ and $\mathbb{Q}(t)$ are isomorphic as additive subgroups of $\mathbb{Q}$.
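For example, take $s = 2^{\infty}$ and $t = 3 \cdot 2^{\infty}$. Then $3 \cdot s = 1 \cdot t$, so $A(2^{\infty})$ and $A(3 \cdot 2^{\infty})$ are Morita-equivalent, and indeed

$\mathbb{Q}(3 \cdot 2^{\infty}) = \tfrac{1}{3} \mathbb{Z}[\tfrac{1}{2}]~\simeq~\mathbb{Z}[\tfrac{1}{2}] = \mathbb{Q}(2^{\infty})$

the isomorphism being multiplication by $3$. On the other hand, $\mathbb{Z}[\tfrac{1}{2}]$ and $\mathbb{Z}[\tfrac{1}{3}]$ are not isomorphic as additive groups, so $A(2^{\infty})$ and $A(3^{\infty})$ define genuinely different noncommutative spaces.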

Thus Morita-equivalence defines an equivalence relation on $\mathbb{S}$ as follows: if $s=\prod p^{s_p}$ and $t= \prod p^{t_p}$ then $s \sim t$ if and only if the following two properties are satisfied:

(1): $s_p = \infty$ iff $t_p= \infty$, and

(2): $s_p=t_p$ for all but finitely many primes $p$.
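For example, $3 \cdot 5 \cdot 2^{\infty} \sim 2^{\infty}$ (the exponents differ only at the primes $3$ and $5$), whereas $2^{\infty} \not\sim 3^{\infty}$ because (1) fails, and $2 \cdot 3 \cdot 5 \cdot 7 \cdots \not\sim 1$ because (2) fails at infinitely many primes.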

That is, we can view the equivalence classes $\mathbb{S}/\sim$ as the moduli space of noncommutative spaces associated to UHF-algebras!

Now, the equivalence relation is described in terms of isomorphism classes of additive subgroups of the rationals, which was precisely the characterisation of isomorphism classes of points in the arithmetic site, that is, the finite adèle classes

$\mathbb{S}/\sim~\simeq~\mathbb{Q}^* \backslash \mathbb{A}^f_{\mathbb{Q}} / \widehat{\mathbb{Z}}^*$

and as the induced topology of $\mathbb{A}^f_{\mathbb{Q}}$ on it is trivial, this “space” is usually thought of as a noncommutative space.

That is, $\mathbb{S}/\sim$ is a noncommutative moduli space of noncommutative spaces defined by UHF-algebras.

The finite supernatural numbers (the ordinary positive integers) form one equivalence class, corresponding to the fact that the finite-dimensional UHF-algebras $M_n(\mathbb{C})$ are all Morita-equivalent to $\mathbb{C}$, or, a bit more pompously, that the Brauer group $Br(\mathbb{C})$ is trivial.

Multiplication of supernaturals induces a well-defined multiplication on equivalence classes, and with that multiplication we can view $\mathbb{S}/\sim$ as the ‘Brauer-monoid’ $Br_{\infty}(\mathbb{C})$ of simple UHF-algebras…
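On the level of algebras this multiplication is just the tensor product, as $A(s) \otimes A(t) \simeq A(s \cdot t)$ for UHF-algebras. For instance, $[A(2^{\infty})] \cdot [A(3^{\infty})] = [A(6^{\infty})]$, and the class of the finite supernatural numbers, that is of $\mathbb{C}$, is the unit element of the monoid.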

(Btw. the photo of James Glimm above was taken by George Bergman in 1972)


Quiver Grassmannians can be anything

A standard Grassmannian $Gr(m,V)$ is the manifold having as its points all possible $m$-dimensional subspaces of a given vector space $V$. As an example, $Gr(1,V)$ is the set of lines through the origin in $V$ and therefore is the projective space $\mathbb{P}(V)$. Grassmannians are among the nicest projective varieties: they are smooth and admit a cell decomposition.

A quiver $Q$ is just an oriented graph. Here’s an example

[Quiver $Q$: three vertices $1$, $2$ and $3$, with an arrow $a : 2 \rightarrow 1$ and two arrows $b,c : 2 \rightarrow 3$]
A representation $V$ of a quiver assigns a vector-space to each vertex and a linear map between these vertex-spaces to every arrow. As an example, a representation $V$ of the quiver $Q$ consists of a triple of vector-spaces $(V_1,V_2,V_3)$ together with linear maps $f_a~:~V_2 \rightarrow V_1$ and $f_b,f_c~:~V_2 \rightarrow V_3$.

A sub-representation $W \subset V$ consists of subspaces of the vertex-spaces of $V$ and linear maps between them compatible with the maps of $V$. The dimension-vector of $W$ is the vector with components the dimensions of the vertex-spaces of $W$.

This means in the example that we require $f_a(W_2) \subset W_1$ and $f_b(W_2)$ and $f_c(W_2)$ to be subspaces of $W_3$. If the dimension of $W_i$ is $m_i$ then $m=(m_1,m_2,m_3)$ is the dimension vector of $W$.

The quiver analogue of the Grassmannian $Gr(m,V)$ is the Quiver Grassmannian $QGr(m,V)$, where $V$ is a quiver-representation and $QGr(m,V)$ is the collection of all possible sub-representations $W \subset V$ with fixed dimension-vector $m$. One might expect these quiver Grassmannians to be rather nice projective varieties.
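As a sanity check: for the quiver consisting of a single vertex and no arrows, a representation is just a vector-space $V$, a sub-representation is just a subspace, and $QGr(m,V)$ is the classical Grassmannian $Gr(m,V)$.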

However, last week Markus Reineke posted a 2-page note on the arXiv proving that every projective variety is a quiver Grassmannian.

Let’s illustrate the argument by finding a quiver Grassmannian $QGr(m,V)$ isomorphic to the elliptic curve in $\mathbb{P}^2$ with homogeneous equation $Y^2Z=X^3+Z^3$.

Consider the Veronese embedding $\mathbb{P}^2 \rightarrow \mathbb{P}^9$ obtained by sending a point $(x:y:z)$ to the point

\[ (x^3:x^2y:x^2z:xy^2:xyz:xz^2:y^3:y^2z:yz^2:z^3) \]

The upshot is that the elliptic curve is now realized as the intersection of the image of $\mathbb{P}^2$ with the hyper-plane $\mathbb{V}(X_0-X_7+X_9)$ in the standard projective coordinates $(x_0:x_1:\cdots:x_9)$ for $\mathbb{P}^9$.
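Indeed, in these coordinates $x_0 = x^3$, $x_7 = y^2z$ and $x_9 = z^3$, so on the image of $\mathbb{P}^2$ the cubic equation of the curve becomes a linear one:

\[ y^2z = x^3+z^3 \quad \Longleftrightarrow \quad x_0 - x_7 + x_9 = 0 \]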

To describe the equations of the image of $\mathbb{P}^2$ in $\mathbb{P}^9$ consider the $6 \times 3$ matrix with the rows corresponding to $(x^2,xy,xz,y^2,yz,z^2)$ and the columns to $(x,y,z)$ and the entries being the multiplications, that is

$$\begin{bmatrix} x^3 & x^2y & x^2z \\ x^2y & xy^2 & xyz \\ x^2z & xyz & xz^2 \\ xy^2 & y^3 & y^2z \\ xyz & y^2z & yz^2 \\ xz^2 & yz^2 & z^3 \end{bmatrix} = \begin{bmatrix} x_0 & x_1 & x_2 \\ x_1 & x_3 & x_4 \\ x_2 & x_4 & x_5 \\ x_3 & x_6 & x_7 \\ x_4 & x_7 & x_8 \\ x_5 & x_8 & x_9 \end{bmatrix}$$

But then, a point $(x_0:x_1: \cdots : x_9)$ belongs to the image of $\mathbb{P}^2$ if (and only if) the matrix on the right-hand side has rank $1$ (that is, all its $2 \times 2$ minors vanish). Next, consider the quiver

[Quiver: three vertices $1$, $2$ and $3$, with one arrow $h : 2 \rightarrow 1$ and three arrows $x,y,z : 2 \rightarrow 3$]
and consider the representation $V=(V_1,V_2,V_3)$ with vertex-spaces $V_1=\mathbb{C}$, $V_2 = \mathbb{C}^{10}$ and $V_3 = \mathbb{C}^6$. The linear maps $x,y$ and $z$ correspond to the columns of the matrix above, that is

$$(x_0,x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8,x_9) \begin{cases} \rightarrow^x~(x_0,x_1,x_2,x_3,x_4,x_5) \\ \rightarrow^y~(x_1,x_3,x_4,x_6,x_7,x_8) \\ \rightarrow^z~(x_2,x_4,x_5,x_7,x_8,x_9) \end{cases}$$

The linear map $h~:~\mathbb{C}^{10} \rightarrow \mathbb{C}$ encodes the equation of the hyper-plane, that is $h=x_0-x_7+x_9$.

Now consider the quiver Grassmannian $QGr(m,V)$ for the dimension vector $m=(0,1,1)$. A base-vector $p=(x_0,\cdots,x_9)$ of the line $W_2 = \mathbb{C}p$ of a subrepresentation $W=(0,W_2,W_3) \subset V$ must be such that $h(p)=0$, that is, $p$ determines a point of the hyper-plane.

Likewise, the vectors $x(p),y(p)$ and $z(p)$ must all lie in the one-dimensional space $W_3 \subset \mathbb{C}^6$, that is, the right-hand side matrix above must have rank one and hence $p$ is a point in the image of $\mathbb{P}^2$ under the Veronese.

That is, $QGr(m,V)$ is isomorphic to the intersection of this image with the hyper-plane and hence is isomorphic to the elliptic curve.
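As a concrete check, the point $(0:1:0)$ of the curve maps to the coordinate point with $x_6 = y^3 = 1$ and all other coordinates zero; then $h(p) = x_0 - x_7 + x_9 = 0$, and $x(p) = z(p) = 0$ while $y(p) = (0,0,0,1,0,0)$, so all three lie on the line spanned by $y(p)$, giving the sub-representation with dimension vector $(0,1,1)$ corresponding to this point of the curve.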

The general case is similar: one can realize any projective subvariety $X \subset \mathbb{P}^n$ as the intersection of the image of a specific $d$-uple Veronese embedding $\mathbb{P}^n \rightarrow \mathbb{P}^N$ with a number of hyper-planes in $\mathbb{P}^N$.

ADDED For those desperate to read the original comments-section, here’s the link.


The empty set according to Bourbaki

The footnote on page E. II.6 in Bourbaki’s 1970 edition of “Théorie des ensembles” reads

[Scan of the footnote, giving Bourbaki’s assembly for the empty set $\emptyset$; it is written out in linear notation below]
If this is completely obvious to you, stop reading now and start getting a life. For the rest of us, it took me quite some time before I was able to parse this formula, and when I finally did, it only added to my initial confusion.

Though the Bourbakis had a very preliminary version of their set-theory already out in 1939 (the Fascicule de résultats), the version as we know it now was published, chapter-wise, in the fifties: Chapters I and II in 1954, Chapter III in 1956 and finally Chapter IV in 1957.


In the first chapter they develop their version of logic, using ‘assemblages’ (assemblies) which are words of signs and letters, the signs being $\tau, \square, \vee, \neg, =, \in$ and $\supset$.

Of these, we have the familiar signs $\vee$ (or), $\neg$ (not), $=$ (equal to) and $\in$ (element of), and three more exotic ones: $\tau$ (their symbol for the Hilbert operator $\varepsilon$), $\square$, a sort of wildcard variable bound by an occurrence of $\tau$ (the ‘links’ in the above scan), and $\supset$ for an ordered couple.

The connectives are written in front of the symbols they connect rather than between them, avoiding brackets, so for example $(x \in y) \vee (x=x)$ becomes $\vee \in x y = x x$.

If $R$ is some assembly and $x$ a letter occurring in $R$, then the intended meaning of the Hilbert operator $\tau_x(R)$ is ‘some $x$ for which $R$ is true, if such a thing exists’. $\tau_x(R)$ is again an assembly constructed in three steps: (a) form the assembly $\tau R$, (b) link the starting $\tau$ to all occurrences of $x$ in $R$ and (c) replace those occurrences of $x$ by an occurrence of $\square$.

For MathJax reasons we will not try to draw links but rather give a linked $\tau$ and $\square$ the same subscript. So, for example, the claimed assembly for $\emptyset$ above reads

$\tau_y \neg \neg \neg \in \tau_x \neg \neg \in \square_x \square_y \square_y$

If $A$ and $B$ are assemblies and $x$ a letter occurring in $B$ then we denote by $(A | x)B$ the assembly obtained by replacing each occurrence of $x$ in $B$ by the assembly $A$. The upshot of this is that we can now write quantifiers as assemblies:

$(\exists x) R$ is the assembly $(\tau_x(R) | x)R$, and as $(\forall x) R$ is $\neg (\exists x) \neg R$, it becomes $\neg (\tau_x(\neg R) | x) \neg R$
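As a warm-up, take for $R$ the assembly $\in x y$ (that is, $x \in y$). Then $\tau_x(R) = \tau_x \in \square_x y$ and the assembly for $(\exists x)(x \in y)$ becomes

$\in \tau_x \in \square_x y~y$

of length $6$ with a single link.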

Okay, let’s try to convert Bourbaki’s definition of the empty set $\emptyset$ as ‘something that contains no element’, or formally $\tau_y((\forall x)(x \notin y))$, into an assembly.

– by definition of $\forall$ it becomes $\tau_y(\neg (\exists x)(\neg (x \notin y)))$
– write $\neg ( x \notin y)$ as the assembly $R= \neg \neg \in x \square_y$ (the occurrence of $y$ will be linked to the outer $\tau_y$, so we write it as $\square_y$ right away)
– then by definition of $\exists$ we have to assemble $\tau_y \neg (\tau_x(R) | x) R$
– by construction $\tau_x(R) = \tau_x \neg \neg \in \square_x \square_y$
– using the description of $(A|x)B$ we finally indeed obtain $\tau_y \neg \neg \neg \in \tau_x \neg \neg \in \square_x \square_y \square_y$

But can someone please explain what’s wrong with $\tau_y \neg \in \tau_x \in \square_x \square_y \square_y$, the assembly corresponding to $\tau_y(\neg (\exists x) (x \in y))$, which could equally well have been taken as defining the empty set and is shorter (length 8 with 3 links, compared to the one given, of length 12 with 3 links)?

Hair-splitting as this is, it will have dramatic implications when we try to assemble Bourbaki’s definition of “1” another time.
