Finite Group Representation

This article is part of the Lie Group & Representation series.



Groups and Representations

Definition: A Group, $G$, is a set with a rule for assigning to every (ordered) pair of elements a third element, satisfying:

1. If $f,g\in G$ then $h=fg\in G$.
2. For $f,g,h\in G$, $f(gh)=(fg)h$.
3. There is an identity element, $e$, such that for all $f\in G$, $ef=fe=f$.
4. Every element $f\in G$ has an inverse, $f^{-1}$, such that $ff^{-1}=f^{-1}f=e$.
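As a concrete sanity check, the four axioms can be verified mechanically for a small example. Here is a minimal Python sketch for $Z_3$, the integers $\{0,1,2\}$ under addition mod 3 (the example group and helper names are my own, not from the text):

```python
# Z_3: the integers {0, 1, 2} under addition mod 3 (a small example group).
G = [0, 1, 2]
def mul(f, g):
    return (f + g) % 3                      # the group "product"

e = 0                                       # identity element
# 1. closure
assert all(mul(f, g) in G for f in G for g in G)
# 2. associativity
assert all(mul(f, mul(g, h)) == mul(mul(f, g), h)
           for f in G for g in G for h in G)
# 3. identity
assert all(mul(e, f) == f == mul(f, e) for f in G)
# 4. every element has an inverse
assert all(any(mul(f, fi) == e == mul(fi, f) for fi in G) for f in G)
print("Z_3 satisfies all four group axioms")
```

The same loop structure works for any finite group given as a multiplication table.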

Definition: A Representation of $G$ is a mapping, $D$, of the elements of $G$ onto a set of linear operators with the following properties:
1. $D(e)=1$, where $1$ is the identity operator in the space on which the linear operators act.
2. $D(g_1)D(g_2)=D(g_1g_2)$, in other words the group multiplication law is mapped onto the natural multiplication in the linear space on which the linear operators act.

The Regular Representation

Definition: Take the group elements themselves to form an orthonormal basis for a vector space, $\left| g \right\rangle$ for $g\in G$. The regular representation is then defined by $$D(g_1)\left| g_2 \right\rangle = \left| g_1 g_2 \right\rangle$$ Its dimension is the order of the group.

We can also write the representation as a matrix, $$[D(g)]_{ij}=\left\langle e_i |D(g)|e_j\right\rangle$$ and the matrix product behaves as it should: $$[D(g_1g_2)]_{ij}=[D(g_1)D(g_2)]_{ij}=\left\langle e_i |D(g_1)D(g_2)|e_j\right\rangle = \sum_k \left\langle e_i |D(g_1)|e_k\right\rangle \left\langle e_k |D(g_2)|e_j\right\rangle = \sum_k [D(g_1)]_{ik}[D(g_2)]_{kj}$$ This is true for any finite group.
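For instance, the regular representation of $Z_3$ consists of $3\times 3$ permutation matrices, and the homomorphism property can be checked directly (a sketch; the helper functions are my own):

```python
# Regular representation of Z_3 (group "product" is addition mod 3):
# basis vector |g2> is the standard unit vector e_{g2}, and
# D(g1)|g2> = |g1 g2> makes each D(g1) a permutation matrix.
N = 3
def D(g1):
    # [D(g1)]_{ij} = <e_i| D(g1) |e_j> = 1 iff i = g1 + j (mod N)
    return [[1 if i == (g1 + j) % N else 0 for j in range(N)] for i in range(N)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# homomorphism property: D(g1) D(g2) = D(g1 g2)
for g1 in range(N):
    for g2 in range(N):
        assert matmul(D(g1), D(g2)) == D((g1 + g2) % N)
print("D(g1)D(g2) = D(g1 g2) holds for the regular representation of Z_3")
```

The dimension is 3, the order of $Z_3$, as the definition requires.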

Irreducible Representations

Since a representation consists of linear operators, we can always construct a new, equivalent representation by a similarity transformation: $$D(g)\rightarrow D'(g)=S^{-1}D(g)S$$ for any invertible $S$, and we say $D$ and $D'$ are equivalent representations.

Definition: A representation is reducible if it has an invariant subspace, which means that the action of any $D(g)$ on any vector in the subspace is still in the subspace. In terms of the projection operator $P$ onto the subspace, this condition can be written as $$PD(g)P=D(g)P\ \forall g \in G$$

Definition: A representation is completely reducible if it is equivalent to a representation whose matrix elements have the following form: $$\begin{pmatrix}D_1(g) & 0 & \cdots\\ 0 & D_2(g) &\cdots \\ \vdots & \vdots & \ddots\end{pmatrix}$$ where $D_j(g)$ is irreducible $\forall j$. This is called block diagonal form.

This block diagonal form famously appeared on the cover of an old edition of Sakurai's (櫻井) quantum mechanics textbook.


A representation in block diagonal form is said to be the direct sum of the sub-representations, $D_j(g)$, $$D_1\oplus D_2 \oplus \cdots$$ 
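The direct sum can be sketched concretely as a block diagonal construction. Here is a minimal example (my own, with hypothetical helper names) for the two 1-dimensional representations of $Z_2$, the trivial and the sign representation:

```python
# Direct sum of representations as block diagonal matrices: for Z_2
# (addition mod 2), take D1(g) = [1] (trivial) and D2(g) = [(-1)^g] (sign).
def direct_sum(A, B):
    # block diagonal matrix [[A, 0], [0, B]]
    m, n = len(A), len(B)
    return [row + [0]*n for row in A] + [[0]*m + row for row in B]

def D(g):   # (D1 (+) D2)(g) for g in {0, 1}
    return direct_sum([[1]], [[(-1)**g]])

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# the direct sum is still a representation of Z_2
for g1 in (0, 1):
    for g2 in (0, 1):
        assert matmul(D(g1), D(g2)) == D((g1 + g2) % 2)
print(D(1))   # [[1, 0], [0, -1]] -- block diagonal
```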


Useful Theorems

Theorem: Every representation of a finite group is equivalent to a unitary representation.
Proof) Suppose $D(g)$ is a representation of a finite group $G$. Construct the operator $$S=\sum_{g\in G} D(g)^\dagger D(g)$$ This $S$ is hermitian and positive semidefinite. Thus it can be diagonalized and its eigenvalues are non-negative: $$S=U^{-1}dU$$ where $d$ is diagonal $$d=\begin{pmatrix}d_1 &0&\cdots\\ 0&d_2&\cdots\\ \vdots&\vdots&\ddots\end{pmatrix}$$ with $d_j\ge 0\ \forall j$. Because of the group property, all of the $d_j$s are actually positive.
(If one of the $d_j$s is zero, then there is a vector $\lambda$ such that $S\lambda=0$. But then $\lambda^\dagger S\lambda=0=\sum_{g\in G} \left| \left| D(g) \lambda \right| \right|^2$. Thus $D(g)\lambda$ must vanish for all $g$, which is impossible, since $D(e)=1$.) 
Therefore, we can construct a square-root of $S$ that is hermitian and invertible $$X=S^{1/2}\equiv U^{-1}\begin{pmatrix} \sqrt{d_1}&0&\cdots\\ 0&\sqrt{d_2}&\cdots\\ \vdots&\vdots&\ddots\end{pmatrix} U$$ where $X$ is invertible, because none of $d_j$s are zero. We can define $$D'(g)=XD(g)X^{-1}$$
Now, this representation is unitary: $$D'(g)^\dagger D'(g)=X^{-1}D(g)^\dagger S D(g)X^{-1}$$ but $$D(g)^\dagger SD(g)=D(g)^\dagger \left(\sum_{h\in G} D(h)^\dagger D(h)\right)D(g)=\sum_{h\in G} D(hg)^\dagger D(hg)=\sum_{h\in G} D(h)^\dagger D(h)=S=X^2$$ where the step $\sum_{h} D(hg)^\dagger D(hg)=\sum_{h} D(h)^\dagger D(h)$ follows because $hg$ runs over all elements of $G$ when $h$ does.$\blacksquare$
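The construction can be checked numerically. Below is a sketch for a hypothetical non-unitary representation of $Z_2$, $D(e)=I$ and $D(g)=\begin{pmatrix}1&1\\0&-1\end{pmatrix}$ with $D(g)^2=I$ (the example matrices, the closed-form $2\times 2$ square root, and the helper functions are my own assumptions, not from the text):

```python
import math

# Unitarize a non-unitary Z_2 representation: form S = sum_g D(g)^T D(g),
# take X = S^{1/2}, and check that D'(g) = X D(g) X^{-1} is unitary.
I2 = [[1.0, 0.0], [0.0, 1.0]]
Dg = [[1.0, 1.0], [0.0, -1.0]]          # D(g)^2 = I, but D(g) is not unitary

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def T(A):                                # transpose (= dagger for real matrices)
    return [[A[j][i] for j in range(2)] for i in range(2)]
def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
def scal(c, A):
    return [[c*A[i][j] for j in range(2)] for i in range(2)]

S = add(matmul(T(I2), I2), matmul(T(Dg), Dg))   # sum over the two group elements

# square root of a 2x2 positive-definite matrix via Cayley-Hamilton:
# sqrt(S) = (S + sqrt(det S) I) / sqrt(tr S + 2 sqrt(det S))
detS = S[0][0]*S[1][1] - S[0][1]*S[1][0]
trS  = S[0][0] + S[1][1]
s = math.sqrt(detS)
X = scal(1.0/math.sqrt(trS + 2*s), add(S, scal(s, I2)))

detX = X[0][0]*X[1][1] - X[0][1]*X[1][0]        # invert the 2x2 matrix X
Xinv = scal(1.0/detX, [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]])

Dp = matmul(matmul(X, Dg), Xinv)                 # D'(g) = X D(g) X^{-1}
check = matmul(T(Dp), Dp)                        # should be the identity
assert all(abs(check[i][j] - I2[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("D'(g) is unitary")
```

The key invariance $D(g)^\dagger S D(g)=S$ is exactly what the proof uses; here it holds because summing over $\{e,g\}$ is unchanged by right multiplication by $g$.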


Theorem: Every representation of a finite group is completely reducible.
Proof) By the previous theorem, it is sufficient to consider unitary representations. If the representation is irreducible, we are finished because it is already in block diagonal form. If it is reducible, then $\exists$ a projector $P$ such that $PD(g)P=D(g)P\ \forall g\in G$. This is the condition that $P$ project onto an invariant subspace. Taking the adjoint gives $PD(g)^\dagger P=PD(g)^\dagger\ \forall g \in G$. But because $D(g)$ is unitary, $D(g)^\dagger=D(g)^{-1}=D(g^{-1})$ and thus since $g^{-1}$ runs over all $G$ when $g$ does, $PD(g)P=PD(g)\ \forall g \in G$. But this implies that $(1-P)D(g)(1-P)=D(g)(1-P)\ \forall g \in G$ and thus $1-P$ also projects onto an invariant subspace. Thus we can keep going by induction and eventually completely reduce the representation.$\blacksquare$

Schur's Lemma

Theorem (Schur's Lemma 1): If $D_1(g)A=AD_2(g)\ \forall g \in G$ where $D_1$ and $D_2$ are inequivalent, irreducible representations, then $A=0$.
Proof) First suppose that there is a vector $\left| \mu \right\rangle$ such that $A\left| \mu \right\rangle=0$. Then there is a non-zero projector, $P$, onto the subspace that annihilates $A$ on the right. But this subspace is invariant with respect to the representation $D_2$, because $$AD_2(g)P=D_1(g)AP=0\ \forall g \in G$$
But because $D_2$ is irreducible, the invariant subspace must be the whole space (by the definition of irreducibility), so $A$ must vanish on the whole space: if $A$ annihilates one state, it must annihilate them all. A similar argument shows that $A$ vanishes if there is a $\left| \nu \right\rangle$ that annihilates $A$ on the left. If no vector annihilates $A$ on either side, then it must be an invertible square matrix. It must be square, because, for example, if the number of rows were larger than the number of columns, then the rows could not be a complete set of states, and there would be a vector that annihilates $A$ on the right. A square matrix is invertible unless its determinant vanishes. But if the determinant vanishes, then the set of homogeneous linear equations $$A \left| \mu \right\rangle =0$$ has a nontrivial solution, which again means that there is a vector that annihilates $A$. But if $A$ is square and invertible, then $$A^{-1}D_1(g)A=D_2(g)\ \forall g\in G$$ so $D_1$ and $D_2$ are equivalent, contrary to assumption.$\blacksquare$

Theorem (Schur's Lemma 2): If $D(g)A=AD(g)\ \forall g \in G$ where $D$ is a finite dimensional irreducible representation, then $A\propto I$
Proof) Any finite dimensional matrix has at least one eigenvalue, because the characteristic equation $\det (A-\lambda I)=0$ has at least one root, and then we can solve the homogeneous linear equations for the components of the eigenvector $\left| \mu \right\rangle$. But then $D(g)(A-\lambda I)=(A-\lambda I)D(g)\ \forall g \in G$ and $(A-\lambda I)\left| \mu \right\rangle = 0$. Thus the same argument we used in the proof of the previous theorem implies $(A-\lambda I)=0$.$\blacksquare$

This means that the form of the basis states of an irreducible representation is unique.
(Later, this will be used in the Wigner-Eckart theorem, which constrains the matrix elements of physical operators.)
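Schur's lemma 2 can be illustrated numerically: averaging an arbitrary operator $A$ over the group, $T=\frac{1}{N}\sum_g D(g)AD(g)^{-1}$, produces an operator that commutes with every $D(g)$, so for an irreducible $D$ it must be $\frac{\mbox{Tr}A}{n}\,I$. A sketch for the 2-dimensional irreducible representation of $S_3$ (the explicit matrices and test operator are my own choices):

```python
import math

# Group average of A over the 2-dim irrep of S_3 must be (Tr A / 2) * I.
c, s = math.cos(2*math.pi/3), math.sin(2*math.pi/3)
r = [[c, -s], [s, c]]            # rotation by 120 degrees (order 3)
f = [[1.0, 0.0], [0.0, -1.0]]    # a reflection (order 2)

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def inv(A):                      # 2x2 inverse
    d = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [[A[1][1]/d, -A[0][1]/d], [-A[1][0]/d, A[0][0]/d]]

e = [[1.0, 0.0], [0.0, 1.0]]
r2 = matmul(r, r)
group = [e, r, r2, f, matmul(r, f), matmul(r2, f)]   # all 6 elements of S_3

A = [[1.0, 2.0], [3.0, 4.0]]     # an arbitrary operator
T = [[sum(matmul(matmul(D, A), inv(D))[i][j] for D in group)/6
      for j in range(2)] for i in range(2)]

lam = (A[0][0] + A[1][1]) / 2    # Tr(A) / dimension
assert all(abs(T[i][j] - (lam if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
print("group average of A is", lam, "times the identity")
```

The off-diagonal parts of $A$ are completely averaged away, exactly as the lemma predicts.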

Orthogonality Relations

Theorem (Orthogonality Relation): For the matrix elements of irreducible representations, $$\sum_{g\in G} \frac{n_a}{N} [D_a(g^{-1})]_{kj}[D_b(g)]_{lm}=\delta_{ab}\delta_{jl}\delta_{km}$$
Proof) Consider the following linear operator $$A_{jl}^{ab}\equiv \sum_{g\in G} D_a(g^{-1})\left| a, j\right\rangle \left\langle b, l \right| D_b (g)$$ where $D_a$ and $D_b$ are finite dimensional irreducible representations of $G$. Now look at $$D_a(g_1)A_{jl}^{ab} = \sum_{g\in G} D_a(g_1) D_a(g^{-1})\left| a, j \right\rangle \left\langle b, l \right| D_b(g) = \sum_{g\in G} D_a(g_1 g^{-1})\left| a, j \right\rangle \left\langle b, l\right| D_b(g) = \sum_{g\in G} D_a ((g g_1^{-1})^{-1})\left| a, j\right\rangle \left\langle b, l \right|D_b(g)$$
Now let $g'=gg_1^{-1}$ $$=\sum_{g'\in G} D_a(g'^{-1})\left| a, j\right\rangle \left\langle b, l \right| D_b (g'g_1) = \sum_{g'\in G} D_a(g'^{-1})\left| a, j \right\rangle \left\langle b, l \right| D_b(g')D_b(g_1) = A_{jl}^{ab} D_b(g_1)$$
Now Schur's lemma implies $A_{jl}^{ab}=0$ if $D_a$ and $D_b$ are different, and further that if they are the same (remember that we have chosen a canonical form for each representation, so equivalent representations are written in exactly the same way) $A_{jl}^{ab} \propto I$. Thus we can write $$A_{jl}^{ab} \equiv \sum_{g\in G} D_a (g^{-1})\left| a, j\right\rangle \left\langle b, l \right| D_b(g) = \delta_{ab} \lambda_{jl}^a I$$
To compute $\lambda_{jl}^a$, compute the trace of $A_{jl}^{ab}$ in two different ways. We can write $$\mbox{Tr} A_{jl}^{ab}=\delta_{ab} \mbox{Tr}(\lambda_{jl}^a I)=\delta_{ab} \lambda^a_{jl} \mbox{Tr} I = \delta_{ab}\lambda_{jl}^a n_a$$ where $n_a$ is the dimension of $D_a$. But we can use the cyclic property of the trace and the fact that $A_{jl}^{ab} \propto \delta_{ab}$ to write $$\mbox{Tr} A_{jl}^{ab} = \delta_{ab} \sum_{g\in G} \left\langle a, l \right| D_a(g) D_a(g^{-1})\left| a, j \right\rangle = N \delta_{ab} \delta_{jl}$$ where $N$ is the order of the group. Thus $\lambda_{jl}^a = N\delta_{jl}/n_a$ and we have shown $$\sum_{g\in G} D_a(g^{-1})\left| a, j \right\rangle \left\langle b, l\right| D_b(g) = \frac{N}{n_a} \delta_{ab}\delta_{jl}\, I$$ Taking matrix elements of this operator equation gives the orthogonality relation stated above.$\blacksquare$


For unitary irreducible representations, $[D_a(g^{-1})]_{kj}=[D_a(g)^\dagger]_{kj}=[D_a(g)]_{jk}^*$, so we can write $$\sum_{g\in G} \frac{n_a}{N} [D_a(g)]_{jk}^*[D_b(g)]_{lm}=\delta_{ab}\delta_{jl}\delta_{km}$$
With proper normalization, the orthonormal functions are $$\sqrt{\frac{n_a}{N}}[D_a(g)]_{jk}$$
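The unitary form of the orthogonality relation is easy to verify numerically. A sketch for $Z_3$, whose three irreducible representations are 1-dimensional, $D_a(k)=e^{2\pi i a k/3}$ (so $n_a=1$, $N=3$):

```python
import cmath

# Orthogonality of the three 1-dim unitary irreps of Z_3:
# sum_g (n_a/N) D_a(g)^* D_b(g) = delta_ab, with D_a(k) = exp(2*pi*i*a*k/3).
N = 3
def D(a, k):
    return cmath.exp(2j * cmath.pi * a * k / N)

for a in range(N):
    for b in range(N):
        total = sum(D(a, k).conjugate() * D(b, k) for k in range(N)) / N
        expected = 1.0 if a == b else 0.0
        assert abs(total - expected) < 1e-12
print("sum_g (n_a/N) D_a(g)* D_b(g) = delta_ab holds for Z_3")
```

For $a\ne b$ the sum is a geometric series of cube roots of unity, which vanishes.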

These matrix elements also form a complete set of functions on $G$. Since the group elements form an orthonormal basis of the space of the regular representation, $$F(g)=\left\langle F|g\right\rangle = \left\langle F \right| D_R(g) \left| e \right\rangle $$ where $\left\langle F \right| = \sum_{g'\in G} F(g')\left\langle g'\right|$ and $D_R$ is the regular representation. Thus an arbitrary $F(g)$ can be written as a linear combination of the matrix elements of the regular representation, and since the regular representation is completely reducible, these are in turn linear combinations of matrix elements of irreducible representations.


Theorem: The matrix elements of the unitary, irreducible representations of $G$ are a complete orthonormal set for the vector space of the regular representation, or alternatively, for functions of $g\in G$.

This means the matrix elements of the irreducible representations form an orthogonal basis of the vector space of the regular representation.

Counting the basis functions, the order of the group $N$ is the sum of the squares of the dimensions $n_i$ of the irreducible representations: $$N=\sum_i n_i^2$$
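This counting can be checked explicitly for $S_3$: the trivial and sign irreps contribute one function each, and the 2-dimensional irrep contributes four matrix elements, giving $6=1^2+1^2+2^2$ orthonormal functions on the 6-element group. A sketch (the explicit matrices are my own parametrization):

```python
import math

# The 6 normalized matrix-element functions sqrt(n_a/N) [D_a(g)]_{jk} of S_3
# form an orthonormal set on the group: N = 1^2 + 1^2 + 2^2 = 6.
c, s = math.cos(2*math.pi/3), math.sin(2*math.pi/3)
r = [[c, -s], [s, c]]                  # rotation by 120 degrees
f = [[1.0, 0.0], [0.0, -1.0]]          # a reflection
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
e = [[1.0, 0.0], [0.0, 1.0]]
r2 = matmul(r, r)
group = [e, r, r2, f, matmul(r, f), matmul(r2, f)]
signs = [1, 1, 1, -1, -1, -1]          # sign irrep: reflections are odd

# rows: group elements; columns: the 6 normalized matrix-element functions
M = []
for D, sgn in zip(group, signs):
    row = [math.sqrt(1/6) * 1.0, math.sqrt(1/6) * sgn]          # trivial, sign
    row += [math.sqrt(2/6) * D[j][k] for j in range(2) for k in range(2)]
    M.append(row)

# orthonormality of the columns (real matrices, so no conjugation needed)
for p in range(6):
    for q in range(6):
        dot = sum(M[g][p] * M[g][q] for g in range(6))
        assert abs(dot - (1.0 if p == q else 0.0)) < 1e-9
print("the 6 = 1^2 + 1^2 + 2^2 matrix-element functions are orthonormal on S_3")
```

Since six orthonormal functions span the 6-dimensional space of functions on $S_3$, the set is also complete.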

Characters

Definition: The characters $\chi_D(g)$ of a representation $D$ are the traces of the linear operators of the representation or their matrix elements: $$\chi_D(g)\equiv \mbox{Tr}D(g)=\sum_i [D(g)]_{ii}$$

Characters are useful for identifying representations, because:
1. They are unchanged by similarity transformations.

2. They satisfy an orthonormality condition. Setting $j=k$ and $l=m$ in the orthogonality relation and summing gives $$\sum_{g\in G,\ j=k,\ l=m} \frac{1}{N} [D_a(g)]_{jk}^*[D_b(g)]_{lm}=\sum_{j=k,\ l=m} \frac{1}{n_a} \delta_{ab} \delta_{jl} \delta_{km} = \delta_{ab}$$ or $$\frac{1}{N}\sum_{g\in G} \chi_{D_a}(g)^*\chi_{D_b}(g)=\delta_{ab}$$

3. They are constant on each conjugacy class, and form a complete basis for the functions that are constant on conjugacy classes:
Suppose $F(g_1)$ is such a function. Then it can be expanded in terms of the matrix elements of the irreducible representations: $$F(g_1)=\sum_{a,j,k} c_{jk}^a [D_a(g_1)]_{jk}$$ but since $F$ is constant on conjugacy classes, we can write it as $$F(g_1)=\frac{1}{N} \sum_{g\in G} F(g^{-1}g_1 g) = \frac{1}{N} \sum_{a,j,k} c_{jk}^a [D_a(g^{-1}g_1g)]_{jk}$$ and thus $$F(g_1)=\frac{1}{N}\sum_{a,j,k,g,l,m} c_{jk}^a [D_a(g^{-1})]_{jl}[D_a(g_1)]_{lm}[D_a(g)]_{mk}$$
But now we can do the sum over $g$ explicitly using the orthogonality relation, $$F(g_1)=\sum_{a,j,k,l,m} \frac{1}{n_a} c_{jk}^a [D_a(g_1)]_{lm} \delta_{jk}\delta_{lm}$$ or $$F(g_1)=\sum_{a,j,l} \frac{1}{n_a} c_{jj}^a [D_a(g_1)]_{ll}=\sum_{a,j} \frac{1}{n_a} c_{jj}^a\, \chi_a (g_1)$$ $\blacksquare$
The characters, $\chi_a(g)$, of the independent irreducible representations form a complete, orthonormal basis set for the functions that are constant on conjugacy classes. Thus the number of irreducible representations is equal to the number of conjugacy classes.
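The character orthonormality can be verified on a concrete character table. A sketch for $S_3$, which has three conjugacy classes (identity; the two rotations; the three reflections) and therefore three irreducible representations:

```python
# Character table of S_3: rows are irreps, columns are conjugacy classes
# {e} (1 element), {r, r^2} (2 elements), {reflections} (3 elements).
k = [1, 2, 3]                    # class sizes; N = 6
chi = {
    "trivial": [1,  1,  1],
    "sign":    [1,  1, -1],
    "2-dim":   [2, -1,  0],
}
N = sum(k)

# orthonormality: (1/N) sum_alpha k_alpha chi_a(alpha)* chi_b(alpha) = delta_ab
names = list(chi)
for a in names:
    for b in names:
        dot = sum(k[i] * chi[a][i] * chi[b][i] for i in range(3)) / N
        assert dot == (1 if a == b else 0)
print("characters of S_3 are orthonormal; #irreps = #classes =", len(names))
```

Weighting each class by its size $k_\alpha$ is exactly the sum over $g\in G$ restricted to class functions.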

4. There is an orthogonality condition for a sum over representations:
Label the conjugacy classes by an integer $\alpha$, and let $k_\alpha$ be the number of elements in class $\alpha$. Then define the matrix $V$ with matrix elements $$V_{\alpha a} = \sqrt{\frac{k_\alpha}{N}} \chi_{D_a} (g_\alpha)$$ where $g_\alpha$ is any element of conjugacy class $\alpha$. Then the orthonormality relation can be written as $V^\dagger V=1$. But $V$ is a square matrix, so it is unitary, and thus we also have $VV^\dagger=1$, or $$\sum_a \chi_{D_a}(g_\alpha)^*\chi_{D_a}(g_\beta)=\frac{N}{k_\alpha} \delta_{\alpha\beta}$$ $\blacksquare$
Let $D$ be any representation (not necessarily irreducible). In its completely reduced form, it will contain each irreducible representation $D_a$ some integer number of times, $m_a^D$. We can compute $m_a^D$ simply by using the orthogonality relation for the characters: $$\frac{1}{N} \sum_{g\in G} \chi_{D_a}(g)^* \chi_D(g)=m_a^D$$ The point is that $D$ is the direct sum $$\bigoplus_a \underbrace{D_a \oplus \cdots \oplus D_a}_{m_a^D\ \mbox{times}}$$
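As a worked example (my own), the regular representation of $S_3$ has $\chi_{D_R}(e)=N=6$ and $\chi_{D_R}(g)=0$ otherwise, so the multiplicity formula gives $m_a^{D_R}=\chi_a(e)=n_a$: each irrep appears as many times as its dimension.

```python
# Decompose the regular representation of S_3 using characters.
k = [1, 2, 3]                                   # class sizes; N = 6
chi_reg = [6, 0, 0]                             # character of the regular rep
irreps = {"trivial": [1, 1, 1], "sign": [1, 1, -1], "2-dim": [2, -1, 0]}
N = 6

# m_a = (1/N) sum_g chi_a(g)^* chi_reg(g), summed class by class
m = {a: sum(k[i] * irreps[a][i] * chi_reg[i] for i in range(3)) // N
     for a in irreps}
assert m == {"trivial": 1, "sign": 1, "2-dim": 2}   # m_a equals n_a
print("regular rep of S_3 =", " + ".join(f"{m[a]} x {a}" for a in m))
```

Note $1\cdot 1 + 1\cdot 1 + 2\cdot 2 = 6$, consistent with $N=\sum_a n_a^2$.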

5. They can be used not only to count how many irreducible representations appear in a particular reducible one, but also to explicitly decompose the reducible representation into its irreducible components: If $D$ is an arbitrary representation, the sum $$P_a = \frac{n_a}{N} \sum_{g\in G} \chi_{D_a}(g)^* D(g)$$ is a projection operator onto the subspace that transforms under the representation $a$.
To check this, set $j=k$ and sum in the original orthogonality relation: $$\frac{n_a}{N} \sum_{g\in G} \chi_{D_a}(g)^*[D_b(g)]_{lm} = \delta_{ab}\delta_{lm}$$ Thus when $D$ is written in block diagonal form, the sum gives 1 on the subspaces that transform like $D_a$ and 0 on all the rest, so it is the projection operator. $\blacksquare$
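The projector $P_a$ can be tested on the regular representation of $Z_3$ (a sketch; the one-dimensional characters and helpers are my own): each $P_a$ should be idempotent, and since each irrep appears once with $n_a=1$, each should have trace 1.

```python
import cmath

# Projectors P_a = (n_a/N) sum_g chi_a(g)^* D_R(g) for the regular rep of Z_3,
# with 1-dim irreps chi_a(k) = exp(2*pi*i*a*k/3).
N = 3
def D(g):   # regular representation: permutation matrices
    return [[1 if i == (g + j) % N else 0 for j in range(N)] for i in range(N)]
def chi(a, g):
    return cmath.exp(2j * cmath.pi * a * g / N)
def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

for a in range(N):
    P = [[sum(chi(a, g).conjugate() * D(g)[i][j] for g in range(N)) / N
          for j in range(N)] for i in range(N)]
    P2 = matmul(P, P)
    # idempotent: P^2 = P
    assert all(abs(P2[i][j] - P[i][j]) < 1e-12 for i in range(N) for j in range(N))
    # trace = n_a * m_a = 1: each irrep appears once in the regular rep
    assert abs(sum(P[i][i] for i in range(N)) - 1) < 1e-12
print("P_a are idempotent projectors with trace 1 for a = 0, 1, 2")
```

The three projectors resolve the identity, $\sum_a P_a = I$, decomposing the 3-dimensional regular representation into its three 1-dimensional pieces.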


Eigenstates

Theorem: If a hermitian operator, $H$, commutes with all the elements, $D(g)$, of a representation of the group $G$, then you can choose the eigenstates of $H$ to transform according to irreducible representations of $G$. If an irreducible representation appears only once in the Hilbert space, every state in the irreducible representation is an eigenstate of $H$ with the same eigenvalue.
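A minimal illustration (my own, with hypothetical coefficients): any hermitian $H$ commuting with the regular representation of $Z_3$ is a circulant matrix, and the vectors $v_a=(1,\omega^a,\omega^{2a})/\sqrt{3}$ with $\omega=e^{2\pi i/3}$, which each carry a 1-dimensional irrep, are automatically eigenvectors of every such $H$.

```python
import cmath

# H = h0*D(0) + h1*D(1) + h2*D(2) commutes with the regular rep of Z_3
# (it is circulant); the irrep vectors v_a are its eigenvectors.
N = 3
h = [2.0, 0.5, 0.5]                           # hypothetical real coefficients
H = [[h[(i - j) % N] for j in range(N)] for i in range(N)]   # circulant
w = cmath.exp(2j * cmath.pi / N)

for a in range(N):
    v = [w**(a*k) / cmath.sqrt(3) for k in range(N)]          # irrep-a vector
    Hv = [sum(H[i][j] * v[j] for j in range(N)) for i in range(N)]
    lam = sum(h[g] * w**(-a*g) for g in range(N))             # its eigenvalue
    assert all(abs(Hv[i] - lam * v[i]) < 1e-9 for i in range(N))
print("each irrep vector v_a is an eigenvector of H")
```

Note that here every irrep of $Z_3$ appears exactly once in the regular representation, which is why each $v_a$ is forced to be an eigenstate, exactly as the theorem says.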

Tensor Products

Definition: Suppose that $D_1$ is an $m$ dimensional representation acting on a space with basis vectors $\left| j \right\rangle$ for $j=1$ to $m$, and $D_2$ is an $n$ dimensional representation acting on a space with basis vectors $\left| x \right\rangle$ for $x=1$ to $n$. We can form an $mn$ dimensional space, called the tensor product space, by taking basis vectors labeled by both $j$ and $x$ in an ordered pair $\left| j,x\right\rangle$. The tensor product representation $D_1 \otimes D_2$ acts on this space, and its matrix elements are products of those of $D_1(g)$ and $D_2(g)$: $$\left\langle j,x \right| D_{D_1\otimes D_2}(g) \left| k,y \right\rangle \equiv \left\langle j\right| D_1(g)\left| k\right\rangle \left\langle x \right| D_2(g)\left| y \right\rangle$$
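In matrix language this is the Kronecker product. A sketch (my own example) for the 2-dimensional irrep of $S_3$ tensored with itself, checking both the homomorphism property and that characters multiply, $\chi_{D\otimes D}(g)=\chi_D(g)^2$:

```python
import math

# Tensor (Kronecker) product of the 2-dim irrep of S_3 with itself.
c, s = math.cos(2*math.pi/3), math.sin(2*math.pi/3)
r = [[c, -s], [s, c]]
f = [[1.0, 0.0], [0.0, -1.0]]
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
e = [[1.0, 0.0], [0.0, 1.0]]
group = [e, r, matmul(r, r), f, matmul(r, f), matmul(matmul(r, r), f)]

def kron(A, B):   # <j,x| A(x)B |k,y> = <j|A|k><x|B|y>
    n1, n2 = len(A), len(B)
    return [[A[j1][k1] * B[j2][k2] for k1 in range(n1) for k2 in range(n2)]
            for j1 in range(n1) for j2 in range(n2)]
def tr(A):
    return sum(A[i][i] for i in range(len(A)))

# homomorphism: (D(x)D)(g1) (D(x)D)(g2) = (D(x)D)(g1 g2)
for g1 in group:
    for g2 in group:
        left = matmul(kron(g1, g1), kron(g2, g2))
        right = kron(matmul(g1, g2), matmul(g1, g2))
        assert all(abs(left[i][j] - right[i][j]) < 1e-9
                   for i in range(4) for j in range(4))

# characters multiply: chi_{D(x)D}(g) = chi_D(g)^2
for D in group:
    assert abs(tr(kron(D, D)) - tr(D)**2) < 1e-9
print("D (x) D is a 4-dim representation with chi = chi^2")
```

The resulting 4-dimensional representation is reducible; its character $(4,1,0)$ on the classes of $S_3$ decomposes via the multiplicity formula into the trivial, sign, and 2-dimensional irreps.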


Reference

Howard Georgi - Lie Algebras in Particle Physics