Tensor Operator and SU(2)

This article is part of the Lie Group & Representation series.



Definition: A tensor operator is a set of operators that transforms under commutation with the generators of some Lie algebra like an irreducible representation of the algebra.


A tensor operator transforming under the spin-s representation of SU(2) consists of a set of 2s+1 operators, O_l^s for l=-s to s, such that [J_a,O_l^s]=O_m^s[J_a^s]_{ml}


Orbital Angular Momentum

Consider a particle in a spherically symmetric potential. If the particle has no spin, then J_a is the orbital angular momentum operator, J_a = L_a = \epsilon_{abc} r_b p_c

The position operator is a tensor operator: it transforms under the adjoint representation, [J_a, r_b] = \epsilon_{acd} [r_cp_d,r_b]=-i\epsilon_{acd}r_c\delta_{bd}=-i\epsilon_{acb}r_c=r_c[J_a^{\mathrm{adj}}]_{cb} where J^{\mathrm{adj}} is the adjoint representation, which is the spin-1 representation.
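As a quick cross-check (not part of the original text), the adjoint matrices [J_a^{\mathrm{adj}}]_{bc} = -i\epsilon_{abc} can be verified numerically to satisfy the su(2) algebra [J_a, J_b] = i\epsilon_{abc} J_c, so they really do furnish a three-dimensional (spin-1) representation. A minimal numpy sketch, with illustrative names:

    import numpy as np

    # Levi-Civita symbol eps[a, b, c]
    eps = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[a, b, c] = 1.0
        eps[a, c, b] = -1.0

    # Adjoint-representation generators: [J_a]_{bc} = -i * eps_{abc}
    J = [-1j * eps[a] for a in range(3)]

    # Check the su(2) algebra [J_a, J_b] = i * eps_{abc} J_c
    for a in range(3):
        for b in range(3):
            comm = J[a] @ J[b] - J[b] @ J[a]
            expected = sum(1j * eps[a, b, c] * J[c] for c in range(3))
            assert np.allclose(comm, expected)
    print("adjoint matrices satisfy the su(2) commutation relations")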



Using Tensor Operators

Suppose that we are given a set of operators, \Omega_x for x=1 to 2s+1, that transforms according to a representation D that is equivalent to the spin-s representation of SU(2): [J_a,\Omega_x]=\Omega_y [J_a^D]_{yx}

Since by assumption, D is equivalent to the spin-s representation, we can find a matrix S such that SJ_a^D S^{-1}=J_a^s or in terms of matrix elements [S]_{lx}[J_a^D]_{xy}[S^{-1}]_{yl'}=[J_a^s]_{ll'}

Then we define a new set of operators O_l^s=\Omega_y [S^{-1}]_{yl}\quad\mbox{for } l=-s\mbox{ to } s

Now O_l^s satisfies [J_a, O_l^s]=[J_a,\Omega_y][S^{-1}]_{yl} = \Omega_z [J_a^D]_{zy} [S^{-1}]_{yl}\\ = \Omega_z [S^{-1}]_{zl'}[S]_{l'z'}[J_a^D]_{z'y}[S^{-1}]_{yl}=O_{l'}^s[J_a^s]_{l'l} which is what we want. 

Note that it is particularly simple for J_3, because in our standard basis, in which the index l labels the J_3 value, J_3^1 (or J_3^s for any s) is a diagonal matrix: [J_3^s]_{l'l} = l \delta_{ll'}\quad\mbox{for }l,l'=-s \mbox{ to } s Thus [J_3,O_l^s]=O_{l'}^s [J_3^s]_{l'l} = l O_l^s
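For concreteness, here is a minimal sketch (the helper name spin_matrices is my own) that constructs the standard spin-s matrices in this basis, with J_3 diagonal and its entries running from s down to -s:

    import numpy as np

    def spin_matrices(s):
        """Standard spin-s matrices in the |s, m> basis, ordered m = s, s-1, ..., -s."""
        m = np.arange(s, -s - 1, -1)                 # J_3 eigenvalues
        J3 = np.diag(m)
        Jp = np.zeros((len(m), len(m)))
        for k in range(1, len(m)):                   # J^+ |s,m> = sqrt(s(s+1) - m(m+1)) |s,m+1>
            Jp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
        return J3, Jp, Jp.T                          # J^- is the transpose of J^+

    J3, Jp, Jm = spin_matrices(1)
    print(np.diag(J3))    # [ 1.  0. -1.]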


In practice, it is usually not necessary to find the matrix S explicitly. If we can find any linear combination of the \Omega_x which has a definite value of J_3 (that means that it is proportional to its commutator with J_3), we can take that to be a component of O^s, and then build up all the other O^s components by applying raising and lowering operators.

For the position operator it is easiest to start by finding the operator r_0. Since [J_3,r_3]=0, we know that r_3 has J_3=0 and therefore that r_3\propto r_0. Thus we take r_0=r_3 Then the commutation relations for the spin-1 raising and lowering operators give the rest: [J^\pm,r_0]=\sqrt{2}\, r_{\pm 1}, \quad r_{\pm 1} = \mp (r_1 \pm ir_2)/\sqrt{2}
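This change of basis is an explicit example of the similarity matrix of the previous section. In the sketch below (my own construction, not from the text), the columns of the unitary matrix U hold the coefficients of r_{+1}, r_0, r_{-1} in terms of (r_1, r_2, r_3), so U plays the role of S^{-1}; conjugating the adjoint generators with U reproduces the standard spin-1 matrices:

    import numpy as np

    # Adjoint (Cartesian) spin-1 generators: [J_a]_{bc} = -i * eps_{abc}
    eps = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[a, b, c], eps[a, c, b] = 1.0, -1.0
    Jadj = [-1j * eps[a] for a in range(3)]
    J3_adj = Jadj[2]
    Jp_adj = Jadj[0] + 1j * Jadj[1]

    # Columns of U: coefficients of r_{+1}, r_0, r_{-1} in terms of (r_1, r_2, r_3)
    s2 = np.sqrt(2)
    U = np.array([[-1 / s2, 0, 1 / s2],
                  [-1j / s2, 0, -1j / s2],
                  [0, 1, 0]])

    # Standard spin-1 matrices in the basis (l = +1, 0, -1)
    J3_std = np.diag([1.0, 0.0, -1.0])
    Jp_std = np.array([[0, s2, 0], [0, 0, s2], [0, 0, 0]])

    assert np.allclose(U.conj().T @ J3_adj @ U, J3_std)
    assert np.allclose(U.conj().T @ Jp_adj @ U, Jp_std)
    print("r_{+1}, r_0, r_{-1} carry the standard spin-1 representation")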


The Wigner-Eckart Theorem

The product O_l^s \left| j,m,\alpha \right\rangle transforms as J_a O_l^s \left| j,m,\alpha \right\rangle = [J_a,O_l^s] \left| j,m,\alpha \right\rangle + O_l^s J_a \left| j,m,\alpha \right\rangle \\ = O_{l'}^s \left| j,m,\alpha \right\rangle [J_a^s]_{l'l} + O_l^s \left| j,m',\alpha \right\rangle [J_a^j]_{m'm} This is the transformation law for a tensor product of spin s and spin j, s\otimes j. Because we are using the standard basis for the states and operators, in which J_3 is diagonal, this is particularly simple for the generator J_3: J_3O_l^s \left| j,m,\alpha \right\rangle = (l+m) O_l^s \left| j,m,\alpha \right\rangle The J_3 value of the product of a tensor operator with a state is just the sum of the J_3 values of the operator and the state.
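This additivity can be seen directly at the matrix level: on the tensor product, J_3 acts as J_3^s \otimes 1 + 1 \otimes J_3^j, so its eigenvalues are the sums l+m. A minimal numpy sketch for s=1, j=1/2 (my own illustration):

    import numpy as np

    J3_s = np.diag([1.0, 0.0, -1.0])        # spin 1,   l = +1, 0, -1
    J3_j = np.diag([0.5, -0.5])             # spin 1/2, m = +1/2, -1/2

    # J_3 on the product O_l^s |j,m>:  J_3^s (x) 1 + 1 (x) J_3^j
    J3_tot = np.kron(J3_s, np.eye(2)) + np.kron(np.eye(3), J3_j)
    print(np.diag(J3_tot))   # [ 1.5  0.5  0.5 -0.5 -0.5 -1.5], i.e. the sums l + m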


The product of the tensor operator and the ket behaves under the algebra just like the tensor product of two kets. Thus we can decompose it into irreducible representations in exactly the same way, using the highest weight procedure. That is, we note that O_s^s \left| j,j,\alpha \right\rangle with J_3=j+s is the highest weight state. We can lower it to construct the rest of the spin j+s representation. Then we can find the linear combination of J_3=j+s-1 states that is the highest weight of the spin j+s-1 representation, and lower it to get the entire representation, and so on. In this way, we find explicit representations for the states of the irreducible components of the tensor product in terms of linear combinations of the O_l^s\left| j,m,\alpha \right\rangle
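The following sketch (my own, using s=1 and j=1/2 in anticipation of the example below, with the basis ordered highest weight first) carries out one step of the highest weight procedure numerically: it builds the total lowering operator on the tensor product and applies it to the highest weight state.

    import numpy as np

    def lowering(s):
        """J^- for spin s in the basis m = s, ..., -s."""
        m = np.arange(s, -s - 1, -1)
        Jm = np.zeros((len(m), len(m)))
        for k in range(len(m) - 1):          # J^- |s,m> = sqrt(s(s+1) - m(m-1)) |s,m-1>
            Jm[k + 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] - 1))
        return Jm

    s, j = 1, 0.5
    Jm_tot = np.kron(lowering(s), np.eye(2)) + np.kron(np.eye(3), lowering(j))

    # highest weight state O_{+1}^1 |1/2, 1/2>: index = (l slot)*2 + (m slot)
    top = np.zeros(6)
    top[0] = 1.0

    v = Jm_tot @ top
    v /= np.linalg.norm(v)                   # the |3/2, 1/2> combination
    print(np.round(v, 3))
    # [0. 0.577 0.816 0. 0. 0.]: sqrt(1/3) on O_{+1}|1/2,-1/2>, sqrt(2/3) on O_0|1/2,+1/2>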

The full decomposition is \{ j \} \otimes \{ s \} = \bigoplus_{l=\left| s -j\right|}^{s+j} \{l\} where \{k\} denotes the spin-k representation.

In this decomposition, each representation from j+s to \left| j-s\right| appears exactly once. We can write the result of the highest weight analysis as follows: \sum_l O_l^s \left| j,M-l,\alpha\right\rangle \left\langle s,j,l,M-l | J,M \right\rangle = k_J \left| J,M\right\rangle Here \left| J,M\right\rangle is a normalized state that transforms like the J_3=M component of the spin J representation and k_J is an unknown constant for each J (but does not depend on M). The coefficients \left\langle s,j,l,M-l | J,M \right\rangle are determined by the highest weight construction, and can be evaluated from the tensor product of kets, where all the normalizations are known and the constants k_J are equal to 1: \sum_l \left| s, l \right\rangle \left| j,M-l\right\rangle \left\langle s,j,l,M-l | J,M\right\rangle = \left| J,M\right\rangle

One way to prove that the coefficients can be taken to be the same in both  equations is to notice that in both cases, J^+\left| J,J\right\rangle must vanish and that this condition determines the coefficients \left\langle s, j, l, J-l | J,J\right\rangle up to a multiplicative constant. Since the transformation properties of O_l^s \left| j,m\right\rangle and \left| s,l\right\rangle \left| j,m\right\rangle are identical, the coefficients must be proportional. The only difference is the factor of k_J.

We can invert the above equation and express the original product states as linear combinations of the states with definite total spin J: O_l^s \left| j,m,\alpha \right\rangle = \sum_{J=\left| j-s\right|}^{j+s} \left\langle J,l+m | s,j,l,m\right\rangle k_J \left| J,l+m\right\rangle


The coefficients \left\langle J,M | s,j,l,M-l\right\rangle are thus entirely determined by the algebra, up to some choices of the phases of the states. Once we have a convention for fixing these phases, we can make a table of these coefficients once and for all, and be done with it. The notation \left\langle J, l+m | s,j,l,m\right\rangle just means the coefficient of \left| J,l+m\right\rangle in the product \left| s,l\right\rangle \left| j,m\right\rangle. These are called Clebsch-Gordan coefficients.
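Such tables are also built into computer algebra systems. As an illustration (assuming sympy's CG class, which follows the standard Condon-Shortley phase convention), the 1 \otimes 1/2 coefficients that appear in the example below can be obtained like this:

    from sympy import Rational
    from sympy.physics.quantum.cg import CG

    half = Rational(1, 2)

    # <1, 0; 1/2, 1/2 | 3/2, 1/2>  and  <1, 1; 1/2, -1/2 | 3/2, 1/2>
    c1 = CG(1, 0, half, half, Rational(3, 2), half).doit()
    c2 = CG(1, 1, half, -half, Rational(3, 2), half).doit()
    print(c1, c2)   # sqrt(6)/3 = sqrt(2/3),  sqrt(3)/3 = sqrt(1/3)

    assert abs(float(c1) - (2 / 3) ** 0.5) < 1e-12
    assert abs(float(c2) - (1 / 3) ** 0.5) < 1e-12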


The Clebsch-Gordan coefficients are all group theory. The physics comes in when we re-express the \left| J,l+m\right\rangle in terms of the Hilbert space basis states \left| J,l+m,\beta\right\rangle: k_J \left| J,l+m\right\rangle = \sum_\beta k_{\alpha\beta} \left| J,l+m,\beta \right\rangle We have absorbed the unknown coefficients k_J into the equally unknown coefficients k_{\alpha\beta}. These depend on \alpha, j, O^s and s, because the original products do, and on \beta and J, of course. But they do not depend at all on l or m, so we only need to know the matrix elements for one value of l and m. The k_{\alpha \beta} are called reduced matrix elements and denoted k_{\alpha\beta}=\left\langle J,\beta \right| O^s \left| j,\alpha \right\rangle

Putting all this together, we get the Wigner-Eckart theorem for matrix elements of tensor operators: \left\langle J,m',\beta \right| O_l^s \left| j,m,\alpha \right\rangle = \delta_{m',l+m} \left\langle J,l+m | s,j,l,m\right\rangle \cdot \left\langle J,\beta \right| O^s \left| j,\alpha \right\rangle


If we know any non-zero matrix element of a tensor operator between states of some given J,\beta and j,\alpha, we can compute all the others using the algebra. This sounds pretty amazing, but all that is really going on is that we can use the raising and lowering operators to go up and down within representations using pure group theory. Thus by clever use of the raising and lowering operators, we can compute any matrix element from another. The Wigner-Eckart theorem just expresses this formally.



Example

Suppose \left\langle 1/2, 1/2, \alpha \right| r_3 \left| 1/2, 1/2, \beta \right\rangle = A

Find \left\langle 1/2, 1/2, \alpha \right| r_1 \left| 1/2, -1/2, \beta \right\rangle = ?

First, since r_0 =r_3, \left\langle 1/2, 1/2, \alpha \right| r_0 \left| 1/2, 1/2, \beta \right\rangle = A

Then we know from the section Using Tensor Operators above that r_1 = \frac{1}{\sqrt{2}}(-r_{+1} + r_{-1})

Thus \left\langle 1/2, 1/2, \alpha \right| r_1 \left| 1/2, -1/2, \beta \right\rangle = \left\langle 1/2, 1/2, \alpha \right| \frac{1}{\sqrt{2}}(-r_{+1} + r_{-1})\left| 1/2, -1/2, \beta \right\rangle \\ = -\frac{1}{\sqrt{2}} \left\langle 1/2, 1/2, \alpha \right| r_{+1} \left| 1/2, -1/2, \beta \right\rangle where the r_{-1} term drops out because its J_3 value does not match that of the bra.

Now we could plug this into the formula, and you could find the Clebsch-Gordan coefficients in a table. Or just use what we have already done, decomposing 1/2 \otimes 1 into irreducible representations. For example, we know from the highest weight construction that \left| 3/2, 3/2 \right\rangle \equiv r_{+1} \left| 1/2, 1/2, \beta \right\rangle is a 3/2,3/2 state because it is the highest weight state that we can get as a product of an r_l operator acting on a \left| 1/2, m\right\rangle state. Then we can get the corresponding \left| 3/2, 1/2 \right\rangle state in the same representation by acting with the lowering operator J^-: \left| 3/2, 1/2 \right\rangle = \frac{1}{\sqrt{3}} J^- \left| 3/2, 3/2 \right\rangle \\ =\sqrt{\frac{2}{3}} r_0 \left| 1/2, 1/2, \beta \right\rangle + \sqrt{\frac{1}{3}} r_{+1} \left| 1/2, -1/2, \beta \right\rangle

But we know that this spin-3/2 state has zero matrix element with any spin-1/2 state, and thus 0 = \left\langle 1/2, 1/2, \alpha | 3/2, 1/2 \right\rangle \\ = \sqrt{\frac{2}{3}} \left\langle 1/2, 1/2, \alpha \right| r_0 \left| 1/2, 1/2, \beta \right\rangle + \sqrt{\frac{1}{3}}\left\langle 1/2, 1/2, \alpha\right| r_{+1} \left| 1/2, -1/2, \beta\right\rangle

so \left\langle 1/2, 1/2, \alpha \right| r_{+1} \left| 1/2, -1/2, \beta \right\rangle = -\sqrt{2} \left\langle 1/2, 1/2, \alpha \right| r_0 \left| 1/2, 1/2, \beta \right\rangle = -\sqrt{2} A

so \left\langle 1/2, 1/2, \alpha \right| r_1 \left| 1/2, -1/2, \beta \right\rangle = A
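As a cross-check (not part of the original derivation), the same answer follows from the Wigner-Eckart theorem with tabulated Clebsch-Gordan coefficients; the ratio of the two matrix elements is fixed by group theory alone. A sketch using sympy (assuming its standard Condon-Shortley phase convention):

    from sympy import Rational, sqrt, simplify
    from sympy.physics.quantum.cg import CG

    half = Rational(1, 2)

    # Wigner-Eckart: <1/2,1/2,a| r_l |1/2,m,b>  is proportional to  <1, l; 1/2, m | 1/2, 1/2>
    cg_p1 = CG(1, 1, half, -half, half, half).doit()   # l = +1, m = -1/2
    cg_0 = CG(1, 0, half, half, half, half).doit()     # l = 0,  m = +1/2

    # <1/2,1/2,a|r_{+1}|1/2,-1/2,b> / <1/2,1/2,a|r_0|1/2,1/2,b>
    ratio = simplify(cg_p1 / cg_0)
    print(ratio)                                        # -sqrt(2)

    # hence <1/2,1/2,a| r_1 |1/2,-1/2,b> = -(1/sqrt(2)) * ratio * A = A
    print(simplify(-ratio / sqrt(2)))                   # 1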

Although we did not need it here, we can also conclude that \left| 1/2, 1/2 \right\rangle = \sqrt{\frac{1}{3}} r_0 \left| 1/2, 1/2, \beta \right\rangle - \sqrt{\frac{2}{3}}r_{+1} \left| 1/2, -1/2, \beta\right\rangle is a 1/2,1/2 state. This statement is actually a little subtle, and shows the power of the algebra.

When we did this analysis for the tensor product of j=1 and j=1/2 states, we used the fact that the \left| 1/2,1/2\right\rangle state must be orthogonal to the \left| 3/2, 1/2\right\rangle state to find the form of the \left| 1/2, 1/2\right\rangle state. We cannot do this here, because we do not know from the symmetry alone how to determine the norms of the states r_l \left| 1/2, m\right\rangle.

However, we know from the analysis with the states, and from the fact that these objects transform in an analogous way, that J^+ \left| 1/2, 1/2\right\rangle = 0 Thus it is a 1/2,1/2 state, because it is the highest weight state in its representation.


There are several ways of approaching such questions. Here is another way. Consider the matrix elements


Making Tensor Operators


Products of Operators

The product of two tensor operators, O_{m_1}^{s_1} and O_{m_2}^{s_2}, in the spin-s_1 and spin-s_2 representations transforms under the tensor product representation s_1 \otimes s_2, because [J_a,O_{m_1}^{s_1} O_{m_2}^{s_2}]=[J_a, O_{m_1}^{s_1}]O_{m_2}^{s_2} + O_{m_1}^{s_1} [J_a, O_{m_2}^{s_2}] = O_{m'_1}^{s_1}O_{m_2}^{s_2} [J_a^{s_1}]_{m'_1m_1} + O_{m_1}^{s_1}O_{m'_2}^{s_2} [J_a^{s_2}]_{m'_2 m_2}

Thus the product can be decomposed into tensor operators using the highest weight procedure.
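To illustrate the decomposition (a hedged numerical sketch of my own, not a statement from the text): for two spin-1 tensor operators the product transforms like 1 \otimes 1 = 0 \oplus 1 \oplus 2, which can be seen by diagonalizing the total Casimir J\cdot J built from the Kronecker-sum generators; its eigenvalues l(l+1) come out with the right multiplicities.

    import numpy as np

    def spin_matrices(s):
        """Hermitian spin-s generators (J_1, J_2, J_3) in the basis m = s, ..., -s."""
        m = np.arange(s, -s - 1, -1)
        J3 = np.diag(m)
        Jp = np.zeros((len(m), len(m)))
        for k in range(1, len(m)):
            Jp[k - 1, k] = np.sqrt(s * (s + 1) - m[k] * (m[k] + 1))
        return [(Jp + Jp.T) / 2, (Jp - Jp.T) / (2j), J3]

    s1, s2 = 1, 1
    A, B = spin_matrices(s1), spin_matrices(s2)
    d1, d2 = 2 * s1 + 1, 2 * s2 + 1

    # total generators on the product: J_a^{s1} (x) 1 + 1 (x) J_a^{s2}
    Jtot = [np.kron(A[a], np.eye(d2)) + np.kron(np.eye(d1), B[a]) for a in range(3)]

    # eigenvalues of the Casimir J.J are l(l+1) for each irreducible piece
    casimir = sum(Ja @ Ja for Ja in Jtot)
    print(np.round(np.linalg.eigvalsh(casimir), 6))
    # [0. 2. 2. 2. 6. 6. 6. 6. 6.]  ->  1 (x) 1 = 0 (+) 1 (+) 2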


Note that, as usual, things are particularly simple for the generator J_3: [J_3,O_{m_1}^{s_1}O_{m_2}^{s_2}]=(m_1+m_2)O_{m_1}^{s_1}O_{m_2}^{s_2}

The J_3 value of the product of two tensor operators is just the sum of the J_3 values of the two operators in the product.



Reference

Howard Georgi - Lie Algebras in Particle Physics