Restriction and induction

Restriction and induction are an important pair to consider because each is both left and right adjoint to the other.

Definition of the category {$\mathbf{Rep}_G$}

Fix a field {$K$}. Given a group {$G$}, we define the category of representations {$\mathbf{Rep}_G$} as follows. An object is a pair {$(V,\phi_V)$} consisting of a vector space over {$K$} and a group homomorphism {$\phi_V:G\rightarrow GL(V,K)$}. A morphism is an equivariant map, which is to say, a linear map {$\beta:(V,\phi_V)\rightarrow (W,\psi_W)$} such that {$\beta \circ \phi_V(g)=\psi_W(g)\circ \beta$} for all {$g\in G$}, as in the commutative diagram below.

{$$\begin{matrix} & V & \overset{\beta}\longrightarrow & W \\ g\rightarrow\phi_V(g) & \downarrow & & \downarrow & \psi_W(g)\leftarrow g\\ & V & \overset{\beta}\longrightarrow & W \end{matrix}$$}

Note that the morphisms {$\beta$}, {$\phi_V(g)$}, {$\psi_W(g)$} in the commutative diagram above are all linear transformations. Indeed, {$\beta$} is fixed and serves for all {$g\in G$}. I think of it as an interspatial map from {$V$} to {$W$} that expresses the clockwork of {$\phi_V$} in {$GL(V,K)$} in terms of the clockwork of {$\psi_W$} in {$GL(W,K)$}. When {$\beta$} is invertible, {$\phi_V$} and {$\psi_W$} are said to be conjugate.

Note also that the morphism takes us from one representation to another representation, and yet it is an equivariant map, that is, a linear transformation that takes us from one vector space to another vector space.

A composition of equivariant maps {$\alpha:(U,\theta_U)\rightarrow(V,\phi_V)$} and {$\beta:(V,\phi_V)\rightarrow (W,\psi_W)$} yields an equivariant map {$\beta\circ\alpha:(U,\theta_U)\rightarrow (W,\psi_W)$} such that {$\beta \circ \alpha \circ \theta_U(g)=\psi_W(g) \circ \beta \circ \alpha$} for all {$g\in G$}.

{$$\begin{matrix} & U & \overset{\alpha}\longrightarrow & V & \overset{\beta}\longrightarrow & W \\ \theta_U(g) & \downarrow & & \downarrow & \phi_V(g) & \downarrow & \psi_W(g)\\ & U & \overset{\alpha}\longrightarrow & V & \overset{\beta}\longrightarrow & W \end{matrix}$$}

Examples of objects (representations) when {$G=S_3$}

Consider some examples of representations of {$S_3$} having elements {$()$}, {$(12)$}, {$(13)$}, {$(23)$}, {$(123)$}, {$(132)$}. Here is the multiplication table:

{$$\begin{matrix} \mathbf{\times} & \mathbf{()} & \mathbf{(12)} & \mathbf{(13)} & \mathbf{(23)} & \mathbf{(123)} & \mathbf{(132)} \\ \mathbf{()} & () & (12) & (13) & (23) & (123) & (132) \\ \mathbf{(12)} & (12) & () & (132) & (123) & (23) & (13) \\ \mathbf{(13)} & (13) & (123) & () & (132) & (12) & (23) \\ \mathbf{(23)} & (23) & (132) & (123) & () & (13) & (12) \\ \mathbf{(123)} & (123) & (13) & (23) & (12) & (132) & () \\ \mathbf{(132)} & (132) & (23) & (12) & (13) & () & (123) \\ \end{matrix}$$}

The permutation representation {$\mathbf{PR}$}:

{$() \rightarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$}, {$(13) \rightarrow \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$}, {$(23) \rightarrow \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$}, {$(123) \rightarrow \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$}, {$(132) \rightarrow \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$}

Here I want to point out a technical detail that I have lost many hours over and am still not completely sure of. I like to use cycle notation because it is compact, but it has caused me great confusion. The cycle notation for permutations composes from left to right, {$(12)(13)=(123)$}, which means that {$1\rightarrow 2$}, {$2\rightarrow 1\rightarrow 3$}, {$3\rightarrow 1$}. The multiplication of permutation matrices likewise composes by matrix multiplication, but we must use the inverses, and we order the matrices from right to left because they act on the column vector on the right. Which is to say, if {$g_1=(12)$}, {$g_2=(13)$}, {$g_1g_2=(123)$}, then we map each {$g$} to the inverse permutation matrix so that {$g_2^{-1}g_1^{-1}=(g_1g_2)^{-1}$}, as below:

{$$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$}

Instead, it is simpler to think of the permutation as a reordering of a string, for example, {$123\rightarrow 312$} or simply {$[312]$}. Then this composes in the same way as matrix multiplication and so we can read from left to right {$[321]\cdot [213]=[312]$}, or in cycle notation: {$(13)(12)=(132)$}, or {$\rho((13))\rho((12))=\rho((132))$} or {$\rho((132))_{ik}=\sum_j \rho((13))_{ij} \rho((12))_{jk}$}.
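These conventions can be sanity-checked mechanically. Here is a minimal sketch in plain Python (the function name `then` and the 0-indexed tuple encoding are my own), composing one-line permutations with the left factor acting first:

```python
# One-line notation with 0-indexed positions: perm[i] is the image of i.
# The cycle (13) is the string 321 -> (2, 1, 0);
# the cycle (12) is the string 213 -> (1, 0, 2).

def then(p, q):
    """Apply p first, then q (left-to-right composition)."""
    return tuple(q[p[i]] for i in range(len(p)))

p13 = (2, 1, 0)
p12 = (1, 0, 2)

# [321]·[213] read left to right: apply (13) first, then (12).
result = then(p13, p12)
print(result)  # (2, 0, 1): the string 312, i.e. the cycle (132)
```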

The left regular representation:

{$\mathbf{RR}: \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}$}

The permutation representation and the regular representation are reducible.

An irreducible representation has no proper nonzero invariant subspace, and so its dimension cannot be too large: the sum of the squares of the degrees of the irreducible representations equals {$|G|$}. The number of irreducible complex representations is equal to the number of conjugacy classes. See: J.P.Serre. Linear Representations of Finite Groups. A study of the regular representation shows that it contains every irreducible representation {$W_i$} with multiplicity equal to its degree {$n_i$}.

The trivial representation: {$\mathbf{TR}: (1)$}, {$(1)$}, {$(1)$}, {$(1)$}, {$(1)$}, {$(1)$}.

The alternating representation: {$\mathbf{AR}: (1)$}, {$(-1)$}, {$(-1)$}, {$(-1)$}, {$(1)$}, {$(1)$}.

The standard representation is derived from the permutation representation by considering the complement of the one-dimensional trivial representation on the invariant subspace spanned by {$(1,\dots,1)$}. The standard representation can be written in several noteworthy ways. Here are some from the GroupProps wiki:

{$\mathbf{SR1} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix}$}

{$\mathbf{SR2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix} \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ -\sqrt{3}/2 & 1/2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{pmatrix} \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{pmatrix}$}

{$\mathbf{SR3} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & e^{2\pi i/3} \\ e^{-2\pi i/3} & 0 \end{pmatrix} \begin{pmatrix} 0 & e^{-2\pi i/3} \\ e^{2\pi i/3} & 0 \end{pmatrix} \begin{pmatrix} e^{2\pi i/3} & 0 \\ 0 & e^{-2\pi i/3} \end{pmatrix} \begin{pmatrix} e^{-2\pi i/3} & 0 \\ 0 & e^{2\pi i/3} \end{pmatrix}$}
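We can verify mechanically that such a family of matrices respects the multiplication table. A sketch in plain Python for {$\mathbf{SR1}$} (the dictionary encoding and the helper `matmul` are my own notation):

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

SR1 = {
    "()":    [[1, 0], [0, 1]],
    "(12)":  [[-1, 1], [0, 1]],
    "(13)":  [[0, -1], [-1, 0]],
    "(23)":  [[1, 0], [1, -1]],
    "(123)": [[0, -1], [1, -1]],
    "(132)": [[-1, 1], [-1, 0]],
}

# The multiplication table has (13)(12) = (123); the representation agrees:
assert matmul(SR1["(13)"], SR1["(12)"]) == SR1["(123)"]
# And (12)(12) = ():
assert matmul(SR1["(12)"], SR1["(12)"]) == SR1["()"]
```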

Examples of morphisms (equivariant maps) when {$G=S_3$}

Given the representations {$\mathbf{SR1}$}, {$\mathbf{SR2}$}, {$\mathbf{SR3}$} from the previous section, consider the equivariant maps {$\alpha_{ij}:\mathbf{SRi}\rightarrow \mathbf{SRj}$}. They are given by the matrices below, where {$C$} is an arbitrary scalar constant:

{$\alpha_{12}= C\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{3} \end{pmatrix}$}

{$\alpha_{13}= C\begin{pmatrix} -u & -u^2 \\ u & 1 \end{pmatrix}$} where {$u=e^{\frac{2\pi i}{3}}$}.

{$\alpha_{23}= ...$}

When {$W=V$}, the equivariant map {$\alpha$} is a square matrix, and if it is an invertible matrix, we have that {$\psi_V(g) = \alpha\, \phi_V(g)\, \alpha^{-1}$}. Conjugation by {$\alpha$} takes us from representation {$\phi_V$} to representation {$\psi_V$} by changing the basis.

Note also that the zero map is an equivariant map.

Calculating a morphism (an equivariant map)

An equivariant map is fixed for all {$g\in G$}. When the representations are finite dimensional, then the equivariant map is a matrix. Here is an example of calculating an equivariant map {$\alpha_{12}:\mathbf{SR1}\rightarrow \mathbf{SR2}$} as a matrix {$a_{ij}$}.

{$\begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$}

{$\begin{pmatrix} (-a_{11}+a_{21})v_1 + (-a_{12}+a_{22})v_2 \\ a_{21}v_1 + a_{22}v_2 \end{pmatrix} = \begin{pmatrix} (-\frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12})v_1 + (\frac{\sqrt{3}}{2}a_{11} + \frac{1}{2}a_{12})v_2 \\ (-\frac{1}{2}a_{21} + \frac{\sqrt{3}}{2}a_{22})v_1 + (\frac{\sqrt{3}}{2}a_{21} + \frac{1}{2}a_{22})v_2 \end{pmatrix}$}

{$a_{21}=-\frac{1}{2}a_{21} + \frac{\sqrt{3}}{2}a_{22} \Rightarrow a_{22}=\sqrt{3}a_{21}$} and {$a_{22}=\frac{\sqrt{3}}{2}a_{21} + \frac{1}{2}a_{22} \Rightarrow a_{22}=\sqrt{3}a_{21}$}

{$-a_{11}+a_{21} = -\frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12} \Rightarrow a_{21} = \frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12}$} and {$-a_{12}+a_{22} = \frac{\sqrt{3}}{2}a_{11} + \frac{1}{2}a_{12} \Rightarrow a_{22} = \frac{\sqrt{3}}{2}a_{11} + \frac{3}{2}a_{12}$}

{$\begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$}

{$a_{21} = \frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12}$} as above

{$a_{22} = \frac{\sqrt{3}}{2}a_{11} - \frac{1}{2}a_{12}$} when combined with {$a_{22} = \frac{\sqrt{3}}{2}a_{11} + \frac{3}{2}a_{12}$} from above implies {$a_{12}=0$}

{$a_{11} = \frac{1}{2}a_{21} + \frac{\sqrt{3}}{2}a_{22}$} with {$a_{22}=\sqrt{3}a_{21}$} yields {$a_{11}=2a_{21}$}

{$a_{12} = \frac{\sqrt{3}}{2}a_{21} - \frac{1}{2}a_{22}$} with {$a_{22}=\sqrt{3}a_{21}$} yields {$a_{12}=0$}

Thus we have that this equivariant map is {$a_{21}\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{3} \end{pmatrix}$}, where {$a_{21}$} is a free scalar parameter.
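The result can be checked numerically against the defining equation {$\mathbf{SR1}(g)\,\alpha = \alpha\,\mathbf{SR2}(g)$} used above, here for the generators {$(12)$} and {$(13)$} (a plain Python sketch; helper names are mine):

```python
import math

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

s = math.sqrt(3)
A = [[2, 0], [1, s]]                       # the computed equivariant map
SR1_12, SR2_12 = [[-1, 1], [0, 1]], [[-0.5, s/2], [s/2, 0.5]]
SR1_13, SR2_13 = [[0, -1], [-1, 0]], [[-0.5, -s/2], [-s/2, 0.5]]

# SR1(g) · A = A · SR2(g) for the generators g = (12), (13)
for m1, m2 in [(SR1_12, SR2_12), (SR1_13, SR2_13)]:
    left, right = matmul(m1, A), matmul(A, m2)
    assert all(abs(left[i][j] - right[i][j]) < 1e-12
               for i in range(2) for j in range(2))
```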

Illustrating composition of morphisms when {$G=S_3$}

Consider {$T=\mathbb{C}^6$}, {$U=\mathbb{C}^3$}, {$V=\mathbb{C}^2$}, {$W=\mathbb{C}$} with elements {$(t_1,t_2,t_3,t_4,t_5,t_6)\in T$}, {$(u_1,u_2,u_3)\in U$}, {$(v_1,v_2)\in V$}, {$(w_1)\in W$}.

The restriction functor {$\mathbf{Res}^G_H$}. Definition and example.

Fix field {$K$} as above. Let {$H$} be a subgroup of {$G$} with index {$n=[G:H]$}.

The restriction functor {$\mathbf{Res}^G_H: \mathbf{Rep}_G\rightarrow \mathbf{Rep}_H$} sends the pair {$V$} and {$\phi_V:G\rightarrow GL(V,K)$} to the pair {$V$} and {$\phi_V|_H:H\rightarrow GL(V,K)$} where {$\phi_V|_H(h)=\phi_V(h)$} for all {$h\in H$}. Which is to say, the restriction functor simply restricts the representation {$\phi_V$} to be defined on {$H\subset G$}. Similarly, given a morphism (an equivariant map) {$\beta:(V,\phi_V)\rightarrow (W,\psi_W)$} with {$\beta \circ \phi_V(g)=\psi_W(g)\circ \beta$} for all {$g\in G$}, the functor {$\mathbf{Res}^G_H$} simply maps {$\beta$} to {$\beta$} and the diagram commutes, as before, for all {$h\in H\subset G$}. See: Restricted representation.

Given the representation {$\mathbf{SR1}$} of {$G=S_3$}:

{$() \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$}, {$(13) \rightarrow \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$}, {$(23) \rightarrow \begin{pmatrix} 1 & 0 \\ 1 & -1 \end{pmatrix}$}, {$(123) \rightarrow \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}$}, {$(132) \rightarrow \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix}$}

For the subgroup {$H=\{(),(12)\}$} there is the restricted representation {$\mathbf{Res}^G_H(\mathbf{SR1})$}:

{$() \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$}

Similarly, for {$H$} we also have the restricted representation {$\mathbf{Res}^G_H(\mathbf{SR2})$}:

{$() \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix}$}

And the functor {$\mathbf{Res}^G_H$} takes us from the equivariant map {$\alpha_{12}: \mathbf{SR1} \rightarrow \mathbf{SR2}$} to the analogous equivariant map {$\mathbf{Res}^G_H(\alpha_{12}): \mathbf{Res}^G_H(\mathbf{SR1}) \rightarrow \mathbf{Res}^G_H(\mathbf{SR2})$}.

{$\alpha_{12}= \mathbf{Res}^G_H(\alpha_{12}) = C\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{3} \end{pmatrix}$}
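In code, the restriction functor on objects is simply a filtering of the representation to the subgroup. A minimal sketch in plain Python (representations encoded as dictionaries from group elements to matrices; the encoding is mine):

```python
def restrict(rep, subgroup):
    """Res^G_H on objects: keep only the matrices for elements of H."""
    return {g: rep[g] for g in subgroup}

SR1 = {
    "()":    [[1, 0], [0, 1]],
    "(12)":  [[-1, 1], [0, 1]],
    "(13)":  [[0, -1], [-1, 0]],
    "(23)":  [[1, 0], [1, -1]],
    "(123)": [[0, -1], [1, -1]],
    "(132)": [[-1, 1], [-1, 0]],
}

res = restrict(SR1, ["()", "(12)"])
print(res)  # {'()': [[1, 0], [0, 1]], '(12)': [[-1, 1], [0, 1]]}
```

On morphisms the functor does nothing: the same matrix {$\alpha_{12}$} serves as {$\mathbf{Res}^G_H(\alpha_{12})$}.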

ANSWER: Is the restriction functor invertible?

Definition of the induced representation {$\textrm{Ind}^G_H$}.

The induction functor is defined in terms of the left cosets of {$H$} in {$G$}.

The left cosets of {$H$} in {$G$} are the sets of the form {$gH$} where {$g\in G$}. An example is the twelve-hour clock {$\mathbb{Z}_{12}$} and its subgroup {$\{0,6\}$} which has left cosets {$\{0,6\},\{1,7\},\{2,8\},\{3,9\},\{4,10\},\{5,11\}$}.
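The clock example can be enumerated directly (a plain Python sketch; the group operation is addition mod 12):

```python
# Left cosets gH in Z_12 with H = {0, 6}.
H = {0, 6}
cosets = {frozenset((g + h) % 12 for h in H) for g in range(12)}
print(sorted(sorted(c) for c in cosets))
# [[0, 6], [1, 7], [2, 8], [3, 9], [4, 10], [5, 11]]
```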

Let us consider some basic facts about the left cosets and establish some notation:

• Fix {$g\in G$}. Left multiplication by {$g$} defines a map from {$H$} to {$gH$} which is surjective. The map is also injective, for if {$h_1,h_2\in H$} and {$gh_1=gh_2$}, then {$g^{-1}gh_1=g^{-1}gh_2$} and so {$h_1=h_2$}. Thus the map is bijective and the domain and the codomain have the same cardinality: {$|H|=|gH|$}.
• Given two cosets {$g_1H$} and {$g_2H$}, consider whether {$g_1^{-1}g_2$} is in {$H$}. If it is not, then {$g_1H \cap g_2H = \emptyset$}, for otherwise there exist {$h_1, h_2 \in H$} such that {$g_1h_1 = g_2h_2$} and so {$g_1^{-1}g_2 = h_1h_2^{-1} \in H$}, with a contradiction. And if {$g_1^{-1}g_2 = h$} is in {$H$}, then {$g_2 = g_1h$} and so {$g_2H=g_1hH=g_1H$}, which is to say, {$g_1H=g_2H$} is the same left coset.
• Consequently, the left cosets partition {$G$} into a set of equivalence classes. We can select one element {$g_i$} from each coset. Let {$I$} be the index set for these elements and the left cosets they represent. Then the set {$\{g_i | i\in I\}$} is called a set of left coset representatives. And {$G$} is the disjoint union {$\bigsqcup_{i\in I}g_iH$} of the left cosets. Every element {$g \in G$} can be written uniquely as {$g=g_ih_g$} for some {$i\in I$} and {$h_g\in H$}. And, of course, every such expression is an element of {$G$}. We can choose a different set of representatives but the partition remains the same.
• Note that multiplication on the left by {$g\in G$} sends left cosets to left cosets. For if {$h_1\in H$} and {$gg_i=g_jh_1$} then likewise {$gg_ih_2=g_jh_1h_2$} for all {$h_2\in H$}. Thus {$gg_iH=g_jH$}. Furthermore, if {$gg_iH=g_jH$} and {$gg_kH=g_jH$} then {$gg_iH=gg_kH$}, {$g^{-1}g_iH=g^{-1}g_kH$}, {$g_iH=g_kH$}. Finally, every element of {$G$} is an element of some coset of {$H$} regardless of the cardinalities. Thus multiplication on the left by {$g\in G$} acts as a permutation on the left cosets of {$H$} in {$G$}. There exists a permutation {$\sigma\in S_I$} such that {$gg_iH = g_{\sigma(i)}H$} for all {$i\in I$}. We can choose a different set of representatives but the action of {$g$} on the cosets remains the same. In that sense, the labels may change but the permutation remains the same.
• If {$G$} is a finite group, then {$|I|$} is finite and we call it the index of {$H$} in {$G$} and denote it {$[G:H]$}. In what follows, we write {$[G:H]=n$}. We have {$|G|=[G:H]\,|H|$}, which is Lagrange's theorem.
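These facts can be illustrated concretely for {$G=S_3$} and {$H=\{(),(12)\}$}. A plain Python sketch (permutations as one-line, 0-indexed tuples, composed right-to-left; the encoding is mine):

```python
from itertools import permutations

def compose(a, b):
    """a·b with b acting first (right-to-left composition)."""
    return tuple(a[b[i]] for i in range(len(a)))

G = list(permutations(range(3)))          # S_3 in one-line notation
H = [(0, 1, 2), (1, 0, 2)]                # {(), (12)}

# The left cosets gH partition G.
cosets = {frozenset(compose(g, h) for h in H) for g in G}
assert len(cosets) == 3                        # the index [G:H] = 3
assert sum(len(c) for c in cosets) == len(G)   # Lagrange: |G| = [G:H]|H|

# Left multiplication by any g permutes the cosets.
for g in G:
    images = {frozenset(compose(g, x) for x in c) for c in cosets}
    assert images == cosets
```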

The induction functor is based on the action of {$g\in G$} upon any element {$g_kh_l\in G$} thus decomposed. The crucial equation is:

{$$g(g_k(h_l))=g_{\sigma(k)}(g_{\sigma(k)}^{-1}gg_kh_l)$$}

This equation expresses the action by {$g$}, which takes coset {$g_kH$} to coset {$g_{\sigma(k)}H$} and also acts on {$H$} by taking element {$h_l$} to element {$g_{\sigma(k)}^{-1}gg_kh_l$}. We know that {$h=g_{\sigma(k)}^{-1}gg_kh_l$} is in {$H$} by the decomposition of the element {$gg_kh_l=g_{\sigma(k)}h$}. We also know that {$g_{\sigma(k)}^{-1}gg_k = hh_l^{-1}$} is in {$H$}. The action taking {$h_l$} to {$g_{\sigma(k)}^{-1}gg_kh_l$} is left multiplication of {$h_l$} by {$g_{\sigma(k)}^{-1}gg_k$}.

Note that the action on the cosets changes the indices (the action manifests knowledge "how") whereas the action on the subgroup adds a modifier (the action manifests knowledge "what").

Given a representation {$\theta$} of {$H$} on {$V$}, we define the vector space {$W=\bigoplus_{k=1}^n V_k$} where each {$V_k$} is an isomorphic copy of {$V$}. The elements of {$W$} are sums {$\sum_{i=1}^nv_i$} where {$v_i\in V_i$}. When {$V$} is finite dimensional, {$\textrm{dim}\,W=n\,\textrm{dim}\,V$}. The induced representation {$\textrm{Ind}_H^G \theta(g)$} acts on {$W$} with an {$n\,\textrm{dim}\,V \times n\,\textrm{dim}\,V$} matrix which for each {$k\in I$} places the {$\textrm{dim}\,V\times \textrm{dim}\,V$} matrix for {$\theta(g_{\sigma(k)}^{-1}gg_k)$} as the block {$(\sigma(k), k)$} within the {$n\times n$} permutation matrix for {$\sigma$}.

We can define the induced representation most generally, as in the Wikipedia article. Given {$g\in G$}, we can uniquely decompose {$gg_k$} as {$gg_k=g_{\sigma(k)}h_k$} where {$h_k\in H$}. Consequently, we may define the induced representation by {$\textrm{Ind}_H^G \theta(g)(\sum_{k=1}^nv_k)=\sum_{k=1}^n\theta(h_k)v_k$} for all {$v_k\in V_k$}, where the {$k$}th summand {$\theta(h_k)v_k$} lands in the copy {$V_{\sigma(k)}$}. Note that {$h_k=g_{\sigma(k)}^{-1}gg_k$}.

If the vector spaces have finite dimension, then we can define the induced representation in terms of matrices, as in the video by Mahender Singh. Define

{$$\dot{\theta}_x=\begin{cases} \theta_x & \text{ if } x\in H \\ 0 & \text{ if } x\notin H\end{cases}$$}

The induced representation is given by {$\textrm{Ind}_H^G \theta (g)=(\dot{\theta}_{g_i^{-1}gg_j})$}, which is to say, the {$(i,j)$} block is {$\dot{\theta}_{g_i^{-1}gg_j}$}.

Indeed, the unique nonzero block in the {$j$}th column is {$\theta_{g_{\sigma(j)}^{-1}gg_j}$}. It acts by matrix multiplication whereby it reads the contents of the vector space {$V_j$} in {$W$}, multiplies these contents by the matrix {$\theta_{g_{\sigma(j)}^{-1}gg_j}$}, and writes these new contents within the vector space {$V_{\sigma(j)}$} in {$W$}. Recall that {$g_{\sigma(j)}^{-1}gg_j\in H$} and {$\theta$} is a representation of {$H$}.
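The block formula can be implemented directly. Here is a sketch in plain Python (the helper names are mine; permutations are one-line, 0-indexed tuples composed right-to-left, and {$\dot{\theta}$} is realized by writing a zero block whenever {$g_i^{-1}gg_j\notin H$}), applied to the representation {$\theta_U$} of {$H=\{(),(12)\}$} from the example below, with coset representatives {$()$}, {$(13)$}, {$(23)$}:

```python
def compose(a, b):
    """a·b with b acting first (right-to-left composition)."""
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def induced_matrix(g, reps, theta, dim):
    """(i,j) block is theta[g_i^{-1} g g_j] when that element lies in H, else 0."""
    n = len(reps)
    M = [[0] * (n * dim) for _ in range(n * dim)]
    for i, gi in enumerate(reps):
        for j, gj in enumerate(reps):
            x = compose(compose(inverse(gi), g), gj)
            if x in theta:                 # i.e. x is in H
                for r in range(dim):
                    for c in range(dim):
                        M[i * dim + r][j * dim + c] = theta[x][r][c]
    return M

# theta_U on H = {(), (12)}; coset representatives (), (13), (23):
theta = {(0, 1, 2): [[1, 0], [0, 1]], (1, 0, 2): [[-1, 1], [0, 1]]}
reps = [(0, 1, 2), (2, 1, 0), (0, 2, 1)]
M12 = induced_matrix((1, 0, 2), reps, theta, 2)
assert M12[0][:2] == [-1, 1] and M12[2][4:] == [-1, 1]
```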

We can think of {$g\in G$} as conglomerating the actions {$t_j=g_{\sigma(j)}^{-1}gg_j$} of {$H$} for all {$j\in I$}. That action {$t_j$} of {$H$} can be thought of as a sequence of three actions of {$G$}. The first action {$g_j$} takes us from {$H$} to {$g_jH$}. The second action {$g$} takes us from {$g_jH$} to {$g_{\sigma(j)}H$}. The third action {$g^{-1}_{\sigma(j)}$} takes us from {$g_{\sigma(j)}H$} back to {$H$}. This is taking place for each row {$j\in I$}. A challenge to consider is to determine, given elements {$t_j\in H$} for one or more {$j\in I$}, how that constrains {$g$}.

Another way to define the induced representation is to use the tensor product. Given the representation {$\theta$} of subgroup {$H$} of {$G$} on the vector space {$V$} over the field {$K$}, we can think of {$\theta$} as a module {$V$} over the group ring {$K[H]$}. Define the induced representation

{$$\textrm{Ind}^G_H\theta = K[G]\otimes_{K[H]}V$$}

Example of an induced representation.

Let {$G=S_3$}. Let {$\theta_U$} be the representation of the subgroup {$H=\{(),(12)\}$} given as follows:

{$() \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$}

Here we can verify that {$\begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$} as required.

Then the left cosets of {$H$} are {$\{(),(12)\}$}, {$\{(13),(123)=(13)(12)\}$}, {$\{(23),(132)=(23)(12)\}$} (see S2 in S3: Cosets). Note that cycles compose from right to left (see Symmetric group: Multiplication). The action of {$g\in G$} upon these cosets has the following multiplication table:

{$$\begin{matrix} \times & \mathbf{H} & \mathbf{(13)H} & \mathbf{(23)H} \\ \mathbf{()} & H & (13)H & (23)H \\ \mathbf{(12)} & H & (23)H & (13)H \\ \mathbf{(13)} & (13)H & H & (23)H \\ \mathbf{(123)=(13)(12)} & (13)H & (23)H & H \\ \mathbf{(23)} & (23)H & (13)H & H \\ \mathbf{(132)=(23)(12)} & (23)H & H & (13)H \\ \end{matrix}$$}

This expresses the permutation {$\sigma$} which appears in the definition. We can furthermore express the full decomposition by indicating how the action affects the element of {$H$}:

{$$\begin{matrix} \times & \mathbf{()} & \mathbf{(12)} & \mathbf{(13)} & \mathbf{(13)(12)} & \mathbf{(23)} & \mathbf{(23)(12)} \\ \mathbf{()} & () & (12) & (13) & (13)(12) & (23) & (23)(12) \\ \mathbf{(12)} & (12) & () & (23)(12) & (23) & (13)(12) & (13) \\ \mathbf{(13)} & (13) & (13)(12) & () & (12) & (23)(12) & (23) \\ \mathbf{(123)=(13)(12)} & (13)(12) & (13) & (23) & (23)(12) & (12) & () \\ \mathbf{(23)} & (23) & (23)(12) & (13)(12) & (13) & () & (12) \\ \mathbf{(132)=(23)(12)} & (23)(12) & (23) & (12) & () & (13) & (13)(12) \\ \end{matrix}$$}

In constructing the induced representation, we focus on how an element acts on coset representatives:

{$$\begin{matrix} \times & \mathbf{()} & \mathbf{(13)} & \mathbf{(23)} \\ \mathbf{()} & () & (13) & (23) \\ \mathbf{(12)} & (12) & (23)(12) & (13)(12)\\ \mathbf{(13)} & (13) & () & (23)(12) \\ \mathbf{(123)=(13)(12)} & (13)(12) & (23) & (12) \\ \mathbf{(23)} & (23) & (13)(12) & () \\ \mathbf{(132)=(23)(12)} & (23)(12) & (12) & (13) \\ \end{matrix}$$}

We can express these actions with permutation matrices of blocks, as follows, where the blocks of zeroes have been omitted. Since each coset representative here is its own inverse, we have {$g_i^{-1}=g_i$}, and so

{$\textrm{Ind}_H^G \theta(g)= \begin{pmatrix} \dot{\theta}_{()g()} & \dot{\theta}_{()g(13)} & \dot{\theta}_{()g(23)} \\ \dot{\theta}_{(13)g()} & \dot{\theta}_{(13)g(13)} & \dot{\theta}_{(13)g(23)} \\ \dot{\theta}_{(23)g()} & \dot{\theta}_{(23)g(13)} & \dot{\theta}_{(23)g(23)} \end{pmatrix}$}

We may calculate as follows:

{$() \rightarrow \begin{pmatrix} \dot{\theta}_{()} & 0 & 0 \\ 0 & \dot{\theta}_{()} & 0 \\ 0 & 0 & \dot{\theta}_{()} \end{pmatrix} = \begin{pmatrix} 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} \dot{\theta}_{(12)} & 0 & 0 \\ 0 & 0 & \dot{\theta}_{(12)} \\ 0 & \dot{\theta}_{(12)} & 0 \end{pmatrix} = \begin{pmatrix} -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \end{pmatrix}$},

{$(13) \rightarrow \begin{pmatrix} 0 & \dot{\theta}_{()} & 0 \\ \dot{\theta}_{()} & 0 & 0 \\ 0 & 0 & \dot{\theta}_{(12)} \end{pmatrix} = \begin{pmatrix} & & 1 & 0 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \end{pmatrix}$}, {$(123) = (13)(12) \rightarrow \begin{pmatrix} 0 & 0 & \dot{\theta}_{(12)} \\ \dot{\theta}_{(12)} & 0 & 0 \\ 0 & \dot{\theta}_{()} & 0 \end{pmatrix} = \begin{pmatrix} & & & & -1 & 1 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \end{pmatrix}$}

{$(23) \rightarrow \begin{pmatrix} 0 & 0 & \dot{\theta}_{()} \\ 0 & \dot{\theta}_{(12)} & 0 \\ \dot{\theta}_{()} & 0 & 0 \end{pmatrix} = \begin{pmatrix} & & & & 1 & 0 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \end{pmatrix}$}, {$(132) = (23)(12) \rightarrow \begin{pmatrix} 0 & \dot{\theta}_{(12)} & 0 \\ 0 & 0 & \dot{\theta}_{()} \\ \dot{\theta}_{(12)} & 0 & 0 \end{pmatrix} = \begin{pmatrix} & & -1 & 1 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \end{pmatrix}$}

And we can double check that matrix multiplication works as required so that {$\textrm{Ind}_H^G \theta((13)) \times \textrm{Ind}_H^G \theta((12)) = \textrm{Ind}_H^G \theta((13)(12))$}.
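This double check can be carried out numerically with the explicit {$6\times 6$} matrices above (a plain Python sketch; the block-assembly helper is my own):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I, T = [[1, 0], [0, 1]], [[-1, 1], [0, 1]]   # theta(()) and theta((12))
Z = [[0, 0], [0, 0]]

def blocks(rows):
    """Assemble a 6x6 matrix from a 3x3 grid of 2x2 blocks."""
    return [sum((blk[r] for blk in row), []) for row in rows for r in range(2)]

M12  = blocks([[T, Z, Z], [Z, Z, T], [Z, T, Z]])
M13  = blocks([[Z, I, Z], [I, Z, Z], [Z, Z, T]])
M123 = blocks([[Z, Z, T], [T, Z, Z], [Z, I, Z]])

# Ind(theta)((13)) · Ind(theta)((12)) = Ind(theta)((13)(12)) = Ind(theta)((123))
assert matmul(M13, M12) == M123
```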

Note that in the symmetric group we need to read cycle products such as {$(13)(12)=(123)$} as composing from right to left, because each permutation acts on an argument standing to its right. It took me hours to appreciate that! Especially because each cycle is read from left to right.

We can compare with the following two row notation:

{$\begin{pmatrix} 1 & 2 & 3 \\ \downarrow & \downarrow & \downarrow \\ 3 & 2 & 1 \end{pmatrix} \Leftarrow \begin{pmatrix} 1 & 2 & 3 \\ \downarrow & \downarrow & \downarrow \\ 2 & 1 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ \downarrow & \downarrow & \downarrow \\ 2 & 3 & 1 \end{pmatrix}$}

Next, compare with the permutation matrix group. Consider the permutation matrices for {$(13)(12)=(123)$}:

{$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$}

The matrix for {$(123)$} acts on the column matrix by fixing the labels and shifting the slots. Thus the labels are moved: {$x_1$} from the first slot to the second slot, {$x_2$} from the second to the third, and {$x_3$} from the third to the first.

{$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_3 \\ x_1 \\ x_2 \end{pmatrix}$}

The matrix for {$(123)$} acts on the row matrix by fixing the slots and relabeling the labels.

{$\begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} x_2 & x_3 & x_1 \end{pmatrix}$}
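Both actions can be confirmed directly (a plain Python sketch with symbolic labels):

```python
P = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]          # permutation matrix for (123)

x = ["x1", "x2", "x3"]

# Column action P·x: slot i of the result gathers x_j where P[i][j] = 1.
col = [x[row.index(1)] for row in P]
print(col)   # ['x3', 'x1', 'x2']

# Row action x·P: slot j of the result gathers x_i where P[i][j] = 1.
rowv = [x[[P[i][j] for i in range(3)].index(1)] for j in range(3)]
print(rowv)  # ['x2', 'x3', 'x1']
```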

In multiplying matrices, we are acting on a vector on the right hand side: {$\mathbf{Ind}^G_H(\theta_U)(\mathbf{(13)})\cdot \mathbf{Ind}^G_H(\theta_U)(\mathbf{(12)})\cdot \mathbf{v} = \mathbf{Ind}^G_H(\theta_U)(\mathbf{(123)})\cdot \mathbf{v}$}. Thus it is the action on the column that is relevant. The formulation in terms of the tensor product helps to make this clear.

Similarly, let {$\phi_U$} be the representation of the subgroup {$H=\{(),(12)\}$} given as follows:

{$() \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$(12) \rightarrow \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$}

Then the induced representation {$\mathbf{Ind}^G_H(\phi_U)$} is given by

{$(12) \rightarrow \begin{pmatrix} 0 & 1 & & & & \\ 1 & 0 & & & & \\ & & & & 0 & 1 \\ & & & & 1 & 0 \\ & & 0 & 1 & & \\ & & 1 & 0 & & \end{pmatrix}$}, {$(13) \rightarrow \begin{pmatrix} & & 1 & 0 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & & & 0 & 1 \\ & & & & 1 & 0 \end{pmatrix}$} and so on.

Note that the restriction functor keeps the vector space the same whereas the induction functor replaces the vector space with the direct sum of {$[G:H]$} copies of it.

Definition of the induction functor {$\mathbf{Ind}^G_H$}.

As before, {$H$} is a subgroup of {$G$} with index {$n=[G:H]$}. We select a set of coset representatives {$\{g_i | 1\leq i \leq n\}$}. Given a vector space {$V$}, we define {$\bar{V}=\bigoplus_{i=1}^{n}V_i$} where {$V_i\cong V$}.

The induction functor {$\mathbf{Ind}^G_H: \mathbf{Rep}_H\rightarrow \mathbf{Rep}_G$} sends the representation {$\theta_V:H\rightarrow GL(V,K)$} of {$H$} on {$V$} to the induced representation {$\textrm{Ind}^G_H(\theta_V):G\rightarrow GL(\bar{V},K)$} of {$G$} on {$\bar{V}$}.

The induction functor also sends the morphism (equivariant map) {$\alpha:\psi_U\rightarrow\theta_V$} to the morphism (equivariant map) {$\textrm{Ind}^G_H(\alpha):\textrm{Ind}^G_H(\psi_U)\rightarrow \textrm{Ind}^G_H(\theta_V)$}. We define the latter morphism to be the linear map from {$\bar{U}$} to {$\bar{V}$} given by {$n\times n$} blocks where the blocks on the diagonal are given by the matrix for {$\alpha$} and the other blocks are all zero. Let us verify that {$\textrm{Ind}^G_H(\alpha)$} is indeed an equivariant map as required.

We are given the commutative diagram for {$\alpha$} and {$h\in H$}:

{$$\begin{matrix} & U & \overset{\alpha}\longrightarrow & V \\ h\rightarrow\psi_U(h) & \downarrow & & \downarrow & \theta_V(h)\leftarrow h\\ & U & \overset{\alpha}\longrightarrow & V \end{matrix}$$}

For any {$g\in G$}, the linear representation {$\textrm{Ind}^G_H(\psi_U)(g)$} is a matrix of {$n\times n$} blocks. Indeed, it is a permutation matrix of the blocks. The {$(i,j)$} block is zero unless {$g^{-1}_igg_j\in H$}, in which case we may write {$h_{ij}=g^{-1}_igg_j$}. The linear representation {$\textrm{Ind}^G_H(\theta_V)(g)$} is the same permutation of blocks. The only difference is that in the first case the nonzero blocks are {$\psi_{h_{ij}}$} and in the second case they are {$\theta_{h_{ij}}$}. We have that {$\alpha\circ\psi_{h_{ij}}=\theta_{h_{ij}}\circ\alpha$}. Since {$\textrm{Ind}^G_H(\alpha)$} is block diagonal with {$\alpha$} in each diagonal block, applying this relation block by block yields {$\textrm{Ind}^G_H(\alpha)\circ\textrm{Ind}^G_H(\psi_U)(g)=\textrm{Ind}^G_H(\theta_V)(g)\circ\textrm{Ind}^G_H(\alpha)$}.

Thus we obtain the commutative diagram for {$\textrm{Ind}^G_H(\alpha)$} and {$g\in G$}:

{$$\begin{matrix} & \bar{U} & \overset{\textrm{Ind}^G_H(\alpha)}\longrightarrow & \bar{V} \\ g\rightarrow\textrm{Ind}^G_H(\psi_U)(g) & \downarrow & & \downarrow & \textrm{Ind}^G_H(\theta_V)(g)\leftarrow g\\ & \bar{U} & \overset{\textrm{Ind}^G_H(\alpha)}\longrightarrow & \bar{V} \end{matrix}$$}
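This verification can be spot-checked numerically for the representations {$\theta_U$} and {$\phi_U$} of {$H=\{(),(12)\}$} from the examples above, using their induced matrices at {$g=(13)$}. A plain Python sketch (the intertwiner {$\beta$} below is one arbitrary member of the family of maps {$\theta_U\rightarrow\phi_U$}; helper names are mine):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def block_diag(blk, n):
    """Block-diagonal matrix with n copies of blk on the diagonal."""
    d = len(blk)
    M = [[0] * (n * d) for _ in range(n * d)]
    for k in range(n):
        for r in range(d):
            for c in range(d):
                M[k * d + r][k * d + c] = blk[r][c]
    return M

def blocks(rows):
    """Assemble a 6x6 matrix from a 3x3 grid of 2x2 blocks."""
    return [sum((blk[r] for blk in row), []) for row in rows for r in range(2)]

I, Z = [[1, 0], [0, 1]], [[0, 0], [0, 0]]
theta12, phi12 = [[-1, 1], [0, 1]], [[0, 1], [1, 0]]

beta = [[1, 2], [-1, 3]]                      # one intertwiner theta_U -> phi_U
assert matmul(beta, theta12) == matmul(phi12, beta)

# Ind(theta_U)((13)) and Ind(phi_U)((13)), assembled block-wise.
Mtheta = blocks([[Z, I, Z], [I, Z, Z], [Z, Z, theta12]])
Mphi   = blocks([[Z, I, Z], [I, Z, Z], [Z, Z, phi12]])

B = block_diag(beta, 3)                       # Ind(beta)
assert matmul(B, Mtheta) == matmul(Mphi, B)   # the square commutes
```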

It sends identity equivariant maps {$\textrm{id}_V$} in {$\mathbf{Rep}_H$} to identity equivariant maps {$\textrm{id}_{\bar{V}}$} in {$\mathbf{Rep}_G$}. [SHOW THIS]

It sends the composition of maps in {$\mathbf{Rep}_H$} to the composition of maps in {$\mathbf{Rep}_G$}. [SHOW THIS]

We can define the induction functor more abstractly as per Tammo tom Dieck. [WRITE THIS OUT]

We can define the induction functor even more abstractly in terms of tensor products. [WRITE THIS OUT]

Example of an equivariant map between induced representations.

[CORRECT THIS]

We can calculate the family of equivariant maps that take us from {$\theta_U$} to {$\phi_U$} defined in the section above.

{$\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$}

{$b_{11}(-u_1 + u_2) + b_{12}u_2 = b_{21}u_1 + b_{22}u_2$}

{$b_{21}(-u_1 + u_2) + b_{22}u_2 = b_{11}u_1 + b_{12}u_2$}

{$-b_{11}u_1 + (b_{11} + b_{12})u_2 = b_{21}u_1 + b_{22}u_2$}

{$-b_{21}u_1 + (b_{21} + b_{22})u_2 = b_{11}u_1 + b_{12}u_2$}

{$b_{21}=-b_{11}$} and {$b_{22} = b_{11} + b_{12}$}

We determine the family {$\begin{pmatrix} b_{11} & b_{12} \\ -b_{11} & b_{11}+b_{12} \end{pmatrix}$}.
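A numeric spot check of this family (plain Python; the particular values of {$b_{11}$}, {$b_{12}$} are arbitrary):

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

b11, b12 = 1, 2
B = [[b11, b12], [-b11, b11 + b12]]        # the derived family
theta_12 = [[-1, 1], [0, 1]]
phi_12 = [[0, 1], [1, 0]]

# Equivariance: B · theta((12)) = phi((12)) · B
assert matmul(B, theta_12) == matmul(phi_12, B)
```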

Similarly, we calculate the equivariant maps {$\mathbf{Ind}^G_H(\beta)$} that take us from {$\mathbf{Ind}^G_H(\theta_U)$} to {$\mathbf{Ind}^G_H(\phi_U)$}. Comparing matrices for each {$g\in G$}, we see that every block yields the same calculation, with the indices shifted appropriately:

{$\begin{pmatrix} & & & & \\ & b_{i+1\;j+1} & b_{i+1\;j+2} & \\ & b_{i+2\;j+1} & b_{i+2\;j+2} & \\ & & & \end{pmatrix} \begin{pmatrix} & & & \\ & -1_{j+1\;k+1} & 1_{j+1\;k+2} & \\ & 0_{j+2\;k+1} & 1_{j+2\;k+2} & \\ & & \end{pmatrix} \begin{pmatrix} \\ x_{k+1} \\ x_{k+2} \\ \; \end{pmatrix} = \begin{pmatrix} & & & \\ & 0_{i+1\;j+1} & 1_{i+1\;j+2} & \\ & 1_{i+2\;j+1} & 0_{i+2\;j+2} & \\ & & \end{pmatrix} \begin{pmatrix} & & & & \\ & b_{j+1\;k+1} & b_{j+1\;k+2} & \\ & b_{j+2\;k+1} & b_{j+2\;k+2} & \\ & & & \end{pmatrix} \begin{pmatrix} \\ x_{k+1} \\ x_{k+2} \\ \; \end{pmatrix}$}

{$b_{i+2\; j+1}=-b_{i+1\; j+1}$} and {$b_{i+2\; j+2} = b_{i+1\; j+1} + b_{i+1\; j+2}$} much as above, just with the index shifted.

But furthermore, we can show that each block is the same. Simply consider a group element whose induced matrix contains the two blocks in corresponding positions, so that the same calculation equates their contents. (I should show the calculation!)

The upshot is that the equivariant maps {$\mathbf{Ind}^G_H(\beta)$} are given by matrices that consist of an {$n\times n$} array of copies of the matrix for {$\beta$}. In our case the equivariant map is a {$6\times 6$} matrix consisting of a {$3\times 3$} array of {$2\times 2$} blocks:

{$\begin{pmatrix} b_{11} & b_{12} & b_{11} & b_{12} & b_{11} & b_{12}\\ -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12}\\ b_{11} & b_{12} & b_{11} & b_{12} & b_{11} & b_{12}\\ -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12}\\ b_{11} & b_{12} & b_{11} & b_{12} & b_{11} & b_{12}\\ -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} \end{pmatrix}$}.

Thus we see how the functor {$\mathbf{Ind}^G_H$} maps morphisms, which is to say, equivariant maps. (I should show how the functor satisfies the composition law and the identity condition.)

I should also note and show that, in general, the vector spaces {$U$} and {$V$} can be of different dimensions, in which case the equivariant map is expressed by a rectangular {$n\cdot \textrm{dim}\, V \times n\cdot \textrm{dim}\, U$} matrix.

The contragredient representation.

Given a representation {$\rho : G\rightarrow GL(V)$} with character {$\chi$}, let {$V'$} be the dual of {$V$}, i.e., the space of linear forms on {$V$}. Let {$<x,x'>$} denote the value of the form {$x'$} at {$x$}. The contragredient representation {$\rho' : G\rightarrow GL(V')$} (or dual representation) of {$\rho$} is the unique linear representation such that {$<\rho_s x, \rho'_s x'> = <x,x'>$} for {$s\in G$}, {$x\in V$}, {$x'\in V'$}. The character of {$\rho'$} is {$\chi^*$}.

Let {$\rho_1 : G\rightarrow GL(V_1)$} and {$\rho_2 : G\rightarrow GL(V_2)$} be two linear representations with characters {$\chi_1$} and {$\chi_2$}. Let {$W=\textrm{Hom}(V_1,V_2)$}, the vector space of linear mappings {$f:V_1 \rightarrow V_2$}. For {$s\in G$} and {$f\in W$} let {$\rho_s f = \rho_{2,s}\circ f\circ \rho^{-1}_{1,s}$}; so {$\rho_s f\in W$}. This defines a linear representation {$\rho : G\rightarrow GL(W)$}. Its character is {$\chi^*_1\cdot \chi_2$}. This representation is isomorphic to {$\rho_1'\bigotimes\rho_2$}.
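This construction can be made concrete in matrices. A sketch, using the two {$2$}-dimensional matrices that appear elsewhere on this page as sample values of {$\rho_1(s)$}, {$\rho_2(s)$} for the nontrivial element {$s$} of a two-element group: with column-major vec, {$f \mapsto \rho_2(s)\, f\, \rho_1(s)^{-1}$} acts by the matrix {$\rho_1(s)^{-T}\otimes\rho_2(s)$}.

```python
import numpy as np

rho1_s = np.array([[-1., 1.], [0., 1.]])  # sample rho_1(s), an involution
rho2_s = np.array([[0., 1.], [1., 0.]])   # sample rho_2(s)

inv1 = np.linalg.inv(rho1_s)
hom_action = np.kron(inv1.T, rho2_s)      # action of s on W = Hom(V1, V2)

# check against a direct computation on a sample f (column-major vec)
f = np.array([[1., 2.], [3., 4.]])
direct = rho2_s @ f @ inv1
assert np.allclose(direct.flatten(order="F"), hom_action @ f.flatten(order="F"))

# the character of the Hom representation at s is chi_1(s^{-1}) chi_2(s)
assert np.isclose(np.trace(hom_action), np.trace(inv1) * np.trace(rho2_s))
print("Hom(V1,V2) carries the representation with character chi1* . chi2")
```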

Pairing an equivariant map in {$\mathbf{Rep}_G(\mathbf{Ind}^G_H(\theta_U),\phi_V)$} and an equivariant map in {$\mathbf{Rep}_H(\theta_U, \mathbf{Res}^G_H(\phi_V))$}.

The adjunction {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$} pairs equivariant maps in {$\textrm{Rep}_G(\textrm{Ind}^G_H(\theta_U),\phi_V)$} with equivariant maps in {$\textrm{Rep}_H(\theta_U, \textrm{Res}^G_H(\phi_V))$} as follows.

Let {$f:U\rightarrow V$} be an equivariant map such that {$f(\theta_t u)=\phi_t f(u)$} for all {$t\in H$}, {$u\in U$}.

Let {$\{g_i | i\in I\}$} be a set of coset representatives of {$H$} in {$G$}. Let {$U_i$} be the copy of {$U$} associated with the coset {$g_iH$}. Set {$U_1=U$} and {$g_1H=H$}.

FIX: (Explain surjectivity). We have {$\textrm{Ind}^G_H(\theta_U)(g_i):U\rightarrow U_i$} for all {$i\in I$}. Note that if {$x_i\in U_i = \textrm{Ind}^G_H(\theta_U)(g_i)U$}, then {$\textrm{Ind}^G_H(\theta_U)(g_i)^{-1}x_i = \textrm{Ind}^G_H(\theta_U)(g_i^{-1})x_i \in U$}.

Given {$x\in\bigoplus_{i\in I}U_i$}, write {$x=\sum_{i\in I}x_i$} where {$x_i\in U_i$}. Define {$F(x_i)=\phi_{g_i}f(\textrm{Ind}^G_H(\theta_U)(g_i^{-1})x_i)$}. By linearity, this defines {$F:\bigoplus_{i\in I}U_i\rightarrow V$}.

Given an {$H$}-morphism {$\phi:V\rightarrow\textrm{Res}^G_H W$}, define a {$G$}-morphism {$\Phi:\textrm{Ind}^G_H V\rightarrow W$} which sends the summand {$gH\times_H V$} by {$(g,v)\mapsto g\cdot\phi(v)$}. Note that {$\phi(v)\in W$} and that the representation {$W$} defines the action {$g\cdot\phi(v)$}. Note also that any other representative of the summand, such as {$(gh^{-1},hv)$} where {$h\in H$}, leads to the same value {$gh^{-1}\phi(hv)=g\phi(v)$} since {$\phi$} is an {$H$}-morphism. {$W$} is a specific representation, which is why {$\Phi$} is determined uniquely. This uniqueness in terms of inner structure matches the uniqueness that induction expresses in terms of external relations. The implicit information lost by the restriction functor is the same as the explicit information constructed by the induction functor.

ANSWER: Given an {$H$}-morphism {$\phi:V\rightarrow\textrm{Res}^G_H W$} and {$g\in G$}, is the action {$g\cdot\phi(v)$} uniquely defined?

ANSWER: Is {$\textrm{Res}^G_H W$} an invertible functor?

ANSWER: How to understand the extension {$\textrm{Res}^G_H W\rightarrow\textrm{Res}^G_H U$}?

An example of an equivariant map in {$\mathbf{Rep}_G(\mathbf{Ind}^G_H(U,\theta_U),(V,\phi_V))$} and its pair in {$\mathbf{Rep}_H((U,\theta_U), \mathbf{Res}^G_H(V,\phi_V))$}.

Let {$G=S_3$} with subgroup {$H=\{(),(12)\}$}. Let {$U=\mathbb{C}^2$} and {$V=\mathbb{C}^3$}.

Let {$\theta_U$} be the representation of {$H$} that maps {$\mathbf{()}\rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$\mathbf{(12)}\rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$}

And let {$\phi_V$} be the representation of {$G$} that is given by the permutation representation consisting of {$3\times 3$} permutation matrices. Then the induced representation {$\mathbf{Ind}^G_H(U,\theta_U)$} of {$G$} acts on {$U \bigoplus U \bigoplus U =\mathbb{C}^6=\mathbb{C}^2\bigotimes\mathbb{C}^3$} whereas the restricted representation {$\mathbf{Res}^G_H(V,\phi_V)$} of {$H$} acts on {$V=\mathbb{C}^3$}.

We can calculate the equivariant map {$\alpha = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}$} which satisfies the commutative map for representations of {$H$}:

{$$\begin{matrix} & U & \overset{\alpha}\longrightarrow & V \\ \theta_U(h) & \downarrow & & \downarrow & \mathbf{Res}^G_H(V,\phi_V)(h) \\ & U & \overset{\alpha}\longrightarrow & V \end{matrix}$$}

Given {$u=(u_1,u_2)$} and {$h\in H$} we have equations of the form {$\alpha \cdot \theta_U(h) \cdot u = \phi_V(h) \cdot \alpha \cdot u$}. The equation for {$h=\mathbf{()}$} simply says that {$\alpha u = \alpha u$}. The equation for {$h=\mathbf{(12)}$} is:

{$$\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$$}

Multiplying out, this yields

{$$\begin{pmatrix} -a_{11} u_1 + (a_{11} + a_{12}) u_2 \\ -a_{21} u_1 + (a_{21} + a_{22}) u_2 \\ -a_{31} u_1 + (a_{31} + a_{32}) u_2 \end{pmatrix} = \begin{pmatrix} a_{21} u_1 + a_{22} u_2 \\ a_{11} u_1 + a_{12} u_2 \\ a_{31} u_1 + a_{32} u_2 \end{pmatrix}$$}

Simplifying, we have equations {$a_{21}=-a_{11}$}, {$a_{22}=a_{11}+a_{12}$}, {$a_{31}=-a_{31}=0$}, {$a_{32}=a_{32}$}. And so the six variables reduce to three variables. Fixing the values of {$a_{11}$}, {$a_{12}$}, {$a_{32}$} yields an equivariant map. Thus we have a family of maps given by:

{$\alpha = \begin{pmatrix} a_{11} & a_{12} \\ -a_{11} & a_{11}+a_{12} \\ 0 & a_{32} \end{pmatrix}$}
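We can check this family numerically. A minimal sketch: the sample parameter values are arbitrary, and the two matrices are {$\theta_U(\mathbf{(12)})$} and {$\phi_V(\mathbf{(12)})$} from the example above.

```python
import numpy as np

theta12 = np.array([[-1., 1.], [0., 1.]])                 # theta_U((12))
phi12 = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])  # phi_V((12))

a11, a12, a32 = 2.0, -1.0, 4.0                            # arbitrary choices
alpha = np.array([[a11, a12],
                  [-a11, a11 + a12],
                  [0.0, a32]])

# equivariance on the generator: alpha . theta_U(h) = phi_V(h) . alpha
assert np.allclose(alpha @ theta12, phi12 @ alpha)
print("alpha is H-equivariant")
```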

Similarly, we can calculate the equivariant map {$\gamma = \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix}$} which for every {$g\in G$} satisfies the commutative map:

{$$\begin{matrix} & U \bigoplus U \bigoplus U & \overset{\gamma}\longrightarrow & V \\ \mathbf{Ind}^G_H(U,\theta_U)(g) & \downarrow & & \downarrow & \phi_V(g) \\ & U \bigoplus U \bigoplus U & \overset{\gamma}\longrightarrow & V \end{matrix}$$}

{$\begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix}$}

{$\begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix}$}

{$\begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & 1 & 0 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix}$}

{$\begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & -1 & 1 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix}$}

{$\begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & & & 1 & 0 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix}$}

{$\begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & & & -1 & 1 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix}$}

Solving these matrix equations I arrive at ... (need to redo!)

The solution should be:

{$$\gamma = \begin{pmatrix} c_{11} & c_{12} & 0 & c_{32} & c_{11} & c_{12} \\ -c_{11} & c_{11}+c_{12} & -c_{11} & c_{11}+c_{12} & 0 & c_{32} \\ 0 & c_{32} & c_{11} & c_{12} & -c_{11} & c_{11}+c_{12} \end{pmatrix}$$}
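This solution can be verified numerically. A sketch: the {$6\times 6$} induced matrices are rebuilt here from their block data (block row, block column, {$h$}-part), read off from the six matrix equations above, and we check that {$\gamma$} intertwines {$\mathbf{Ind}^G_H(\theta_U)$} with {$\phi_V$} for all six elements of {$S_3$}. The parameter values are arbitrary.

```python
import numpy as np

M = np.array([[-1., 1.], [0., 1.]])   # theta_U((12))
E = np.eye(2)                         # theta_U(())

# (block pattern of Ind(theta_U)(g), matching 3x3 permutation phi_V(g))
data = [
    ([(0, 0, E), (1, 1, E), (2, 2, E)], np.eye(3)),
    ([(0, 0, M), (1, 2, M), (2, 1, M)], np.array([[0., 1, 0], [1, 0, 0], [0, 0, 1]])),
    ([(0, 1, E), (1, 0, E), (2, 2, M)], np.array([[0., 0, 1], [0, 1, 0], [1, 0, 0]])),
    ([(0, 1, M), (1, 2, E), (2, 0, M)], np.array([[0., 1, 0], [0, 0, 1], [1, 0, 0]])),
    ([(0, 2, E), (1, 1, M), (2, 0, E)], np.array([[1., 0, 0], [0, 0, 1], [0, 1, 0]])),
    ([(0, 2, M), (1, 0, M), (2, 1, E)], np.array([[0., 0, 1], [1, 0, 0], [0, 1, 0]])),
]

def induced(pattern):
    # assemble the 6x6 matrix of Ind(theta_U)(g) from its 2x2 blocks
    A = np.zeros((6, 6))
    for i, j, h in pattern:
        A[2*i:2*i+2, 2*j:2*j+2] = h
    return A

c11, c12, c32 = 2.0, -3.0, 7.0        # arbitrary parameter values
gamma = np.array([
    [c11,  c12,       0.0,  c32,       c11,  c12],
    [-c11, c11 + c12, -c11, c11 + c12, 0.0,  c32],
    [0.0,  c32,       c11,  c12,       -c11, c11 + c12],
])

for pattern, perm in data:
    assert np.allclose(gamma @ induced(pattern), perm @ gamma)
print("gamma is equivariant for all six group elements")
```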

The morphisms can be thought of as machines. This says that looking through the front side of machines and through the back side of machines is the same information.

The HomSet definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$}

[Write up the HomSet definition of the adjunction.]

The online book Tammo tom Dieck, Representation Theory, 2009, is very helpful because in Proposition 4.1.2 (page 51) it makes explicit the bijection between the HomSets.

As regards finite-dimensional representations, I find it helpful to make explicit the dimensions of the matrices.

An example illustrating the HomSet definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$}

Consider the trivial group {$H=\{()\}$} as a subgroup of {$G=\{(),(12)\}$}.

In {$\textrm{Rep}_G$} we have the chain

{$$\textrm{Ind}^G_H \rho_{W'} \overset{\textrm{Ind}^G_H\alpha}{\longrightarrow} \textrm{Ind}^G_H \rho_W \overset{\Phi}{\longrightarrow} \theta_V \overset{\beta}{\longrightarrow} \theta_{V'}$$}

given by the representations

{$\textrm{Ind}^G_H \rho_{W'}=\{\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}\}$}, {$\textrm{Ind}^G_H \rho_W=\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\}$}, {$\theta_V=\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}\}$}, {$\theta_{V'}=\{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\}$}

and the equivariant maps

{$\textrm{Ind}^G_H\alpha=\begin{pmatrix} a_{11} & a_{12} & 0 & 0 \\ 0 & 0 & a_{11} & a_{12} \end{pmatrix}$}, {$\Phi=\begin{pmatrix} \phi_{11} & -\phi_{11}+\phi_{21} \\ \phi_{22} & \phi_{21} \end{pmatrix}$}, {$\beta=\begin{pmatrix} 0 & b_{12} \\ 0 & b_{22} \\ 0 & b_{32} \end{pmatrix}$}.

In {$\textrm{Rep}_H$} we have the corresponding chain, where induction carries the two leftmost objects up ({$\Uparrow$}) and restriction carries the two rightmost objects down ({$\Downarrow$}):

{$$\rho_{W'} \overset{\alpha}{\longrightarrow} \rho_W \overset{\Phi\circ i^G_H}{\longrightarrow} \textrm{Res}^G_H\theta_V \overset{\textrm{Res}^G_H\beta}{\longrightarrow} \textrm{Res}^G_H\theta_{V'}$$}

given by the representations

{$\rho_{W'}=\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\}$}, {$\rho_W=\{\begin{pmatrix} 1 \end{pmatrix}\}$}, {$\textrm{Res}^G_H\theta_V=\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\}$}, {$\textrm{Res}^G_H\theta_{V'}=\{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\}$}

and the equivariant maps

{$\alpha=\begin{pmatrix} a_{11} & a_{12}\end{pmatrix}$}, {$\Phi\circ i^G_H=\begin{pmatrix} \phi_{11} \\ \phi_{22}\end{pmatrix}$}, {$\textrm{Res}^G_H\beta=\begin{pmatrix} 0 & b_{12} \\ 0 & b_{22} \\ 0 & b_{32} \end{pmatrix}$}.

This example illustrates that in this case the bijection of the HomSet adjunction maps a matrix of independent entries {$a_{11}$}, {$a_{12}$} to a matrix that depends on these entries. [IS THIS BIJECTION ONTO ALL POSSIBLE EQUIVARIANT MORPHISMS?]

This example illustrates how the induction functor is not a full functor. The equivariant morphisms from {$\textrm{Ind}\rho_{W'}$} to {$\textrm{Ind}\rho_{W}$} are the matrices of the form {$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{13} & a_{14} & a_{11} & a_{12} \end{pmatrix}$}. The image of the induction functor gives only those for which {$0=a_{13}=a_{14}$}.
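We can check both claims numerically. A sketch: the nonidentity element acts on {$\mathbb{C}^4$} and {$\mathbb{C}^2$} by the block-swap matrices listed above; we verify that the general form is equivariant for arbitrary entries, while the image of the induction functor is the subfamily with {$a_{13}=a_{14}=0$}.

```python
import numpy as np

# action of the nonidentity group element on C^4 and C^2
K = np.array([[0., 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]])
S = np.array([[0., 1], [1, 0]])

a11, a12, a13, a14 = 1.0, 2.0, 3.0, 4.0   # arbitrary choices
A = np.array([[a11, a12, a13, a14],        # general equivariant form
              [a13, a14, a11, a12]])
assert np.allclose(A @ K, S @ A)

ind_alpha = np.array([[a11, a12, 0, 0],    # image of the induction functor
                      [0, 0, a11, a12]])
assert np.allclose(ind_alpha @ K, S @ ind_alpha)
print("general form is equivariant; Ind(alpha) is a proper subfamily")
```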

This example also illustrates how the restriction functor is not a full functor, which is to say, it is not surjective on the HomSets. Note that the equivariant morphisms from {$I_2$} to {$I_3$} are the matrices of the form {$\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{pmatrix}$}. But the image of the restriction functor gives only those for which {$0=b_{11}=b_{21}=b_{31}$}.

The Triangle Equalities definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$}

The Universal Mapping Property definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$}

Show that {$\mathbf{Res}^G_H \dashv \mathbf{Ind}^G_H$} when {$G$} and {$H$} are finite groups

This is known as coinduction. In the case of finite groups, induction and coinduction coincide.

Go through the three definitions of adjunction to show that {$\mathbf{Res}^G_H \dashv \mathbf{Ind}^G_H$}.

Understand the adjunction that relates extension of scalars and restriction of scalars.

Understand the adjoint string that relates change of rings

Understand Shapiro's lemma

The induced representation divides the group action into two actions: a permutation externally on the cosets and an action internally on the subgroup elements. How does it relate external relationships and internal structure? How does it divide one action (the adjoint of restriction) into two actions?

Restriction and induction

Work through the example for {$D_6$}. See: here

This page was last edited on January 12, 2022, at 07:27 PM