Linear Algebra Done Right Ch.5 Notes

31 Aug 2018

Complex Conjugate of a Vector

Ex.1

$$ \begin{bmatrix}1&1\\1&1\end{bmatrix}\begin{bmatrix}1-i\\3+2i\end{bmatrix} =\begin{bmatrix}1-i+3+2i\\1-i+3+2i\end{bmatrix} =\begin{bmatrix}4+i\\4+i\end{bmatrix} $$

$$ \begin{bmatrix}1&1\\1&1\end{bmatrix}\begin{bmatrix}1+i\\3-2i\end{bmatrix} =\begin{bmatrix}1+i+3-2i\\1+i+3-2i\end{bmatrix} =\begin{bmatrix}4-i\\4-i\end{bmatrix} =\overline{\begin{bmatrix}4+i\\4+i\end{bmatrix}} $$

Ex.2

$$ \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}1-i\\3+2i\end{bmatrix} =\begin{bmatrix}1-i+6+4i\\3-3i+12+8i\end{bmatrix} =\begin{bmatrix}7+3i\\15+5i\end{bmatrix} $$

$$ \begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}1+i\\3-2i\end{bmatrix} =\begin{bmatrix}1+i+6-4i\\3+3i+12-8i\end{bmatrix} =\begin{bmatrix}7-3i\\15-5i\end{bmatrix} =\overline{\begin{bmatrix}7+3i\\15+5i\end{bmatrix}} $$

Ex.3

$$ \begin{bmatrix}1&i\\1&1\end{bmatrix}\begin{bmatrix}1-i\\3+2i\end{bmatrix} =\begin{bmatrix}1-i-2+3i\\1-i+3+2i\end{bmatrix} =\begin{bmatrix}-1+2i\\4+i\end{bmatrix} $$

$$ \begin{bmatrix}1&i\\1&1\end{bmatrix}\begin{bmatrix}1+i\\3-2i\end{bmatrix} =\begin{bmatrix}1+i+2+3i\\1+i+3-2i\end{bmatrix} =\begin{bmatrix}3+4i\\4-i\end{bmatrix} \neq\overline{\begin{bmatrix}-1+2i\\4+i\end{bmatrix}} $$

$$ \begin{bmatrix}1&-i\\1&1\end{bmatrix}\begin{bmatrix}1+i\\3-2i\end{bmatrix} =\begin{bmatrix}1+i-2-3i\\1+i+3-2i\end{bmatrix} =\begin{bmatrix}-1-2i\\4-i\end{bmatrix} =\overline{\begin{bmatrix}-1+2i\\4+i\end{bmatrix}} $$

Proposition W.5.1 If $A\in\mathbb{R}^{m\times n}$, $x\in\mathbb{C}^n$, and $Ax=b$, then $A\overline{x}=\overline{b}$

Proof Note that

$$ b_j=(Ax)_{j}=\sum_{r=1}^nA_{j,r}x_r $$

Hence

$$ \overline{b_j}=\overline{(Ax)_{j}}=\overline{\sum_{r=1}^nA_{j,r}x_r}=\sum_{r=1}^n\overline{A_{j,r}x_r}=\sum_{r=1}^n\overline{A_{j,r}}\overline{x_r}=\sum_{r=1}^nA_{j,r}\overline{x_r}=(A\overline{x})_j\quad\blacksquare $$
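As a quick numerical sanity check of W.5.1 and Ex.1–Ex.3 (a minimal sketch, assuming NumPy is available; not part of the book):

```python
import numpy as np

x = np.array([1 - 1j, 3 + 2j])

# Real matrices (Ex.1, Ex.2) satisfy A @ conj(x) == conj(A @ x), as W.5.1
# predicts; the matrix with a non-real entry (Ex.3) does not.
for A in (np.array([[1, 1], [1, 1]]),
          np.array([[1, 2], [3, 4]]),
          np.array([[1, 1j], [1, 1]])):
    print(np.allclose(A @ np.conj(x), np.conj(A @ x)))  # True, True, False
```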

Definition & Properties W.5.2 Complex conjugate of a vector

Let $V$ be a complex vector space and let $v_1,\dots,v_n$ be a basis of $V$. For $v=\sum_{k=1}^nb_kv_k\in V$, define

$$ \overline{v}\equiv\sum_{k=1}^n\overline{b_k}v_k \quad\quad\text{or}\quad\quad \overline{\begin{bmatrix}b_1\\\vdots\\b_n\end{bmatrix}}\equiv\begin{bmatrix}\overline{b_1}\\\vdots\\\overline{b_n}\end{bmatrix} $$

These two definitions are equivalent (see Property W.5.2.4).

Property W.5.2.1 $\overline{\lambda v}=\overline{\lambda}\overline{v}$

For $\lambda\in\mathbb{C}$, we have

$$ \overline{\lambda v}=\overline{\lambda\sum_{k=1}^nb_kv_k}=\overline{\sum_{k=1}^n\lambda b_kv_k}=\sum_{k=1}^n\overline{\lambda b_k}v_k=\sum_{k=1}^n\overline{\lambda}\,\overline{b_k}v_k=\overline{\lambda}\sum_{k=1}^n\overline{b_k}v_k=\overline{\lambda}\overline{v} $$

Property W.5.2.2 $\overline{u+v}=\overline{u}+\overline{v}$

For $u=\sum_{k=1}^na_kv_k\in V$, we have $u+v=\sum_{k=1}^na_kv_k+\sum_{k=1}^nb_kv_k=\sum_{k=1}^n(a_k+b_k)v_k$ and

$$ \overline{u+v}=\overline{\sum_{k=1}^n(a_k+b_k)v_k}=\sum_{k=1}^n\overline{(a_k+b_k)}v_k=\sum_{k=1}^n\overline{a_k}v_k+\sum_{k=1}^n\overline{b_k}v_k=\overline{u}+\overline{v} $$

Property W.5.2.3 $\overline{v_k}=v_k$

$$ \overline{v_k}=\overline{\sum_{j=1}^n\mathbb{1}_{\{j=k\}}v_j}=\sum_{j=1}^n\overline{\mathbb{1}_{\{j=k\}}}v_j=\sum_{j=1}^n\mathbb{1}_{\{j=k\}}v_j=v_k $$

Property W.5.2.4 $\mtrxof{\overline{v}}=\overline{\mtrxof{v}}$

$$ \mtrxof{\overline{v}}=\mtrxofsb\Big(\sum_{k=1}^n\overline{b_k}v_k\Big)=\begin{bmatrix}\overline{b_1}\\\vdots\\\overline{b_n}\end{bmatrix}=\overline{\begin{bmatrix}b_1\\\vdots\\b_n\end{bmatrix}}=\overline{\mtrxof{v}} $$

Proposition W.5.3 Suppose $V$ and $W$ are finite-dimensional, complex vector spaces and $T\in\lnmpsb(V,W)$. Then

$$ \mtrxof{\overline{Tv}}=\overline{\mtrxof{T}\mtrxof{v}} $$

for every $v\in V$.

Proof Property W.5.2.4 gives the first equality and proposition 3.65, p.85 gives the second:

$$ \mtrxof{\overline{Tv}}=\overline{\mtrxof{Tv}}=\overline{\mtrxof{T}\mtrxof{v}} $$

Alternatively, let $v_1,\dots,v_n$ be a basis of $V$ and let $v=\sum_{k=1}^na_kv_k$. Then

$$\begin{align*} \mtrxof{\overline{Tv}} &= \mtrxofsb\Bigg(\overline{T\Big[\sum_{k=1}^na_kv_k\Big]}\Bigg) \\ &= \mtrxofsb\Bigg(\overline{\sum_{k=1}^na_kTv_k}\Bigg) \\ &= \mtrxofsb\Big(\sum_{k=1}^n\overline{a_kTv_k}\Big)\tag{by W.5.2.2} \\ &= \mtrxofsb\Big(\sum_{k=1}^n\overline{a_k}\overline{Tv_k}\Big)\tag{by W.5.2.1} \\ &= \sum_{k=1}^n\overline{a_k}\mtrxof{\overline{Tv_k}}\tag{by 3.36 and 3.38} \\ &= \sum_{k=1}^n\overline{a_k}\overline{\mtrxof{Tv_k}}\tag{by W.5.2.4} \\ &= \sum_{k=1}^n\overline{a_k\mtrxof{Tv_k}} \tag{by W.5.2.1}\\ &= \overline{\sum_{k=1}^na_k\mtrxof{Tv_k}} \tag{by W.5.2.2}\\ &= \overline{\sum_{k=1}^na_k\mtrxof{T}_{:,k}} \tag{by 3.64}\\ &= \overline{\mtrxof{T}\mtrxof{v}}\tag{by 3.52}\quad\blacksquare \end{align*}$$

p.148, 5.26, Conditions for upper-triangular matrix

Proof Suppose $\mtrxofsb(T,(v_1,\dots,v_n))$ is upper triangular. Then

$$\begin{matrix} \mtrxof{T}_{:,1}\quad\text{ has }\quad0=A_{2,1}=\dotsb=A_{n,1}\quad\iff\quad Tv_1 =A_{1,1}v_1 \\\\ \mtrxof{T}_{:,2}\quad\text{ has }\quad0=A_{3,2}=\dotsb=A_{n,2}\quad\iff\quad Tv_2=A_{1,2}v_1+A_{2,2}v_2 \\\\ \mtrxof{T}_{:,3}\quad\text{ has }\quad0=A_{4,3}=\dotsb=A_{n,3}\quad\iff\quad Tv_3=\sum_{j=1}^3A_{j,3}v_j \\\\ \vdots \\\\ \mtrxof{T}_{:,n-2}\quad\text{ has }\quad0=A_{n-1,n-2}=A_{n,n-2}\quad\iff\quad Tv_{n-2}=\sum_{j=1}^{n-2}A_{j,n-2}v_j \\\\ \mtrxof{T}_{:,n-1}\quad\text{ has }\quad0=A_{n,n-1}\quad\iff\quad Tv_{n-1}=\sum_{j=1}^{n-1}A_{j,n-1}v_j \\\\ Tv_n=\sum_{j=1}^nA_{j,n}v_j \end{matrix}$$

Here we can see that $Tv_1\in\text{span}(v_1)$, $Tv_2\in\text{span}(v_1,v_2)$, $Tv_3\in\text{span}(v_1,v_2,v_3)$, and so on. Hence (a)$\implies$(b). Now suppose that $Tv_j\in\text{span}(v_1,\dots,v_j)$ for each $j=1,\dots,n$. Reading the displayed equivalences from right to left shows that $\mtrxof{T}$ is upper triangular, so (b)$\implies$(a).
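In coordinates, condition (b) says that column $j$ of $\mtrxof{T}$ has zeros below position $j$. A quick check on a sample upper-triangular matrix (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.triu(np.arange(1.0, 17.0).reshape(4, 4))   # a sample upper-triangular matrix
# Column j of A has zeros below position j, i.e. T v_j lies in span(v_1..v_j).
print(all(np.allclose(A[j + 1:, j], 0) for j in range(4)))   # True
```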

Matrix Powers

Proposition W.5.4 For $T\in\lnmpsb(V)$ and any positive integer $n$, $\mtrxof{T^n}=\mtrxof{T}^n$.

Proof The case $n=1$ is trivial. For $n=2$, proposition 3.43, p.75 gives the second equality:

$$ \mtrxofsb(T^2)=\mtrxofsb(TT)=\mtrxof{T}\mtrxof{T}=\mtrxof{T}^2 $$

The inductive assumption is $\mtrxof{T^{n-1}}=\mtrxof{T}^{n-1}$. Then again proposition 3.43 gives the second equality:

$$ \mtrxof{T^n}=\mtrxof{T^{n-1}T}=\mtrxof{T^{n-1}}\mtrxof{T}=\mtrxof{T}^{n-1}\mtrxof{T}=\mtrxof{T}^n\quad\blacksquare $$

Powers of Rotation with sin and cos

Proposition W.5.5 For any angle $\theta\in\mathbb{R}$ and any positive integer $n\in\mathbb{N}^+$, we have

$$ \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix}^n =\begin{bmatrix}\cos{n\theta}&\sin{n\theta}\\-\sin{n\theta}&\cos{n\theta}\end{bmatrix} $$

Proof We will prove this by induction on $n$. The case $n=1$ is immediate; for the base case $n=2$:

$$\begin{align*} \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix}^2 &= \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix} \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix} \\\\ &=\begin{bmatrix}\cos^2{\theta}-\sin^2{\theta}&\cos{\theta}\sin{\theta}+\sin{\theta}\cos{\theta}\\-\sin{\theta}\cos{\theta}-\cos{\theta}\sin{\theta}&-\sin^2{\theta}+\cos^2{\theta}\end{bmatrix} \\\\ &=\begin{bmatrix}\cos^2{\theta}-\sin^2{\theta}&\cos{\theta}\sin{\theta}+\cos{\theta}\sin{\theta}\\-\cos{\theta}\sin{\theta}-\cos{\theta}\sin{\theta}&\cos^2{\theta}-\sin^2{\theta}\end{bmatrix} \\\\ &=\begin{bmatrix}\cos^2{\theta}-\sin^2{\theta}&2\cos{\theta}\sin{\theta}\\-2\cos{\theta}\sin{\theta}&\cos^2{\theta}-\sin^2{\theta}\end{bmatrix} \\\\ &=\begin{bmatrix}\cos{2\theta}&\sin{2\theta}\\-\sin{2\theta}&\cos{2\theta}\end{bmatrix} \end{align*}$$

where the last equality follows from these trigonometric identities for any $\theta\in\mathbb{R}$:

$$ \cos{2\theta}=\cos^2{\theta}-\sin^2{\theta}\\\\ \sin{2\theta}=2\cos{\theta}\sin{\theta} $$

Our induction assumption is

$$ \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix}^{n-1} =\begin{bmatrix}\cos{\big((n-1)\theta\big)}&\sin{\big((n-1)\theta\big)}\\-\sin{\big((n-1)\theta\big)}&\cos{\big((n-1)\theta\big)}\end{bmatrix} $$

Recall these trigonometric identities for any $\phi,\theta\in\mathbb{R}$:

$$ \cos{(\phi+\theta)}=\cos{\phi}\cos{\theta}-\sin{\phi}\sin{\theta}\\\\ \sin{(\phi+\theta)}=\cos{\phi}\sin{\theta}+\sin{\phi}\cos{\theta} $$

Set $\phi\equiv(n-1)\theta$. Then

$$\begin{align*} \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix}^n &= \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix}^{n-1} \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix} \\\\ &= \begin{bmatrix}\cos{\big((n-1)\theta\big)}&\sin{\big((n-1)\theta\big)}\\-\sin{\big((n-1)\theta\big)}&\cos{\big((n-1)\theta\big)}\end{bmatrix} \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix} \\\\ &= \begin{bmatrix}\cos{\phi}&\sin{\phi}\\-\sin{\phi}&\cos{\phi}\end{bmatrix} \begin{bmatrix}\cos{\theta}&\sin{\theta}\\-\sin{\theta}&\cos{\theta}\end{bmatrix} \\\\ &= \begin{bmatrix}\cos{\phi}\cos{\theta}-\sin{\phi}\sin{\theta}&\cos{\phi}\sin{\theta}+\sin{\phi}\cos{\theta}\\-\sin{\phi}\cos{\theta}-\cos{\phi}\sin{\theta}&-\sin{\phi}\sin{\theta}+\cos{\phi}\cos{\theta}\end{bmatrix} \\\\ &= \begin{bmatrix}\cos{\phi}\cos{\theta}-\sin{\phi}\sin{\theta}&\cos{\phi}\sin{\theta}+\sin{\phi}\cos{\theta}\\-(\sin{\phi}\cos{\theta}+\cos{\phi}\sin{\theta})&\cos{\phi}\cos{\theta}-\sin{\phi}\sin{\theta}\end{bmatrix} \\\\ &= \begin{bmatrix}\cos{(\phi+\theta)}&\sin{(\phi+\theta)}\\-\sin{(\phi+\theta)}&\cos{(\phi+\theta)}\end{bmatrix} \\\\ &= \begin{bmatrix}\cos{\big((n-1)\theta+\theta\big)}&\sin{\big((n-1)\theta+\theta\big)}\\-\sin{\big((n-1)\theta+\theta\big)}&\cos{\big((n-1)\theta+\theta\big)}\end{bmatrix} \\\\ &=\begin{bmatrix}\cos{n\theta}&\sin{n\theta}\\-\sin{n\theta}&\cos{n\theta}\end{bmatrix}\quad\blacksquare \end{align*}$$
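A quick numerical spot check of W.5.5 (a minimal sketch assuming NumPy; the helper `R` is just a local name):

```python
import numpy as np

def R(theta):
    # The matrix from Proposition W.5.5.
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

theta, n = 0.37, 9
# R(theta)^n should equal R(n * theta).
print(np.allclose(np.linalg.matrix_power(R(theta), n), R(n * theta)))  # True
```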

Proposition W.5.6 Let $v_1,v_2$ be a basis of $V$ and let $n\in\mathbb{N}^+$. Define $T\in\lnmpsb(V)$ by

$$ T(v_1)\equiv\Big(\cos{\frac{\pi}n}\Big)v_1-\Big(\sin{\frac{\pi}n}\Big)v_2 $$

and

$$ T(v_2)\equiv\Big(\sin{\frac{\pi}n}\Big)v_1+\Big(\cos{\frac{\pi}n}\Big)v_2 $$

(Proposition 3.5, p.54 gives that such a $T$ exists and is linear.) Then $T^n=-I$ and $T^{2n}=I$.

Proof Note that

$$ \mtrxof{T}_{:,1}=\mtrxofsb(Tv_1)=\begin{bmatrix}\cos{\frac\pi{n}}\\-\sin{\frac\pi{n}}\end{bmatrix} $$

and

$$ \mtrxof{T}_{:,2}=\mtrxofsb(Tv_2)=\begin{bmatrix}\sin{\frac\pi{n}}\\\cos{\frac\pi{n}}\end{bmatrix} $$

Hence

$$ \mtrxof{T}=\begin{bmatrix}\cos{\frac\pi{n}}&\sin{\frac\pi{n}}\\-\sin{\frac\pi{n}}&\cos{\frac\pi{n}}\end{bmatrix} $$

Proposition W.5.5 implies that

$$ \mtrxofsb(T^{n})=\mtrxof{T}^{n}=\begin{bmatrix}\cos{({n}\cdot\frac\pi{n})}&\sin{({n}\cdot\frac\pi{n})}\\-\sin{({n}\cdot\frac\pi{n})}&\cos{({n}\cdot\frac\pi{n})}\end{bmatrix} =\begin{bmatrix}\cos{\pi}&\sin{\pi}\\-\sin{\pi}&\cos{\pi}\end{bmatrix} =\begin{bmatrix}-1&0\\0&-1\end{bmatrix} =\mtrxofsb(-I) $$

Since $\mtrxofsb$ is an isomorphism (3.60, p.83) between $\lnmpsb(V,V)$ and $\mathbb{F}^{2,2}$, it is injective, so $T^n=-I$. Similarly

$$ \mtrxofsb(T^{2n})=\mtrxof{T}^{2n}=\begin{bmatrix}\cos{({2n}\cdot\frac\pi{n})}&\sin{({2n}\cdot\frac\pi{n})}\\-\sin{({2n}\cdot\frac\pi{n})}&\cos{({2n}\cdot\frac\pi{n})}\end{bmatrix} =\begin{bmatrix}\cos{2\pi}&\sin{2\pi}\\-\sin{2\pi}&\cos{2\pi}\end{bmatrix} =\begin{bmatrix}1&0\\0&1\end{bmatrix} =\mtrxof{I} $$

Or we can see that $T^{2n}=(T^n)^2=(-I)^2=I$. $\blacksquare$
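Numerically, for a sample value of $n$ (a minimal sketch assuming NumPy):

```python
import numpy as np

n = 5
M = np.array([[np.cos(np.pi / n),  np.sin(np.pi / n)],
              [-np.sin(np.pi / n), np.cos(np.pi / n)]])   # M(T)

print(np.allclose(np.linalg.matrix_power(M, n), -np.eye(2)))     # True: T^n = -I
print(np.allclose(np.linalg.matrix_power(M, 2 * n), np.eye(2)))  # True: T^2n = I
```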

Eigenvectors, Invariant Subspaces & Span

Proposition W.5.7 Let $T\in\lnmpsb(V)$ and let $v\in V$ be nonzero. Then $v$ is an eigenvector of $T$ if and only if $\text{span}(v)$ is invariant under $T$.

Proof Suppose $v\in V$ is an eigenvector of $T$ and let $u\in \text{span}(v)$. Then $Tv=\lambda_1v$ for some $\lambda_1\in\mathbb{F}$. And $u=\lambda_2v$ for some $\lambda_2\in\mathbb{F}$. Hence

$$ Tu=T(\lambda_2v)=\lambda_2Tv=\lambda_2\lambda_1v\in\text{span}(v) $$

Conversely, suppose $\text{span}(v)$ is invariant under $T$. Clearly $v=1\cdot v\in \text{span}(v)$. Hence $Tv\in \text{span}(v)$. Hence there exists $\lambda\in\mathbb{F}$ such that $Tv=\lambda v$. $\blacksquare$

Example W.5.8 Define $T\in\lnmpsb(\mathbb{R}^2)$ by

$$ Te_1=e_1 \quad\quad\quad\quad Te_2=e_2 $$

Note that $e_1$ and $e_2$ are both eigenvectors of $T$ corresponding to the same eigenvalue $1$, yet the list $e_1,e_2$ is linearly independent. So linearly independent eigenvectors need not correspond to distinct eigenvalues.

Let $x=x_1e_1+x_2e_2\in\mathbb{R}^2$. Then

$$ Tx=T(x_1e_1+x_2e_2)=x_1Te_1+x_2Te_2=x_1e_1+x_2e_2=x $$

So $T=I$ and $T-I=0$. Then

$$ \eignsb(1,T)=\mathscr{N}(T-I)=\mathbb{R}^2 $$

Proposition W.5.9 Let nonzero $v\in V$ and let $T\in\lnmpsb(V)$. Then $v$ is an eigenvector of $T$ corresponding to $\lambda$ if and only if $v\in\eignsb(\lambda,T)$.

Proof Suppose $v$ is an eigenvector of $T$ corresponding to $\lambda$. Then $Tv=\lambda v$ and

$$ (T-\lambda I)v=Tv-\lambda Iv=\lambda v-\lambda v=0 $$

Hence $v\in\mathscr{N}(T-\lambda I)=\eignsb(\lambda,T)$.

Conversely, suppose $v\in\eignsb(\lambda,T)$. Then $v\in\mathscr{N}(T-\lambda I)$ and

$$ 0=(T-\lambda I)v=Tv-\lambda Iv=Tv-\lambda v $$

Adding $\lambda v$ to both sides gives $Tv=\lambda v$. $\blacksquare$

Proposition W.5.10 Let $\lambda$ be an eigenvalue of $T\in\lnmpsb(V)$ such that $\dim{\eignsb(\lambda,T)}=1$. If $v\in V$ is an eigenvector corresponding to $\lambda$, then $\eignsb(\lambda,T)=\text{span}(v)$.

Proof Proposition W.5.9 implies that $v\in\eignsb(\lambda,T)$. Since $v$ is nonzero, the list $v$ is linearly independent, and since $\dim{\eignsb(\lambda,T)}=1$, proposition 2.39, p.45 gives that $v$ is a basis of $\eignsb(\lambda,T)$. Hence $\text{span}(v)=\eignsb(\lambda,T)$. $\blacksquare$

Proposition W.5.11 Let $\lambda$ be an eigenvalue of $T\in\lnmpsb(V)$. If $v_1,\dots,v_{\dim{\eignsb(\lambda,T)}}\in V$ is a list of linearly independent eigenvectors corresponding to $\lambda$, then $\eignsb(\lambda,T)=\text{span}(v_1,\dots,v_{\dim{\eignsb(\lambda,T)}})$.

Proof Proposition W.5.9 implies that $v_1,\dots,v_{\dim{\eignsb(\lambda,T)}}\in\eignsb(\lambda,T)$. Since $v_1,\dots,v_{\dim{\eignsb(\lambda,T)}}$ is linearly independent, then proposition 2.39, p.45 gives that $v_1,\dots,v_{\dim{\eignsb(\lambda,T)}}$ is a basis of $\eignsb(\lambda,T)$. Hence $\text{span}(v_1,\dots,v_{\dim{\eignsb(\lambda,T)}})=\eignsb(\lambda,T)$. $\blacksquare$

Proposition W.5.12 Let $T\in\lnmpsb(V)$ and let $\lambda\in\mathbb{F}$. Then $\lambda$ is an eigenvalue of $T$ if and only if $\eignsb(\lambda,T)\neq\{0\}$.

Proof Suppose $\lambda$ is an eigenvalue of $T$. Then there exists a nonzero $v\in V$ such that $Tv=\lambda v$. Equivalently

$$ 0=Tv-\lambda v=Tv-\lambda(Iv)=Tv-(I\lambda)v=(T-\lambda I)v $$

Hence $\mathscr{N}(T-\lambda I)$ contains a nonzero vector. That is

$$ \eignsb(\lambda,T)=\mathscr{N}(T-\lambda I)\neq\{0\} $$

Conversely, suppose $\eignsb(\lambda,T)\neq\{0\}$. Then $\mathscr{N}(T-\lambda I)\neq\{0\}$ and $\mathscr{N}(T-\lambda I)$ contains a nonzero vector. That is, there exists a nonzero vector $v$ such that

$$ 0=(T-\lambda I)v=Tv-(I\lambda)v=Tv-\lambda(Iv)=Tv-\lambda v\quad\blacksquare $$

Example W.5.13 Eigenvalues do not depend on basis: Define $T\in\lnmpsb(\mathbb{R}^2)$ by

$$ Te_1=e_1 \quad\quad\quad Te_2=e_2 $$

Then the only eigenvalue of $T$ is $1$ and $T$ is the identity map:

$$ T(x)=T(x_1,x_2)=T(x_1e_1+x_2e_2)=x_1Te_1+x_2Te_2=x_1e_1+x_2e_2=(x_1,x_2) $$

and

$$ \mtrxof{T}_{:,1}=\mtrxofsb(Te_1)=\mtrxofsb(e_1) \quad\quad\quad \mtrxof{T}_{:,2}=\mtrxofsb(Te_2)=\mtrxofsb(e_2) $$

and

$$ \mtrxof{T}=\begin{bmatrix}1&0\\0&1\end{bmatrix} $$

Now define a new basis $v_1=e_1$ and $v_2=e_1+e_2=(1,1)$. Then

$$ Tv_1=Te_1=e_1=v_1 $$

and

$$ Tv_2=T(e_1+e_2)=Te_1+Te_2=e_1+e_2=v_2 $$

Again $T$ is the identity map and has only eigenvalue $1$. Also

$$ \mtrxofsb(T, (v_1,v_2))_{:,1}=\mtrxofsb(Tv_1, (v_1,v_2))=\mtrxofsb(v_1, (v_1,v_2))=\mtrxofsb(1\cdot v_1+0\cdot v_2, (v_1,v_2)) \\ \mtrxofsb(T, (v_1,v_2))_{:,2}=\mtrxofsb(Tv_2, (v_1,v_2))=\mtrxofsb(v_2, (v_1,v_2))=\mtrxofsb(0\cdot v_1+1\cdot v_2, (v_1,v_2)) $$

and

$$ \mtrxofsb(T, (v_1,v_2))=\begin{bmatrix}1&0\\0&1\end{bmatrix} $$

Interesting: when the domain and codomain carry different bases, the matrix of the identity map need not be the identity matrix:

$$\begin{align*} \mtrxofsb(T, (e_1,e_2), (v_1,v_2))_{:,1} &= \mtrxofsb(Te_1, (e_1,e_2), (v_1,v_2)) \\ &= \mtrxofsb(e_1, (v_1,v_2)) \\ &= \mtrxofsb(v_1, (v_1,v_2)) \\ &= \mtrxofsb(1\cdot v_1+0\cdot v_2, (v_1,v_2)) \end{align*}$$

and

$$\begin{align*} \mtrxofsb(T, (e_1,e_2), (v_1,v_2))_{:,2} &= \mtrxofsb(Te_2, (e_1,e_2), (v_1,v_2)) \\ &= \mtrxofsb(e_2, (v_1,v_2)) \\ &= \mtrxofsb(v_2-e_1, (v_1,v_2)) \\ &= \mtrxofsb(v_2-v_1, (v_1,v_2)) \\ &= \mtrxofsb(-v_1+v_2, (v_1,v_2)) \end{align*}$$

and

$$ \mtrxofsb(T, (e_1,e_2), (v_1,v_2))=\begin{bmatrix}1&-1\\0&1\end{bmatrix} $$
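We can reproduce this mixed-basis matrix numerically: column $k$ of $\mtrxofsb(T,(e_1,e_2),(v_1,v_2))$ holds the coordinates of $Te_k$ in the basis $v_1,v_2$, found by solving a linear system (a minimal sketch assuming NumPy):

```python
import numpy as np

V = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns are v1 = (1,0) and v2 = (1,1)
T = np.eye(2)                # T is the identity map in the standard basis

# Column k of M(T,(e1,e2),(v1,v2)) solves V @ c = T @ e_k; solving for all
# columns at once gives V^{-1} T.
print(np.linalg.solve(V, T))   # [[ 1. -1.], [ 0.  1.]]
```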

Example 5.37, p.155

Show that $\eignsp{8}{T}=\span{v_1}$ and $\eignsp{5}{T}=\span{v_2,v_3}$.

Put $v_B\equiv v_1,v_2,v_3$ and note that

$$ \mtrxofsb(Tv_1,v_B)=\mtrxof{T,v_B}_{:,1}=\begin{bmatrix}8\\0\\0\end{bmatrix} $$

Hence $Tv_1=8v_1+0v_2+0v_3=8v_1$. Similarly $Tv_2=5v_2$ and $Tv_3=5v_3$. Note that $v_1$ and $8$ form an eigenpair of $T$ since $Tv_1=8v_1$. Similarly $v_2$ and $5$ form an eigenpair of $T$. And $v_3$ and $5$ form an eigenpair of $T$.

Let $a\in\eignsp{8}{T}=\nullsp{T-8I}\subset V$. Then $a=\sum_{k=1}^3a_kv_k$ for some scalars $a_1,a_2,a_3\in\wF$ and

$$\align{ 0 &= (T-8I)a \\ &= (T-8I)(a_1v_1+a_2v_2+a_3v_3) \\ &= a_1(T-8I)v_1+a_2(T-8I)v_2+a_3(T-8I)v_3 \\ &= a_1(Tv_1-8Iv_1)+a_2(Tv_2-8Iv_2)+a_3(Tv_3-8Iv_3) \\ &= a_1(8v_1-8v_1)+a_2(5v_2-8v_2)+a_3(5v_3-8v_3) \\ &= a_2(-3v_2)+a_3(-3v_3) \\ &= (-3a_2)v_2+(-3a_3)v_3 }$$

Then the linear independence of $v_2,v_3$ implies that $0=-3a_2=-3a_3$ and hence that $0=a_2=a_3$. Then

$$ \eignsp{8}{T}=\nullsp{T-8I}=\setb{a_1v_1\in V: a_1\in\wF}=\span{v_1} $$

Let $a\in\eignsp{5}{T}=\nullsp{T-5I}\subset V$. Then $a=\sum_{k=1}^3a_kv_k$ and

$$\align{ 0 &= (T-5I)a \\ &= (T-5I)(a_1v_1+a_2v_2+a_3v_3) \\ &= a_1(T-5I)v_1+a_2(T-5I)v_2+a_3(T-5I)v_3 \\ &= a_1(Tv_1-5Iv_1)+a_2(Tv_2-5Iv_2)+a_3(Tv_3-5Iv_3) \\ &= a_1(8v_1-5v_1)+a_2(5v_2-5v_2)+a_3(5v_3-5v_3) \\ &= a_1(3v_1) \\ &= (3a_1)v_1 }$$

Then the linear independence of $v_1$ implies that $0=3a_1$ and hence that $0=a_1$. Then

$$ \eignsp{5}{T}=\nullsp{T-5I}=\setb{a_2v_2+a_3v_3\in V: a_2,a_3\in\wF}=\span{v_2,v_3} $$

There are alternative ways to compute these eigenspaces, which I detail next.

Claim 1 $\eignsb(8,T)=\text{span}(v_1)$

Proof Note that $(T-8I)\in\lnmpsb(V)$ is defined by

$$\begin{align*} (T-8I)v_1&=Tv_1-8Iv_1=8v_1-8v_1=0 \\ (T-8I)v_2&=Tv_2-8Iv_2=5v_2-8v_2=-3v_2 \\ (T-8I)v_3&=Tv_3-8Iv_3=5v_3-8v_3=-3v_3 \end{align*}$$

We see that the length of the largest linearly independent sublist of $(T-8I)v_1,(T-8I)v_2,(T-8I)v_3$ is 2. Then proposition W.3.10.b gives $\dim{\mathscr{R}(T-8I)}=2$. Then the Fundamental Theorem of Linear Maps gives

$$ \dim{\eignsb(8,T)}=\dim{\mathscr{N}(T-8I)}=\dim{V}-\dim{\mathscr{R}(T-8I)}=3-2=1 $$

Since $\dim{\eignsb(8,T)}=1$ and $v_1$ is an eigenvector of $T$ corresponding to $\lambda=8$, then proposition W.5.10 implies that $\eignsb(8,T)=\text{span}(v_1)$. $\blacksquare$

Note Proposition 3.117, p.112 gives an alternative way to compute $\dim{\mathscr{R}(T-8I)}$ with the rank of $\mtrxof{T}$. This matrix is given by

$$ \mtrxof{T-8I}=\mtrxof{T}-8\mtrxof{I}=\begin{bmatrix}8&0&0\\0&5&0\\0&0&5\end{bmatrix}-\begin{bmatrix}8&0&0\\0&8&0\\0&0&8\end{bmatrix}=\begin{bmatrix}0&0&0\\0&-3&0\\0&0&-3\end{bmatrix} $$

Propositions 3.36 and 3.38, p.73 give the first equality. Recall from definition 3.115, p.111 that the rank is the dimension of the span of the columns. Here the dimension of the column span is clearly $2$, but we can also verify it: the zero first column cannot appear in any linearly independent list, so we discard it and check that the remaining two columns are linearly independent.

$$ \begin{bmatrix}0\\0\\0\end{bmatrix}=a\begin{bmatrix}0\\-3\\0\end{bmatrix}+b\begin{bmatrix}0\\0\\-3\end{bmatrix}=\begin{bmatrix}0\\-3a\\-3b\end{bmatrix} $$

This becomes the system of equations

$$ -3a=0\\-3b=0 $$

Hence $0=a=b$ is the only solution and the last two columns of $\mtrxof{T-8I}$ are linearly independent.

Lastly we recall that the rank equals the number of pivots in the upper-triangular (or diagonal) matrix. Again we can see that there are $2$ pivots and hence the rank is $2$.

Claim 2 $\eignsb(5,T)=\text{span}(v_2,v_3)$

Proof Recall from the proof of Claim 1 that $Tv_1=8v_1,Tv_2=5v_2,Tv_3=5v_3$. Hence $v_2$ and $v_3$ are linearly independent eigenvectors corresponding to the eigenvalue $\lambda=5$. Again we can compute $\dim{\mathscr{R}(T-5I)}$ in a couple of ways. Note that $(T-5I)\in\lnmpsb(V)$ is defined by

$$\begin{align*} (T-5I)v_1&=Tv_1-5Iv_1=8v_1-5v_1=3v_1 \\ (T-5I)v_2&=Tv_2-5Iv_2=5v_2-5v_2=0 \\ (T-5I)v_3&=Tv_3-5Iv_3=5v_3-5v_3=0 \end{align*}$$

We see that the length of the largest linearly independent sublist of $(T-5I)v_1,(T-5I)v_2,(T-5I)v_3$ is 1. Then proposition W.3.10.b gives $\dim{\mathscr{R}(T-5I)}=1$. Then the Fundamental Theorem of Linear Maps gives

$$ \dim{\eignsb(5,T)}=\dim{\mathscr{N}(T-5I)}=\dim{V}-\dim{\mathscr{R}(T-5I)}=3-1=2 $$

Since $\dim{\eignsb(5,T)}=2$ and $v_2,v_3$ are linearly independent eigenvectors of $T$ corresponding to $\lambda=5$, then proposition W.5.11 implies that $\eignsb(5,T)=\text{span}(v_2,v_3)$. $\blacksquare$

Note Proposition 3.117, p.112 gives an alternative way to compute $\dim{\mathscr{R}(T-5I)}$ with the rank of $\mtrxof{T}$. This matrix is given by

$$ \mtrxof{T-5I}=\mtrxof{T}-5\mtrxof{I}=\begin{bmatrix}8&0&0\\0&5&0\\0&0&5\end{bmatrix}-\begin{bmatrix}5&0&0\\0&5&0\\0&0&5\end{bmatrix}=\begin{bmatrix}3&0&0\\0&0&0\\0&0&0\end{bmatrix} $$

Propositions 3.36 and 3.38, p.73 give the first equality. Recall from definition 3.115, p.111 that the rank is the dimension of the span of the columns. We can clearly see that the dimension of the column span is $1$.

Lastly we recall that the rank equals the number of pivots in the upper-triangular (or diagonal) matrix. Again we can see that there is $1$ pivot and hence the rank is $1$.
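Both rank computations can be confirmed numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.diag([8.0, 5.0, 5.0])   # M(T) with respect to v1, v2, v3
I = np.eye(3)

print(np.linalg.matrix_rank(A - 8 * I))  # 2, so dim E(8,T) = 3 - 2 = 1
print(np.linalg.matrix_rank(A - 5 * I))  # 1, so dim E(5,T) = 3 - 1 = 2
```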

Example 5.40, p.157

Define $T\in\lnmpsb(\mathbb{R}^2)$ by

$$ T(x,y)\equiv(41x+7y,-20x+74y) $$

Then

$$\begin{align*} Te_1 &= T(1,0) \\ &= (41\cdot1+7\cdot0,-20\cdot1+74\cdot0) \\ &= (41,-20) \\ &= (41,0)+(0,-20) \\ &= 41(1,0)-20(0,1) \\ &= 41e_1-20e_2 \end{align*}$$

And

$$\begin{align*} Te_2 &= T(0,1) \\ &= (41\cdot0+7\cdot1,-20\cdot0+74\cdot1) \\ &= (7,74) \\ &= (7,0)+(0,74) \\ &= 7(1,0)+74(0,1) \\ &= 7e_1+74e_2 \end{align*}$$

Let’s check that $(1,4),(7,5)$ spans $\mathbb{R}^2$:

$$ (x,y)=a(1,4)+b(7,5)=(a,4a)+(7b,5b)=(a+7b,4a+5b) $$

This becomes the system of equations:

$$ a+7b=x\\ 4a+5b=y $$

Then

$$ \begin{bmatrix}1&7&x\\4&5&y\end{bmatrix} \rightarrow \begin{bmatrix}1&7&x\\0&5-28&y-4x\end{bmatrix} = \begin{bmatrix}1&7&x\\0&-23&y-4x\end{bmatrix} $$

$$ \rightarrow \begin{bmatrix}1&7&x\\0&1&\frac{4x-y}{23}\end{bmatrix} \rightarrow \begin{bmatrix}1&0&x-7\cdot\frac{4x-y}{23}\\0&1&\frac{4x-y}{23}\end{bmatrix} $$

Let’s check that $(1,4),(7,5)$ is linearly independent in $\mathbb{R}^2$:

$$ (0,0)=a(1,4)+b(7,5)=(a,4a)+(7b,5b)=(a+7b,4a+5b) $$

This becomes the system of equations:

$$ a+7b=0\\ 4a+5b=0 $$

Then

$$ \begin{bmatrix}1&7&0\\4&5&0\end{bmatrix} \rightarrow \begin{bmatrix}1&7&0\\0&5-28&0\end{bmatrix} = \begin{bmatrix}1&7&0\\0&-23&0\end{bmatrix} $$

$$ \rightarrow \begin{bmatrix}1&7&0\\0&1&0\end{bmatrix} \rightarrow \begin{bmatrix}1&0&0\\0&1&0\end{bmatrix} \rightarrow 0=a=b $$
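Both checks come down to the invertibility of the matrix whose columns are $(1,4)$ and $(7,5)$: a nonzero determinant gives spanning and independence at once (a minimal sketch assuming NumPy):

```python
import numpy as np

B = np.array([[1.0, 7.0],
              [4.0, 5.0]])     # columns are (1,4) and (7,5)

print(np.linalg.det(B))        # approximately -23: nonzero, so the columns form a basis of R^2
a, b = np.linalg.solve(B, [3.0, -2.0])   # coordinates of the sample point (3,-2)
print(np.allclose(a * B[:, 0] + b * B[:, 1], [3.0, -2.0]))   # True
```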

So our basis is $v_1,v_2$ where $v_1=(1,4)$ and $v_2=(7,5)$. We can compute the columns of $\mtrxofsb\big(T,(v_1,v_2)\big)$:

$$\begin{align*} Tv_1 &= T(1,4) \\ &= (41\cdot1+7\cdot4,-20\cdot1+74\cdot4) \\ &= (41+28,-20+296) \\ &= (69,276) \\ &= (69\cdot1,69\cdot4) \\ &= 69(1,4) \\ &= 69(1,4)+0(7,5) \\ &= 69v_1+0v_2 \end{align*}$$

and

$$\begin{align*} Tv_2 &= T(7,5) \\ &= (41\cdot7+7\cdot5,-20\cdot7+74\cdot5) \\ &= (287+35,-140+370) \\ &= (322,230) \\ &= (46\cdot7,46\cdot5) \\ &= 46(7,5) \\ &= 0(1,4)+46(7,5) \\ &= 0v_1+46v_2 \end{align*}$$
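The change of basis $B^{-1}\mtrxof{T}B$, where the columns of $B$ are $v_1=(1,4)$ and $v_2=(7,5)$, should therefore be the diagonal matrix with entries $69$ and $46$ (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.array([[41.0, 7.0],
              [-20.0, 74.0]])   # M(T) in the standard basis
B = np.array([[1.0, 7.0],
              [4.0, 5.0]])      # columns are v1 = (1,4) and v2 = (7,5)

print(np.linalg.inv(B) @ A @ B)   # approximately [[69, 0], [0, 46]]
```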

p.157, 5.41, Conditions equivalent to Diagonalizability

We want to show (a)$\iff$(b). That is, we want to show that $T$ is diagonalizable if and only if $V$ has a basis consisting of eigenvectors of $T$.

For concreteness, suppose $\dim{V}=6$. Then any operator on $V$ has at most $\dim{V}=6$ distinct eigenvalues (by 5.13).

First suppose that $T$ is diagonalizable. Then $T$ has a diagonal matrix with respect to some basis of $V$. That is, $\mtrxof{T,(v_1,\dots,v_6)}$ is a diagonal matrix for some basis $v_1,\dots,v_6$ of $V$:

$$ \mtrxof{T,(v_1,\dots,v_6)}=\bmtrx{\lambda_1&0&0&0&0&0\\0&\lambda_2&0&0&0&0\\0&0&\lambda_3&0&0&0\\0&0&0&\lambda_3&0&0\\0&0&0&0&\lambda_2&0\\0&0&0&0&0&\lambda_4} \tag{W.5.CeD.1} $$

Proposition 3.64 (without the typo) gives $\mtrxof{Tv_k}=\mtrxof{T}_{:,k}$. With the basis included in the notation, this is

$$ \mtrxof{Tv_k,(v_1,\dots,v_6)}=\mtrxof{T,(v_1,\dots,v_6)}_{:,k} $$

Hence

$$\align{ \mtrxof{Tv_1}=\mtrxof{T}_{:,1}=\bmtrx{\lambda_1\\0\\0\\0\\0\\0} \quad\quad \mtrxof{Tv_2}=\mtrxof{T}_{:,2}=\bmtrx{0\\\lambda_2\\0\\0\\0\\0} \quad\quad \mtrxof{Tv_3}=\mtrxof{T}_{:,3}=\bmtrx{0\\0\\\lambda_3\\0\\0\\0} \\ \tag{W.5.CeD.2} \\ \mtrxof{Tv_4}=\mtrxof{T}_{:,4}=\bmtrx{0\\0\\0\\\lambda_3\\0\\0} \quad\quad \mtrxof{Tv_5}=\mtrxof{T}_{:,5}=\bmtrx{0\\0\\0\\0\\\lambda_2\\0} \quad\quad \mtrxof{Tv_6}=\mtrxof{T}_{:,6}=\bmtrx{0\\0\\0\\0\\0\\\lambda_4} }$$

Hence definition 3.62 gives

$$\align{ &Tv_1=\lambda_1v_1+0v_2+0v_3+0v_4+0v_5+0v_6=\lambda_1v_1 \\ &Tv_2=0v_1+\lambda_2v_2+0v_3+0v_4+0v_5+0v_6=\lambda_2v_2 \\ &Tv_3=0v_1+0v_2+\lambda_3v_3+0v_4+0v_5+0v_6=\lambda_3v_3 \\ &Tv_4=0v_1+0v_2+0v_3+\lambda_3v_4+0v_5+0v_6=\lambda_3v_4 \tag{W.5.CeD.3} \\ &Tv_5=0v_1+0v_2+0v_3+0v_4+\lambda_2v_5+0v_6=\lambda_2v_5 \\ &Tv_6=0v_1+0v_2+0v_3+0v_4+0v_5+\lambda_4v_6=\lambda_4v_6 }$$

Hence $V$ has a basis consisting of eigenvectors of $T$. As we can see, the corresponding eigenvalues are the diagonal entries of $\mtrxof{T,(v_1,\dots,v_6)}$.

Notice that we reasoned that

$$ T\text{ diagonalizable}\implies \text{ W.5.CeD.1}\implies \text{W.5.CeD.2} \implies \text{W.5.CeD.3} \implies V\text{ has basis of eigenvectors of }T $$

It’s a good exercise to reason in reverse:

$$ V\text{ has basis of eigenvectors of }T\implies \text{ W.5.CeD.3}\implies \text{W.5.CeD.2} \implies \text{W.5.CeD.1} \implies T\text{ diagonalizable} $$

Example 5.43, p.159

This example defines $T\in\oper{\wC^2}$ by $T(w,z)\equiv(z,0)$. Let $\lambda$ be an eigenvalue of $T$. The eigenpair equation is

$$ (\lambda w,\lambda z)=\lambda(w,z)=T(w,z)=(z,0) $$

This becomes the system of equations

$$ z=\lambda w\\ 0=\lambda z $$

If $\lambda\neq0$, then $z=0$. Then $\lambda w=0$. Hence $w=0$. But $(w,z)=(0,0)$ cannot be an eigenvector because eigenvectors are nonzero.

If $\lambda=0$, then $z=\lambda w=0\cdot w=0$ and $w$ can be anything. Hence

$$ \eignsp{0}{T}=\setb{(w,0)\in\wC^2:w\in\wC} $$

Note that $\dim{\eignsp{0}{T}}=1$ since the standard basis vector $e_1=(1,0)$ spans $\eignsp{0}{T}$ and is linearly independent. Hence condition (b) from prop 5.41 fails: every eigenvector of $T$ lies in the one-dimensional subspace $\eignsp{0}{T}$, so no list of eigenvectors of $T$ can span $\wC^2$.

For condition (c), suppose there exist $1$-dimensional subspaces $U_1,U_2$ of $\wC^2$, each invariant under $T$, such that $\wC^2=U_1\oplus U_2$. Select any nonzero vectors $u_1,u_2$ from each. Then $Tu_1\in U_1$ and $Tu_2\in U_2$. Since $\dim{U_1}=\dim{U_2}=1$, then any vector in $U_1$ is a scalar multiple of $u_1$ and any vector in $U_2$ is a scalar multiple of $u_2$. In particular $Tu_1=\lambda_1u_1$ and $Tu_2=\lambda_2u_2$ for some scalars $\lambda_1,\lambda_2$. That is, $\lambda_1$ is an eigenvalue of $T$ with eigenvector $u_1$ and $\lambda_2$ is an eigenvalue of $T$ with eigenvector $u_2$.

But we showed that $\lambda=0$ is the only eigenvalue of $T$. Hence $0=\lambda_1=\lambda_2$. We also showed that any eigenvector of $T$ corresponding to eigenvalue $0$ is of the form $(w,0)$. Hence $u_1$ and $u_2$ must take the form $(w,0)$. Since $u_1$ and $u_2$ were chosen arbitrarily, then any vectors $x_1\in U_1$, $x_2\in U_2$ must take the form

$$ x_1=(w_1,0) \quad\quad\quad\quad x_2=(w_2,0) $$

But then it cannot be that $\wC^2=U_1\oplus U_2$ because any $(w,z)\in\wC^2$ with $z\neq0$ cannot be written as the sum of a vector from $U_1$ and vector from $U_2$.

Condition (d) clearly fails because $\wC^2\neq\eignsb(0,T)$. For the same reason, proposition W.2.12 implies that $\dim{\wC^2}\neq\dim{\eignsb(0,T)}$ so condition (e) fails too.
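All of this is visible numerically. In the standard basis $\mtrxof{T}=\begin{bmatrix}0&1\\0&0\end{bmatrix}$, since $Te_1=T(1,0)=(0,0)$ and $Te_2=T(0,1)=(1,0)$, and an eigendecomposition confirms that $0$ is the only eigenvalue and the eigenvectors span only a line (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # M(T) for T(w,z) = (z,0)

vals, vecs = np.linalg.eig(A)
print(vals)                         # [0. 0.]: the only eigenvalue is 0
print(np.linalg.matrix_rank(vecs))  # 1: the computed eigenvectors span only a
                                    # line, so no eigenvector basis of C^2 exists
```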

Examples of Eigenspaces

Example W.5.14 Let $e_1,e_2,e_3$ be the standard basis of $\wR^3$ and define $T\in\oper{\wR^3}$ by

$$ Te_1=3e_1 \quad\quad\quad\quad Te_2=3e_2 \quad\quad\quad\quad Te_3=2e_3 $$

Then

$$ \mtrxof{T}=\begin{bmatrix}3&0&0\\0&3&0\\0&0&2\end{bmatrix} $$

Then 5.32 implies that the eigenvalues of $T$ are precisely $3$ and $2$.

Let $v\in\eignsb(3,T)=\nullsp{T-3I}\subset\wR^3$. Then

$$ v=a_1e_1+a_2e_2+a_3e_3 $$

and

$$\begin{align*} 0 &= (T-3I)v \\ &= (T-3I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-3I)e_1+a_2(T-3I)e_2+a_3(T-3I)e_3 \\ &= a_1(Te_1-3Ie_1)+a_2(Te_2-3Ie_2)+a_3(Te_3-3Ie_3) \\ &= a_1(3e_1-3e_1)+a_2(3e_2-3e_2)+a_3(2e_3-3e_3) \\ &= a_1\cdot0+a_2\cdot0+a_3(-e_3) \\ &= -a_3e_3 \\ \end{align*}$$

Hence $0=-a_3$ and hence $0=a_3$ and

$$ \eignsb(3,T)=\mathscr{N}(T-3I)=\setb{(a_1,a_2,0)\in\wR^3:a_1,a_2\in\wR} $$

Similarly, let $v\in\eignsb(2,T)=\nullsp{T-2I}\subset\wR^3$. Then

$$\begin{align*} 0 &= (T-2I)v \\ &= (T-2I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-2I)e_1+a_2(T-2I)e_2+a_3(T-2I)e_3 \\ &= a_1(Te_1-2Ie_1)+a_2(Te_2-2Ie_2)+a_3(Te_3-2Ie_3) \\ &= a_1(3e_1-2e_1)+a_2(3e_2-2e_2)+a_3(2e_3-2e_3) \\ &= a_1e_1+a_2e_2+a_3\cdot0 \\ &= a_1e_1+a_2e_2 \end{align*}$$

Then the linear independence of $e_1,e_2$ implies that $0=a_1=a_2$. Hence

$$ \eignsb(2,T)=\mathscr{N}(T-2I)=\setb{(0,0,a_3)\in\wR^3:a_3\in\wR} $$

Define

$$ U_1=\text{span}(e_1) \quad\quad\quad U_2=\text{span}(e_2) \quad\quad\quad U_3=\text{span}(e_3) $$

Each $U_k$ is a $1$-dimensional subspace of $\wR^3$ that is invariant under $T$, since $Te_k$ is a scalar multiple of $e_k$. And $\wR^3=U_1\oplus U_2\oplus U_3$, which illustrates condition (c) of 5.41.
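We can also confirm the eigenspace dimensions numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.diag([3.0, 3.0, 2.0])   # M(T) from Example W.5.14
I = np.eye(3)
print(3 - np.linalg.matrix_rank(A - 3 * I))   # 2 = dim E(3,T)
print(3 - np.linalg.matrix_rank(A - 2 * I))   # 1 = dim E(2,T)
```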

Example W.5.15 Let $e_1,e_2,e_3$ be the standard basis of $\wR^3$ and define $T\in\oper{\wR^3}$ by

$$ Te_1=3e_1 \quad\quad\quad\quad Te_2=7e_1+3e_2 \quad\quad\quad\quad Te_3=41e_1+8e_2+2e_3 $$

Then

$$ \mtrxof{T}=\begin{bmatrix}3&7&41\\0&3&8\\0&0&2\end{bmatrix} $$

Then 5.32 implies that the eigenvalues of $T$ are precisely $3$ and $2$.

Let $v\in\eignsb(3,T)=\nullsp{T-3I}\subset\wR^3$. Then

$$ v=a_1e_1+a_2e_2+a_3e_3 $$

and

$$\begin{align*} 0 &= (T-3I)v \\ &= (T-3I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-3I)e_1+a_2(T-3I)e_2+a_3(T-3I)e_3 \\ &= a_1(Te_1-3Ie_1)+a_2(Te_2-3Ie_2)+a_3(Te_3-3Ie_3) \\ &= a_1(3e_1-3e_1)+a_2(7e_1+3e_2-3e_2)+a_3(41e_1+8e_2+2e_3-3e_3) \\ &= a_1\cdot0+a_2(7e_1)+a_3(41e_1+8e_2-e_3) \\ &= (7a_2)e_1+a_3(41e_1)+a_3(8e_2)+a_3(-e_3) \\ &= (7a_2)e_1+(41a_3)e_1+(8a_3)e_2+(-a_3)e_3 \\ &= (7a_2+41a_3)e_1+(8a_3)e_2+(-a_3)e_3 \end{align*}$$

Then the linear independence of $e_1,e_2,e_3$ implies that $0=-a_3$ and hence $0=a_3$. This linear independence also implies that $0=7a_2+41a_3=7a_2$ and hence $0=a_2$. Hence

$$ \eignsb(3,T)=\mathscr{N}(T-3I)=\setb{(a_1,0,0)\in\wR^3:a_1\in\wR} $$

Similarly, let $v\in\eignsb(2,T)=\nullsp{T-2I}\subset\wR^3$. Then

$$\begin{align*} 0 &= (T-2I)v \\ &= (T-2I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-2I)e_1+a_2(T-2I)e_2+a_3(T-2I)e_3 \\ &= a_1(Te_1-2Ie_1)+a_2(Te_2-2Ie_2)+a_3(Te_3-2Ie_3) \\ &= a_1(3e_1-2e_1)+a_2(7e_1+3e_2-2e_2)+a_3(41e_1+8e_2+2e_3-2e_3) \\ &= a_1(e_1)+a_2(7e_1+e_2)+a_3(41e_1+8e_2) \\ &= (a_1)e_1+a_2(7e_1)+a_2(e_2)+a_3(41e_1)+a_3(8e_2) \\ &= (a_1)e_1+(7a_2)e_1+(a_2)e_2+(41a_3)e_1+(8a_3)e_2 \\ &= (a_1)e_1+(7a_2)e_1+(41a_3)e_1+(a_2)e_2+(8a_3)e_2 \\ &= (a_1+7a_2+41a_3)e_1+(a_2+8a_3)e_2 \\ \end{align*}$$

Then the linear independence of $e_1,e_2$ implies that $a_2=-8a_3$. Hence

$$\align{ 0 &= a_1+7a_2+41a_3 \\ &= a_1+7(-8)a_3+41a_3 \\ &= a_1-56a_3+41a_3 \\ &= a_1-15a_3 }$$

Hence $a_1=15a_3$ and

$$ \eignsb(2,T)=\mathscr{N}(T-2I)=\setb{(15a_3,-8a_3,a_3)\in\wR^3:a_3\in\wR} $$

We can visualize these eigenspaces as two lines through the origin in $\wR^3$: $\eignsb(3,T)$ is the $e_1$-axis and $\eignsb(2,T)$ is the line spanned by $(15,-8,1)$.
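A quick numerical check of the two eigenpairs (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 7.0, 41.0],
              [0.0, 3.0, 8.0],
              [0.0, 0.0, 2.0]])   # M(T) from Example W.5.15

u = np.array([1.0, 0.0, 0.0])    # spans E(3,T)
v = np.array([15.0, -8.0, 1.0])  # spans E(2,T)
print(np.allclose(A @ u, 3 * u), np.allclose(A @ v, 2 * v))  # True True
```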

Example W.5.16 Let $e_1,e_2,e_3$ be the standard basis of $\wR^3$ and define $T\in\oper{\wR^3}$ by

$$ Te_1=3e_1 \quad\quad\quad\quad Te_2=7e_1+4e_2 \quad\quad\quad\quad Te_3=41e_1+8e_2+2e_3 $$

Then

$$ \mtrxof{T}=\begin{bmatrix}3&7&41\\0&4&8\\0&0&2\end{bmatrix} $$

Then 5.32 implies that the eigenvalues of $T$ are precisely $3$, $4$, and $2$.

Let $v\in\eignsb(3,T)=\nullsp{T-3I}\subset\wR^3$. Then

$$ v=a_1e_1+a_2e_2+a_3e_3 $$

and

$$\begin{align*} 0 &= (T-3I)v \\ &= (T-3I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-3I)e_1+a_2(T-3I)e_2+a_3(T-3I)e_3 \\ &= a_1(Te_1-3Ie_1)+a_2(Te_2-3Ie_2)+a_3(Te_3-3Ie_3) \\ &= a_1(3e_1-3e_1)+a_2(7e_1+4e_2-3e_2)+a_3(41e_1+8e_2+2e_3-3e_3) \\ &= a_1\cdot0+a_2(7e_1+e_2)+a_3(41e_1+8e_2-e_3) \\ &= a_2(7e_1)+a_2(e_2)+a_3(41e_1)+a_3(8e_2)+a_3(-e_3) \\ &= (7a_2)e_1+(a_2)e_2+(41a_3)e_1+(8a_3)e_2+(-a_3)e_3 \\ &= (7a_2)e_1+(41a_3)e_1+(a_2)e_2+(8a_3)e_2+(-a_3)e_3 \\ &= (7a_2+41a_3)e_1+(a_2+8a_3)e_2+(-a_3)e_3 \end{align*}$$

Then the linear independence of $e_1,e_2,e_3$ implies that $0=-a_3$ and hence $0=a_3$. This linear independence also implies that $0=7a_2+41a_3=7a_2$ and hence $0=a_2$. Hence

$$ \eignsb(3,T)=\mathscr{N}(T-3I)=\setb{(a_1,0,0)\in\wR^3:a_1\in\wR} $$

Similarly, let $v\in\eignsb(4,T)=\nullsp{T-4I}\subset\wR^3$. Then

$$ v=a_1e_1+a_2e_2+a_3e_3 $$

and

$$\begin{align*} 0 &= (T-4I)v \\ &= (T-4I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-4I)e_1+a_2(T-4I)e_2+a_3(T-4I)e_3 \\ &= a_1(Te_1-4Ie_1)+a_2(Te_2-4Ie_2)+a_3(Te_3-4Ie_3) \\ &= a_1(3e_1-4e_1)+a_2(7e_1+4e_2-4e_2)+a_3(41e_1+8e_2+2e_3-4e_3) \\ &= a_1(-e_1)+a_2(7e_1)+a_3(41e_1+8e_2-2e_3) \\ &= (-a_1)e_1+(7a_2)e_1+a_3(41e_1)+a_3(8e_2)+a_3(-2e_3) \\ &= (-a_1)e_1+(7a_2)e_1+(41a_3)e_1+(8a_3)e_2+(-2a_3)e_3 \\ &= (-a_1+7a_2+41a_3)e_1+(8a_3)e_2+(-2a_3)e_3 \\ \end{align*}$$

Then the linear independence of $e_1,e_2,e_3$ implies that $0=-2a_3$ and hence $0=a_3$. This linear independence also implies that $0=-a_1+7a_2+41a_3=-a_1+7a_2$ and hence $a_1=7a_2$. Hence

$$ \eignsb(4,T)=\mathscr{N}(T-4I)=\setb{(7a_2,a_2,0)\in\wR^3:a_2\in\wR} $$

Similarly, let $v\in\eignsb(2,T)=\nullsp{T-2I}\subset\wR^3$. Then

$$\begin{align*} 0 &= (T-2I)v \\ &= (T-2I)(a_1e_1+a_2e_2+a_3e_3) \\ &= a_1(T-2I)e_1+a_2(T-2I)e_2+a_3(T-2I)e_3 \\ &= a_1(Te_1-2Ie_1)+a_2(Te_2-2Ie_2)+a_3(Te_3-2Ie_3) \\ &= a_1(3e_1-2e_1)+a_2(7e_1+4e_2-2e_2)+a_3(41e_1+8e_2+2e_3-2e_3) \\ &= a_1(e_1)+a_2(7e_1+2e_2)+a_3(41e_1+8e_2) \\ &= (a_1)e_1+a_2(7e_1)+a_2(2e_2)+a_3(41e_1)+a_3(8e_2) \\ &= (a_1)e_1+(7a_2)e_1+(2a_2)e_2+(41a_3)e_1+(8a_3)e_2 \\ &= (a_1)e_1+(7a_2)e_1+(41a_3)e_1+(2a_2)e_2+(8a_3)e_2 \\ &= (a_1+7a_2+41a_3)e_1+(2a_2+8a_3)e_2 \\ \end{align*}$$

Then the linear independence of $e_1,e_2$ implies that $0=2a_2+8a_3$, equivalently $a_2=-4a_3$. Hence

$$\align{ 0 &= a_1+7a_2+41a_3 \\ &= a_1+7(-4)a_3+41a_3 \\ &= a_1-28a_3+41a_3 \\ &= a_1+13a_3 }$$

Hence $a_1=-13a_3$ and

$$ \eignsb(2,T)=\mathscr{N}(T-2I)=\setb{(-13a_3,-4a_3,a_3)\in\wR^3:a_3\in\wR} $$
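Again the three eigenpairs can be confirmed numerically (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.array([[3.0, 7.0, 41.0],
              [0.0, 4.0, 8.0],
              [0.0, 0.0, 2.0]])   # M(T) from Example W.5.16

for lam, v in [(3, np.array([1.0, 0.0, 0.0])),     # spans E(3,T)
               (4, np.array([7.0, 1.0, 0.0])),     # spans E(4,T)
               (2, np.array([-13.0, -4.0, 1.0]))]: # spans E(2,T)
    print(lam, np.allclose(A @ v, lam * v))        # all True
```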

Alternate Solution to Exercise 5.C.7

(7) Suppose $T\in\lnmpsb(V)$ has a diagonal matrix $A$ with respect to some basis of $V$. And let $\lambda\in\mathbb{F}$. Then $\lambda$ appears on the diagonal of $A$ precisely $\dim{\eignsb(\lambda,T)}$ times.

Proof Let $v_1,\dots,v_n$ be the basis with respect to which $T$ has the diagonal matrix $A$. Then

$$ Tv_1=\lambda_1v_1\\ Tv_2=\lambda_2v_2\\ \vdots\\ Tv_n=\lambda_nv_n $$

Let $\lambda_{i_1},\dots,\lambda_{i_m}$ be the list of distinct eigenvalues. And let $\phi_k$ be the list of indices $j$ such that $v_j$ is an eigenvector corresponding to $\lambda_{i_k}$. Note that $\text{len}(\phi_k)$ equals the number of times that $\lambda_{i_k}$ appears on the diagonal. So it suffices to show that $\text{len}(\phi_k)=\dim{\eignsb(\lambda_{i_k},T)}$.

We will show that $\eignsb(\lambda_{i_k},T)=\bigoplus_{j\in\phi_k}\text{span}(v_j)$. To do this, let’s first show that $\sum_{j\in\phi_k}\text{span}(v_j)=\text{span}\big((v_j)_{j\in\phi_k}\big)$. Proposition W.2.11 gives this result but it’s easy to prove directly.

Let $w\in\text{span}\big((v_j)_{j\in\phi_k}\big)$. Then $w=\sum_{j\in\phi_k}a_jv_j$. Since $a_jv_j\in\text{span}(v_j)$, then

$$ w=\sum_{j\in\phi_k}a_jv_j\in\sum_{j\in\phi_k}\text{span}(v_j) $$

and $\text{span}\big((v_j)_{j\in\phi_k}\big)\subset\sum_{j\in\phi_k}\text{span}(v_j)$.

In the other direction, let $w\in\sum_{j\in\phi_k}\text{span}(v_j)$. Then $w=\sum_{j\in\phi_k}s_j$ where $s_j\in\text{span}(v_j)$. Hence $s_j=a_jv_j$ and

$$ w=\sum_{j\in\phi_k}s_j=\sum_{j\in\phi_k}a_jv_j\in\text{span}\big((v_j)_{j\in\phi_k}\big) $$

and $\text{span}\big((v_j)_{j\in\phi_k}\big)=\sum_{j\in\phi_k}\text{span}(v_j)$.

Next we will show that $\eignsb(\lambda_{i_k},T)=\sum_{j\in\phi_k}\text{span}(v_j)$. Let $w\in\eignsb(\lambda_{i_k},T)\subset V$. Then $w=\sum_{j=1}^na_jv_j$ and

$$\begin{align*} 0 &= (T-\lambda_{i_k}I)w \\ &= (T-\lambda_{i_k}I)\Big(\sum_{j=1}^na_jv_j\Big) \\ &= \sum_{j=1}^na_j(T-\lambda_{i_k}I)v_j \\ &= \sum_{j\notin\phi_k}a_j(T-\lambda_{i_k}I)v_j \\ &= \sum_{j\notin\phi_k}a_j(Tv_j-\lambda_{i_k}Iv_j) \\ &= \sum_{j\notin\phi_k}a_j(\lambda_jv_j-\lambda_{i_k}v_j) \\ &= \sum_{j\notin\phi_k}a_j(\lambda_j-\lambda_{i_k})v_j \\ \end{align*}$$

Since $0\neq\lambda_j-\lambda_{i_k}$ for $j\notin\phi_k$ and the $v_j$ are linearly independent, then $0=a_j$ for $j\notin\phi_k$. Hence $w\in\text{span}\big((v_j)_{j\in\phi_k}\big)$ and

$$ \eignsb(\lambda_{i_k},T)\subset\text{span}\big((v_j)_{j\in\phi_k}\big)=\sum_{j\in\phi_k}\text{span}(v_j) $$

In the other direction, let $w\in\text{span}\big((v_j)_{j\in\phi_k}\big)$. Then $w=\sum_{j\in\phi_k}a_jv_j$ and

$$\begin{align*} Tw &= T\Big(\sum_{j\in\phi_k}a_jv_j\Big) \\ &= \sum_{j\in\phi_k}a_jTv_j \\ &= \sum_{j\in\phi_k}a_j\lambda_{i_k}v_j \\ &= \lambda_{i_k}\sum_{j\in\phi_k}a_jv_j \\ &= \lambda_{i_k}w \end{align*}$$

Hence $Tw=\lambda_{i_k}w$, that is, $(T-\lambda_{i_k}I)w=0$, so $w\in\eignsb(\lambda_{i_k},T)$. Hence

$$ \sum_{j\in\phi_k}\text{span}(v_j)=\text{span}\big((v_j)_{j\in\phi_k}\big)\subset\eignsb(\lambda_{i_k},T) $$

and

$$ \sum_{j\in\phi_k}\text{span}(v_j)=\eignsb(\lambda_{i_k},T) $$

Since $(v_j)_{j\in\phi_k}$ is linearly independent, then $\bigoplus_{j\in\phi_k}\text{span}(v_j)$ is a direct sum and $\dim{\bigoplus_{j\in\phi_k}\text{span}(v_j)}=\text{len}(\phi_k)$ (by W.2.11). Hence

$$ \dim{\eignsb(\lambda_{i_k},T)}=\dim{\bigoplus_{j\in\phi_k}\text{span}(v_j)}=\text{len}(\phi_k)\quad\blacksquare $$
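The statement is easy to spot-check numerically: for a diagonal matrix, count how often $\lambda$ appears on the diagonal and compare with the nullity of $A-\lambda I$ (a minimal sketch assuming NumPy):

```python
import numpy as np

A = np.diag([8.0, 5.0, 5.0])   # a diagonal matrix of some operator T
for lam in (8.0, 5.0):
    diag_count = int(np.sum(np.diag(A) == lam))
    eig_dim = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(3))
    print(lam, diag_count, eig_dim)   # 8.0 1 1, then 5.0 2 2
```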

General

Proposition W.5.17 Scaling by a nonzero scalar preserves eigenpairs.

Proof Let $T\in\oper{V}$ and let $\lambda$ be an eigenvalue of $T$ corresponding to eigenvector $v$. Then

$$ T(\beta v)=\beta Tv=\beta\lambda v=\lambda(\beta v) $$

for any nonzero $\beta\in\wF$. Hence $\beta v$ is an eigenvector of $T$ corresponding to the eigenvalue $\lambda$. $\wes$

Proposition W.5.18 Let $T\wiov$. Then $T$ is invertible if and only if $0$ is not an eigenvalue of $T$.

Proof We have

$$\align{ 0\text{ is an eigenvalue of }T &\iff T-0\cdot I\text{ is not invertible}\tag{by 5.6} \\ &\iff T\text{ is not invertible} }$$

$\wes$

Example W.5.19 This is an example of linearly independent eigenvectors corresponding to the same eigenvalue.

Define $T\in\oper{\wR^2}$ by

$$ Te_1=3e_1 \dq\dq Te_2=3e_2 $$

where $e_1,e_2$ is the standard basis of $\wR^2$. By 3.5, this defines a unique operator on $\wR^2$ and we have

$$ T(x,y)=T(xe_1+ye_2)=xTe_1+yTe_2=3xe_1+3ye_2=(3x,3y) $$

for all $(x,y)\in\wR^2$.

Since $e_1$ and $e_2$ are both eigenvectors of $T$ corresponding to the eigenvalue $\lambda=3$, then $T$ has linearly independent eigenvectors corresponding to the same eigenvalue. $\wes$