Linear Algebra Done Right Ch.5 Exercises

31 Aug 2018

Exercises 5.A

(1) Suppose $T\in\lnmpsb(V)$ and $U$ is a subspace of $V$. Then

(A) If $U\subset\mathscr{N}(T)$, then $U$ is invariant under $T$.

(B) If $\mathscr{R}(T)\subset U$, then $U$ is invariant under $T$.

Proof (A) Let $u\in U\subset\mathscr{N}(T)$. Then $Tu=0\in U$ because every subspace contains the origin. (B) Let $u\in U$. Then $Tu\in\mathscr{R}(T)\subset U$. $\blacksquare$

(2) Suppose $S,T\in\lnmpsb(V)$ are such that $ST=TS$. Then $\mathscr{N}(S)$ is invariant under $T$.

Proof Let $u\in\mathscr{N}(S)$. Then $S(Tu)=(ST)u=(TS)u=T(Su)=T(0)=0$. Hence $Tu\in\mathscr{N}(S)$. $\blacksquare$

(3) Suppose $S,T\in\lnmpsb(V)$ are such that $ST=TS$. Then $\mathscr{R}(S)$ is invariant under $T$.

Proof Let $u\in\mathscr{R}(S)$. Then $u=Sv$ for some $v\in V$ and $Tu=T(Sv)=(TS)v=(ST)v=S(Tv)\in\mathscr{R}(S)$. $\blacksquare$

(4) Suppose $T\in\lnmpsb(V)$ and $U_1,\dots,U_m$ are subspaces of $V$ invariant under $T$. Then $U_1+\dotsb+U_m$ is invariant under $T$.

Proof Let $u=\sum_{k=1}^mu_k\in\sum_{k=1}^mU_k$, where $u_k\in U_k$. Then $Tu=T\big(\sum_{k=1}^mu_k\big)=\sum_{k=1}^mTu_k\in\sum_{k=1}^mU_k$, since the invariance of $U_k$ under $T$ implies $Tu_k\in U_k$. $\blacksquare$

(5) Suppose $T\in\lnmpsb(V)$. Then the intersection of every collection of subspaces of $V$ invariant under $T$ is invariant under $T$.

Proof Let $\{U_\phi\}_{\phi\in\Gamma}$ be a collection of subspaces of $V$ invariant under $T$. Here $\Gamma$ is an arbitrary index set. Let $u\in\cap_{\phi\in\Gamma}U_\phi$. Then $u\in U_\phi$ for every $\phi\in\Gamma$. Then the invariance under $T$ of $U_\phi$ implies that $Tu\in U_\phi$ for every $\phi\in\Gamma$. Hence $Tu\in\cap_{\phi\in\Gamma}U_\phi$. $\blacksquare$

(6) If $U$ is a subspace of $V$ that is invariant under every operator on $V$, then $U=\{0\}$ or $U=V$.

Proof We will prove the contrapositive statement: if $U$ is a subspace of $V$ such that $U\neq\{0\}$ and $U\neq V$, then there exists $T\in\lnmpsb(V)$ such that $U$ is not invariant under $T$.

Suppose $U$ is a subspace of $V$ such that $U\neq\{0\}$ and $U\neq V$. Since $U\neq\{0\}$, then there exists $u\in U\setminus\{0\}$. And since $U\neq V$, then there exists $w\in V\setminus U$. Since $u\neq0$, we can extend the linearly independent list $u$ to a basis $u,v_1,\dots,v_n$ of $V$. Define $T\in\lnmpsb(V)$ by

$$ T\Big(au+\sum_{k=1}^nb_kv_k\Big)\equiv aw $$

The proof in proposition 3.5, p.54 shows that $T$ is linear. Then $Tu=w\notin U$. $\blacksquare$

(7) Suppose $T\in\lnmpsb(\mathbb{R}^2)$ is defined by $T(x,y)\equiv(-3y,x)$. Find the eigenvalues of $T$.

Solution The answer depends on whether $\mathbb{F}=\mathbb{R}$ or $\mathbb{F}=\mathbb{C}$. Let $(x,y)$ be an eigenvector of $T$ corresponding to the eigenvalue $\lambda$. Then

$$ (-3y,x)=T(x,y)=\lambda(x,y)=(\lambda x,\lambda y) $$

Hence

$$ \lambda x=-3y\\x=\lambda y $$

If $y=0$, then $x=\lambda\cdot0=0$. And if $x=0$, then $y=-\frac{\lambda}{3}\cdot0=0$. So we can ignore the possibility that $x$ or $y$ is $0$, since $(0,0)$ is not an eigenvector.

So $y=-\frac{\lambda}{3}x$ and $x=\lambda y=-\frac{\lambda^2}{3}x$. Dividing by $x$, we see that $1=-\frac{\lambda^2}{3}$ hence $\lambda^2=-3$. If $\mathbb{F}=\mathbb{C}$, then

$$ \lambda=\pm\sqrt{-3}=\pm i\sqrt{3} $$

If $\mathbb{F}=\mathbb{R}$, then no such $\lambda$ exists. $\blacksquare$
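As a quick numerical cross-check (not part of the original solution; numpy assumed available), we can compute the eigenvalues of the standard-basis matrix of this $T$ over $\mathbb{C}$:

```python
import numpy as np

# Matrix of T(x, y) = (-3y, x) with respect to the standard basis of R^2:
# the columns are T(e1) = (0, 1) and T(e2) = (-3, 0).
M = np.array([[0.0, -3.0],
              [1.0,  0.0]])

# Over C, numpy finds the two eigenvalues +/- i*sqrt(3); over R there are none.
eigvals = np.linalg.eigvals(M)
assert np.allclose(np.sort(eigvals.imag), [-np.sqrt(3), np.sqrt(3)])
assert np.allclose(eigvals.real, 0.0)
```

Both eigenvalues are purely imaginary, matching the conclusion that no real eigenvalue exists.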

(8) Define $T\in\lnmpsb(\mathbb{F}^2)$ by

$$ T(w,z)\equiv(z,w) $$

Find all eigenvalues and eigenvectors of $T$.

Solution Suppose $\lambda$ is an eigenvalue of $T$. For this particular operator, the eigenvalue-eigenvector equation is

$$ (z,w)=T(w,z)=\lambda(w,z)=(\lambda w,\lambda z) $$

This becomes the system of equations

$$ z=\lambda w\\w=\lambda z $$

If $w=0$, then $z=\lambda\cdot0=0$. And if $z=0$, then $w=\lambda\cdot0=0$. Since $(0,0)$ is not an eigenvector, we can ignore the possibilities that $w=0$ or $z=0$.

Substituting the value for $z$ from the first equation into the second equation gives $w=\lambda z=\lambda^2 w$. Dividing by $w$ gives $\lambda^2=1$. Hence $\lambda=\pm1$. Substituting this back into the system of equations, we get

$$\begin{matrix} \lambda=1&&\lambda=-1\\z=w&&z=-w\\w=z&&w=-z \end{matrix}$$

Hence the set of eigenvectors corresponding to the eigenvalue $1$ is

$$ \big\{(w,w):w\in\mathbb{F}\big\} $$

And the set of eigenvectors corresponding to the eigenvalue $-1$ is

$$ \big\{(w,-w):w\in\mathbb{F}\big\}\quad\blacksquare $$
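A small numerical sanity check (numpy, not part of the original text) of the eigenvalues and the two eigenvector families:

```python
import numpy as np

# Matrix of T(w, z) = (z, w): columns T(e1) = (0, 1), T(e2) = (1, 0).
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])

vals = np.linalg.eigvals(M)
assert np.allclose(np.sort(vals), [-1.0, 1.0])

# Vectors of the form (w, w) and (w, -w) are eigenvectors for 1 and -1.
v_plus, v_minus = np.array([1.0, 1.0]), np.array([1.0, -1.0])
assert np.allclose(M @ v_plus, v_plus)
assert np.allclose(M @ v_minus, -v_minus)
```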

(9) Define $T\in\lnmpsb(\mathbb{F}^3)$ by

$$ T(z_1,z_2,z_3)\equiv(2z_2,0,5z_3) $$

Find all eigenvalues and eigenvectors of $T$.

Solution Suppose $\lambda$ is an eigenvalue of $T$. For this particular operator, the eigenvalue-eigenvector equation is

$$ (2z_2,0,5z_3)=T(z_1,z_2,z_3)=\lambda(z_1,z_2,z_3)=(\lambda z_1,\lambda z_2,\lambda z_3) $$

This becomes the system of equations

$$ 2z_2=\lambda z_1\\0=\lambda z_2\\5z_3=\lambda z_3 $$

If $\lambda\neq0$, then the second equation implies that $z_2=0$. Then the first equation implies that $z_1=0$. Because an eigenvalue must have a nonzero eigenvector, then it must be that $z_3\neq0$. Hence the third equation shows that $\lambda=5$. That is, $5$ is the only nonzero eigenvalue of $T$. The set of eigenvectors corresponding to the eigenvalue $5$ is

$$ \big\{(0,0,z_3):z_3\in\mathbb{F}\big\} $$

If $\lambda=0$, then the first and third equations above show that $z_2=0$ and $z_3=0$. Because eigenvectors must be nonzero, then $z_1\neq0$. But otherwise $z_1$ can take on any value in $\mathbb{F}$ and still satisfy this system of equations, with $z_2=0$ and $z_3=0$. The set of eigenvectors corresponding to the eigenvalue $0$ is

$$ \big\{(z_1,0,0):z_1\in\mathbb{F}\big\}\quad\blacksquare $$
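The same conclusion can be checked numerically (numpy, an addition to the text) using the standard-basis matrix of $T$:

```python
import numpy as np

# Matrix of T(z1, z2, z3) = (2*z2, 0, 5*z3): columns T(e1), T(e2), T(e3).
M = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 5.0]])

vals = np.sort(np.linalg.eigvals(M).real)
assert np.allclose(vals, [0.0, 0.0, 5.0])  # eigenvalues 0 and 5

# (z1, 0, 0) is an eigenvector for 0; (0, 0, z3) is an eigenvector for 5.
assert np.allclose(M @ np.array([1.0, 0.0, 0.0]), 0.0)
assert np.allclose(M @ np.array([0.0, 0.0, 1.0]), 5 * np.array([0.0, 0.0, 1.0]))
```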

(11) Define $T\in\lnmpsb\big(\mathscr{P}(\mathbb{R})\big)$ by $Tp=p'$. Find all eigenvalues and eigenvectors of $T$.

Solution Suppose $\lambda$ is an eigenvalue of $T$ with an eigenvector $q$. Then

$$ q'=Tq=\lambda q $$

Note that, in general, for a nonzero polynomial $p$ we have $\deg{p'}<\deg{p}$, since the text (bottom of p.120) defines $\deg{0}=-\infty$. If $\lambda\neq0$, then $\deg{\lambda q}=\deg{q}>\deg{q'}$ because the eigenvector $q$ is nonzero. This contradicts $q'=\lambda q$.

If $\lambda=0$, then $q'=0$, hence $q=c$ is constant. And $c\neq0$ since eigenvectors cannot be zero. Hence the only eigenvalue of $T$ is $\lambda=0$, with the nonzero constant polynomials as eigenvectors. $\blacksquare$

(12) Define $T\in\lnmpsb\big(\mathscr{P}_{4}(\mathbb{R})\big)$ by

$$ (Tp)(x)=xp'(x) $$

for all $x\in\mathbb{R}$. Find all eigenvalues and eigenvectors of $T$.

Solution Suppose $\lambda$ is an eigenvalue of $T$ with an eigenvector $q$. Then

$$ xq'=Tq=\lambda q $$

Let $q=\sum_{k=0}^na_kx^k$ for $n\leq4$ and $a_n\neq0$. Then

$$ xq'=x\Big(\sum_{k=0}^na_kx^k\Big)'=x\sum_{k=0}^nka_kx^{k-1}=\sum_{k=0}^nka_kx^k $$

and

$$ \lambda q=\lambda\sum_{k=0}^na_kx^k=\sum_{k=0}^n\lambda a_kx^k $$

and

$$ \sum_{k=0}^nka_kx^k=xq'=\lambda q=\sum_{k=0}^n\lambda a_kx^k $$

Matching coefficients of $x^k$ on both sides gives $ka_k=\lambda a_k$ for $k=0,\dots,n$. Since $a_n\neq0$, the $k=n$ equation forces $\lambda=n$. For $k<n$, the equation $ka_k=na_k$ then forces $a_k=0$. Hence $q=a_nx^n$, and $\lambda=n$ is an eigenvalue with eigenvector $q=x^n$:

$$ na_nx^n=\lambda a_nx^n $$

This satisfies $Tq=\lambda q$ since

$$ Tq=xq'=x(x^n)'=x(nx^{n-1})=nx^n=\lambda q $$

We can perform the same procedure with $n=4,3,2,1,0$ and find that the eigenvalues of $T$ are $0,1,2,3,4$ with the corresponding eigenvectors $1,x,x^2,x^3,x^4$. $\blacksquare$
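A brief numerical check (numpy; not part of the original solution). On the basis $1,x,x^2,x^3,x^4$, this $T$ has the diagonal matrix $\text{diag}(0,1,2,3,4)$:

```python
import numpy as np

# On the basis 1, x, x^2, x^3, x^4 of P_4(R), the operator (Tp)(x) = x p'(x)
# sends x^k to k x^k, so its matrix is diag(0, 1, 2, 3, 4).
M = np.diag([0.0, 1.0, 2.0, 3.0, 4.0])
assert np.allclose(np.sort(np.linalg.eigvals(M)), [0, 1, 2, 3, 4])

# Spot-check Tq = x q' pointwise for q = x^3: x * (3x^2) = 3 * x^3.
xs = np.linspace(-2.0, 2.0, 9)
assert np.allclose(xs * (3 * xs**2), 3 * xs**3)
```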

(14) Suppose $V=U\oplus W$, where $U$ and $W$ are nonzero subspaces of $V$. Define $P\in\lnmpsb(V)$ by $P(u+w)=u$ for $u\in U$ and $w\in W$. Find all eigenvalues and eigenvectors of $P$.

Solution Since $V=U\oplus W$, any $v\in V$ can be written uniquely as $v=u+w$ for some $u\in U$ and some $w\in W$. Hence $Pv=P(u+w)=u$ is well-defined on $V$. It’s simple to check that $P$ is linear.

Suppose $\lambda$ and $v$ are an eigenpair of $P$. Then the eigenpair equation is

$$ u=P(u+w)=Pv=\lambda v=\lambda(u+w)=\lambda u+\lambda w $$

Subtracting $u$ from both sides, we get

$$ 0=\lambda u-u+\lambda w=(\lambda-1)u+\lambda w $$

Proposition 1.44, p.23 gives that $0=(\lambda-1)u=\lambda w$.

If $u\neq0$, then $\lambda=1$ and hence $w=0$. Hence any nonzero $v=u+0\in U$ is an eigenvector.

If $w\neq0$, then $\lambda=0$ and hence $u=0$. Hence any nonzero $v=0+w\in W$ is an eigenvector. $\blacksquare$

(15) Suppose $T\in\lnmpsb(V)$. Also suppose $S\in\lnmpsb(V)$ is invertible. Then

(A) $T$ and $S^{-1}TS$ have the same eigenvalues.

(B) What is the relationship between eigenvectors of $T$ and the eigenvectors of $S^{-1}TS$?

Solution Suppose $\lambda$ is an eigenvalue of $T$ with eigenvector $v\in V$. Since $S$ is invertible, so is $S^{-1}$. Then 3.69 implies that $S^{-1}$ is injective. Hence $\mathscr{N}(S^{-1})=\{0\}$ (by 3.16). Hence, since $v\neq0$, we have $S^{-1}v\neq0$. Also note that

$$ (S^{-1}TS)(S^{-1}v)=(S^{-1}TSS^{-1})v=S^{-1}Tv=S^{-1}(\lambda v)=\lambda S^{-1}v $$

Then this equality and the fact that $S^{-1}v\neq0$ imply that $S^{-1}v$ is an eigenvector of $S^{-1}TS$ corresponding to the eigenvalue $\lambda$.

Hence every eigenpair $\lambda,v$ of $T$ gives an eigenpair $\lambda,S^{-1}v$ of $S^{-1}TS$.

Conversely, if $\lambda,S^{-1}v$ is an eigenpair of $S^{-1}TS$, then

$$ Tv=S(S^{-1}TS)(S^{-1}v)=S\lambda(S^{-1}v)=\lambda SS^{-1}(v)=\lambda v $$

Hence $\lambda,v$ is an eigenpair of $T$ if and only if $\lambda,S^{-1}v$ is an eigenpair of $S^{-1}TS$.

Now suppose $\lambda,u$ is an eigenpair of $S^{-1}TS$. Then $u\neq0$ and hence $Su\neq0$ since $S$ is invertible and hence injective. Also note that

$$ T(Su)=(SS^{-1}T)(Su)=S(S^{-1}TS)u=S(\lambda u)=\lambda(Su) $$

Hence every eigenpair $\lambda,u$ of $S^{-1}TS$ gives an eigenpair $\lambda,Su$ of $T$.

Conversely, if $\lambda, Su$ is an eigenpair of $T$, then

$$ (S^{-1}TS)u=S^{-1}(T(Su))=S^{-1}(\lambda Su)=\lambda S^{-1}Su=\lambda u $$

Hence $\lambda,u$ is an eigenpair of $S^{-1}TS$ if and only if $\lambda,Su$ is an eigenpair of $T$. $\blacksquare$
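Numerically, similarity preserving the spectrum is easy to spot-check (numpy, random matrices; an addition to the text):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4))
S = rng.normal(size=(4, 4))          # generically invertible
C = np.linalg.inv(S) @ T @ S         # the conjugate S^{-1} T S

# Similar matrices share the same spectrum.
a = np.sort_complex(np.linalg.eigvals(T))
b = np.sort_complex(np.linalg.eigvals(C))
assert np.allclose(a, b)
```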

(16) Suppose $V$ is a complex vector space, $T\in\lnmpsb(V)$, and the matrix of $T$ with respect to some basis of $V$ contains only real entries. If $\lambda$ is an eigenvalue of $T$, then so is $\overline{\lambda}$.

Proof Let $v_1,\dots,v_n$ be a basis of $V$ such that $\mtrxof{T}$ contains only real entries. That is

$$ Tv_k=\sum_{j=1}^nA_{j,k}v_j\quad A_{j,k}\in\mathbb{R}\text{ for }k=1,\dots,n $$

For $w=\sum_{j=1}^nc_jv_j\in V$, define the conjugate of $w$ with respect to this basis by $\overline{w}\equiv\sum_{j=1}^n\overline{c_j}v_j$. Since the entries $A_{j,k}$ are real, note that

$$ \overline{Tv_k}=\overline{\sum_{j=1}^nA_{j,k}v_j}=\sum_{j=1}^n\overline{A_{j,k}v_j}=\sum_{j=1}^n\overline{A_{j,k}}v_j=\sum_{j=1}^nA_{j,k}v_j=Tv_k $$

Suppose $\lambda $ and $v\equiv\sum_{k=1}^na_kv_k$ form an eigenpair of $T$. Then we have

$$ \lambda v=Tv=T\Big(\sum_{k=1}^na_kv_k\Big)=\sum_{k=1}^na_kTv_k $$

Taking complex conjugates of both sides, we get

$$ \overline{\lambda}\overline{v}=\overline{\sum_{k=1}^na_kTv_k}=\sum_{k=1}^n\overline{a_kTv_k}=\sum_{k=1}^n\overline{a_k}Tv_k=T\Big(\sum_{k=1}^n\overline{a_k}v_k\Big)=T\overline{v} $$

Notice that $\overline{v}\neq0$ since $v\neq0$. Hence $\overline{\lambda}$ and $\overline{v}$ are also an eigenpair of $T$. $\blacksquare$
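For a concrete illustration (numpy; not part of the original proof), a real matrix with non-real eigenvalues has a spectrum closed under conjugation:

```python
import numpy as np

# A real matrix whose eigenvalues are not real: they come in conjugate pairs.
M = np.array([[1.0, -2.0],
              [3.0,  1.0]])          # real entries; eigenvalues 1 +/- i*sqrt(6)
vals = np.linalg.eigvals(M)

# The spectrum is closed under complex conjugation.
assert np.allclose(np.sort_complex(vals), np.sort_complex(vals.conj()))
```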

(18) Show that the forward-shift operator $T\in\lnmpsb(\mathbb{C}^{\infty})$ defined by

$$ T(z_1,z_2,z_3,\dots)\equiv(0,z_1,z_2,\dots) $$

has no eigenvalues.

Proof Suppose $\lambda$ is an eigenvalue of $T$ with eigenvector $z=(z_1,z_2,z_3,\dots)$. Then

$$ (\lambda z_1,\lambda z_2,\lambda z_3,\dots)=(0,z_1,z_2,\dots) $$

If $\lambda=0$, then $z_1=\lambda z_2=0$ and $z_2=\lambda z_3=0$ and so on. Hence if $\lambda=0$, then the only vector satisfying $Tz=\lambda z$ is $z=0$.

If $\lambda\neq0$, then $0=\lambda z_1$ implies that $z_1=0$. This implies that $z_2=0$ since $\lambda z_2=z_1=0$. And so on. Hence if $\lambda\neq0$, then the only vector satisfying $Tz=\lambda z$ is $z=0$. In either case there is no nonzero vector satisfying $Tz=\lambda z$, so $T$ has no eigenvalues. $\blacksquare$

(19) Suppose $n$ is a positive integer and $T\in\lnmpsb(\mathbb{F}^n)$ is defined by

$$ T(x_1,\dots,x_n)\equiv\Big(\sum_{k=1}^nx_k,\dots,\sum_{k=1}^nx_k\Big) $$

In other words, $T$ is the operator whose matrix (with respect to the standard basis) consists of all $1$’s. Find all eigenvalues and eigenvectors of $T$.

Solution Suppose $\lambda$ is an eigenvalue of $T$. The eigenpair equation $Tx=\lambda x$ is

$$ (\lambda x_1,\dots,\lambda x_n)=T(x_1,\dots,x_n)=\Big(\sum_{k=1}^nx_k,\dots,\sum_{k=1}^nx_k\Big) $$

This becomes the system of equations

$$ \sum_{k=1}^nx_k=\lambda x_1 \\ \vdots \\ \sum_{k=1}^nx_k=\lambda x_n $$

Hence $\lambda x_1=\dotsb=\lambda x_n$.

If $\lambda=0$, then all equations in this system become $0=\sum_{k=1}^nx_k$. Hence $\lambda=0$ is an eigenvalue of $T$ and the corresponding set of eigenvectors is

$$ \Big\{(x_1,\dots,x_n)\in\mathbb{F}^n:0=\sum_{k=1}^nx_k\Big\} $$

If $\lambda\neq0$, then the observation $\lambda x_1=\dotsb=\lambda x_n$ implies that $x_1=\dotsb=x_n$. Let $\phi$ denote the common value of $x_1,\dots,x_n$. Then all of the equations in the system above become the equation $n\phi=\lambda\phi$. Since eigenvectors cannot be $0$, $\phi$ cannot be $0$. Hence $\lambda=n$, so $n$ is an eigenvalue of $T$ and the corresponding set of eigenvectors is

$$ \big\{(x_1,\dots,x_n)\in\mathbb{F}^n:x_1=\dotsb=x_n\big\}\quad\blacksquare $$
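The all-ones matrix makes this easy to verify numerically (numpy; an addition to the text):

```python
import numpy as np

n = 5
M = np.ones((n, n))   # matrix of T: every entry equals 1

vals = np.sort(np.linalg.eigvals(M).real)
assert np.allclose(vals[:-1], 0.0, atol=1e-9)   # 0 with multiplicity n - 1
assert np.isclose(vals[-1], n)                  # and the single eigenvalue n

# Constant vectors are eigenvectors for n; zero-sum vectors for 0.
ones = np.ones(n)
assert np.allclose(M @ ones, n * ones)
v = np.array([1.0, -1.0, 0.0, 0.0, 0.0])        # coordinates sum to 0
assert np.allclose(M @ v, 0.0)
```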

(21) Suppose $T\in\lnmpsb(V)$ is invertible. Then

(A) $\lambda\in\mathbb{F}\setminus\{0\}$ is an eigenvalue of $T$ if and only if $\frac1\lambda$ is an eigenvalue of $T^{-1}$

(B) $T$ and $T^{-1}$ have the same eigenvectors.

Proof Suppose $\lambda$ is an eigenvalue of $T$. Then there exists a nonzero $v\in V$ such that

$$ Tv=\lambda v $$

Note that $\lambda\neq0$: if $\lambda=0$, then $Tv=0$ with $v\neq0$, contradicting the injectivity of the invertible operator $T$. Applying $T^{-1}$ to both sides, we get $v=\lambda T^{-1}v$. This is equivalent to $T^{-1}v=\frac1\lambda v$. Hence $\frac1\lambda$ and $v$ form an eigenpair of $T^{-1}$.

Suppose $\frac1\lambda$ is an eigenvalue of $T^{-1}$. Then there exists a nonzero $v\in V$ such that

$$ T^{-1}v=\frac1\lambda v $$

Applying $T$ to both sides, we get $v=\frac1\lambda Tv$. This is equivalent to $Tv=\lambda v$. Hence $\lambda$ and $v$ form an eigenpair of $T$. $\blacksquare$
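A concrete check of both claims (numpy; not part of the original solution), using an invertible triangular matrix with eigenvalues $2$ and $3$:

```python
import numpy as np

# An invertible operator with eigenvalues 2 and 3.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Tinv = np.linalg.inv(T)

assert np.allclose(np.sort(np.linalg.eigvals(T)), [2.0, 3.0])
assert np.allclose(np.sort(np.linalg.eigvals(Tinv)), [1 / 3, 1 / 2])

# T and T^{-1} share eigenvectors: e1 is an eigenvector of both.
e1 = np.array([1.0, 0.0])
assert np.allclose(T @ e1, 2 * e1)
assert np.allclose(Tinv @ e1, 0.5 * e1)
```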

(23) Suppose $V$ is finite-dimensional and $S,T\in\lnmpsb(V)$. Then $ST$ and $TS$ have the same eigenvalues.

Proof Let $\lambda$ be an eigenvalue of $ST$. And let nonzero $v$ be a corresponding eigenvector. Then $STv=\lambda v$ and

$$ TS(Tv)=T(STv)=T(\lambda v)=\lambda Tv $$

If $Tv\neq0$, then this equation shows that $\lambda$ is an eigenvalue of $TS$.

If $Tv=0$, then

$$ \lambda v=(ST)v=S(Tv)=S(0)=0 $$

Since $v\neq0$, then this equality implies that $\lambda=0$.

Additionally, if $Tv=0$, then $\mathscr{N}(T)\neq\{0\}$ since $v\neq0$. Hence $T$ is not injective (by 3.16, p.61). Hence $T$ is not invertible (by 3.56, p.80-81). Hence $TS$ is not invertible (by Exercise 3.D.9).

Since $V$ is finite-dimensional, the non-invertibility of $TS$ implies that $TS$ is not injective (by 3.69). Hence $0=\lambda$ is an eigenvalue of $TS$ (by 5.6, p.134).

Reversing the roles of $S$ and $T$ proves the other direction. $\blacksquare$
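Numerically (numpy, random matrices; an addition to the text), $ST$ and $TS$ do share a spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(4, 4))
T = rng.normal(size=(4, 4))

# ST and TS have the same eigenvalues.
a = np.sort_complex(np.linalg.eigvals(S @ T))
b = np.sort_complex(np.linalg.eigvals(T @ S))
assert np.allclose(a, b)
```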

(25) Suppose $T\in\lnmpsb(V)$ and $u,v$ are eigenvectors of $T$ such that $u+v$ is also an eigenvector of $T$. Then $u$ and $v$ share the same eigenvalue.

Proof Let $\lambda_u,\lambda_v$ be the eigenvalues corresponding to $u,v$ respectively. Let $\lambda$ be the eigenvalue corresponding to $u+v$. Then

$$ \lambda u+\lambda v=\lambda(u+v)=T(u+v)=Tu+Tv=\lambda_uu+\lambda_vv $$

Equivalently

$$ (\lambda-\lambda_u) u+(\lambda-\lambda_v) v=0 $$

If $\lambda_u\neq\lambda_v$, then proposition 5.10, p.136 implies that $u,v$ is linearly independent. Hence $\lambda-\lambda_u=0$ and $\lambda-\lambda_v=0$, so $\lambda_u=\lambda=\lambda_v$. This contradicts $\lambda_u\neq\lambda_v$. Hence $\lambda_u=\lambda_v$. $\blacksquare$

(26) Suppose $T\in\lnmpsb(V)$ is such that every nonzero vector in $V$ is an eigenvector of $T$. Then $T$ is a scalar multiple of the identity operator.

Proof For each $v\in V$, there exists $a_v\in\mathbb{F}$ such that

$$ Tv=a_vv $$

Because $T0=0$, we can choose $a_0$ to be any number in $\mathbb{F}$. For instance, $T0=0=7\cdot0$.

But for $v\in V\setminus\{0\}$, the value of $a_v$ is uniquely determined by the equation above. To show that $T$ is a scalar multiple of the identity, let $v,w\in V\setminus\{0\}$. We want to show that $a_v=a_w$.

First consider the case where $v,w$ is linearly dependent. Since $v$ and $w$ are nonzero, there exists $b\in\mathbb{F}\setminus\{0\}$ such that $w=bv$. Then

$$ a_ww=Tw=T(bv)=bTv=b(a_vv)=a_vbv=a_vw $$

Hence $a_v=a_w$. Now consider the case where $v,w$ is linearly independent. Then

$$ a_{v+w}v+a_{v+w}w=a_{v+w}(v+w)=T(v+w)=Tv+Tw=a_vv+a_ww $$

Equivalently

$$ (a_{v+w}-a_v)v+(a_{v+w}-a_w)w=0 $$

The linear independence of $v,w$ implies that $a_v=a_{v+w}=a_w$. $\blacksquare$

(27) Suppose $V$ is finite-dimensional and $T\in\lnmpsb(V)$ is such that every subspace of $V$ with dimension $\dim{V}-1$ is invariant under $T$. Then $T$ is a scalar multiple of the identity operator.

Proof Suppose $T$ is not a scalar multiple of the identity operator. By the previous exercise, there exists $u\in V\setminus\{0\}$ such that $u$ is not an eigenvector of $T$. That is, no scalar $\lambda$ exists such that $Tu=\lambda u$; in other words, $Tu$ is not a scalar multiple of $u$. Hence $u,Tu$ is linearly independent. Extend $u,Tu$ to a basis $u,Tu,v_1,\dots,v_n$ of $V$. Define

$$ U\equiv\text{span}(u,v_1,\dots,v_n) $$

Then $U$ is a subspace of $V$ and $\dim{U}=\dim{V}-1$. But $U$ is not invariant under $T$ because $u\in U$ but $Tu\notin U$. Contradiction and $T$ must be a scalar multiple of the identity operator. $\blacksquare$

(28) Suppose $V$ is finite-dimensional with $\dim{V}\geq3$ and $T\in\lnmpsb(V)$ is such that every $2$-dimensional subspace of $V$ is invariant under $T$. Prove that $T$ is a scalar multiple of the identity operator.

Proof For any nonzero $v_1\in V$, extend it to a basis $v_1,\dots,v_n$ of $V$. Since $Tv_1\in V$, then the Criterion for a Basis, p.39-40 gives that $Tv_1$ is a unique linear combination of this basis:

$$ Tv_1=\sum_{k=1}^n\lambda_kv_k $$

Define $U_2\equiv\text{span}(v_1,v_2)$. Since $U_2$ is $2$-dimensional, it’s invariant under $T$. That is, $Tv_1\in U_2$. Hence $Tv_1$ is a linear combination of $v_1,v_2$. This implies that $0=\lambda_3=\lambda_4=\dotsb=\lambda_n$ in this unique linear combination that is $Tv_1$.

Similarly, define $U_3\equiv\text{span}(v_1,v_3)$. Since $U_3$ is $2$-dimensional, then it’s invariant under $T$. That is $Tv_1\in U_3$. Hence $Tv_1$ is a linear combination of $v_1,v_3$. This implies that $0=\lambda_2=\lambda_4=\dotsb=\lambda_n$ in this unique linear combination that is $Tv_1$.

Then we’re left with

$$ Tv_1=\lambda_1v_1 $$

That is, the arbitrarily selected $v_1$ is an eigenvector of $T$. Hence every nonzero vector in $V$ is an eigenvector of $T$. By problem (26), $T$ is a scalar multiple of the identity operator. $\blacksquare$

(29) Suppose $T\in\lnmpsb(V)$ and $\dim{\mathscr{R}(T)}=k$. Then $T$ has at most $k+1$ distinct eigenvalues.

Proof Let $\lambda_1,\dots,\lambda_m$ be the distinct eigenvalues of $T$ and let $v_1,\dots,v_m$ be the corresponding nonzero eigenvectors. Note that there can be at most one distinct eigenvalue that is $0$. Hence there are at least $m-1$ nonzero eigenvalues. If $\lambda_j\neq0$, then

$$ v_j=\frac1{\lambda_j}Tv_j=T\Big(\frac1{\lambda_j}v_j\Big) $$

That is, these $v_j$’s are in $\mathscr{R}(T)$. Proposition 5.10, p.136 implies that these $v_j$’s are linearly independent. Hence $\mathscr{R}(T)$ contains at least $m-1$ linearly independent vectors. Hence $\dim{\mathscr{R}(T)}\geq m-1$ and

$$ m-1\leq\dim{\mathscr{R}(T)}=k\quad\blacksquare $$

(31) Suppose $V$ is finite-dimensional and $v_1,\dots,v_m$ is a list of vectors in $V$. Then $v_1,\dots,v_m$ is linearly independent if and only if there exists $T\in\lnmpsb(V)$ such that $v_1,\dots,v_m$ are eigenvectors of $T$ corresponding to distinct eigenvalues.

Proof Suppose there exists $T\in\lnmpsb(V)$ such that $v_1,\dots,v_m$ are eigenvectors of $T$ corresponding to distinct eigenvalues. Then $v_1,\dots,v_m$ is linearly independent by 5.10, p.136.

Conversely, suppose $v_1,\dots,v_m$ is linearly independent. Then we can extend it to a basis $v_1,\dots,v_m,v_{m+1},\dots,v_n$ of $V$. Proposition 3.5, p.54 gives the existence of $T\in\lnmpsb(V)$ such that

$$ Tv_k=kv_k\quad\text{for }k=1,\dots,n $$

Hence $v_1,\dots,v_m$ are eigenvectors of $T$ corresponding to distinct eigenvalues $1,\dots,m$. $\blacksquare$

(32) Suppose $\lambda_1,\dots,\lambda_n$ is a list of distinct real numbers. Then the list $e^{\lambda_1x},\dots,e^{\lambda_nx}$ is linearly independent in the vector space of real-valued functions on $\mathbb{R}$.

Proof Recall definition 1.23, p.14. $\mathbb{F}^S$ denotes the set of functions from $S$ to $\mathbb{F}$. Then $\mathbb{R}^{\mathbb{R}}$ denotes the set of functions from $\mathbb{R}$ to $\mathbb{R}$.

Define $V\equiv\text{span}(e^{\lambda_1x},\dots,e^{\lambda_nx})\subset\mathbb{R}^{\mathbb{R}}$ and define the differentiation operator $D\in\lnmpsb(V)$ by $Df=f'$. Then

$$ De^{\lambda_kx}=\frac{d}{dx}e^{\lambda_kx}=\lambda_ke^{\lambda_kx} $$

Hence $\lambda_k$ is an eigenvalue of $D$ with corresponding eigenvector $e^{\lambda_kx}$. Since $\lambda_1,\dots,\lambda_n$ is a list of distinct real numbers, proposition 5.10, p.136 implies that $e^{\lambda_1x},\dots,e^{\lambda_nx}$ is linearly independent. $\blacksquare$

Exercises 5.B

(1) Suppose $T\in\lnmpsb(V)$ and there exists a positive integer $n$ such that $T^n=0$. Then $I-T$ is invertible and

$$ (I-T)^{-1}=I+T+\dotsb+T^{n-1} $$

Proof Note that for any $z\in\mathbb{C}$, we have

$$\begin{align*} (1-z)(1+z+z^2+z^3) &=1+z+z^2+z^3-z(1+z+z^2+z^3) \\ &=1+z+z^2+z^3-z-z^2-z^3-z^4 \\ &=1-z^4 \\ \end{align*}$$

More generally, we have

$$\begin{align*} 1-z^n&=(1-z)(1+z+z^2+\dotsb+z^{n-1}) \\ &=(1+z+z^2+\dotsb+z^{n-1})(1-z) \end{align*}$$

This familiar formula should lead us to guess the inverse of $I-T$ because the properties of operators allow us to replace the $1$’s with $I$’s and the $z$’s with $T$’s. Define $S\equiv I-T$ and note the use of the definition $T^0\equiv I$ and the equality $T^mT^n=T^{m+n}$ (5.16, p.143):

$$\begin{align*} (I-T)(I+T+T^2+\dotsb+T^{n-1}) &= S(I+T+T^2+\dotsb+T^{n-1}) \\ &= S\sum_{k=0}^{n-1}T^k\tag{by 5.16, p.143} \\ &= \sum_{k=0}^{n-1}ST^k\tag{by 3.9, p.56} \\ &= \sum_{k=0}^{n-1}(I-T)T^k \\ &= \sum_{k=0}^{n-1}(IT^k-TT^k)\tag{by 3.9, p.56} \\ &= \sum_{k=0}^{n-1}(T^k-T^{k+1})\tag{by 5.16, p.143} \\ &= \sum_{k=0}^{n-1}T^k+\sum_{k=0}^{n-1}(-T^{k+1}) \\ &= \sum_{k=0}^{n-1}T^k-\sum_{k=0}^{n-1}T^{k+1} \\ &= T^0+\sum_{k=1}^{n-1}T^k-\Big(T^n+\sum_{k=0}^{n-2}T^{k+1}\Big) \\ &= I+\sum_{k=1}^{n-1}T^k-\sum_{k=0}^{n-2}T^{k+1} \\ &= I+\sum_{k=1}^{n-1}T^k-\sum_{j=1}^{n-1}T^j \\ &= I \end{align*}$$

Or this might be more readable:

$$\begin{align*} (I-T)(I+T+T^2+\dotsb+T^{n-1}) &= S(I+T+T^2+\dotsb+T^{n-1}) \\ &= SI+ST+ST^2+\dotsb+ST^{n-1} \\ &= (I-T)+(I-T)T+(I-T)T^2+\dotsb+(I-T)T^{n-1} \\ &= I-T+T-T^2+T^2-T^3+\dotsb+T^{n-1}-T^n \\ &= I-T^n \\ &= I-0 \\ &= I \end{align*}$$

Similarly

$$ (I+T+\dotsb+T^{n-1})(I-T)=I-T^n=I $$

Hence $I-T$ is invertible and

$$ (I-T)^{-1}=I+T+\dotsb+T^{n-1}\quad\blacksquare $$
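A numerical check of the geometric-series inverse on a concrete nilpotent operator (numpy; not part of the original proof):

```python
import numpy as np

# A nilpotent operator on F^4: strictly upper-triangular, so T^4 = 0.
T = np.triu(np.ones((4, 4)), k=1)
n = 4
assert np.allclose(np.linalg.matrix_power(T, n), 0.0)

I = np.eye(4)
geom = sum(np.linalg.matrix_power(T, k) for k in range(n))  # I + T + T^2 + T^3
assert np.allclose((I - T) @ geom, I)
assert np.allclose(geom @ (I - T), I)
assert np.allclose(np.linalg.inv(I - T), geom)
```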

(2) Suppose $T\in\lnmpsb(V)$ and $(T-2I)(T-3I)(T-4I)=0$. If $\lambda$ is an eigenvalue of $T$, then $\lambda=2$ or $\lambda=3$ or $\lambda=4$.

Proof Let $v$ be an eigenvector of $T$ corresponding to $\lambda$. For any nonzero $c\in\mathbb{F}$, note that $cv$ is an eigenvector of $T$ corresponding to $\lambda$ since

$$ T(cv)=cTv=c\lambda v=\lambda(cv) $$

In exercise 5.B.10 below, we prove that $p(T)v=p(\lambda)v$ for any $p\in\mathscr{P}(\mathbb{F})$ and for any eigenvector $v$ of $T$ corresponding to $\lambda$. Now define

$$\begin{align*} p(z)&\equiv z-4 &\implies &\quad\quad (T-4I)v=p(T)v=p(\lambda)v=(\lambda-4)v \\ q(z)&\equiv z-3 &\implies &\quad\quad (T-3I)(cv)=q(T)(cv)=q(\lambda)(cv)=(\lambda-3)(cv)=\big[(\lambda-3)c\big]v \\ r(z)&\equiv z-2 &\implies &\quad\quad (T-2I)(cv)=r(T)(cv)=r(\lambda)(cv)=(\lambda-2)(cv)=\big[(\lambda-2)c\big]v \end{align*}$$

Then

$$\begin{align*} 0 &= (T-2I)(T-3I)(T-4I)v \\ &= \big[(T-2I)(T-3I)(T-4I)\big]v \\ &= \big[(T-2I)(T-3I)\big]\big((T-4I)v\big)\tag{by 3.8, p.55} \\ &= \big[(T-2I)(T-3I)\big]\big((\lambda-4)v\big) \\ &= (T-2I)\big[(T-3I)\big((\lambda-4)v\big)\big]\tag{by 3.8, p.55} \\ &= (T-2I)\big[\big((\lambda-3)(\lambda-4)\big)v\big] \\ &= \big[(\lambda-2)(\lambda-3)(\lambda-4)\big]v \end{align*}$$

This might be more readable:

$$\begin{align*} 0 &= (T-2I)(T-3I)(T-4I)v \\ &= \big[r(T)q(T)p(T)\big]v \\ &= \big[r(T)q(T)\big]\big(p(T)v\big)\tag{by 3.8, p.55} \\ &= \big[r(T)q(T)\big]\big((\lambda-4)v\big) \\ &= r(T)\big[q(T)\big((\lambda-4)v\big)\big]\tag{by 3.8, p.55} \\ &= r(T)\big[\big((\lambda-3)(\lambda-4)\big)v\big] \\ &= \big[(\lambda-2)(\lambda-3)(\lambda-4)\big]v \end{align*}$$

Since the eigenvector $v$ is not zero, then

$$ 0=(\lambda-2)(\lambda-3)(\lambda-4) $$

Hence $\lambda=2$ or $\lambda=3$ or $\lambda=4$. $\blacksquare$

(3) Suppose $T\in\lnmpsb(V)$ and $T^2=I$ and $-1$ is not an eigenvalue of $T$. Then $T=I$.

Proof For any $v\in V$, we have

$$ v=\frac12(v-Tv)+\frac12(v+Tv)\tag{5.B.3.1} $$

Since $T^2=I$, we have

$$ (T+I)\Big(\frac12(v-Tv)\Big)=\frac12(T+I)(I-T)v=\frac12(I-T^2)v=0 $$

Hence $\frac12(v-Tv)\in\mathscr{N}(T+I)$. Similarly

$$ (T-I)\Big(\frac12(v+Tv)\Big)=\frac12(T-I)(I+T)v=\frac12(T^2-I)v=0 $$

Hence $\frac12(v+Tv)\in\mathscr{N}(T-I)$. By 5.B.3.1, we have $V=\mathscr{N}(T-I)+\mathscr{N}(T+I)$.

But $-1$ is not an eigenvalue of $T$. Hence no nonzero $v\in V$ exists such that $Tv=-v$. That is, no nonzero $v\in V$ exists such that $(T+I)v=Tv+v=0$. That is, $\mathscr{N}(T+I)=\{0\}$. Hence

$$ V=\mathscr{N}(T-I)+\{0\}=\big\{u+0:u\in\mathscr{N}(T-I)\big\}=\mathscr{N}(T-I) $$

Hence every $v\in V$ satisfies $Tv-v=(T-I)v=0$. Equivalently we have $Tv=v=Iv$ for every $v\in V$. Hence $T=I$. $\blacksquare$

(4) Suppose $P\in\lnmpsb(V)$ and $P^2=P$. Then $V=\mathscr{N}(P)\oplus\mathscr{R}(P)$.

Proof For any $v\in V$, we have

$$ v=(v-Pv)+Pv \tag{5.B.4.1} $$

In the next equation, the supposition $P-P^2=0$ gives the last equality:

$$\begin{align*} P(v-Pv) &= P(Iv-Pv) \\ &= P\big((I-P)v\big)\tag{by 3.6, p.55} \\ &= \big(P(I-P)\big)v\tag{by 3.8, p.55} \\ &= (P-P^2)v\tag{by 3.9, p.56} \\ &=0 \end{align*}$$

Hence $v-Pv\in\mathscr{N}(P)$. And clearly $Pv\in\mathscr{R}(P)$. Then 5.B.4.1 gives $V=\mathscr{N}(P)+\mathscr{R}(P)$.

Suppose $v\in\mathscr{N}(P)\cap\mathscr{R}(P)$. Then $Pv=0$ and there exists $u\in V$ such that $Pu=v$. Hence

$$ 0=Pv=P(Pu)=P^2u=Pu=v $$

Then proposition 1.45, p.23 implies that $V=\mathscr{N}(P)\oplus\mathscr{R}(P)$. $\blacksquare$
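A small concrete instance (numpy; an addition to the text): an idempotent that is not an orthogonal projection still splits $\mathbb{R}^2$ as null space plus range.

```python
import numpy as np

# A (non-orthogonal) idempotent: P^2 = P.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(P @ P, P)

# null(P) = span{(1, -1)} and range(P) = span{(1, 0)}.
n_vec = np.array([1.0, -1.0])
r_vec = np.array([1.0, 0.0])
assert np.allclose(P @ n_vec, 0.0)                    # n_vec in null(P)
assert np.allclose(P @ np.array([0.0, 1.0]), r_vec)   # r_vec in range(P)

# The two spanning vectors form a basis, so R^2 = null(P) (+) range(P).
assert abs(np.linalg.det(np.column_stack([n_vec, r_vec]))) > 1e-12
```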

(6) Suppose $T\in\lnmpsb(V)$ and $U$ is a subspace of $V$ invariant under $T$. Then $U$ is invariant under $p(T)$ for every polynomial $p\in\mathscr{P}(\mathbb{F})$.

Proof Let $u\in U$. Hence $Tu\in U$. Hence $T^2u=T(Tu)\in U$. Hence $T^3u=T(T^2u)\in U$. A simple induction proof shows that $U$ is invariant under $T^n$ for any $n\in\mathbb{N}^+$.

Proceeding with an induction proof, let $p\in\mathscr{P}(\mathbb{F})$ with $\deg{p}=0$. Then $p(T)=a_0T^0=a_0I$ by definition 5.16, p.144. And $p(T)u=a_0Iu=a_0u\in U$. Hence $U$ is invariant under $p(T)$. Similarly let $p\in\mathscr{P}(\mathbb{F})$ with $\deg{p}=1$. Then

$$ p(T)=a_0T^0+a_1T^1=a_0I+a_1T $$

and

$$ p(T)u=(a_0I+a_1T)u=a_0Iu+a_1Tu=a_0u+a_1Tu\in U $$

The “$\in U$” holds because $a_0u\in U$ and $a_1Tu\in U$ by the supposition that $U$ is a subspace invariant under $T$.

Now assume $U$ is invariant under $p(T)$ for any $p\in\mathscr{P}(\mathbb{F})$ with $\deg{p}\leq n-1$. Let $q\in\mathscr{P}(\mathbb{F})$ with $\deg{q}=n$. Then $q=\sum_{k=0}^na_kx^k$ where $a_n\neq0$ and

$$ q(T)u=\Big(\sum_{k=0}^na_kT^k\Big)u=\Big(\sum_{k=0}^{n-1}a_kT^k+a_nT^n\Big)u=\Big(\sum_{k=0}^{n-1}a_kT^k\Big)u+a_nT^nu $$

The induction assumption implies that $\big(\sum_{k=0}^{n-1}a_kT^k\big)u\in U$. We also showed that $U$ is invariant under $T^n$. Hence $a_nT^nu\in U$. Since $U$ is a subspace and closed under addition, then $q(T)u\in U$ and $U$ is invariant under $q(T)$. $\blacksquare$

(7) Suppose $T\in\lnmpsb(V)$. Then $9$ is an eigenvalue of $T^2$ if and only if $3$ or $-3$ is an eigenvalue of $T$.

Proof Let’s assume $V$ is finite-dimensional.

Suppose $\lambda$ and $v$ form an eigenpair of $S\in\lnmpsb(V)$. Then

$$ S^2v=(SS)v=S(Sv)=S(\lambda v)=\lambda Sv=\lambda\lambda v=\lambda^2v $$

That is, $\lambda^2$ and $v$ form an eigenpair of $S^2$. We will prove a more general result in exercise 5.B.10 below.

Hence if $3$ or $-3$ is an eigenvalue of $T$, then $9$ is an eigenvalue of $T^2$.

Conversely, if $9$ is an eigenvalue of $T^2$, then $T^2-9I$ is not injective (by 5.6, p.134). Then $(T-3I)(T+3I)$ is not injective since

$$\begin{align*} (T-3I)(T+3I) &= S(T+3I)\tag{$S\equiv T-3I$} \\ &= ST+3SI\tag{by 3.9, p.56} \\ &= (T-3I)T+3S \\ &= TT-3IT+3(T-3I)\tag{by 3.9, p.56} \\ &= T^2-3T+3T-9I \\ &= T^2-9I \end{align*}$$

Then the contrapositive statement of problem 3.B.11 implies that $(T-3I)$ or $(T+3I)$ is not injective. Hence $3$ or $-3$ is an eigenvalue of $T$ (by 5.6, p.134). $\blacksquare$
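The factorization used above is a polynomial identity in $T$, easy to confirm numerically (numpy, a random matrix; not part of the original proof):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(3, 3))
I = np.eye(3)

# The factorization used above: T^2 - 9I = (T - 3I)(T + 3I).
assert np.allclose(T @ T - 9 * I, (T - 3 * I) @ (T + 3 * I))
```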

(8) Give an example of $T\in\lnmpsb(\mathbb{R}^2)$ such that $T^4=-I$.

Example Proposition 3.5, p.54 gives the existence of $T\in\lnmpsb(\mathbb{R}^2)$ such that

$$ T(e_1)\equiv\Big(\cos{\frac{\pi}4}\Big)e_1-\Big(\sin{\frac{\pi}4}\Big)e_2=\Big(\cos{\frac{\pi}4},-\sin{\frac{\pi}4}\Big) $$

and

$$ T(e_2)\equiv\Big(\sin{\frac{\pi}4}\Big)e_1+\Big(\cos{\frac{\pi}4}\Big)e_2=\Big(\sin{\frac{\pi}4},\cos{\frac{\pi}4}\Big) $$

where $e_1,e_2$ is the standard basis of $\mathbb{R}^2$. Equivalently

$$\begin{align*} T(x_1,x_2) &= T\big((x_1,0)+(0,x_2)\big) \\ &= T\big(x_1(1,0)+x_2(0,1)\big) \\ &= T\big(x_1e_1+x_2e_2\big) \\ &= x_1Te_1+x_2Te_2 \\ &= x_1\Big(\cos{\frac{\pi}4},-\sin{\frac{\pi}4}\Big)+x_2\Big(\sin{\frac{\pi}4},\cos{\frac{\pi}4}\Big) \\ &= \Big(x_1\cos{\frac{\pi}4}+x_2\sin{\frac{\pi}4},-x_1\sin{\frac{\pi}4}+x_2\cos{\frac{\pi}4}\Big) \end{align*}$$

Then

$$ \mtrxof{T}_{:,1}=\mtrxof{Te_1}=\mtrxofsb\Bigg[\Big(\cos{\frac{\pi}4},-\sin{\frac{\pi}4}\Big)\Bigg]=\begin{bmatrix}\cos{\frac\pi4}\\-\sin{\frac\pi4}\end{bmatrix} $$

and

$$ \mtrxof{T}_{:,2}=\mtrxof{Te_2}=\mtrxofsb\Bigg[\Big(\sin{\frac{\pi}4},\cos{\frac{\pi}4}\Big)\Bigg]=\begin{bmatrix}\sin{\frac\pi4}\\\cos{\frac\pi4}\end{bmatrix} $$

Then

$$ \mtrxof{T}=\begin{bmatrix}\cos{\frac\pi4}&\sin{\frac\pi4}\\-\sin{\frac\pi4}&\cos{\frac\pi4}\end{bmatrix} $$

Proposition W.5.4 gives $\mtrxof{T^4}=\mtrxof{T}^4$. And proposition W.5.5 implies the second equality:

$$ \mtrxof{T^4}=\mtrxof{T}^4=\begin{bmatrix}\cos{(4\cdot\frac\pi4)}&\sin{(4\cdot\frac\pi4)}\\-\sin{(4\cdot\frac\pi4)}&\cos{(4\cdot\frac\pi4)}\end{bmatrix} =\begin{bmatrix}\cos{\pi}&\sin{\pi}\\-\sin{\pi}&\cos{\pi}\end{bmatrix} =\begin{bmatrix}-1&0\\0&-1\end{bmatrix} =\mtrxof{-I} $$

Since $\mtrxofsb$ is an isomorphism (3.60, p.83) between $\lnmpsb(\mathbb{R}^2)$ and $\mathbb{R}^{2,2}$, then $\mtrxofsb$ is injective and $T^4=-I$. $\blacksquare$
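The matrix computation can be checked directly (numpy; an addition to the text):

```python
import numpy as np

c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
M = np.array([[c,  s],
              [-s, c]])   # the matrix of T above: rotation by -45 degrees

# Four applications rotate by -180 degrees, i.e. T^4 = -I.
assert np.allclose(np.linalg.matrix_power(M, 4), -np.eye(2))
```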

(10) Suppose $T\in\lnmpsb(V)$ and $v$ is an eigenvector of $T$ with eigenvalue $\lambda$. Then $p(T)v=p(\lambda)v$ for any $p\in\mathscr{P}(\mathbb{F})$.

In words, if $v$ and $\lambda$ form an eigenpair of $T$, then $v$ and $p(\lambda)$ form an eigenpair of $p(T)$ for any $p\in\mathscr{P}(\mathbb{F})$.

Proof Since $Tv=\lambda v$, then

$$ T^2v=T(Tv)=T(\lambda v)=\lambda Tv=\lambda^2v\quad\quad\quad T^3v=T(T^2v)=T(\lambda^2 v)=\lambda^2 Tv=\lambda^3v $$

and, by a simple induction, $T^nv=\lambda^nv$ for every $n\in\mathbb{N}^+$. Define $p\in\mathscr{P}(\mathbb{F})$ by

$$ p\equiv\sum_{k=0}^na_kx^k $$

Then

$$ p(T)v=\Big(\sum_{k=0}^na_kT^k\Big)v=\sum_{k=0}^na_kT^kv=\sum_{k=0}^na_k\lambda^kv=\Big(\sum_{k=0}^na_k\lambda^k\Big)v=p(\lambda)v\quad\blacksquare $$
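A quick numerical instance of $p(T)v=p(\lambda)v$ (numpy; not part of the original proof), with a hypothetical choice $p(z)=z^2-3z+2$:

```python
import numpy as np

# If Tv = lam * v, then p(T)v = p(lam)v; check with p(z) = z^2 - 3z + 2.
T = np.array([[2.0, 1.0],
              [0.0, 5.0]])
lam, v = 5.0, np.array([1.0, 3.0])
assert np.allclose(T @ v, lam * v)  # (lam, v) is an eigenpair of T

pT = T @ T - 3 * T + 2 * np.eye(2)
assert np.allclose(pT @ v, (lam**2 - 3 * lam + 2) * v)
```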

(11) Suppose $\mathbb{F}=\mathbb{C}$, $T\in\lnmpsb(V)$, $p\in\mathscr{P}(\mathbb{C})$ is a polynomial, and $\alpha\in\mathbb{C}$. Then $\alpha$ is an eigenvalue of $p(T)$ if and only if $\alpha=p(\lambda)$ for some eigenvalue $\lambda$ of $T$.

Proof Suppose that $\alpha$ is an eigenvalue of $p(T)$. Proposition 4.14, p.125 gives that we can write the polynomial $p(z)-\alpha$ in factored form:

$$ p(z)-\alpha=c(z-\lambda_1)\dotsb(z-\lambda_m)\tag{5.B.11.1} $$

where $c,\lambda_1,\dots,\lambda_m\in\mathbb{C}$.

If $c=0$, then $p(z)-\alpha$ is the zero polynomial, so $p(z)=\alpha$ for all $z\in\mathbb{C}$. Since $V$ is a finite-dimensional complex vector space, proposition 5.21, p.145 gives that $T$ has some eigenvalue $\lambda$, and $p(\lambda)=\alpha$.

If $c\neq0$, then the equation above implies

$$ p(T)-\alpha I=c(T-\lambda_1I)\dotsb(T-\lambda_mI) $$

Since $p(T)-\alpha I$ is not injective (by 5.6, p.134), the contrapositive of problem 3.B.11 implies that $T-\lambda_jI$ is not injective for some $j$. Hence $\lambda_j$ is an eigenvalue of $T$ (by 5.6). Since the formula in 5.B.11.1 holds for all $z\in\mathbb{C}$, we have $p(\lambda_j)-\alpha=0$, that is, $p(\lambda_j)=\alpha$.

Conversely, suppose $\alpha=p(\lambda)$ for some eigenvalue $\lambda$ of $T$, and let $v\in V$ be a corresponding eigenvector. Then the result from the previous problem gives the first equality:

$$ p(T)v=p(\lambda)v=\alpha v $$

Hence $\alpha$ is an eigenvalue of $p(T)$. $\blacksquare$
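A hypothetical numerical check of this spectral-mapping statement (random complex matrix and polynomial chosen for the check, not from the text): the eigenvalues of $p(T)$ should be exactly the values $p(\lambda)$ over the eigenvalues $\lambda$ of $T$.

```python
import numpy as np

# A random complex 4x4 matrix and the polynomial p(x) = x^2 - 3x + 2.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
pT = T @ T - 3 * T + 2 * np.eye(4, dtype=complex)

eigs_T = np.linalg.eigvals(T)
eigs_pT = np.linalg.eigvals(pT)
mapped = eigs_T**2 - 3 * eigs_T + 2   # p applied to each eigenvalue of T

# Every p(lambda) should appear among the eigenvalues of p(T).
for m in mapped:
    assert np.min(np.abs(eigs_pT - m)) < 1e-6
```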

(12) Show that the result in the previous exercise doesn’t hold if $\mathbb{C}$ is replaced with $\mathbb{R}$.

Proof Define $T\in\lnmpsb(\mathbb{R}^2)$ by $T(x,y)\equiv(-y,x)$. Define $p(x)\equiv x^2$. Then $p(T)=T^2=-I$ since

$$ p(T)(x,y)=T^2(x,y)=T(-y,x)=(-x,-y)=-(x,y) $$

Hence $-1$ is an eigenvalue of $p(T)$. But $T$ has no eigenvalues if $\mathbb{F}=\mathbb{R}$:

This operator has a nice geometric interpretation: $T$ is a counterclockwise rotation by $90^{\circ}$ about the origin in $\mathbb{R}^2$. An operator has an eigenvalue if and only if some nonzero vector in its domain is sent to a scalar multiple of itself. A $90^{\circ}$ rotation of a nonzero vector in $\mathbb{R}^2$ never lies on the line spanned by that vector. Conclusion: over $\mathbb{F}=\mathbb{R}$, the operator $T$ has no eigenvalues. $\blacksquare$
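We can confirm both halves of the counterexample numerically with the matrix of $T$ (this is a sanity check, not part of the argument):

```python
import numpy as np

# T(x, y) = (-y, x): the 90-degree rotation matrix.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Its eigenvalues are +-i, so T has no real eigenvalues.
eigs = np.linalg.eigvals(T)
assert np.allclose(sorted(eigs, key=lambda z: z.imag), [-1j, 1j])

# But p(T) = T^2 = -I, which has the real eigenvalue -1.
assert np.allclose(T @ T, -np.eye(2))
```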

(13) Suppose $W$ is a complex vector space and $T\in\lnmpsb(W)$ has no eigenvalues. Then every subspace of $W$ invariant under $T$ is either $\{0\}$ or infinite-dimensional.

Proof Suppose a finite-dimensional, nonzero subspace $U$ of $W$ is invariant under $T$. Then proposition 5.21, p.145 implies that the restriction $T|_U$ has an eigenpair $\lambda,u$ with $u\in U\subset W$. That is

$$ Tu=T|_U(u)=\lambda u $$

That is, $\lambda$ is an eigenvalue of $T$, contradicting the hypothesis that $T\in\lnmpsb(W)$ has no eigenvalues. $\blacksquare$

(14) Give an example of an operator whose matrix with respect to some basis contains only $0$’s on the diagonal, but the operator is invertible.

This exercise and the next one show that 5.30 fails without the hypothesis that an upper-triangular matrix is under consideration.

Counterexample Proposition 3.5, p.54 gives the existence of an operator $T\in\lnmpsb(\mathbb{R}^2)$ such that

$$ Te_1\equiv e_2\quad\quad\quad\quad Te_2\equiv e_1 $$

where $e_1,e_2$ is the standard basis of $\mathbb{R}^2$. Then

$$ \mtrxof{T}_{:,1}=\mtrxof{Te_1}=\mtrxofsb(e_2)=\begin{bmatrix}0\\1\end{bmatrix} \quad\quad\quad \mtrxof{T}_{:,2}=\mtrxof{Te_2}=\mtrxofsb(e_1)=\begin{bmatrix}1\\0\end{bmatrix} $$

Then the matrix of $T$ has only zeros on the diagonal:

$$ \mtrxof{T}=\begin{bmatrix}0&1\\1&0\end{bmatrix} $$

But $T$ is invertible. We can see this by squaring the matrix or noting that

$$ TTe_1=Te_2=e_1\quad\quad\quad\quad TTe_2=Te_1=e_2 $$

so that

$$\begin{align*} TT(x,y) &= TT\big((x,0)+(0,y)\big) \\ &= TT\big(x(1,0)+y(0,1)\big) \\ &= TT\big(xe_1+ye_2\big) \\ &= xTTe_1+yTTe_2 \\ &= xe_1+ye_2 \\ &= (x,y) \end{align*}$$

That is $T$ is its own inverse. $\blacksquare$
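The squaring-the-matrix route mentioned above is a one-line check with numpy (sanity check only):

```python
import numpy as np

# M(T) from the counterexample: all zeros on the diagonal.
M = np.array([[0, 1],
              [1, 0]])

# M^2 = I, so T is invertible (its own inverse).
assert np.array_equal(M @ M, np.eye(2, dtype=int))
```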

(15) Give an example of an operator whose matrix with respect to some basis contains only nonzero numbers on the diagonal, but the operator is not invertible.

Counterexample Proposition 3.5, p.54 gives the existence of an operator $T\in\lnmpsb(\mathbb{R}^2)$ such that

$$ Te_1\equiv e_1+e_2\quad\quad\quad\quad Te_2\equiv e_1+e_2 $$

where $e_1,e_2$ is the standard basis of $\mathbb{R}^2$. Then

$$ \mtrxof{T}_{:,1}=\mtrxof{Te_1}=\mtrxofsb(e_1+e_2)=\begin{bmatrix}1\\1\end{bmatrix} \quad\quad\quad \mtrxof{T}_{:,2}=\mtrxof{Te_2}=\mtrxofsb(e_1+e_2)=\begin{bmatrix}1\\1\end{bmatrix} $$

Then the matrix of $T$ has only nonzeros on the diagonal:

$$ \mtrxof{T}=\begin{bmatrix}1&1\\1&1\end{bmatrix} $$

But $T$ is not injective:

$$ T(1,0)=Te_1=e_1+e_2=Te_2=T(0,1) $$

Hence $T$ is not invertible. $\blacksquare$
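Numerically, the matrix of $T$ has rank $1$, which confirms non-invertibility (sanity check only):

```python
import numpy as np

# M(T) from the counterexample: all nonzero entries on the diagonal.
M = np.ones((2, 2))

# Rank 1 < 2 and zero determinant: T is not invertible.
assert np.linalg.matrix_rank(M) == 1
assert np.isclose(np.linalg.det(M), 0.0)
```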

(16) Rewrite the proof of 5.21 using the linear map that sends $p\in\mathscr{P}_n(\mathbb{C})$ to $p(T)v\in V$ (and use 3.23).

Proof Let $n=\dim{V}$ and let $v\in V$ with $v\neq0$. Define $\phi:\mathscr{P}_n(\mathbb{C})\to V$ by

$$ \phi(p)\equiv p(T)v $$

Then $\phi$ is linear. For additivity, write $p=\sum_{k=0}^na_kx^k$ and $q=\sum_{k=0}^nb_kx^k$:

$$ \phi(p+q)=(p+q)(T)v=\Big(\sum_{k=0}^n(a_k+b_k)T^k\Big)v=\sum_{k=0}^na_kT^kv+\sum_{k=0}^nb_kT^kv=p(T)v+q(T)v=\phi(p)+\phi(q) $$

The homogeneity check is similar. Note that $\dim{\mathscr{P}_n(\mathbb{C})}=n+1$ and $\dim{V}=n$. Proposition 3.23, p.64 gives that a map to a smaller dimensional space is not injective. Hence $\phi$ is not injective and $\mathscr{N}(\phi)\neq\{0\}$. Then there exists a nonzero $p\in\mathscr{P}_n(\mathbb{C})$ such that

$$ p(T)v=\phi(p)=0 $$

Then the Fundamental Theorem of Algebra (4.14) implies a factorization $\dots$. $\blacksquare$

(17) Rewrite the proof of 5.21 using the linear map that sends $p\in\mathscr{P}_{n^2}(\mathbb{C})$ to $p(T)\in\lnmpsb(V)$ (and use 3.23).

Proof Let $n=\dim{V}$ and let $T\in\lnmpsb(V)$. Proposition 3.61, p.84 gives that $n^2=\dim{\lnmpsb(V)}$. Define $\phi:\mathscr{P}_{n^2}(\mathbb{C})\to\lnmpsb(V)$ by

$$ \phi(p)\equiv p(T) $$

Then $\phi$ is linear. For additivity, write $p=\sum_{k=0}^{n^2}a_kx^k$ and $q=\sum_{k=0}^{n^2}b_kx^k$:

$$ \phi(p+q)=(p+q)(T)=\sum_{k=0}^{n^2}(a_k+b_k)T^k=\sum_{k=0}^{n^2}a_kT^k+\sum_{k=0}^{n^2}b_kT^k=p(T)+q(T)=\phi(p)+\phi(q) $$

The homogeneity check is similar. Note that $\dim{\mathscr{P}_{n^2}(\mathbb{C})}=n^2+1$ and $\dim{\lnmpsb(V)}=n^2$. Proposition 3.23, p.64 gives that a map to a smaller dimensional space is not injective. Hence $\phi$ is not injective and $\mathscr{N}(\phi)\neq\{0\}$. Then there exists a nonzero $p\in\mathscr{P}_{n^2}(\mathbb{C})$ such that

$$ p(T)=\phi(p)=0 $$

Then the Fundamental Theorem of Algebra (4.14) implies a factorization $\dots$. $\blacksquare$

(18) Suppose $V$ is a finite-dimensional complex vector space and $T\in\lnmpsb(V)$. Define a function $f:\mathbb{C}\mapsto\mathbb{R}$ by

$$ f(\lambda)\equiv\dim{\mathscr{R}(T-\lambda I)} $$

Then $f$ is not continuous.

Proof Let $\lambda_0$ be an eigenvalue of $T$. Proposition 5.6 gives that $T-\lambda_0I$ is not surjective. Then $\mathscr{R}(T-\lambda_0 I)\neq V$ and proposition W.2.12 gives $\dim{\mathscr{R}(T-\lambda_0 I)}<\dim{V}$.

Proposition 5.13, p.136 gives that $T$ has only finitely many eigenvalues. Hence there exists a sequence $\{\lambda_n\}_{n=1}^\infty$, none of whose terms is an eigenvalue of $T$, such that

$$ \lim_{n\rightarrow\infty}\lambda_n=\lambda_0 $$

Then proposition 5.6 gives that $T-\lambda_nI$ is surjective for $n=1,2,\dots$. Hence $\mathscr{R}(T-\lambda_nI)=V$ and

$$ f(\lambda_n)=\dim{\mathscr{R}(T-\lambda_nI)}=\dim{V} $$

Hence

$$ \lim_{n\rightarrow\infty}f(\lambda_n)=\lim_{n\rightarrow\infty}\dim{V}=\dim{V}>\dim{\mathscr{R}(T-\lambda_0I)}=f(\lambda_0) $$

Hence $f$ is not continuous. $\blacksquare$
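A hypothetical numerical illustration of the discontinuity (the matrix below is made up for the check): the rank of $T-\lambda I$ drops exactly at an eigenvalue, while nearby non-eigenvalues give full rank.

```python
import numpy as np

# T = diag(1, 2); f(lambda) = dim range(T - lambda I) = rank(T - lambda I).
T = np.diag([1.0, 2.0])
f = lambda lam: np.linalg.matrix_rank(T - lam * np.eye(2))

assert f(1.0) == 1           # at the eigenvalue lambda_0 = 1, the rank drops
assert f(1.0 + 1e-3) == 2    # at nearby non-eigenvalues, full rank
assert f(1.0 - 1e-3) == 2
```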

(20) Suppose $V$ is a finite-dimensional complex vector space and $T\in\lnmpsb(V)$. Then $T$ has an invariant subspace of dimension $k$ for each $k=1,\dots,\dim{V}$.

Proof Proposition 5.27, p.149 gives that $T$ has an upper-triangular matrix with respect to some basis $v_1,\dots,v_{\dim{V}}$ of $V$. Then proposition 5.26, p.148 gives that the subspace $\text{span}(v_1,\dots,v_k)$ is invariant under $T$ for each $k=1,\dots,\dim{V}$. $\blacksquare$

Exercises 5.C

(1) Suppose $T\in\lnmpsb(V)$ is diagonalizable. Then $V=\mathscr{N}(T)\oplus\mathscr{R}(T)$.

Proof We will presume that $V$ is finite-dimensional.

If $T$ is invertible, then its injectivity (by 3.69) implies $\mathscr{N}(T)=\{0\}$ (by 3.16) and its surjectivity (by 3.69) implies $\mathscr{R}(T)=V$ (by 3.20). Since $V\cap\{0\}=\{0\}$, then proposition 1.45 implies

$$ V=\{0\}\oplus V=\mathscr{N}(T)\oplus\mathscr{R}(T) $$

since

$$ V=\big\{v:v\in V\big\}=\big\{0+v:v\in V\big\}=\big\{u+v:u\in\{0\},v\in V\big\}=\{0\}+V $$

Suppose $T$ is not invertible. Since $T$ is diagonalizable, then it has a diagonal matrix representation and hence has an upper-triangular matrix representation. Since $T$ is not invertible, then proposition 5.30 implies that some of the entries on the diagonal are $0$. Then proposition 5.32 implies that $0$ is an eigenvalue of $T$. Let $0,\lambda_1,\dots,\lambda_m$ be all of the distinct eigenvalues of $T$, so that $\lambda_k\neq0$ for $k=1,\dots,m$.

By 5.41(d), the diagonalizability of $T$ implies that

$$ V=\eignsb(0,T)\oplus\eignsb(\lambda_1,T)\oplus\dotsb\oplus\eignsb(\lambda_m,T) \tag{5.C.1.1} $$

Note that

$$ \eignsb(0,T)=\mathscr{N}(T-0I)=\mathscr{N}(T) \tag{5.C.1.2} $$

Let $v_k\in\eignsb(\lambda_k,T)$. Then

$$ v_k=\frac1{\lambda_k}\lambda_kv_k=\frac1{\lambda_k}Tv_k=T\Big(\frac1{\lambda_k}v_k\Big)\in\mathscr{R}(T) $$

and $\eignsb(\lambda_k,T)\subset\mathscr{R}(T)$ for $k=1,\dots,m$. Since the sum of subspaces is the smallest containing subspace of the subspaces (by 1.39, p.21), then

$$ \eignsb(\lambda_1,T)+\dotsb+\eignsb(\lambda_m,T)\subset\mathscr{R}(T) $$

But 5.38 implies that $\eignsb(\lambda_1,T)+\dotsb+\eignsb(\lambda_m,T)$ is a direct sum. Hence

$$ \eignsb(\lambda_1,T)\oplus\dotsb\oplus\eignsb(\lambda_m,T)\subset\mathscr{R}(T) \tag{5.C.1.3} $$

In the other direction, let $r\in\mathscr{R}(T)$. Then $r=Tv$ for some $v\in V$. Then 5.C.1.1 implies that $v$ can be written as

$$ v=\sum_{k=0}^mv_k $$

where $v_0\in\eignsb(0,T)$ and $v_k\in\eignsb(\lambda_k,T)$ for $k=1,\dots,m$. Then

$$ r=Tv=T\Big(\sum_{k=0}^mv_k\Big)=\sum_{k=0}^mTv_k=0\cdot v_0+\sum_{k=1}^m\lambda_kv_k\in\eignsb(\lambda_1,T)\oplus\dotsb\oplus\eignsb(\lambda_m,T) $$

This implies

$$ \mathscr{R}(T)\subset\eignsb(\lambda_1,T)\oplus\dotsb\oplus\eignsb(\lambda_m,T) \tag{5.C.1.4} $$

Combining 5.C.1.3 and 5.C.1.4, we get

$$ \mathscr{R}(T)=\eignsb(\lambda_1,T)\oplus\dotsb\oplus\eignsb(\lambda_m,T) \tag{5.C.1.5} $$

And combining 5.C.1.1, 5.C.1.2, and 5.C.1.5, we get

$$\begin{align*} V &= \eignsb(0,T)\oplus\eignsb(\lambda_1,T)\oplus\dotsb\oplus\eignsb(\lambda_m,T) \\ &= \mathscr{N}(T)\oplus\mathscr{R}(T) \quad\blacksquare \end{align*}$$


(3) Suppose $V$ is finite-dimensional and $T\in\lnmpsb(V)$. Then the following are equivalent:

(a) $V=\mathscr{N}(T)\oplus\mathscr{R}(T)$

(b) $V=\mathscr{N}(T)+\mathscr{R}(T)$

(c) $\mathscr{N}(T)\cap\mathscr{R}(T)=\{0\}$

Proof (a)$\implies$(b): a direct sum is in particular a sum, so

$$ V=\mathscr{N}(T)\oplus\mathscr{R}(T)=\mathscr{N}(T)+\mathscr{R}(T) $$

(b)$\implies$(c):

$$\begin{align*} \dim{\mathscr{N}(T)}+\dim{\mathscr{R}(T)} &= \dim{V}\tag{by 3.22} \\ &= \dim{\big(\mathscr{N}(T)+\mathscr{R}(T)\big)}\tag{by (b)} \\ &= \dim{\mathscr{N}(T)}+\dim{\mathscr{R}(T)}-\dim{\big(\mathscr{N}(T)\cap\mathscr{R}(T)\big)}\tag{by 2.43} \\ \end{align*}$$

Hence $0=\dim{\big(\mathscr{N}(T)\cap\mathscr{R}(T)\big)}$ and $\{0\}=\mathscr{N}(T)\cap\mathscr{R}(T)$.

(c)$\implies$(a):

$$\begin{align*} \dim{V} &= \dim{\mathscr{N}(T)}+\dim{\mathscr{R}(T)}\tag{by 3.22} \\ &= \dim{\mathscr{N}(T)}+\dim{\mathscr{R}(T)}-\dim{\big(\mathscr{N}(T)\cap\mathscr{R}(T)\big)}\tag{by (c)} \\ &= \dim{\big(\mathscr{N}(T)+\mathscr{R}(T)\big)}\tag{by 2.43} \\ \end{align*}$$

Since $\mathscr{N}(T)+\mathscr{R}(T)$ is a subspace of $V$, then proposition W.2.12 implies that $V=\mathscr{N}(T)+\mathscr{R}(T)$. And proposition 1.45 implies $V=\mathscr{N}(T)\oplus\mathscr{R}(T)$ since $\mathscr{N}(T)\cap\mathscr{R}(T)=\{0\}$. $\blacksquare$

(6) Suppose $V$ is finite-dimensional, $T\in\lnmpsb(V)$ has $\dim{V}$ distinct eigenvalues, and $S\in\lnmpsb(V)$ has the same eigenvectors as $T$ (not necessarily with the same eigenvalues). Then $ST=TS$.

Proof Let $\lambda_1,\dots,\lambda_{\dim{V}}$ be the distinct eigenvalues of $T$. And let $v_1,\dots,v_{\dim{V}}$ be the corresponding eigenvectors, respectively. Also let $\theta_1,\dots,\theta_{\dim{V}}$ be the corresponding eigenvalues of $S$, respectively. Then for $i=1,\dots,\dim{V}$, we have

$$ Tv_i=\lambda_iv_i \quad\quad\quad\quad Sv_i=\theta_iv_i $$

and

$$ STv_i=S(\lambda_iv_i)=\lambda_iSv_i=\lambda_i\theta_iv_i \\ TSv_i=T(\theta_iv_i)=\theta_iTv_i=\theta_i\lambda_iv_i $$

Proposition 5.10 gives that $v_1,\dots,v_{\dim{V}}$ is linearly independent, so proposition 2.39 gives that $v_1,\dots,v_{\dim{V}}$ is a basis of $V$. Since $(ST-TS)v_i=0$ for $i=1,\dots,\dim{V}$, then proposition W.3.12 implies that $ST-TS=0$ on $V$. $\blacksquare$

(7) Suppose $T\in\lnmpsb(V)$ has a diagonal matrix $A$ with respect to some basis of $V$. And let $\lambda\in\mathbb{F}$. Then $\lambda$ appears on the diagonal of $A$ precisely $\dim{\eignsb(\lambda,T)}$ times.

Proof Let $v_1,\dots,v_n$ be the basis with respect to which $T$ has the diagonal matrix $A$. And let $\beta_1,\dots,\beta_n$ be the nondistinct eigenvalues corresponding to $v_1,\dots,v_n$. Then, by 5.32, the $\beta$’s are precisely the entries on the diagonal of $\mtrxofb{T,(v_1,\dots,v_n)}$:

$$ \nbmtrx{ Tv_1=\beta_1v_1\\ Tv_2=\beta_2v_2\\ \vdots\\ Tv_n=\beta_nv_n } \dq\dq \iff \dq\dq \mtrxofb{T,(v_1,\dots,v_n)}=\pmtrx{\beta_1&&&&\\&\beta_2&&&\\&&\ddots&&\\&&&\ddots&\\&&&&\beta_n} $$

Let $\lambda_1,\dots,\lambda_m$ be the distinct eigenvalues of $T$. For $k=1,\dots,m$, define $\phi_k$ to be the list of indices of nondistinct eigenvalues $\beta_1,\dots,\beta_n$ that are equal to $\lambda_k$:

$$ \phi_k\equiv\prn{j\in\set{1,\dots,n}:\lambda_k=\beta_j} $$

For intuition, here’s a concrete example: suppose $n=6$ and suppose that there are $4$ distinct eigenvalues. Then

$$ \mtrxofb{T,(v_1,\dots,v_6)}=\pmtrx{\lambda_1&&&&&\\&\lambda_2&&&&\\&&\lambda_3&&&\\&&&\lambda_2&&\\&&&&\lambda_4&\\&&&&&\lambda_3}=\pmtrx{\beta_1&&&&&\\&\beta_2&&&&\\&&\beta_3&&&\\&&&\beta_4&&\\&&&&\beta_5&\\&&&&&\beta_6} $$

and

$$ \phi_1=(1) \dq \phi_2=(2,4) \dq \phi_3=(3,6) \dq \phi_4=(5) $$

Note that $\text{len}(\phi_k)$ equals the number of times that $\lambda_k$ appears on the diagonal. So it suffices to show that $\text{len}(\phi_k)=\dim{\eignsb(\lambda_k,T)}$.

First we will show that $\eignsb(\lambda_k,T)=\bigoplus_{j\in\phi_k}\text{span}(v_j)$. Let $w\in\eignsb(\lambda_k,T)\subset V$. Then $w=\sum_{j=1}^na_jv_j$ for some $a_1,\dots,a_n\in\wF$ and

$$\begin{align*} 0 &= (T-\lambda_kI)w \\ &= (T-\lambda_kI)\Big(\sum_{j=1}^na_jv_j\Big) \\ &= \sum_{j=1}^na_j(T-\lambda_kI)v_j \\ &= \sum_{j=1}^na_j(Tv_j-\lambda_kIv_j) \\ &= \sum_{j=1}^na_j(\beta_jv_j-\lambda_kv_j) \\ &= \sum_{j=1}^na_j(\beta_j-\lambda_k)v_j \\ &= \sum_{j\notin\phi_k}a_j(\beta_j-\lambda_k)v_j\tag{since $\beta_j=\lambda_k$ for $j\in\phi_k$} \end{align*}$$

Since $0\neq\beta_j-\lambda_k$ for $j\notin\phi_k$ and the $v_j$ are linearly independent, then $0=a_j$ for $j\notin\phi_k$ and

$$ w=\sum_{j=1}^na_jv_j=\sum_{j\in\phi_k}a_jv_j\in\sum_{j\in\phi_k}\text{span}(v_j) $$

since $a_jv_j\in\text{span}(v_j)$. Hence

$$ \eignsb(\lambda_k,T)\subset\sum_{j\in\phi_k}\text{span}(v_j) $$

In the other direction, let $w\in\sum_{j\in\phi_k}\text{span}(v_j)$. Then $w=\sum_{j\in\phi_k}s_j$ where $s_j\in\text{span}(v_j)$ for $j\in\phi_k$. Hence $s_j=a_jv_j$ and $w=\sum_{j\in\phi_k}a_jv_j$ and

$$ Tw = T\Big(\sum_{j\in\phi_k}a_jv_j\Big) = \sum_{j\in\phi_k}a_jTv_j = \sum_{j\in\phi_k}a_j\lambda_kv_j = \lambda_k\sum_{j\in\phi_k}a_jv_j = \lambda_kw $$

Hence $w$ is an eigenvector of $T$ corresponding to $\lambda_k$ and $w\in\eignsb(\lambda_k,T)$. Hence $\sum_{j\in\phi_k}\text{span}(v_j)\subset\eignsb(\lambda_k,T)$ and

$$ \sum_{j\in\phi_k}\text{span}(v_j)=\eignsb(\lambda_k,T) $$

Next we want to show that $\sum_{j\in\phi_k}\text{span}(v_j)$ is a direct sum (this is unnecessary to reach the desired conclusion but it’s good to see). Suppose $s_j\in\text{span}(v_j)$ for $j\in\phi_k$. Then $s_j=a_jv_j$ for some $a_j\in\wF$. Further suppose that $0=\sum_{j\in\phi_k}s_j$. Then

$$ 0 = \sum_{j\in\phi_k}s_j = \sum_{j\in\phi_k}a_jv_j $$

Then the linear independence of $(v_j)_{j\in\phi_k}$ implies that $0=a_j$ for $j\in\phi_k$. Hence $0=s_j$ for $j\in\phi_k$ and $\sum_{j\in\phi_k}\text{span}(v_j)$ is a direct sum (by 1.44, p.23).

Proposition W.2.11 implies that $(v_j)_{j\in\phi_k}$ is a basis of $\bigoplus_{j\in\phi_k}\text{span}(v_j)$. Hence

$$ \dim{\bigoplus_{j\in\phi_k}\text{span}(v_j)}=\text{len}\big((v_j)_{j\in\phi_k}\big)=\text{len}(\phi_k) $$

and

$$ \dim{\eignsb(\lambda_k,T)}=\dim{\sum_{j\in\phi_k}\text{span}(v_j)}=\dim{\bigoplus_{j\in\phi_k}\text{span}(v_j)}=\text{len}(\phi_k)\quad\blacksquare $$
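A hypothetical numerical version of this count (concrete eigenvalues chosen for the check, not from the text): for a diagonal matrix, each distinct diagonal value should appear exactly $\dim{\eignsb(\lambda,T)}$ times.

```python
import numpy as np

# A 6x6 diagonal matrix with repeated diagonal entries.
diag = [1.0, 2.0, 3.0, 2.0, 4.0, 3.0]
M = np.diag(diag)

# For each distinct lambda: geometric multiplicity = 6 - rank(M - lambda I),
# which should equal the number of times lambda appears on the diagonal.
for lam in set(diag):
    geo = 6 - np.linalg.matrix_rank(M - lam * np.eye(6))
    assert geo == diag.count(lam)
```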

(8) Suppose $T\in\lnmpsb(\mathbb{F}^5)$ and $\dim{\eignsb(8,T)}=4$. Then $T-2I$ or $T-6I$ is invertible.

Proof Suppose neither $T-2I$ nor $T-6I$ is invertible. Then both $2$ and $6$ are eigenvalues of $T$ (by 5.6, p.134). Hence $\eignsb(2,T)\neq\{0\}$ and $\eignsb(6,T)\neq\{0\}$ (by W.5.12). Hence $\dim{\eignsb(2,T)}\geq1$ and $\dim{\eignsb(6,T)}\geq1$. Let $\lambda_1,\dots,\lambda_m$ be all of the distinct eigenvalues of $T$. Then proposition 5.38, p.156 gives the last inequality:

$$ 6=4+1+1\leq\dim{\eignsb(8,T)}+\dim{\eignsb(2,T)}+\dim{\eignsb(6,T)}\leq\sum_{k=1}^m\dim{\eignsb(\lambda_k,T)}\leq\dim{\mathbb{F}^5}=5 $$

This is a contradiction, so $T-2I$ or $T-6I$ is invertible. $\blacksquare$

(9) Suppose $T\in\lnmpsb(V)$ is invertible. Then $\eignsb(\lambda,T)=\eignsb\big(\frac1\lambda,T^{-1}\big)$ for every $\lambda\in\mathbb{F}$ with $\lambda\neq0$.

Proof Let $\lambda\in\mathbb{F}$ with $\lambda\neq0$. And let $v\in\eignsb(\lambda,T)$. Then proposition W.5.9 gives that $v=0$ or $v$ is an eigenvector of $T$ corresponding to $\lambda$. In either case, we have $Tv=\lambda v$. Since $T$ is invertible, then

$$ v=Iv=(T^{-1}T)v=T^{-1}(Tv)=T^{-1}(\lambda v)=\lambda T^{-1}v $$

And since $\lambda\neq0$, then

$$ T^{-1}v=\frac1\lambda v $$

Hence $v\in\eignsb\big(\frac1\lambda,T^{-1}\big)$ and $\eignsb(\lambda,T)\subset\eignsb\big(\frac1\lambda,T^{-1}\big)$. Symmetry (i.e. just flip the roles of $T$ and $T^{-1}$ in this proof) allows us to conclude that $\eignsb(\lambda,T)=\eignsb\big(\frac1\lambda,T^{-1}\big)$. $\blacksquare$
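A hypothetical numerical check (the matrix below is made up for the check): eigenvectors of an invertible $T$ are eigenvectors of $T^{-1}$ with reciprocal eigenvalues.

```python
import numpy as np

# An invertible upper-triangular matrix with eigenvalues 2 and 3.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lams, vecs = np.linalg.eig(T)
Tinv = np.linalg.inv(T)

# Each eigenvector v of T with eigenvalue lambda satisfies T^{-1} v = (1/lambda) v.
for lam, v in zip(lams, vecs.T):
    assert np.allclose(Tinv @ v, (1.0 / lam) * v)
```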

(10) Suppose $V$ is finite-dimensional and $T\in\lnmpsb(V)$. Let $\lambda_1,\dots,\lambda_m$ denote the distinct nonzero eigenvalues of $T$. Then

$$ \sum_{k=1}^m\dim{\eignsb(\lambda_k,T)}\leq\dim{\mathscr{R}(T)} $$

Proof Proposition 5.38 gives the inequality and the Fundamental Theorem of Linear Maps (3.22) gives the last equality:

$$ \dim{\mathscr{N}(T)}+\sum_{k=1}^m\dim{\eignsb(\lambda_k,T)}=\dim{\eignsb(0,T)}+\sum_{k=1}^m\dim{\eignsb(\lambda_k,T)}\leq\dim{V}=\dim{\mathscr{N}(T)}+\dim{\mathscr{R}(T)} $$

since $\eignsb(0,T)=\mathscr{N}(T-0I)=\mathscr{N}(T)$. Subtracting $\dim{\mathscr{N}(T)}$ from both sides gives the result. $\blacksquare$

(11) Verify the assertion in Example 5.40.

Solution We did this in our ch.5 notes under the heading Example 5.40. $\blacksquare$

(12) Suppose $R,T\in\lnmpsb(\mathbb{F}^3)$ each have $2,6,7$ as eigenvalues. Then there exists an invertible operator $S\in\lnmpsb(\mathbb{F}^3)$ such that $R=S^{-1}TS$.

Cross-reference with Exercise 7.C.11

Proof Since $R$ and $T$ each have $3$ distinct eigenvalues and $\dim{\mathbb{F}^3}=3$, then proposition 5.44 gives that $R$ and $T$ are diagonalizable. Hence there exist bases $v_1,v_2,v_3$ and $w_1,w_2,w_3$ of $\mathbb{F}^3$ such that

$$ Tv_1=2v_1 \quad\quad\quad\quad Tv_2=6v_2 \quad\quad\quad\quad Tv_3=7v_3 $$

$$ Rw_1=2w_1 \quad\quad\quad\quad Rw_2=6w_2 \quad\quad\quad\quad Rw_3=7w_3 $$

Proposition 3.5 gives the existence of $S\in\lnmpsb(\mathbb{F}^3)$ such that

$$ Sw_1=v_1 \quad\quad\quad\quad Sw_2=v_2 \quad\quad\quad\quad Sw_3=v_3 $$

Since $S$ maps a basis to a basis, then proposition W.3.17 gives that $S$ is invertible. Note that $S^{-1}v_k=S^{-1}Sw_k=w_k$. Hence

$$ S^{-1}TSw_1=S^{-1}Tv_1=S^{-1}(2v_1)=2S^{-1}v_1=2w_1=Rw_1 \\ S^{-1}TSw_2=S^{-1}Tv_2=S^{-1}(6v_2)=6S^{-1}v_2=6w_2=Rw_2 \\ S^{-1}TSw_3=S^{-1}Tv_3=S^{-1}(7v_3)=7S^{-1}v_3=7w_3=Rw_3 $$

Since $S^{-1}TS-R$ is zero on a basis, then it’s zero on the domain (by W.3.12). $\blacksquare$

(13) Find $R,T\in\lnmpsb(\mathbb{F}^4)$ such that $R$ and $T$ each have $2,6,7$ as eigenvalues, $R$ and $T$ have no other eigenvalues, and there doesn’t exist an invertible operator $S\in\lnmpsb(\mathbb{F}^4)$ such that $R=S^{-1}TS$.

Cross-reference with Exercise 7.C.12

Proof Let $v_1,v_2,v_3,v_4$ be a basis of $\mathbb{F}^4$ and define $R,T\in\lnmpsb(\mathbb{F}^4)$ by

$$ Tv_1=2v_1 \quad\quad\quad\quad Tv_2=2v_2+v_1 \quad\quad\quad\quad Tv_3=6v_3 \quad\quad\quad\quad Tv_4=7v_4 $$

$$ Rv_1=2v_1 \quad\quad\quad\quad Rv_2=2v_2 \quad\quad\quad\quad Rv_3=6v_3 \quad\quad\quad\quad Rv_4=7v_4 $$

Then $R$ is diagonalizable because $\mathbb{F}^4$ has a basis consisting of eigenvectors of $R$ (5.41.b). Note that 5.44 doesn’t apply here: $R$ has only $3$ distinct eigenvalues while $\dim{\mathbb{F}^4}=4$. Now let’s look at

$$\begin{align*} \mtrxofsb(T-2I) &= \mtrxof{T}-\mtrxofsb(2I) \\ &=\begin{bmatrix}2&1&0&0\\0&2&0&0\\0&0&6&0\\0&0&0&7\end{bmatrix} -\begin{bmatrix}2&0&0&0\\0&2&0&0\\0&0&2&0\\0&0&0&2\end{bmatrix} \\ &=\begin{bmatrix}2-2&1&0&0\\0&2-2&0&0\\0&0&6-2&0\\0&0&0&7-2\end{bmatrix} \\ &=\begin{bmatrix}0&1&0&0\\0&0&0&0\\0&0&4&0\\0&0&0&5\end{bmatrix} \end{align*}$$

We see that the column rank of $\mtrxofsb(T-2I)$ is $3$. Then proposition 3.117 gives that $\dim{\mathscr{R}(T-2I)}=3$. Alternatively

$$\begin{align*} (T-2I)v_1&=Tv_1-2Iv_1=2v_1-2v_1=0 \\ (T-2I)v_2&=Tv_2-2Iv_2=v_1+2v_2-2v_2=v_1 \\ (T-2I)v_3&=Tv_3-2Iv_3=6v_3-2v_3=4v_3 \\ (T-2I)v_4&=Tv_4-2Iv_4=7v_4-2v_4=5v_4 \end{align*}$$

Again, we see that the length of the longest linearly independent sublist of $(T-2I)v_k$ for $k=1,2,3,4$ is $3$. Then proposition W.3.10.b gives $\dim{\mathscr{R}(T-2I)}=3$. Hence

$$ \dim{\eignsb(2,T)}=\dim{\mathscr{N}(T-2I)}=\dim{\mathbb{F}^4}-\dim{\mathscr{R}(T-2I)}=4-3=1 $$

Similarly

$$\begin{align*} \mtrxofsb(T-6I) &=\begin{bmatrix}2-6&1&0&0\\0&2-6&0&0\\0&0&6-6&0\\0&0&0&7-6\end{bmatrix} \\ &=\begin{bmatrix}-4&1&0&0\\0&-4&0&0\\0&0&0&0\\0&0&0&1\end{bmatrix} \end{align*}$$

or

$$\begin{align*} (T-6I)v_1&=Tv_1-6Iv_1=2v_1-6v_1=-4v_1 \\ (T-6I)v_2&=Tv_2-6Iv_2=v_1+2v_2-6v_2=v_1-4v_2 \\ (T-6I)v_3&=Tv_3-6Iv_3=6v_3-6v_3=0 \\ (T-6I)v_4&=Tv_4-6Iv_4=7v_4-6v_4=v_4 \end{align*}$$

Hence $\dim{\mathscr{R}(T-6I)}=3$ and

$$ \dim{\eignsb(6,T)}=\dim{\mathscr{N}(T-6I)}=\dim{\mathbb{F}^4}-\dim{\mathscr{R}(T-6I)}=4-3=1 $$

Similarly

$$\begin{align*} \mtrxofsb(T-7I) &=\begin{bmatrix}2-7&1&0&0\\0&2-7&0&0\\0&0&6-7&0\\0&0&0&7-7\end{bmatrix} \\ &=\begin{bmatrix}-5&1&0&0\\0&-5&0&0\\0&0&-1&0\\0&0&0&0\end{bmatrix} \end{align*}$$

Hence $\dim{\mathscr{R}(T-7I)}=3$ and

$$ \dim{\eignsb(7,T)}=\dim{\mathscr{N}(T-7I)}=\dim{\mathbb{F}^4}-\dim{\mathscr{R}(T-7I)}=4-3=1 $$

Now let’s add up the dimensions of these eigenspaces:

$$ \dim{\eignsb(2,T)}+\dim{\eignsb(6,T)}+\dim{\eignsb(7,T)}=1+1+1=3<4=\dim{\mathbb{F}^4} $$

Hence $T$ is not diagonalizable (by 5.41.e).
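The eigenspace dimensions computed above can be confirmed numerically from the matrix of $T$ (sanity check only):

```python
import numpy as np

# M(T) from the counterexample: eigenvalues 2 (twice on the diagonal), 6, 7.
M = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 6.0, 0.0],
              [0.0, 0.0, 0.0, 7.0]])

# dim E(lambda, T) = 4 - rank(M - lambda I); the dimensions sum to 3 < 4.
dim_eig = lambda lam: 4 - np.linalg.matrix_rank(M - lam * np.eye(4))
assert (dim_eig(2.0), dim_eig(6.0), dim_eig(7.0)) == (1, 1, 1)
```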

Now suppose there exists an invertible operator $S\in\lnmpsb(\mathbb{F}^4)$ such that $R=S^{-1}TS$. Since $S$ is invertible, then it maps a basis to a basis (by W.3.17). That is, $Sv_1,Sv_2,Sv_3,Sv_4$ is a basis of $\mathbb{F}^4$. Also note that

$$\begin{align*} R=S^{-1}TS\quad \iff \quad SRS^{-1}=S(S^{-1}TS)S^{-1}=T \end{align*}$$

Then

$$ T(Sv_1)=SRS^{-1}(Sv_1)=SRv_1=S(2v_1)=2Sv_1 \\ T(Sv_2)=SRS^{-1}(Sv_2)=SRv_2=S(2v_2)=2Sv_2 \\ T(Sv_3)=SRS^{-1}(Sv_3)=SRv_3=S(6v_3)=6Sv_3 \\ T(Sv_4)=SRS^{-1}(Sv_4)=SRv_4=S(7v_4)=7Sv_4 $$

This implies that $T$ is diagonalizable, since $\mathbb{F}^4$ has a basis consisting of eigenvectors of $T$ (by 5.41.b). But we computed above that $T$ is not diagonalizable. This contradiction shows that no such $S$ exists. $\blacksquare$

(14) Find $T\in\lnmpsb(\mathbb{C}^3)$ such that $T$ does not have a diagonal matrix with respect to any basis of $\mathbb{C}^3$.

Proof Let $v_1,v_2,v_3$ be a basis of $\mathbb{C}^3$ and define $T\in\lnmpsb(\mathbb{C}^3)$ by

$$ Tv_1=6v_1 \quad\quad\quad Tv_2=6v_2+v_1 \quad\quad\quad Tv_3=7v_3 $$

Then

$$\begin{align*} \mtrxofsb(T-6I) &=\begin{bmatrix}6-6&1&0\\0&6-6&0\\0&0&7-6\end{bmatrix} \\ &=\begin{bmatrix}0&1&0\\0&0&0\\0&0&1\end{bmatrix} \end{align*}$$

or

$$\begin{align*} (T-6I)v_1&=Tv_1-6Iv_1=6v_1-6v_1=0 \\ (T-6I)v_2&=Tv_2-6Iv_2=v_1+6v_2-6v_2=v_1 \\ (T-6I)v_3&=Tv_3-6Iv_3=7v_3-6v_3=v_3 \end{align*}$$

So $\dim{\mathscr{R}(T-6I)}=2$ and

$$ \dim{\eignsb(6,T)}=\dim{\mathscr{N}(T-6I)}=\dim{\mathbb{C}^3}-\dim{\mathscr{R}(T-6I)}=3-2=1 $$

Similarly

$$\begin{align*} \mtrxofsb(T-7I) &=\begin{bmatrix}6-7&1&0\\0&6-7&0\\0&0&7-7\end{bmatrix} \\ &=\begin{bmatrix}-1&1&0\\0&-1&0\\0&0&0\end{bmatrix} \end{align*}$$

or

$$\begin{align*} (T-7I)v_1&=Tv_1-7Iv_1=6v_1-7v_1=-v_1 \\ (T-7I)v_2&=Tv_2-7Iv_2=v_1+6v_2-7v_2=v_1-v_2 \\ (T-7I)v_3&=Tv_3-7Iv_3=7v_3-7v_3=0 \end{align*}$$

So $\dim{\mathscr{R}(T-7I)}=2$ and

$$ \dim{\eignsb(7,T)}=\dim{\mathscr{N}(T-7I)}=\dim{\mathbb{C}^3}-\dim{\mathscr{R}(T-7I)}=3-2=1 $$

Now let’s add up the dimensions of these eigenspaces:

$$ \dim{\eignsb(6,T)}+\dim{\eignsb(7,T)}=1+1=2<3=\dim{\mathbb{C}^3} $$

Hence $T$ is not diagonalizable (by 5.41.e). $\blacksquare$

(15) Suppose $T\in\lnmpsb(\mathbb{C}^3)$ is such that $6$ and $7$ are eigenvalues of $T$. Furthermore, suppose $T$ does not have a diagonal matrix with respect to any basis of $\mathbb{C}^3$. Then there exists $(x,y,z)\in\mathbb{C}^3$ such that

$$ T(x,y,z)=(17+8x,\sqrt{5}+8y,2\pi+8z) $$

Proof Note that $8$ is not an eigenvalue of $T$. If it were, then $T$ would have $3=\dim{\mathbb{C}^3}$ distinct eigenvalues. Hence it would be diagonalizable (by 5.44). But the problem states otherwise.

Hence $T-8I$ is surjective (by 5.6). Hence $(17,\sqrt{5},2\pi)\in\mathscr{R}(T-8I)$. Hence there exists $(x,y,z)\in\mathbb{C}^3$ such that

$$\begin{align*} (17,\sqrt{5},2\pi) &= (T-8I)(x,y,z) \\ &= T(x,y,z)-(8I)(x,y,z) \\ &= T(x,y,z)-8\big(I(x,y,z)\big) \\ &= T(x,y,z)-8(x,y,z) \\ &= T(x,y,z)-(8x,8y,8z) \\ \end{align*}$$

Adding $(8x,8y,8z)$ to both sides gives

$$ T(x,y,z)=(17+8x,\sqrt{5}+8y,2\pi+8z)\quad\blacksquare $$

(16) The Fibonacci sequence $F_1,F_2,\dots$ is defined by

$$ F_1\equiv1 \quad\quad\quad F_2\equiv1 \quad\quad\quad F_n\equiv F_{n-2}+F_{n-1}\quad\text{for }n\geq3 $$

So

$$\begin{align*} F_3 &=F_{3-2}+F_{3-1}=F_1+F_2=1+1=2 \\ F_4 &=F_{4-2}+F_{4-1}=F_2+F_3=1+2=3 \\ F_5 &=F_{5-2}+F_{5-1}=F_3+F_4=2+3=5 \\ F_6 &=F_{6-2}+F_{6-1}=F_4+F_5=3+5=8 \\ F_7 &=F_{7-2}+F_{7-1}=F_5+F_6=5+8=13 \\ F_8 &=F_{8-2}+F_{8-1}=F_6+F_7=8+13=21 \end{align*}$$

Define $T\in\lnmpsb(\mathbb{R}^2)$ by $T(x,y)\equiv(y,x+y)$. Hence

$$ T(xe_1+ye_2)=ye_1+(x+y)e_2 $$

Hence

$$\begin{align*} Te_1 &=T(1\cdot e_1+0\cdot e_2)=0\cdot e_1+(1+0)e_2=e_2 \\ Te_2 &=T(0\cdot e_1+1\cdot e_2)=1\cdot e_1+(0+1)e_2=e_1+e_2 \end{align*}$$

(a) Show that $T^n(0,1)=(F_n,F_{n+1})$ for each positive integer $n$.
(b) Find the eigenvalues of $T$.
(c) Find a basis of $\mathbb{R}^2$ consisting of eigenvectors of $T$.
(d) Use the solution to part (c) to compute $T^n(0,1)$. Conclude that

$$ F_n=\frac1{\sqrt{5}}\Big[\Big(\frac{1+\sqrt{5}}2\Big)^n-\Big(\frac{1-\sqrt{5}}2\Big)^n\Big] $$

$\quad$ for each positive integer $n$.
(e) Use part (d) to conclude that for each positive integer $n$, the Fibonacci number $F_n$ is the integer that is closest to

$$ \frac1{\sqrt{5}}\Big(\frac{1+\sqrt{5}}2\Big)^n $$

Proof

(a) Note that

$$ T(0,1)=(1,0+1)=(1,1)=(F_1,F_2) $$

Similarly

$$ T^2(0,1)=T\big(T(0,1)\big)=T(F_1,F_2)=(F_2,F_1+F_2)=(F_2,F_3) $$

So $T^n(0,1)=(F_n,F_{n+1})$ is true for $n=1,2$. Our induction assumption is

$$ T^{n-1}(0,1)=(F_{n-1},F_{n-1+1})=(F_{n-1},F_n) $$

Note that

$$ F_{n+1}=F_{n+1-2}+F_{n+1-1}=F_{n-1}+F_n $$

Then

$$\begin{align*} T^n(0,1) &= T\big(T^{n-1}(0,1)\big) \\ &= T\big((F_{n-1},F_n)\big) \\ &= (F_n,F_{n-1}+F_n) \\ &= (F_n,F_{n+1}) \end{align*}$$
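Part (a) can also be checked by iterating the matrix of $T$, which is $\begin{bmatrix}0&1\\1&1\end{bmatrix}$ by the computation of $Te_1,Te_2$ above (sanity check only):

```python
import numpy as np

# M(T) for T(x, y) = (y, x + y), acting on column vectors.
M = np.array([[0, 1],
              [1, 1]])

# Fibonacci numbers F_1..F_12 via the recurrence.
fib = [1, 1]
for _ in range(10):
    fib.append(fib[-2] + fib[-1])

# T^n (0, 1) should equal (F_n, F_{n+1}).
for n in range(1, 11):
    v = np.linalg.matrix_power(M, n) @ np.array([0, 1])
    assert (v[0], v[1]) == (fib[n - 1], fib[n])
```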

(b) & (c) We want to find $\lambda$ such that

$$ (\lambda x,\lambda y)=\lambda(x,y)=T(x,y)=(y,x+y) $$

This becomes the system

$$\begin{align*} \lambda x &=y \\ \lambda y &=x+y \end{align*}$$

If $y=0$, then the second equation becomes $0=\lambda\cdot0=x+0=x$. So we cannot have $y=0$ since eigenvectors aren’t $(0,0)$.

Similarly, if $x=0$, the first equation becomes $0=\lambda\cdot0=y$. So again we cannot have $x=0$.

Since we cannot have $x=0$ or $y=0$, the first equation implies that $\lambda\neq0$. For if $\lambda=0$, then the first equation becomes $y=\lambda x=0\cdot x=0$.

Now substitute $y=\lambda x$ into the second equation:

$$ \lambda\lambda x=\lambda y=x+y=x+\lambda x=(1+\lambda)x $$

Dividing both sides by $x$, we get

$$ \lambda^2=1+\lambda \quad\quad\iff\quad\quad \lambda^2-\lambda-1=0 $$

We can solve this with the quadratic formula:

$$\begin{align*} \lambda &= \frac{-(-1)\pm\sqrt{(-1)^2-4\cdot1\cdot(-1)}}{2\cdot1} \\ &= \frac{1\pm\sqrt{1+4}}{2} \\ &= \frac{1\pm\sqrt{5}}{2} \end{align*}$$

By the first equation in the system, we have $\lambda=\frac{y}{x}$. Hence we can set $x=1$ and $y=\frac{1\pm\sqrt{5}}{2}$ so that we have two eigenpairs $\lambda_1,v_1$ and $\lambda_2,v_2$ defined by:

$$ \lambda_1\equiv\frac{1+\sqrt{5}}2 \quad\quad\quad\quad\quad v_1\equiv\Big(1,\frac{1+\sqrt{5}}2\Big) $$

and

$$ \lambda_2\equiv\frac{1-\sqrt{5}}2 \quad\quad\quad\quad\quad v_2\equiv\Big(1,\frac{1-\sqrt{5}}2\Big) $$

The eigenvectors $v_1,v_2$ correspond to distinct eigenvalues so they’re linearly independent (by 5.10). Hence $v_1,v_2$ is a basis of $\mathbb{R}^2$ (by 2.39).

(d) Since

$$ \frac{1+\sqrt{5}}2-\frac{1-\sqrt{5}}2=\frac12+\frac{\sqrt{5}}2-\Big(\frac12-\frac{\sqrt{5}}2\Big)=\frac{\sqrt{5}}2+\frac{\sqrt{5}}2=\sqrt{5} $$

then

$$\begin{align*} \frac1{\sqrt{5}}(v_1-v_2) &= \frac1{\sqrt{5}}\Big[\Big(1,\frac{1+\sqrt{5}}2\Big)-\Big(1,\frac{1-\sqrt{5}}2\Big)\Big] \\ &= \frac1{\sqrt{5}}(0,\sqrt{5}) \\ &=(0,1) \end{align*}$$

Hence

$$\begin{align*} (F_n,F_{n+1}) &= T^n(0,1)\tag{by part(a)} \\ &= T^n\Big(\frac1{\sqrt{5}}(v_1-v_2)\Big) \\ &= \frac1{\sqrt{5}}T^n(v_1-v_2) \\ &= \frac1{\sqrt{5}}(T^nv_1-T^nv_2) \\ &= \frac1{\sqrt{5}}(\lambda_1^nv_1-\lambda_2^nv_2)\tag{by exercise 5.B.10} \\ &= \frac1{\sqrt{5}}\lambda_1^nv_1-\frac1{\sqrt{5}}\lambda_2^nv_2 \\ &= \frac1{\sqrt{5}}\Big[\frac{1+\sqrt{5}}2\Big]^n\Big(1,\frac{1+\sqrt{5}}2\Big)-\frac1{\sqrt{5}}\Big[\frac{1-\sqrt{5}}2\Big]^n\Big(1,\frac{1-\sqrt{5}}2\Big) \\ &= \frac1{\sqrt{5}}\Big(\Big[\frac{1+\sqrt{5}}2\Big]^n,\Big[\frac{1+\sqrt{5}}2\Big]^{n+1}\Big)-\frac1{\sqrt{5}}\Big(\Big[\frac{1-\sqrt{5}}2\Big]^n,\Big[\frac{1-\sqrt{5}}2\Big]^{n+1}\Big) \\ &= \frac1{\sqrt{5}}\Big(\Big[\frac{1+\sqrt{5}}2\Big]^n-\Big[\frac{1-\sqrt{5}}2\Big]^n,\Big[\frac{1+\sqrt{5}}2\Big]^{n+1}-\Big[\frac{1-\sqrt{5}}2\Big]^{n+1}\Big) \\ \end{align*}$$

Equating components, we get

$$ F_n=\frac1{\sqrt{5}}\Big(\Big[\frac{1+\sqrt{5}}2\Big]^n-\Big[\frac{1-\sqrt{5}}2\Big]^n\Big) $$

(e) Note that

$$ (1-\sqrt{5})(1+\sqrt{5})=1+\sqrt{5}-\sqrt{5}-5=-4 $$

Dividing both sides by $2(1+\sqrt{5})$, we get

$$ \frac{1-\sqrt{5}}{2}=\frac{(1-\sqrt{5})(1+\sqrt{5})}{2(1+\sqrt{5})}=-\frac4{2(1+\sqrt{5})}=-\frac2{1+\sqrt{5}} $$

Hence

$$ \Big\lvert\frac{1-\sqrt{5}}{2}\Big\rvert=\Big\lvert\frac2{1+\sqrt{5}}\Big\rvert $$

Also note that $\sqrt5\geq2$. Hence $\frac1{\sqrt5}\leq\frac12$ and $1+\sqrt5\geq3$. Hence $\frac1{1+\sqrt5}\leq\frac13$ and $\frac2{1+\sqrt5}\leq\frac23$. Hence

$$ \frac1{\sqrt5}\Big\lvert\frac{1-\sqrt{5}}{2}\Big\rvert^n=\frac1{\sqrt5}\Big\lvert\frac2{1+\sqrt{5}}\Big\rvert^n\leq\frac12\cdot\frac23=\frac13 $$

Also note that

$$ \frac1{\sqrt{5}}\Big[\frac{1+\sqrt5}2\Big]^n-F_n=\frac1{\sqrt{5}}\Big[\frac{1+\sqrt5}2\Big]^n-\frac1{\sqrt{5}}\Big(\Big[\frac{1+\sqrt{5}}2\Big]^n-\Big[\frac{1-\sqrt{5}}2\Big]^n\Big)=\frac1{\sqrt{5}}\Big[\frac{1-\sqrt5}2\Big]^n $$

Hence

$$ \Big\lvert\frac1{\sqrt{5}}\Big(\frac{1+\sqrt5}2\Big)^n-F_n\Big\rvert=\frac1{\sqrt{5}}\Big\lvert\frac{1-\sqrt5}2\Big\rvert^n\leq\frac13<\frac12 $$

In words, for any $n\in\mathbb{Z}^+$, the integer $F_n$ is the closest integer to $\frac1{\sqrt{5}}\Big(\frac{1+\sqrt5}2\Big)^n$. $\blacksquare$
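The nearest-integer claim from part (e), together with the Binet formula from part (d), is easy to verify numerically for small $n$ (sanity check only):

```python
import math

phi = (1 + math.sqrt(5)) / 2

# Fibonacci numbers F_1..F_20 via the recurrence.
fib = [1, 1]
for _ in range(18):
    fib.append(fib[-2] + fib[-1])

# F_n is the integer nearest to phi^n / sqrt(5).
for n in range(1, 21):
    assert round(phi**n / math.sqrt(5)) == fib[n - 1]
```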