Linear Algebra I: Homework 8

Due: Friday, October 20, 2017
  1. Find the eigenvalues and their corresponding eigenvectors for the matrix

    \[A = \begin{pmatrix} 1&-3&3\\3&-5&3\\6&-6&4 \end{pmatrix}.\]

    (Hint. One of the eigenvalues is \(-2\), and you can use polynomial long division to factor the characteristic polynomial of \(A\). You are also welcome to use a calculator to find the roots of the characteristic polynomial, but all other work must be shown.)

    This is a pretty tricky characteristic polynomial to compute, but the rest is the same work as always. To find the eigenvalues, we’re trying to solve,

    \[0 = \det{(A - \lambda I)} = -\lambda^3 + 12\lambda + 16 = -(\lambda - 4)(\lambda + 2)^2\]

    So we get the eigenvalues \(\lambda = 4, -2, -2\).

    By solving the equation \((A - \lambda I)\vec v = 0\) for our eigenvalues, we find the eigenvector

    \[\begin{pmatrix}1\\1\\2\end{pmatrix}\]

    for the eigenvalue \(\lambda = 4\), and we find the linearly independent eigenvectors

    \[\begin{pmatrix}-1\\0\\1\end{pmatrix} \quad\textrm{and}\quad \begin{pmatrix}1\\1\\0\end{pmatrix}\]

    for the eigenvalue \(\lambda = -2\).
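    As a sanity check, here is a short SymPy sketch (purely optional; the hand computation above is the required work) that recomputes these eigenpairs:

    ```python
    import sympy as sp

    # The matrix from problem 1.
    A = sp.Matrix([[1, -3, 3],
                   [3, -5, 3],
                   [6, -6, 4]])

    # SymPy's charpoly uses det(lambda*I - A), which for a 3x3 matrix is
    # the negative of the det(A - lambda*I) computed above.
    lam = sp.symbols('lambda')
    print(sp.factor(A.charpoly(lam).as_expr()))  # (lambda - 4)*(lambda + 2)**2

    # eigenvects returns (eigenvalue, algebraic multiplicity, eigenspace
    # basis); the vectors agree with ours up to scalar multiples.
    for val, mult, vecs in A.eigenvects():
        print(val, mult, [list(v) for v in vecs])
    ```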

  2. Find the eigenvalues and their corresponding eigenvectors for the matrix

    \[B = \begin{pmatrix} 9&-8&6&3 \\ 0&-1&0&0 \\ 0&0&3&3 \\ 0&0&0&7 \end{pmatrix}.\]

    This matrix has a nice characteristic polynomial. To find the eigenvalues, we solve

    \[0 = \det{(B - \lambda I)} = (9 - \lambda)(-1 - \lambda)(3 - \lambda)(7 - \lambda)\]

    So we get the eigenvalues \(\lambda = 9, -1, 3, 7\).

    By solving the equation \((B - \lambda I)\vec v = 0\) for our eigenvalues, we find the eigenvector

    \[\begin{pmatrix}1\\0\\0\\0\end{pmatrix}\]

    for the eigenvalue \(\lambda = 9\), we find the eigenvector

    \[\begin{pmatrix}-15\\0\\3\\4\end{pmatrix}\]

    for the eigenvalue \(\lambda = 7\), we find the eigenvector

    \[\begin{pmatrix}-1\\0\\1\\0\end{pmatrix}\]

    for the eigenvalue \(\lambda = 3\), and we find the eigenvector

    \[\begin{pmatrix}4\\5\\0\\0\end{pmatrix}\]

    for the eigenvalue \(\lambda = -1\).
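    The same SymPy check works here; since \(B\) is upper triangular, the eigenvalues are just the diagonal entries:

    ```python
    import sympy as sp

    B = sp.Matrix([[9, -8, 6, 3],
                   [0, -1, 0, 0],
                   [0, 0, 3, 3],
                   [0, 0, 0, 7]])

    # One eigenspace basis per eigenvalue; the vectors may come back scaled
    # differently from the ones above (any nonzero multiple is fine).
    for val, mult, vecs in B.eigenvects():
        print(val, mult, [list(v) for v in vecs])
    ```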

  3. Find the eigenvalues and their corresponding eigenvectors for the matrix

    \[C = \begin{pmatrix} -1 &1 \\ -1 & -1 \end{pmatrix}.\]

    (Hint. They may be complex.)

    To find the eigenvalues for this matrix, we have to allow for complex solutions. We’re solving the equation,

    \[0 = \det{(C - \lambda I)} = (-1 - \lambda)^2 + 1 = \lambda^2 + 2\lambda + 2 = (\lambda - (-1 + i))(\lambda - (-1 - i))\]

    So we get the eigenvalues \(\lambda = (-1+i), (-1-i)\).

    By solving the equation \((C - \lambda I)\vec v = 0\) for our eigenvalues, we find the eigenvector

    \[\begin{pmatrix}-i\\1\end{pmatrix}\]

    for the eigenvalue \(\lambda = -1+i\), and we find the eigenvector

    \[\begin{pmatrix}i\\1\end{pmatrix}\]

    for the eigenvalue \(\lambda = -1-i\).
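    SymPy handles the complex case without any extra effort:

    ```python
    import sympy as sp

    C = sp.Matrix([[-1, 1],
                   [-1, -1]])

    # The eigenvalues come back as -1 - I and -1 + I; the eigenspace bases
    # agree with the vectors above up to (complex) scalar multiples.
    for val, mult, vecs in C.eigenvects():
        print(val, mult, [list(v) for v in vecs])
    ```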

  4. You can plug a square matrix into any polynomial by multiplying the constant term by the identity matrix \(I\), so that it becomes a square matrix too instead of just a number. Using this, answer the following questions.
    1. Let \(R\) be a \(2\times2\) matrix, and \(P_R(\lambda)\) its characteristic polynomial. What is \(P_R(R)\) (that is, if you plug the matrix \(R\) into all \(\lambda\)s)?

      Any \(2 \times 2\) matrix looks like,

      \[R=\begin{pmatrix}a&b\\c&d\end{pmatrix}\]

      and has characteristic polynomial

      \[P_R(\lambda) = \det(\lambda I - R) = (\lambda-a)(\lambda-d) - bc = \lambda^2 - (a+d)\lambda^1 + (ad - bc)\lambda^0.\]

      If we plug in \(R\) for \(\lambda\), we find:

      \[\begin{aligned} P_R(R) &= R^2 - (a+d)R^1 + (ad - bc)R^0 \\ &= \begin{pmatrix}a^2 + bc & b(a + d) \\ c(a + d) & d^2 + bc\end{pmatrix} - (a+d)\begin{pmatrix}a&b\\c&d\end{pmatrix} + (ad - bc)\begin{pmatrix}1 & 0 \\ 0 & 1\end{pmatrix} \\ &= \begin{pmatrix}0&0\\0&0\end{pmatrix}. \end{aligned}\]

      So, surprisingly, \(P_R(R)\) is the zero matrix.
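      We can also confirm this symbolically; here is a small SymPy sketch of the same expansion:

      ```python
      import sympy as sp

      a, b, c, d = sp.symbols('a b c d')
      R = sp.Matrix([[a, b], [c, d]])

      # P_R(R) = R^2 - (a + d) R + (ad - bc) I, exactly as expanded above.
      P_of_R = R**2 - (a + d)*R + (a*d - b*c)*sp.eye(2)
      print(P_of_R.expand())  # Matrix([[0, 0], [0, 0]])
      ```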

    2. (Bonus) Let \(S\) be an \(m\times m\) matrix, and \(P_S(\lambda)\) its characteristic polynomial. Based on your answer to part (a), what do you think \(P_S(S)\) might be? Can you think of reasons your answer makes sense?

      Surprisingly, our answer for (a) is true for any \(m \times m\) matrix \(S\):

      \[P_S(S) = 0.\]

      This fact is known as the Cayley–Hamilton theorem. One way to see why it is plausible: if \(S\) has a full set of eigenvectors, then every eigenvector \(\vec v\) with eigenvalue \(\lambda\) satisfies \(P_S(S)\vec v = P_S(\lambda)\vec v = \vec 0\), so \(P_S(S)\) sends an entire basis to zero and must be the zero matrix.
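      A quick numerical sketch (using NumPy's `poly` for the coefficients of the characteristic polynomial, and Horner's rule to evaluate it at \(S\)) makes this easy to believe for larger matrices:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      m = 5  # an arbitrary size
      S = rng.standard_normal((m, m))

      # Coefficients of det(lambda*I - S), highest degree first.
      coeffs = np.poly(S)

      # Evaluate P_S(S) by Horner's rule, promoting each constant c to c*I.
      P_of_S = np.zeros((m, m))
      for c in coeffs:
          P_of_S = P_of_S @ S + c * np.eye(m)

      print(np.allclose(P_of_S, 0))  # True, up to floating-point error
      ```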
  5. Two linear transformations, \(L\) and \(M\), have the same eigenvalues with multiplicities. Additionally, every eigenvector of \(L\) for eigenvalue \(\lambda\) is also an eigenvector of \(M\) for eigenvalue \(\lambda\).

    Are \(L\) and \(M\) the same linear transformation? Explain or give a counterexample.

    This question was trickier than intended, so I didn’t count its points in the total for this assignment. Also, I gave credit if you wrote something down for this problem indicating that you thought about it.

    Certainly, this is true if \(L\) and \(M\) both have a full set of eigenvectors. In that case there is a single basis of eigenvectors shared by \(L\) and \(M\), so they both diagonalize as the same diagonal matrix of eigenvalues \(D\).

    If we were just thinking of \(L\) and \(M\) as matrices, and they don’t have enough eigenvectors, they might not be the same. For example, the matrices

    \[\begin{pmatrix}0&1\\0&0\end{pmatrix} \quad\textrm{and}\quad \begin{pmatrix}0&2\\0&0\end{pmatrix}\]

    have the same eigenvalues and the same eigenvector, but are different matrices. If they are matrix representations of \(L\) and \(M\) for the standard basis, then we could say \(L\) and \(M\) are different linear transformations. But what if we could change bases? If the second matrix were instead a representation for the linear transformation \(M\) in the basis

    \[B = \left(\begin{pmatrix}1\\0\end{pmatrix}, \begin{pmatrix}0\\2\end{pmatrix}\right)\]

    then the two linear transformations are the same.
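    A quick NumPy computation confirms the change of basis (a sketch, assuming the usual convention that the columns of \(P\) are the basis vectors of \(B\)):

    ```python
    import numpy as np

    L = np.array([[0., 1.], [0., 0.]])
    M_B = np.array([[0., 2.], [0., 0.]])  # representation of M in the basis B

    # Columns of P are the basis vectors of B.
    P = np.array([[1., 0.], [0., 2.]])

    # Standard-basis representation of M: conjugate by the change of basis.
    M_std = P @ M_B @ np.linalg.inv(P)
    print(np.allclose(M_std, L))  # True: L and M are the same transformation
    ```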

    We don’t even need to worry about bases: it turns out that two such linear transformations still need not be the same, provided they do not have a full set of eigenvectors. Consider the linear transformations which, with respect to the standard basis, have the matrix representations:

    \[L = \begin{pmatrix}0&1&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&1&0 \end{pmatrix}\]

    and,

    \[M = \begin{pmatrix}0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix}.\]

    Then \(L\) and \(M\) both have the eigenvalues \(\lambda = 0,0,0,0\), and the eigenvectors of each are exactly the (nonzero) linear combinations of the standard vectors

    \[\begin{pmatrix}1\\0\\0\\0\end{pmatrix} \quad\textrm{and}\quad \begin{pmatrix}0\\0\\0\\1\end{pmatrix}\]

    However, \(L^2 = 0\) while \(M^2 \ne 0\) (in fact, \(M^3 = 0\), but the point remains), so they cannot represent the same linear transformation, even allowing for a change of basis.
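    As a final sanity check, a short NumPy computation verifies the squares (conjugating by any invertible \(P\) preserves whether a matrix squares to zero, so this really does rule out any change of basis):

    ```python
    import numpy as np

    L = np.array([[0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 1, 0]])
    M = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])

    # If M were P^(-1) L P for some invertible P, then M @ M would equal
    # P^(-1) (L @ L) P = 0 as well; it does not.
    print(np.all(L @ L == 0))  # True
    print(np.all(M @ M == 0))  # False (M @ M @ M is zero instead)
    ```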