Linear Algebra I: Homework 5

Due: Friday, September 22, 2017
    1. Find the matrix for the linear transformation \(\tfrac{d}{dx}\) acting on the vector space \(V = \{a+bx+cx^2 \mid a,b,c \in \mathbb R\}\) of polynomials of degree \(2\) or less in the ordered basis \(B = (x^2, x, 1)\).

      We apply \(\tfrac d{dx}\) to each basis vector in \(B\) and evaluate using calculus. We then write the results as column vectors with respect to \(B\) and stack those columns side by side into a matrix.

      \[\tfrac d{dx} x^2 = 2x = \begin{pmatrix}0\\2\\0\end{pmatrix}_B\] \[\tfrac d{dx} x = 1 = \begin{pmatrix}0\\0\\1\end{pmatrix}_B\] \[\tfrac d{dx} 1 = 0 = \begin{pmatrix}0\\0\\0\end{pmatrix}_B\]

      So the matrix is

      \[\begin{pmatrix}0&0&0 \\ 2&0&0 \\ 0&1&0\end{pmatrix}.\]
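
      Though it is not part of the assignment, this matrix can be double-checked with a short SymPy computation (assuming SymPy is available); the sketch below just automates the three derivatives and the coordinate extraction:

      ```python
      # Minimal sketch: verify the matrix of d/dx in the basis B = (x^2, x, 1).
      import sympy as sp

      x = sp.symbols('x')
      B = [x**2, x, 1]

      def coords_B(p):
          """Coordinates of a degree-<=2 polynomial in the basis (x^2, x, 1)."""
          p = sp.expand(p)
          return sp.Matrix([p.coeff(x, 2), p.coeff(x, 1), p.coeff(x, 0)])

      # Differentiate each basis vector and stack the coordinate vectors as columns.
      D_B = sp.Matrix.hstack(*[coords_B(sp.diff(b, x)) for b in B])
      print(D_B)   # expected: Matrix([[0, 0, 0], [2, 0, 0], [0, 1, 0]])
      ```
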
    2. Use your matrix from (a) to rewrite the differential equation (\(p(x)\) is a vector in \(V\))

      \[\tfrac{d}{dx}p(x) = x\]

      as a matrix equation. Find all solutions of the matrix equation, then write them as elements of \(V\).

      Using our answer to (a), the equation becomes the matrix equation with unknown vector \(p(x)\):

      \[\left( \begin{pmatrix}0&0&0 \\ 2&0&0 \\ 0&1&0\end{pmatrix} p(x) \right)_B = \begin{pmatrix}0\\1\\0\end{pmatrix}_B\]

      Using Gaussian elimination we get that

      \[\left( \begin{pmatrix}1&0&0 \\ 0&1&0 \\ 0&0&0\end{pmatrix} p(x) \right)_B = \begin{pmatrix}1/2\\0\\0\end{pmatrix}_B\]

      As there is no pivot in the third column of the matrix, the coefficient on \(1\) is free. The pivots in columns one and two say that the coefficient on \(x^2\) is \(1/2\) and the coefficient on \(x\) is \(0\). So,

      \[p(x) = \tfrac 12 x^2 + 0x + c\]

      where \(c\) can be any real number.

      This is the same as the antiderivative \(\int x \, dx = \tfrac 12 x^2 + c\), as you might have expected.
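
      As a further check (outside the scope of the problem), SymPy's `linsolve` reproduces the one-parameter family of solutions; the names `c0, c1, c2` below are just illustrative labels for the coordinates of \(p(x)\) in \(B\):

      ```python
      # Minimal sketch: solve the coordinate equation D_B v = [x]_B.
      import sympy as sp

      c0, c1, c2 = sp.symbols('c0 c1 c2')          # coordinates of p(x) in B = (x^2, x, 1)
      D_B = sp.Matrix([[0, 0, 0], [2, 0, 0], [0, 1, 0]])
      rhs = sp.Matrix([0, 1, 0])                   # the vector x written in the basis B

      print(sp.linsolve((D_B, rhs), [c0, c1, c2]))  # expected: {(1/2, 0, c2)}
      ```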

    3. Find the matrix for the linear transformation \(\tfrac{d}{dx}\) acting on the same vector space \(V\) but now with the ordered basis \(C = (x^2+x,x^2-x,1)\).

      We apply \(\tfrac d{dx}\) to each basis vector in \(C\) and evaluate using calculus. We then write the results as column vectors with respect to \(C\) and stack those columns side by side into a matrix.

      \[\tfrac d{dx} (x^2+x) = 2x+1 = \begin{pmatrix}1\\-1\\1\end{pmatrix}_C\] \[\tfrac d{dx} (x^2-x) = 2x-1 = \begin{pmatrix}1\\-1\\-1\end{pmatrix}_C\] \[\tfrac d{dx} 1 = 0 = \begin{pmatrix}0\\0\\0\end{pmatrix}_C\]

      So the matrix is

      \[\begin{pmatrix}1&1&0 \\ -1&-1&0 \\ 1&-1&0\end{pmatrix}.\]
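
      The same kind of SymPy check works here (again, only a sketch assuming SymPy is available), with the extra step of converting monomial coordinates into \(C\)-coordinates through the illustrative change-of-basis matrix `P` below:

      ```python
      # Minimal sketch: verify the matrix of d/dx in the basis C = (x^2 + x, x^2 - x, 1).
      import sympy as sp

      x = sp.symbols('x')
      C = [x**2 + x, x**2 - x, 1]

      def coords_B(p):
          """Coordinates in the monomial basis (x^2, x, 1)."""
          p = sp.expand(p)
          return sp.Matrix([p.coeff(x, 2), p.coeff(x, 1), p.coeff(x, 0)])

      # Columns of P are the monomial coordinates of the C basis vectors,
      # so P^{-1} converts monomial coordinates into C-coordinates.
      P = sp.Matrix.hstack(*[coords_B(v) for v in C])

      def coords_C(p):
          return P.inv() * coords_B(p)

      D_C = sp.Matrix.hstack(*[coords_C(sp.diff(v, x)) for v in C])
      print(D_C)   # expected: Matrix([[1, 1, 0], [-1, -1, 0], [1, -1, 0]])
      ```
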
    4. Use your matrix from (c) to rewrite the differential equation (\(p(x)\) is a vector in \(V\))

      \[\tfrac{d}{dx}p(x) = x\]

      as a matrix equation. Find all solutions of the matrix equation, then write them as elements of \(V\).

      Using our answer to (c), the equation becomes the matrix equation with unknown vector \(p(x)\):

      \[\left( \begin{pmatrix}1&1&0 \\ -1&-1&0 \\ 1&-1&0\end{pmatrix} p(x) \right)_C = \begin{pmatrix}1/2\\-1/2\\0\end{pmatrix}_C\]

      Using Gaussian elimination we get that

      \[\left( \begin{pmatrix}1&0&0 \\ 0&1&0 \\ 0&0&0\end{pmatrix} p(x) \right)_C = \begin{pmatrix}1/4\\1/4\\0\end{pmatrix}_C\]

      As there is no pivot in the third column of the matrix, the coefficient on \(1\) is free. The pivots in columns one and two say that the coefficient on \(x^2+x\) is \(1/4\) and the coefficient on \(x^2-x\) is \(1/4\). So,

      \[p(x) = \tfrac 14 (x^2+x) + \tfrac 14 (x^2-x) + c\]

      where \(c\) can be any real number.

      Notice that this answer simplifies to \(p(x) = \tfrac 12 x^2 + c\), exactly the answer we got in (b). We will see later why we get the same answer no matter which basis we use.
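
      For the skeptical reader, a two-line SymPy check (assuming SymPy is installed) confirms that the two expressions really are the same polynomial:

      ```python
      # Minimal sketch: the C-basis answer and the B-basis answer agree.
      import sympy as sp

      x, c = sp.symbols('x c')
      p_C = sp.Rational(1, 4)*(x**2 + x) + sp.Rational(1, 4)*(x**2 - x) + c
      p_B = sp.Rational(1, 2)*x**2 + c
      print(sp.expand(p_C - p_B))   # expected: 0
      ```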

  1. Suppose that \(A\) is a square matrix that is antisymmetric, meaning that

    \[A = -A^T.\]

    Prove that \(\mathop{\rm tr}(A) = 0\).

    Two properties of the trace, linearity and \(\mathop{\rm tr}(A^T) = \mathop{\rm tr}(A)\), give

    \[\mathop{\rm tr}(-A^T) = -\mathop{\rm tr}(A^T) = -\mathop{\rm tr}(A)\]

    But since \(A = -A^T\) this means that

    \[\mathop{\rm tr}(A) = -\mathop{\rm tr}(A)\]

    which can only happen if \(\mathop{\rm tr}(A) = 0\).
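
    As an illustration (not a substitute for the proof), one can build an antisymmetric matrix from a generic matrix in SymPy and confirm that its trace vanishes; the symbols `m00, ..., m22` below are just placeholders:

    ```python
    # Minimal sketch: a generic antisymmetric matrix A = M - M^T has trace 0.
    import sympy as sp

    M = sp.Matrix(3, 3, sp.symbols('m:3:3'))   # a generic 3x3 matrix
    A = M - M.T                                # antisymmetric by construction
    print(A + A.T)                             # the zero matrix, i.e. A = -A^T
    print(A.trace())                           # 0
    ```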

  2. The matrix exponential of a matrix \(M\) is given by the Taylor series,

    \[\exp(M) = I + M + \frac 12 M^2 + \frac 1{3!} M^3 + \cdots = \sum_{k=0}^{\infty}{\frac{1}{k!}M^k}.\]

    For \(\lambda \in \mathbb R\), let \(A\) be the matrix:

    \[A = \begin{pmatrix}\lambda & 0 \\ 0 & \lambda \end{pmatrix},\]

    and let \(B\) be the matrix:

    \[B = \begin{pmatrix}0 & \lambda \\ 0 & 0 \end{pmatrix}\]
    1. Find a concise formula for \(A^k\), for any integer \(k\).

      Since \(A = \lambda I\), powers of \(A\) just scale the identity (for negative \(k\) this requires \(\lambda \neq 0\), so that \(A\) is invertible):

      \[A^k = (\lambda I)^k = \lambda^k I = \begin{pmatrix} \lambda^k & 0 \\ 0 & \lambda^k \end{pmatrix}\]
    2. Find a concise formula for \(\exp(A)\).

      \[\begin{aligned} \exp(A) &= \sum_{k=0}^\infty{\frac 1{k!}A^k} \\ &= \sum_{k=0}^\infty{\frac 1{k!}\lambda^kI } \\ &= \left(\sum_{k=0}^\infty{\lambda^k/k!}\right)I \\ &= e^\lambda I = \begin{pmatrix}e^\lambda & 0 \\ 0 & e^\lambda\end{pmatrix} \end{aligned}\]
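
      Both formulas can be sanity-checked symbolically with SymPy (assuming it is available); here `lam` stands in for \(\lambda\) and the exponent \(3\) is an arbitrary sample value:

      ```python
      # Minimal sketch: check A^k = lambda^k I and exp(A) = e^lambda I for A = lambda*I.
      import sympy as sp

      lam = sp.symbols('lambda')
      A = lam * sp.eye(2)

      print(A**3)      # Matrix([[lambda**3, 0], [0, lambda**3]]), i.e. lambda^3 * I
      print(A.exp())   # Matrix([[exp(lambda), 0], [0, exp(lambda)]]), i.e. e^lambda * I
      ```
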
    3. Find \(B^k\), for any integer \(k\).

      \(B\) has no inverse, so \(B^{k}\) is undefined for any negative \(k\).

      \(B^0\) is defined to be \(I\), and \(B^1 = B\).

      A direct computation gives \(B^2 = \begin{pmatrix}0 & \lambda \\ 0 & 0\end{pmatrix}\begin{pmatrix}0 & \lambda \\ 0 & 0\end{pmatrix} = 0\), so for any \(k \ge 2\), \(B^k = B^2B^{k-2} = 0B^{k-2} = 0\).

    4. Find a concise formula for \(\exp(B)\).

      Since \(B^k = 0\) for all \(k \ge 2\), only the first two terms of the series survive:

      \[\exp(B) = I + B = \begin{pmatrix}1 & \lambda \\ 0 & 1\end{pmatrix}\]
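
      The same kind of SymPy sketch (again with `lam` standing in for \(\lambda\)) confirms both \(B^2 = 0\) and the formula for \(\exp(B)\):

      ```python
      # Minimal sketch: check B^2 = 0 and exp(B) = I + B.
      import sympy as sp

      lam = sp.symbols('lambda')
      B = sp.Matrix([[0, lam], [0, 0]])

      print(B**2)      # the zero matrix, so B^k = 0 for every k >= 2
      print(B.exp())   # Matrix([[1, lambda], [0, 1]]), i.e. I + B
      ```
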
  3. Find the determinant of the matrix

    \[\begin{pmatrix} 2&1&3&7 \\ 6&1&4&4 \\ 2&1&8&0 \\ 1&0&2&0 \end{pmatrix}.\]
    We expand along the fourth row, which contains two zero entries, and then expand each \(3 \times 3\) minor along its third row:

    \[\begin{aligned} \begin{vmatrix} 2&1&3&7 \\ 6&1&4&4 \\ 2&1&8&0 \\ 1&0&2&0 \end{vmatrix} &= (-1)1\begin{vmatrix} 1&3&7 \\ 1&4&4 \\ 1&8&0 \end{vmatrix} + (-1)2\begin{vmatrix} 2&1&7 \\ 6&1&4 \\ 2&1&0 \end{vmatrix} \\ &= -\left(\begin{vmatrix} 3&7 \\ 4&4 \end{vmatrix} - 8\begin{vmatrix} 1&7 \\ 1&4 \end{vmatrix} \right) - 2\left(2\begin{vmatrix} 1&7 \\ 1&4 \end{vmatrix} - \begin{vmatrix} 2&7 \\ 6&4 \end{vmatrix} \right) \\ &= -(12 - 28 - 8(4-7)) - 2(2(4-7) - (8-42)) \\ &= -8 - 56 \\ &= -64 \end{aligned}\]
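
    A quick SymPy computation (assuming SymPy is available) agrees with this value:

    ```python
    # Minimal sketch: exact determinant with SymPy.
    import sympy as sp

    M = sp.Matrix([
        [2, 1, 3, 7],
        [6, 1, 4, 4],
        [2, 1, 8, 0],
        [1, 0, 2, 0],
    ])
    print(M.det())   # expected: -64
    ```
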
  4. Is the determinant of a matrix a linear transformation into the real numbers \(\mathbb R\)? Explain or give a counterexample.

    No. One counterexample: for the \(2 \times 2\) identity matrix \(I\), we have \(\det(I) = 1\), but \(\det(2I) = 2^2 = 4\) rather than \(2\det(I) = 2\). So \(\det\) does not respect scalar multiplication and cannot be a linear transformation.
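
    A one-line SymPy check (assuming SymPy is available) makes the counterexample concrete:

    ```python
    # Minimal sketch: det fails the scaling test for linearity.
    import sympy as sp

    I2 = sp.eye(2)
    print(sp.det(I2), sp.det(2 * I2))   # prints 1 4; linearity would require the second value to be 2
    ```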