Using Jacobi’s formula, we can deduce that for any square matrix \(A\),
\[\det(\exp(A)) = e^{\mathop{\rm tr}(A)},\]where \(\exp(A)\) is the matrix exponential discussed in homework 5. Suppose that a matrix \(M\) is not invertible. Is \(\exp(M)\) invertible? Explain.
Based on the formula, we know that \(\det(\exp(M)) = e^{\mathop{\rm tr}(M)}\). No matter what \(\mathop{\rm tr}(M)\) is, \(e^{\mathop{\rm tr}(M)} > 0\). So \(\det(\exp(M)) \ne 0\), meaning that \(\exp(M)\) is invertible.
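As a quick sanity check (an example of our own, not part of the problem), take the non-invertible matrix
\[M = \begin{pmatrix}0&1\\0&0\end{pmatrix}.\]Since \(M^2 = 0\), the series for the matrix exponential terminates after two terms, so
\[\exp(M) = I + M = \begin{pmatrix}1&1\\0&1\end{pmatrix}, \qquad \det(\exp(M)) = 1 = e^0 = e^{\mathop{\rm tr}(M)},\]and \(\exp(M)\) is indeed invertible even though \(M\) is not.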
Suppose \(ad - bc \ne 0\). Let \(\vec u = \begin{pmatrix}a\\c\end{pmatrix}\) and \(\vec v = \begin{pmatrix}b\\d\end{pmatrix}\).
Can \(\vec v\) be a multiple of \(\vec u\)? Explain your answer.
If \(\vec v = \lambda \vec u\) for some scalar \(\lambda\), then \(b = \lambda a\) and \(d = \lambda c\). But then \(ad-bc = \lambda ac - \lambda ca = 0\), contradicting the assumption that \(ad - bc \ne 0\). So no, \(\vec v\) cannot be a multiple of \(\vec u\).
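For a concrete illustration (numbers of our own choosing, not from the problem): if \(\vec u = \begin{pmatrix}1\\2\end{pmatrix}\) and \(\vec v = 3\vec u = \begin{pmatrix}3\\6\end{pmatrix}\), then
\[ad - bc = 1\cdot 6 - 3\cdot 2 = 0,\]which is exactly the situation ruled out by the hypothesis \(ad - bc \ne 0\).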
Let \(\vec x\) be a vector in \(\mathbb R^2\). How many ways are there to write \(\vec x\) as a linear combination of \(\vec u\) and \(\vec v\)? Explain your answer.
The number of ways to write \(\vec x\) as a linear combination of \(\vec u\) and \(\vec v\) is the same as the number of solutions to the equation,
\[\begin{pmatrix}a&b\\c&d\end{pmatrix} \vec y = \vec x.\]Since the determinant of the matrix is not zero (as \(ad-bc \ne 0\)), it is invertible. So, there is only one solution \(\vec y\) and hence exactly one way to write \(\vec x\) as a linear combination of \(\vec u\) and \(\vec v\).
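In fact, since \(ad - bc \ne 0\), the unique coefficients can be written down explicitly using the standard formula for the inverse of a \(2 \times 2\) matrix:
\[\vec y = \begin{pmatrix}a&b\\c&d\end{pmatrix}^{-1}\vec x = \frac{1}{ad-bc}\begin{pmatrix}d&-b\\-c&a\end{pmatrix}\vec x.\]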
Let \(B = (\vec e_1, \vec e_2)\) be the standard basis for the vector space \(\mathbb R^2\). Suppose
\[L : \mathbb R^2 \to \mathbb R^2\]is a linear transformation and that \(L(\vec e_1) = \begin{pmatrix}1\\1\end{pmatrix}\) and \(L(\vec e_2) = \begin{pmatrix}2\\1\end{pmatrix}\).
Compute the matrix of \(L\) using the basis \(B\).
The matrix \(L_B = \begin{pmatrix} L(\vec e_1) & L(\vec e_2) \end{pmatrix}\). Plugging in, we get that
\[L_B = \begin{pmatrix}1&2\\1&1\end{pmatrix}\]
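As a quick check (an input of our own choosing, not asked for in the problem), applying \(L_B\) to \(\vec x = \begin{pmatrix}3\\-1\end{pmatrix}\) gives
\[L_B \vec x = \begin{pmatrix}1&2\\1&1\end{pmatrix}\begin{pmatrix}3\\-1\end{pmatrix} = \begin{pmatrix}1\\2\end{pmatrix},\]which agrees with \(3L(\vec e_1) - L(\vec e_2) = 3\begin{pmatrix}1\\1\end{pmatrix} - \begin{pmatrix}2\\1\end{pmatrix} = \begin{pmatrix}1\\2\end{pmatrix}\), as linearity demands.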
Compute the trace of your matrix from (a).
\(\mathop{\rm tr}(L_B) = 1+1 = 2\).
If \(ad-bc \ne 0\), then
\[B' = \left( \begin{pmatrix}a\\c\end{pmatrix}, \begin{pmatrix}b\\d\end{pmatrix}\right)\]is a basis for \(\mathbb R^2\). Compute the matrix of \(L\) using the basis \(B'\).
The matrix \(L_{B'} = \begin{pmatrix}\vec g& \vec h \end{pmatrix}\), where \(\vec g\) is \(L(\vec u)\) written as a column vector with respect to the ordered basis \(B'\) and \(\vec h\) is \(L(\vec v)\) written as a column vector with respect to \(B'\).
We can use our matrix from (a): \(L(\vec u) = L_B\vec u\) and \(L(\vec v) = L_B\vec v\) (this is exactly what it means for \(L_B\) to be the matrix of the linear transformation \(L\)). How does one then write \(L_B\vec u\) as a column vector with respect to \(B'\)? Remember that
\[\begin{pmatrix}s \\ t\end{pmatrix}_{B'} = L_B\vec u\]means that \(s\vec u + t\vec v = L_B\vec u\). In other words, that means that
\[\vec g = \begin{pmatrix}s \\ t\end{pmatrix}\]is the solution to the equation \(A\vec g = L_B\vec u\), where
\[A = \begin{pmatrix} \vec u & \vec v \end{pmatrix} = \begin{pmatrix} a&b\\c&d \end{pmatrix}\]Since \(ad-bc \ne 0\) we know that \(A\) is invertible, so \(\vec g = A^{-1}L_B\vec u\). Similarly, \(\vec h = A^{-1}L_B\vec v\). So,
\[L_{B'} = \begin{pmatrix} A^{-1}L_B\vec u & A^{-1}L_B\vec v \end{pmatrix}.\]This answer is fine, as is the answer where each entry is calculated out. But you might also notice that we can write \(L_{B'}\) a little more succinctly as the matrix product,
\[L_{B'} = A^{-1}L_BA.\]
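To make this concrete, here is the computation for one hypothetical choice of basis (numbers of our own choosing, not specified in the problem): take \(a = 1\), \(b = 1\), \(c = 0\), \(d = 1\), so \(ad - bc = 1 \ne 0\). Then
\[A = \begin{pmatrix}1&1\\0&1\end{pmatrix}, \qquad A^{-1} = \begin{pmatrix}1&-1\\0&1\end{pmatrix},\]and
\[L_{B'} = A^{-1}L_BA = \begin{pmatrix}1&-1\\0&1\end{pmatrix}\begin{pmatrix}1&2\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix} = \begin{pmatrix}0&1\\1&1\end{pmatrix}\begin{pmatrix}1&1\\0&1\end{pmatrix} = \begin{pmatrix}0&1\\1&2\end{pmatrix}.\]Note that this matrix has trace \(0 + 2 = 2\), which foreshadows part (d).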
Compute the trace of your matrix from (c).
You can compute the answer directly from your explicit matrix in (c). But you can also use properties of the trace:
\[\mathop{\rm tr}(L_{B'}) = \mathop{\rm tr}(A^{-1}L_BA) = \mathop{\rm tr}((A^{-1}L_B)A) = \mathop{\rm tr}(A(A^{-1}L_B)) = \mathop{\rm tr}(L_B) = 2.\]
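The key step is the identity \(\mathop{\rm tr}(XY) = \mathop{\rm tr}(YX)\), which for \(2 \times 2\) matrices can be checked directly:
\[\mathop{\rm tr}\left(\begin{pmatrix}x_{11}&x_{12}\\x_{21}&x_{22}\end{pmatrix}\begin{pmatrix}y_{11}&y_{12}\\y_{21}&y_{22}\end{pmatrix}\right) = x_{11}y_{11} + x_{12}y_{21} + x_{21}y_{12} + x_{22}y_{22},\]and this expression is unchanged if the roles of \(X\) and \(Y\) are swapped. The concrete matrix computed after part (c) above also has trace \(2\), as it must.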
Find the value of \(a\) for which
\[\vec v = \begin{pmatrix}6\\a\\-16\\-4\end{pmatrix}\]is in the set
\[H = \mathop{\rm span}\left\{ \begin{pmatrix}-3\\2\\5\\3\end{pmatrix}, \begin{pmatrix}0\\-1\\-2\\-3\end{pmatrix}, \begin{pmatrix}0\\0\\-5\\-2\end{pmatrix} \right\}.\]Remember that asking if a vector is in a span is the same as asking if it can be written as a linear combination of the spanning vectors. But this is just a question about matrices! So, we rephrase the question in terms of an augmented matrix, and apply Gaussian elimination. We only need to get to row echelon form, since we don’t care about the actual solution, only whether one exists.
\[\begin{aligned} &\left( \begin{array}{ccc|c} -3 & 0 & 0 & 6 \\ 2 & -1 & 0 & a \\ 5 & -2 & -5 & -16 \\ 3 & -3 & -2 & -4 \end{array} \right) \\ \stackrel{\substack{(1')=-1/3(1) \\ (2)=(2)-2(1') \\ (3)=(3)-5(1') \\ (4)=(4)-3(1')}}{\sim} &\left( \begin{array}{ccc|c} 1 & 0 & 0 & -2 \\ 0 & -1 & 0 & a+4 \\ 0 & -2 & -5 & -6 \\ 0 & -3 & -2 & 2 \end{array} \right) \\ \stackrel{\substack{(2')=-(2) \\ (3)=(3)+2(2') \\ (4)=(4)+3(2')}}{\sim} &\left( \begin{array}{ccc|c} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & -a-4 \\ 0 & 0 & -5 & -14-2a \\ 0 & 0 & -2 & -10-3a \end{array} \right) \\ \stackrel{(4)=(4)-2/5(3)}{\sim} &\left( \begin{array}{ccc|c} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & -a-4 \\ 0 & 0 & -5 & -14-2a \\ 0 & 0 & 0 & -10-3a+2(14+2a)/5 \end{array} \right) \end{aligned}\]For this augmented matrix to have a solution, the last row has to be only zeros. That is, we need that \(0 = -10-3a+2(14+2a)/5\).
\[\begin{aligned} 0 &= -10-3a+2(14 +2a)/5 \\ 0 &= -50-15a+28 +4a \\ 0 &= -22-11a \\ -2 &= a \end{aligned}\]So the vector \(\vec v\) is in the span exactly when \(a = -2\).
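As a check (not required by the problem), back-substituting \(a = -2\) into the row echelon form gives coefficients \(x_1 = -2\), \(x_2 = -a-4 = -2\), and \(x_3 = \frac{-14-2a}{-5} = 2\), and indeed
\[-2\begin{pmatrix}-3\\2\\5\\3\end{pmatrix} - 2\begin{pmatrix}0\\-1\\-2\\-3\end{pmatrix} + 2\begin{pmatrix}0\\0\\-5\\-2\end{pmatrix} = \begin{pmatrix}6\\-2\\-16\\-4\end{pmatrix} = \vec v.\]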