Let \(\mathbb R^2\) have the inner product,
\[\langle \vec x, \vec y \rangle = 3x_1y_1 + 5x_2y_2.\]Let \(\vec u = (1, 1)\), \(\vec v = (3,2)\), \(\vec w = (0, -1)\).
Compute \(\langle \vec u, \vec w \rangle\).
Compute \(\langle 3\vec u, \vec v \rangle\).
Compute \(\lVert \vec u - 3\vec w \rVert\).
Find some unit vectors with respect to this inner product \(\langle \cdot , \cdot \rangle\) and sketch its unit circle. Hint: it will not look like the usual unit circle.
For all of these, we just have to use the given inner product.
\(\langle \vec u, \vec w \rangle = 3u_1w_1 + 5 u_2w_2 = 3(1)(0) + 5(1)(-1) = -5\).
\(\langle 3\vec u, \vec v \rangle = 3(3u_1)v_1 + 5(3u_2)v_2 = 3(3)(3) + 5(3)(2) = 57\).
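As a quick sanity check, bilinearity gives the same answer without expanding the definition:
\[\langle 3\vec u, \vec v \rangle = 3\langle \vec u, \vec v \rangle = 3\big(3(1)(3) + 5(1)(2)\big) = 3(19) = 57.\]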
Here \(\vec u - 3\vec w = (1 - 3(0),\, 1 - 3(-1)) = (1, 4)\), so \(\lVert \vec u - 3\vec w \rVert = \sqrt{\langle \vec u - 3\vec w, \vec u - 3\vec w \rangle} = \sqrt{3(u_1-3w_1)^2 + 5(u_2-3w_2)^2} = \sqrt{3(1)^2+5(4)^2} = \sqrt{83}\).
A unit vector is a vector with magnitude \(1\); this means that for a vector \(\vec x = (x, y)\) to be a unit vector, it must obey,
\[1^2 = \lVert \vec x \rVert^2 = \langle \vec x, \vec x \rangle = 3x^2 + 5y^2.\]This equation describes an ellipse which is wider than it is tall (for instance, it passes through the points \((0, \pm 1/\sqrt 5)\), \((\pm 1/\sqrt 3, 0)\)). For a graph, see for instance WolframAlpha.
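One way to produce unit vectors (and trace out this unit circle) is to parametrize the ellipse:
\[\vec x(t) = \left( \frac{\cos t}{\sqrt 3}, \frac{\sin t}{\sqrt 5} \right), \qquad \langle \vec x(t), \vec x(t) \rangle = 3\cdot\frac{\cos^2 t}{3} + 5\cdot\frac{\sin^2 t}{5} = 1.\]For example, \(t = \pi/4\) gives the unit vector \(\left( \frac{1}{\sqrt 6}, \frac{1}{\sqrt{10}} \right)\).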
Use the Gram-Schmidt process to orthonormalize the basis \(B\) with respect to the dot product on \(\mathbb R^3\):
\[B = \left( \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} \right)\]Gram-Schmidt gives us the three orthogonal vectors,
\[\left( \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 1/6 \\ 1/6 \\ -2/6 \end{pmatrix} \right)\]which we normalize to obtain
\[\left( \frac{1}{\sqrt{3}} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \frac{1}{\sqrt{2}} \begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix}, \sqrt{6} \begin{pmatrix} 1/6 \\ 1/6 \\ -2/6 \end{pmatrix} \right).\]
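For reference, the third orthogonal vector above is obtained by subtracting from \((1, 2, 1)\) its projections onto the first two vectors:
\[\begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} - \frac{4}{3}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} -1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1/6 \\ 1/6 \\ -2/6 \end{pmatrix},\]where the coefficients are \(\frac{(1,2,1)\cdot(1,1,1)}{\lVert(1,1,1)\rVert^2} = \frac 43\) and \(\frac{(1,2,1)\cdot(-1,1,0)}{\lVert(-1,1,0)\rVert^2} = \frac 12\). (The first two vectors of \(B\) were already orthogonal, so Gram-Schmidt leaves them unchanged.)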
Let \(W\) be a subspace of a vector space \(V\) with inner product \(\langle \cdot , \cdot \rangle\). The orthogonal complement of \(W\) in \(V\) is the subspace \(W^\perp\) of all vectors \(u\) which are orthogonal to every vector in \(W\).
Let
\[W = {\rm span}\left\{ \begin{pmatrix}1 \\ 4 \\ 5 \\ 2\end{pmatrix}, \begin{pmatrix}2 \\ 1 \\ 3 \\ 0\end{pmatrix} \right\}\]be a subspace of \(\mathbb R^4\). Find a matrix equation for which \(W^\perp\) is the set of all solutions, then solve it to find \(W^\perp\).
Notice that the system of linear equations \(\vec w_i \cdot \vec x = 0\), where the \(\vec w_i\) are the basis vectors for \(W\), can be rewritten as the matrix equation \(A^T \vec x = 0\), where \(A\) is the matrix whose columns are the basis vectors for \(W\). (A vector orthogonal to each basis vector is orthogonal to every linear combination of them, hence to all of \(W\).) Solving this yields,
\[W^\perp = {\rm span} \left\{ \begin{pmatrix}-1\\-1\\1\\0\end{pmatrix}, \begin{pmatrix}2\\-4\\0\\7\end{pmatrix} \right\}.\]
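To see where this comes from, row reduce \(A^T\):
\[A^T = \begin{pmatrix} 1 & 4 & 5 & 2 \\ 2 & 1 & 3 & 0 \end{pmatrix} \longrightarrow \begin{pmatrix} 1 & 4 & 5 & 2 \\ 0 & -7 & -7 & -4 \end{pmatrix}.\]So \(x_3, x_4\) are free, \(x_2 = -x_3 - \frac 47 x_4\), and \(x_1 = -4x_2 - 5x_3 - 2x_4 = -x_3 + \frac 27 x_4\); choosing \((x_3, x_4) = (1, 0)\) and \((x_3, x_4) = (0, 7)\) produces the two basis vectors above.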
Let \(R\) be the subspace defined by the plane \(2x + y - z = 0\) in \(\mathbb R^3\). Find \(R^\perp\).
The plane given is spanned by the vectors,
\[\left\{ \begin{pmatrix}-1 \\ 2 \\ 0\end{pmatrix}, \begin{pmatrix}1\\ 0 \\ 2\end{pmatrix} \right\}.\]Solving the equation \(B^T\vec x = 0\), where \(B\) is the matrix made of the above columns, yields
\[R^\perp = {\rm span} \left\{ \begin{pmatrix}2 \\ 1 \\ -1\end{pmatrix} \right\}.\]
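Explicitly, \(B^T \vec x = 0\) is the pair of equations \(-x + 2y = 0\) and \(x + 2z = 0\), whose solutions are \((2y, y, -y) = y(2, 1, -1)\). This matches the geometry: the orthogonal complement of a plane through the origin is the line spanned by its normal vector, and \((2, 1, -1)\) is exactly the normal vector read off from \(2x + y - z = 0\).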
Let \(W\) be a subspace of \(V\), \(B = \left\{ \vec b_1, \vec b_2, \cdots, \vec b_k \right\}\) be an orthonormal basis for \(W\) and \(C = \left\{ \vec c_1, \vec c_2, \cdots, \vec c_\ell \right\}\) be an orthonormal basis for its orthogonal complement (see #3) \(W^\perp\).
Consider the set of vectors
\[U = \left\{ \vec b_1, \vec b_2, \cdots, \vec b_k, \vec c_1, \vec c_2, \cdots, \vec c_\ell\right\}.\]Show that the only vector in both \(W\) and \(W^\perp\) is \(\vec 0\).
If \(\vec x\) is in both \(W\) and \(W^\perp\), then \(\vec x\) is orthogonal to itself, so \(\lVert \vec x \rVert^2 = \vec x \cdot \vec x = 0\). But by positive definiteness of the inner product, the only vector with magnitude zero is \(\vec x = \vec 0\).
It turns out that for every vector \(\vec x \in V\), there is a unique way to write it as the sum \(\vec x = \vec x^{||} + \vec x^\perp\) where \(\vec x^{||} \in W\) and \(\vec x^{\perp} \in W^\perp\) (this decomposition is essentially what Gram-Schmidt computes).
Taking this as a given, explain why \(U\) is an orthonormal basis for \(V\). Hint: first explain why \(U\) spans \(V\). Then, tell me what dot products between different kinds of vectors in \(U\) are, and use this to convince me that it is orthonormal (and hence linearly independent, too).
Since every \(\vec x \in V\) can be written as \(\vec x = \vec x^{||} + \vec x^{\perp}\), every \(\vec x\) is a linear combination of vectors in \(U\): the parallel part is a linear combination of the \(\vec b_i\), and the orthogonal part is a linear combination of the \(\vec c_i\). So \(U\) spans \(V\).
\(U\) is orthonormal because all of its vectors are unit length and all pairs dot to zero: we know \(\vec b_i \cdot \vec b_j = 0\) for \(i \ne j\) because the \(\vec b_i\) come from an orthonormal basis (and similarly for \(\vec c_i \cdot \vec c_j\)). It remains to check that \(\vec b_i \cdot \vec c_j = 0\), which holds because each \(\vec c_j \in W^\perp\) is orthogonal to every vector in \(W\), and in particular to \(\vec b_i\). Since all the vectors in \(U\) are orthogonal and nonzero, they are linearly independent. So \(U\) is an orthonormal basis for \(V\).
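To spell out the last claim: if \(a_1 \vec u_1 + a_2 \vec u_2 + \cdots = \vec 0\) for the vectors \(\vec u_i \in U\), then dotting both sides with \(\vec u_j\) kills every term except \(a_j(\vec u_j \cdot \vec u_j) = a_j\), so every coefficient is zero and \(U\) is linearly independent.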
Let \(A\) be a symmetric matrix.
If \(\vec v_1, \vec v_2\) are eigenvectors of \(A\) for eigenvalues \(\lambda_1, \lambda_2\) (where \(\lambda_1 \ne \lambda_2\)), explain why \(\vec v_1\) and \(\vec v_2\) are orthogonal. Hint: remember \(\vec u \cdot \vec v = \vec u^T \vec v\) and compute \((A\vec v_1)\cdot \vec v_2\).
On the one hand, we have \(A\vec v_1 \cdot \vec v_2 = \lambda_1 (\vec v_1 \cdot \vec v_2)\). On the other hand, \(A\vec v_1 \cdot \vec v_2 = (A\vec v_1)^T\vec v_2 = \vec v_1^TA^T\vec v_2 = \vec v_1 \cdot A^T\vec v_2 = \vec v_1 \cdot A\vec v_2 = \vec v_1 \cdot \lambda_2 \vec v_2 = \lambda_2 (\vec v_1 \cdot \vec v_2)\), where we used \(A^T = A\) since \(A\) is symmetric.
So, \((\lambda_1 - \lambda_2)(\vec v_1 \cdot \vec v_2) = 0\). But since \(\lambda_1 \ne \lambda_2\), this means \(\vec v_1 \cdot \vec v_2 = 0\), so they are orthogonal.
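As a concrete illustration, the symmetric matrix
\[A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\]has eigenvalues \(\lambda_1 = 3\) and \(\lambda_2 = 1\) with eigenvectors \(\vec v_1 = (1, 1)\) and \(\vec v_2 = (1, -1)\), and indeed \(\vec v_1 \cdot \vec v_2 = 1 - 1 = 0\).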
Suppose \(A\) diagonalizes (in fact, every symmetric matrix diagonalizes; this is the spectral theorem). Explain why it is possible to find an orthonormal basis of eigenvectors for \(A\). Conclude that it is possible to diagonalize \(A\) as
\[A = PDP^T\]where \(P\) is an orthogonal matrix.
The point here is that we can find an orthonormal eigenbasis for \(A\); then the matrix \(P\) whose columns are these eigenvectors is orthogonal. We can do this because (i) eigenvectors from distinct eigenspaces of \(A\) are orthogonal by part a., and (ii) we can find an orthonormal basis for each individual eigenspace by Gram-Schmidt. Putting these together yields an orthonormal eigenbasis, and since \(P^{-1} = P^T\) for an orthogonal matrix, the usual diagonalization \(A = PDP^{-1}\) becomes \(A = PDP^T\).
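Continuing the \(2 \times 2\) illustration from part a.: normalizing the eigenvectors \((1, 1)\) and \((1, -1)\) gives an orthogonal \(P\), and
\[A = PDP^T = \frac{1}{\sqrt 2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 3 & 0 \\ 0 & 1 \end{pmatrix} \frac{1}{\sqrt 2}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\]which multiplies back out to \(\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\).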