Figure: Five of the above level surfaces (animated).
Theorem. All eigenvalues of a symmetric matrix are real.
Proof. Let $A$ be a symmetric $n\!\times\!n$ matrix and let $\lambda$ be an eigenvalue of $A$. Let $\vec{v} = \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr]^\top$ be a corresponding eigenvector. Then $\vec{v} \neq \vec{0}.$ We allow the possibility that $\lambda$ and $v_1,$ $v_2,\ldots,$ $v_n$ are complex numbers. For a complex number $\alpha$, we denote its complex conjugate by $\overline{\alpha}$. Recall that for a nonzero complex number $\alpha$ we have $\alpha\,\overline{\alpha} = |\alpha|^2 \gt 0.$ Since $\vec{v}$ is an eigenvector of $A$ corresponding to $\lambda$, we have \[ A \vec{v} = \lambda \vec{v}. \] Since all entries of $A$ are real numbers, taking the complex conjugate of both sides of the above equality we have \[ A\bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \overline{\lambda} \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top. \] Since $A$ is symmetric, that is $A=A^\top$, we also have \[ A^\top \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \overline{\lambda} \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top. \] Multiplying both sides of the last equation by $\vec{v}^\top = \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr]$ we get \[ \vec{v}^\top A^\top \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr] \overline{\lambda} \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top. \] By properties of matrix multiplication and of the transpose operation the last equality is equivalent to \[ \bigl(A\vec{v}\bigr)^\top \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \overline{\lambda} \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr] \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top. 
\] Since $A \vec{v} = \lambda \vec{v}$, we further have \[ \lambda \, \vec{v}^\top \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \overline{\lambda} \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr] \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top, \] that is, \[ \tag{*} \lambda \, \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr] \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \overline{\lambda} \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr] \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top. \] Since $\vec{v} \neq \vec{0}$, we have \[ \bigl[v_1 \ \ v_2 \ \ \cdots \ \ v_n \bigr] \bigl[\overline{v_1} \ \ \overline{v_2} \ \ \cdots \ \ \overline{v_n} \bigr]^\top = \sum_{k=1}^n v_k\, \overline{v_k} = \sum_{k=1}^n |v_k|^2 \gt 0, \] and therefore equality (*) yields \[ \lambda = \overline{\lambda}. \] This proves that $\lambda$ is a real number.
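The theorem can be checked numerically. Below is a small sketch (not part of the notes) using NumPy with a made-up symmetric matrix; any real symmetric matrix would do.

```python
import numpy as np

# A made-up real symmetric matrix (example only).
A = np.array([[ 2.0, -1.0,  3.0],
              [-1.0,  4.0,  0.5],
              [ 3.0,  0.5, -2.0]])

# eigvals makes no symmetry assumption, so in general it may return
# complex eigenvalues; the theorem predicts the imaginary parts vanish.
eigenvalues = np.linalg.eigvals(A)
print(np.max(np.abs(eigenvalues.imag)))  # numerically zero
```

For symmetric matrices NumPy also provides `np.linalg.eigvalsh`, which exploits symmetry and returns real numbers directly.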
Theorem. A symmetric $2\!\times\!2$ matrix is orthogonally diagonalizable.
Proof. Let $A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$ be an arbitrary symmetric $2\!\times\!2$ matrix. We need to prove that there exist an orthogonal $2\!\times\!2$ matrix $U$ and a diagonal $2\!\times\!2$ matrix $D$ such that $A = UDU^\top.$ The eigenvalues of $A$ are \[ \lambda_1 = \frac{1}{2} \Bigl( a+d - \sqrt{(a-d)^2 + 4 b^2} \Bigr), \quad \lambda_2 = \frac{1}{2} \Bigl( a+d + \sqrt{(a-d)^2 + 4 b^2} \Bigr). \] If $\lambda_1 = \lambda_2$, then $(a-d)^2 + 4 b^2 = 0$, and consequently $b= 0$ and $a=d$; that is, $A = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix}$. Hence $A = UDU^\top$ holds with $U=I_2$ and $D = A$.
Now assume that $\lambda_1 \neq \lambda_2$. Let $\vec{u}_1$ be a unit eigenvector corresponding to $\lambda_1$ and let $\vec{u}_2$ be a unit eigenvector corresponding to $\lambda_2$. We proved earlier that eigenvectors corresponding to distinct eigenvalues of a symmetric matrix are orthogonal. Since $A$ is symmetric, $\vec{u}_1$ and $\vec{u}_2$ are orthogonal, that is, the matrix $U = \begin{bmatrix} \vec{u}_1 & \vec{u}_2 \end{bmatrix}$ is orthogonal. Since $\vec{u}_1$ and $\vec{u}_2$ are eigenvectors of $A$ we have \[ AU = U \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} = UD. \] Therefore $A=UDU^\top.$ This proves that $A$ is orthogonally diagonalizable.
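The construction in the proof can be carried out concretely. The sketch below (a made-up example, not from the notes) uses the eigenvalue formula above, and the fact that for $b \neq 0$ the vector $\bigl[\,b \ \ \lambda - a\,\bigr]^\top$ is an eigenvector for the eigenvalue $\lambda$:

```python
import numpy as np

# A concrete symmetric 2x2 matrix (values chosen for illustration).
a, b, d = 2.0, 1.0, 2.0
A = np.array([[a, b], [b, d]])

# Eigenvalues from the formula in the proof.
s = np.sqrt((a - d)**2 + 4*b**2)
lam1 = 0.5*(a + d - s)
lam2 = 0.5*(a + d + s)

# Unit eigenvectors; for b != 0, [b, lam - a] is an eigenvector for lam.
u1 = np.array([b, lam1 - a]); u1 /= np.linalg.norm(u1)
u2 = np.array([b, lam2 - a]); u2 /= np.linalg.norm(u2)

U = np.column_stack([u1, u2])
D = np.diag([lam1, lam2])

# U is orthogonal and A = U D U^T.
print(np.allclose(U.T @ U, np.eye(2)), np.allclose(A, U @ D @ U.T))
```

Here $\lambda_1 = 1$ and $\lambda_2 = 3$, and both checks print `True`.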
Theorem. For every positive integer $n$, a symmetric $n\!\times\!n$ matrix is orthogonally diagonalizable.
Proof. This statement can be proved by mathematical induction. The base case $n = 1$ is trivial. The case $n=2$ is proved above. To get a feel for how the induction step proceeds, we will prove the theorem for $n=3.$
Let $A$ be a $3\!\times\!3$ symmetric matrix. Then $A$ has an eigenvalue which, by the first theorem above, must be real. Denote this eigenvalue by $\lambda_1$ and let $\vec{u}_1$ be a corresponding unit eigenvector. Let $\vec{v}_1$ and $\vec{v}_2$ be unit vectors such that the vectors $\vec{u}_1,$ $\vec{v}_1,$ $\vec{v}_2$ form an orthonormal basis for $\mathbb R^3.$ Then the matrix $V_1 = \bigl[\vec{u}_1 \ \ \vec{v}_1\ \ \vec{v}_2\bigr]$ is an orthogonal matrix and we have \[ V_1^\top A V_1 = \begin{bmatrix} \vec{u}_1^\top A \vec{u}_1 & \vec{u}_1^\top A \vec{v}_1 & \vec{u}_1^\top A \vec{v}_2 \\[5pt] \vec{v}_1^\top A \vec{u}_1 & \vec{v}_1^\top A \vec{v}_1 & \vec{v}_1^\top A \vec{v}_2 \\[5pt] \vec{v}_2^\top A \vec{u}_1 & \vec{v}_2^\top A \vec{v}_1 & \vec{v}_2^\top A \vec{v}_2 \\\end{bmatrix}. \] Since $A = A^\top$, $A\vec{u}_1 = \lambda_1 \vec{u}_1$ and since $\vec{u}_1$ is orthogonal to both $\vec{v}_1$ and $\vec{v}_2$, we have \[ \vec{u}_1^\top A \vec{u}_1 = \lambda_1, \quad \vec{v}_j^\top A \vec{u}_1 = \lambda_1 \vec{v}_j^\top \vec{u}_1 = 0, \quad \vec{u}_1^\top A \vec{v}_j = \bigl(A \vec{u}_1\bigr)^\top \vec{v}_j = 0, \quad j \in \{1,2\}, \] and \[ \vec{v}_2^\top A \vec{v}_1 = \bigl(\vec{v}_2^\top A \vec{v}_1\bigr)^\top = \vec{v}_1^\top A^\top \vec{v}_2 = \vec{v}_1^\top A \vec{v}_2. \] Hence, \[ \tag{**} V_1^\top A V_1 = \begin{bmatrix} \lambda_1 & 0 & 0 \\[5pt] 0 & \vec{v}_1^\top A \vec{v}_1 & \vec{v}_1^\top A \vec{v}_2 \\[5pt] 0 & \vec{v}_1^\top A \vec{v}_2 & \vec{v}_2^\top A \vec{v}_2 \\\end{bmatrix}. 
\] By the theorem already proved for $2\!\times\!2$ symmetric matrices, there exist an orthogonal matrix $\begin{bmatrix} u_{11} & u_{12} \\[5pt] u_{21} & u_{22} \end{bmatrix}$ and a diagonal matrix $\begin{bmatrix} \lambda_2 & 0 \\[5pt] 0 & \lambda_3 \end{bmatrix}$ such that \[ \begin{bmatrix} \vec{v}_1^\top A \vec{v}_1 & \vec{v}_1^\top A \vec{v}_2 \\[5pt] \vec{v}_1^\top A \vec{v}_2 & \vec{v}_2^\top A \vec{v}_2 \end{bmatrix} = \begin{bmatrix} u_{11} & u_{12} \\[5pt] u_{21} & u_{22} \end{bmatrix} \begin{bmatrix} \lambda_2 & 0 \\[5pt] 0 & \lambda_3 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} \\[5pt] u_{21} & u_{22} \end{bmatrix}^\top. \] Substituting this equality in (**) and using some matrix algebra we get \[ V_1^\top A V_1 = \begin{bmatrix} 1 & 0 & 0 \\[5pt] 0 & u_{11} & u_{12} \\[5pt] 0 & u_{21} & u_{22} \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 \\[5pt] 0 & \lambda_2 & 0 \\[5pt] 0 & 0 & \lambda_3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\[5pt] 0 & u_{11} & u_{12} \\[5pt] 0 & u_{21} & u_{22} \end{bmatrix}^\top. \] Setting \[ U = V_1 \begin{bmatrix} 1 & 0 & 0 \\[5pt] 0 & u_{11} & u_{12} \\[5pt] 0 & u_{21} & u_{22} \end{bmatrix} \quad \text{and} \quad D = \begin{bmatrix} \lambda_1 & 0 & 0 \\[5pt] 0 & \lambda_2 & 0 \\[5pt] 0 & 0 & \lambda_3 \end{bmatrix} \] we have that $U$ is an orthogonal matrix, $D$ is a diagonal matrix and $A = UDU^\top.$ This proves that $A$ is orthogonally diagonalizable.
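For a numerical illustration of the $3\!\times\!3$ case one does not have to carry out the deflation by hand: NumPy's `eigh` routine, designed for symmetric matrices, directly returns a diagonalization with orthonormal eigenvectors. A short sketch with a random symmetric matrix (an aside, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2          # a random symmetric 3x3 matrix

# eigh assumes symmetry and returns real eigenvalues and orthonormal eigenvectors.
eigenvalues, U = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print(np.allclose(U.T @ U, np.eye(3)))   # U is orthogonal
print(np.allclose(A, U @ D @ U.T))       # A = U D U^T
```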
Theorem. Let $A$ be an $n\!\times\!m$ matrix. Then $\operatorname{Nul}(A^\top\!\! A ) = \operatorname{Nul}(A)$.
Proof. The set equality $\operatorname{Nul}(A^\top\!\! A ) = \operatorname{Nul}(A)$ means \[ \vec{x} \in \operatorname{Nul}(A^\top\!\! A ) \quad \text{if and only if} \quad \vec{x} \in \operatorname{Nul}(A). \] So, we prove this equivalence. Assume that $\vec{x} \in \operatorname{Nul}(A)$. Then $A\vec{x} = \vec{0}$. Consequently, $A^\top\!A\vec{x} = A^\top\vec{0} = \vec{0}$, and therefore $\vec{x} \in \operatorname{Nul}(A^\top\!\! A )$. This proves \[ \vec{x} \in \operatorname{Nul}(A) \quad \Rightarrow \quad \vec{x} \in \operatorname{Nul}(A^\top\!\! A ). \] Now we prove the converse, \[ \tag{*} \vec{x} \in \operatorname{Nul}(A^\top\!\! A ) \quad \Rightarrow \quad \vec{x} \in \operatorname{Nul}(A). \] Assume $\vec{x} \in \operatorname{Nul}(A^\top\!\! A )$. Then $A^\top\!\!A \vec{x} = \vec{0}$. Multiplying the last equality by $\vec{x}^\top$ we get $\vec{x}^\top\! (A^\top\!\! A \vec{x}) = 0$. Using the associativity of matrix multiplication we obtain $(\vec{x}^\top\!\! A^\top)A \vec{x} = 0$. Using the transpose property $\vec{x}^\top\!\! A^\top = \bigl(A\vec{x}\bigr)^\top$ we get $(A \vec{x})^\top\!A \vec{x} = 0$. Now recall that for every vector $\vec{v}$ we have $\vec{v}^\top \vec{v} = \|\vec{v}\|^2$. Thus, we have proved that $\|A\vec{x}\|^2 = 0$. Since the only vector whose norm is $0$ is the zero vector, we conclude that $A\vec{x} = \vec{0}$. This means $\vec{x} \in \operatorname{Nul}(A)$. This completes the proof of implication (*). The theorem is proved.
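A quick numerical sanity check of the theorem (a made-up rank-deficient matrix, not from the notes): the two null spaces force equal ranks, and a null vector of $A^\top\! A$ is also a null vector of $A$.

```python
import numpy as np

# A 4x3 matrix whose third column is the sum of the first two (example only).
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

G = A.T @ A   # the 3x3 matrix A^T A

# Equal null spaces imply equal ranks (rank-nullity on the common domain R^3).
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(G))  # both 2

# A basis vector of Nul(A^T A), read off the SVD, also lies in Nul(A).
_, s, Vt = np.linalg.svd(G)
x = Vt[-1]   # right singular vector for the smallest singular value
print(np.linalg.norm(G @ x), np.linalg.norm(A @ x))  # both numerically 0
```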
Corollary. Let $A$ be an $n\!\times\!m$ matrix. The columns of $A$ are linearly independent if and only if the $m\!\times\!m$ matrix $A^\top\!\! A$ is invertible.
Corollary. Let $A$ be an $n\!\times\!m$ matrix. Then $\operatorname{Col}(A^\top\!\! A ) = \operatorname{Col}(A^\top)$.
Corollary. Let $A$ be an $n\!\times\!m$ matrix. The matrices $A^\top$ and $A^\top\!\! A$ have the same rank.
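The last two corollaries can also be verified numerically. In the sketch below (a made-up example), equality of column spaces is tested by checking that adjoining the columns of $A^\top\! A$ to those of $A^\top$ does not increase the rank.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])   # a 2x3 matrix, so A^T A is 3x3

G = A.T @ A

# Same rank ...
print(np.linalg.matrix_rank(A.T), np.linalg.matrix_rank(G))

# ... and the same column space: stacking the columns together
# does not enlarge the span.
print(np.linalg.matrix_rank(np.hstack([A.T, G])))
```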
Theorem. Every $n\times m$ matrix $A$ with linearly independent columns can be written as a product $A = QR$ where $Q$ is an $n\times m$ matrix whose columns form an orthonormal basis for the column space of $A$ and $R$ is an $m\times m$ upper triangular invertible matrix with positive entries on its diagonal.
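In practice the factorization in this theorem is computed by library routines. A short sketch with NumPy (a made-up matrix; note that `np.linalg.qr` does not promise positive diagonal entries in $R$, so we flip signs to match the theorem's normalization):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])   # columns are linearly independent

# Reduced QR factorization: Q is 3x2, R is 2x2 upper triangular.
Q, R = np.linalg.qr(A)

# Normalize so that R has positive diagonal entries, as in the theorem.
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R

print(np.allclose(A, Q @ R))             # A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))   # orthonormal columns
print(np.all(np.diag(R) > 0))            # positive diagonal
```

Since the diagonal entries of $R$ are nonzero exactly when the columns of $A$ are linearly independent, the sign flip is always possible here.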
Exercise. Let $T: \mathbb P_3 \to \mathbb R^4$ be the transformation defined by the following formula. For all ${\mathbf p} \in {\mathbb P}_3$ we define \[ T {\mathbf p} = \begin{bmatrix} {\mathbf p}(0) \\ {\mathbf p}'(0) \\ {\mathbf p}(1) \\ {\mathbf p}'(1) \end{bmatrix}. \] Find the matrix of $T$ relative to the standard basis $\{1, t, t^2, t^3\}$ for $\mathbb P_3$ and the standard basis for $\mathbb R^4$.
Solution. First, introduce notation for the polynomials in the standard basis $\{1, t, t^2, t^3\}$ for $\mathbb P_3$: set $\mathbf q_0(t) = 1$, $\mathbf q_1(t) = t$, $\mathbf q_2(t) = t^2$, $\mathbf q_3(t) = t^3$ and calculate \[ T {\mathbf q}_0 = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \ T {\mathbf q}_1 = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \ T {\mathbf q}_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 2 \end{bmatrix}, \ T {\mathbf q}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 3 \end{bmatrix}. \] Since the basis which we use in $\mathbb R^4$ is the standard basis, the vectors $T {\mathbf q}_0$, $T {\mathbf q}_1$, $T {\mathbf q}_2$, $T {\mathbf q}_3$ given above are already the coordinate vectors relative to the standard basis for $\mathbb R^4$. Thus, the matrix of $T$ relative to the standard basis $\{1, t, t^2, t^3\}$ for $\mathbb P_3$ and the standard basis for $\mathbb R^4$ is \[ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \end{bmatrix}. \]
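The matrix can be tested on a sample polynomial: applying it to a coordinate vector must reproduce $\bigl[\,\mathbf p(0) \ \ \mathbf p'(0) \ \ \mathbf p(1) \ \ \mathbf p'(1)\,\bigr]^\top$. A small sketch (the polynomial below is a made-up example):

```python
import numpy as np

# Matrix of T from the solution, columns T(q_k) for q_k(t) = t^k.
M = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [1, 1, 1, 1],
              [0, 1, 2, 3]], dtype=float)

# Sample polynomial p(t) = 2 - t + 3t^2 + t^3 as coordinates in {1, t, t^2, t^3}.
c = np.array([2.0, -1.0, 3.0, 1.0])
p  = lambda t: c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3
dp = lambda t: c[1] + 2*c[2]*t + 3*c[3]*t**2

expected = np.array([p(0), dp(0), p(1), dp(1)])
print(M @ c, expected)   # both [ 2. -1.  5.  8.]
```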
Remark. Let us modify the exercise above and instead of the polynomial space $\mathbb P_3$ consider the polynomial space $\mathbb P_n$, where $n$ is a positive integer. Let $T: \mathbb P_n \to \mathbb R^4$ be the transformation defined by the following formula. For all ${\mathbf p} \in {\mathbb P}_n$ we define \[ T {\mathbf p} = \begin{bmatrix} {\mathbf p}(0) \\ {\mathbf p}'(0) \\ {\mathbf p}(1) \\ {\mathbf p}'(1) \end{bmatrix}. \] Let us find the matrix of $T$ relative to the standard basis $\{1, t, t^2, \ldots, t^n\}$ for $\mathbb P_n$ and the standard basis for $\mathbb R^4$. As before, introduce notation for the polynomials in the standard basis $\{1, t, \ldots, t^n\}$ for $\mathbb P_n$. For $k \in \{0,1,\ldots, n\}$ set $\mathbf q_k(t) = t^k$ and calculate \[ T {\mathbf q}_0 = \begin{bmatrix} 1 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \ T {\mathbf q}_1 = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} \quad \text{and} \quad T {\mathbf q}_k = \begin{bmatrix} 0 \\ 0 \\ 1 \\ k \end{bmatrix} \ \ \text{for} \ \ k \geq 2. \] Since the basis which we use in $\mathbb R^4$ is the standard basis, the vectors $T {\mathbf q}_k$, $k \in \{0,1,\ldots,n\}$, given above are already the coordinate vectors relative to the standard basis for $\mathbb R^4$. Thus, the matrix of $T$ relative to the standard basis $\{1, t, \ldots, t^n\}$ for $\mathbb P_n$ and the standard basis for $\mathbb R^4$ is the following $4 \times (n+1)$ matrix: \[ \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 1 & \cdots & 1 \\ 0 & 1 & 2 & \cdots & n \end{bmatrix}. \]
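The general $4 \times (n+1)$ matrix can be built programmatically from the rule $T\mathbf q_k = \bigl[\,\mathbf q_k(0) \ \ \mathbf q_k'(0) \ \ \mathbf q_k(1) \ \ \mathbf q_k'(1)\,\bigr]^\top$; the helper below is a sketch (the function name is ours, not from the notes). For $n=3$ it recovers the matrix from the exercise.

```python
import numpy as np

def matrix_of_T(n):
    """4 x (n+1) matrix of T relative to {1, t, ..., t^n} and the standard basis of R^4."""
    M = np.zeros((4, n + 1))
    for k in range(n + 1):
        # Column k is T(t^k) = [ t^k at 0, (t^k)' at 0, t^k at 1, (t^k)' at 1 ].
        M[0, k] = 1.0 if k == 0 else 0.0   # p(0)
        M[1, k] = 1.0 if k == 1 else 0.0   # p'(0)
        M[2, k] = 1.0                      # p(1) = 1^k
        M[3, k] = k                        # p'(1) = k * 1^(k-1)
    return M

print(matrix_of_T(3))   # the 4x4 matrix from the exercise above
```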