# Fall 2012 MATH 304: Linear algebra

## Branko Ćurgus

Tuesday, December 4, 2012

• Suggested problems for Section 7.4: 3, 7, 11, 13, 14, 15, 17, 21

Monday, December 3, 2012

• Suggested problems for Section 7.3: 1, 3, 5, 9, 11, 12
• Here is the Mathematica notebook that I used in class today. As usual it is written in version 5.2.
• With what we learned in 7.3 we can go back to the animations from the beginning of the quarter and calculate some remarkable quantities that appear naturally in those animations. Here is one animation from the bottom of this page, and a new one. Looking at the blue vectors below we see that their heads trace an ellipse. What are the directions and lengths of its axes? Based on what we learned in 7.3 we can calculate these quantities.
• Place the cursor over the image to start the animation.

• In the animations below we emphasize the direction and the lengths of the axes of the ellipses traced above. You should understand how these quantities are calculated.
• Going back to Section 7.2: the book does not discuss quadratic forms in three variables. This is usually done in Math 224. Here are some animations that might help you understand the quadratic form $x_1^2 + x_2^2 - x_3^2$. Here I show the surfaces in ${\mathbb R}^3$ with equations $x_1^2 + x_2^2 - x_3^2 = c$ for different values of $c$.

Place the cursor over the image to start the animation.

Five of the above level surfaces.
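For readers who want to check an axis computation numerically outside the Mathematica notebooks, here is a minimal Python sketch. The matrix below is a hypothetical example of mine, not the one from the animation. For a symmetric $2\times 2$ matrix the image of the unit circle under $\vec{x} \mapsto A\vec{x}$ is an ellipse whose semi-axis lengths are the absolute values of the eigenvalues and whose axes point along the (orthogonal) eigenvectors.

```python
import math

# A hypothetical symmetric 2x2 matrix A = [[a, b], [b, d]], chosen for illustration.
a, b, d = 3.0, 1.0, 3.0

# Eigenvalues from the characteristic polynomial lambda^2 - (a+d) lambda + (ad - b^2).
tr, det = a + d, a * d - b * b
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2   # semi-axis lengths of the ellipse

# Unit eigenvector for lam1: solve (A - lam1 I) v = 0, i.e. (a - lam1) x + b y = 0.
vx, vy = b, lam1 - a
n = math.hypot(vx, vy)
v1 = (vx / n, vy / n)                            # direction of the long axis

# Sanity check: A v1 should equal lam1 v1.
Av1 = (a * v1[0] + b * v1[1], b * v1[0] + d * v1[1])
print(lam1, lam2)   # 4.0 2.0
print(v1)           # (0.7071..., 0.7071...), i.e. the direction (1, 1)/sqrt(2)
```

For this matrix the unit circle maps to an ellipse with semi-axes $4$ along $(1,1)/\sqrt{2}$ and $2$ along $(1,-1)/\sqrt{2}$.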

Friday, November 30, 2012

• Suggested problems for Section 7.2: 1, 3, 5, 7, 9, 13, 17, 19, 20, 21, 23, 25

Wednesday, November 28, 2012

• Suggested problems for Section 7.1: 3, 4, 9, 11, 15, 19, 23, 24, 25, 27, 30, 31, 32, 33, 35, 36, 37

Tuesday, November 20, 2012

• Today in class I handed out the assignment which is due on Friday, November 30, 2012.
• We reviewed Chapter 6 for the exam on Monday, November 26. We decided that for the exam you need to know the proof of the law of cosines and how it relates to the dot product, the proof of the Cauchy-Schwarz inequality, and the proof of the triangle inequality. You also need to know the proof that the matrices $A^TA$ and $A$ have the same nullspace.

Monday, November 19, 2012

• Suggested problems for Section 6.8: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 14
• Here is a Mathematica notebook with several examples of Fourier series. The notebook is semi-automated so that you can add your examples relatively easily. As usual it is written in version 5.2.
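The notebook does these calculations in Mathematica; as a rough cross-check outside it, here is a Python sketch that approximates the Fourier coefficients $a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx$ and $b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx$ with a midpoint rule. The sample function $f(x) = x$ is chosen only for illustration.

```python
import math

def fourier_coeffs(f, n, samples=20000):
    """Approximate the Fourier coefficients a_n, b_n of f on [-pi, pi]
    using a midpoint-rule sum in place of the exact integrals."""
    h = 2 * math.pi / samples
    a = b = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * h
        a += f(x) * math.cos(n * x) * h
        b += f(x) * math.sin(n * x) * h
    return a / math.pi, b / math.pi

# For f(x) = x the classical answer is a_n = 0 and b_n = 2(-1)^(n+1)/n.
a1, b1 = fourier_coeffs(lambda x: x, 1)
print(round(a1, 6), round(b1, 6))   # ~0.0 and ~2.0
```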

Thursday, November 15, 2012

• Suggested problems for Section 6.7: 1, 2, 3, 5, 7, 9, 10, 13, 16, 17, 19, 20, 21, 23, 25

Wednesday, November 14, 2012

Friday, November 9, 2012

• Suggested problems for Section 6.5: 1, 3, 6, 7, 9, 13, 16, 17, 19, 20, 21, 22
• Suggested problems for Section 6.6: 1, 2, 3, 4, 5, 6, 7, 8, 9, 14, 15, 16

Monday, November 5, 2012

• Suggested problems for Section 6.4: 2, 3, 5, 7, 9, 13, 15, 17, 19, 20
• As an example of a Mathematica program I wrote my own Gram-Schmidt orthogonalization program in this Mathematica notebook. You can use this notebook to check your answers to the book's exercises.
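If you prefer to check answers outside Mathematica, the same algorithm fits in a few lines of Python. This is a minimal sketch of my own, not the notebook's code; it returns an orthogonal (not normalized) basis.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Gram-Schmidt orthogonalization: subtract from each vector its
    projections onto the previously built orthogonal vectors, and skip
    vectors that are linearly dependent on the earlier ones."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b) / dot(b, b)          # projection coefficient
            w = [wi - c * bi for wi, bi in zip(w, b)]
        if any(abs(x) > 1e-12 for x in w):     # keep only nonzero remainders
            basis.append(w)
    return basis

B = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# The vectors in B are pairwise orthogonal and span the same subspace.
```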

Saturday, November 3, 2012

• Yesterday in class I did Exercise 19 in Section 6.3. Here is a similar problem in ${\mathbb R}^4$: Given three vectors: $\vec{u} = \begin{bmatrix} 1\\ 1\\ 1\\ 1 \end{bmatrix}, \quad \vec{v} = \left[\!\!\begin{array}{r} 5\\ -7\\ 1\\ 1 \end{array}\right], \quad \vec{w} = \begin{bmatrix} 1\\ 2\\ 3\\ 4 \end{bmatrix},$ find a vector $\vec{x}$ which is in the span of the vectors $\vec{u}, \vec{v}, \vec{w}$ and which is orthogonal to the vectors $\vec{u}$ and $\vec{v}$.
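One way to solve this problem: notice that $\vec{u} \cdot \vec{v} = 5 - 7 + 1 + 1 = 0$, so $\vec{u}$ and $\vec{v}$ are already orthogonal, and $\vec{x} = \vec{w} - \operatorname{proj}_{\vec{u}}\vec{w} - \operatorname{proj}_{\vec{v}}\vec{w}$ works. The computation can of course be done by hand; here is a Python sketch with exact rational arithmetic for checking.

```python
from fractions import Fraction

def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

u = [Fraction(c) for c in (1, 1, 1, 1)]
v = [Fraction(c) for c in (5, -7, 1, 1)]
w = [Fraction(c) for c in (1, 2, 3, 4)]

# Since u and v are orthogonal, subtracting from w its projections onto u
# and onto v leaves a vector x that is still in span{u, v, w} but is
# orthogonal to both u and v.
cu = dot(w, u) / dot(u, u)
cv = dot(w, v) / dot(v, v)
x = [wi - cu * ui - cv * vi for wi, ui, vi in zip(w, u, v)]

print(x)                      # (1/19) * (-26, -13, 10, 29)
print(dot(x, u), dot(x, v))   # 0 0
```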

Thursday, November 1, 2012

• Suggested problems for Section 6.3: 1, 2, 4, 5, 7, 10, 11, 13, 15, 16, 17, 19, 20, 21, 23

Tuesday, October 30, 2012

• Suggested problems for Section 6.2: 2, 3, 5, 8, 9, 11, 13, 15, 17, 19, 21, 23, 25, 26, 27, 29.
• As I mentioned several times in class, to understand the importance of the dot product you need to review the law of cosines. Here is a short page that I wrote about the law of cosines. Wikipedia offers six different proofs on its Law of cosines page. Find the proof that resonates best with you. Have you seen one of these proofs in high school? Which one?
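To see the connection concretely: for the triangle with sides $\vec{u}$, $\vec{v}$, and $\vec{u}-\vec{v}$, the law of cosines $c^2 = a^2 + b^2 - 2ab\cos\gamma$ is exactly the expansion $\|\vec{u}-\vec{v}\|^2 = \|\vec{u}\|^2 + \|\vec{v}\|^2 - 2\,\vec{u}\cdot\vec{v}$, since $\vec{u}\cdot\vec{v} = \|\vec{u}\|\,\|\vec{v}\|\cos\gamma$. A tiny Python check with vectors chosen arbitrarily for illustration:

```python
def dot(p, q):
    return sum(x * y for x, y in zip(p, q))

# Sides of the triangle: a = |u|, b = |v|, c = |u - v|.
u, v = [3.0, 4.0], [5.0, 0.0]
diff = [ui - vi for ui, vi in zip(u, v)]

lhs = dot(diff, diff)                        # c^2 = |u - v|^2
rhs = dot(u, u) + dot(v, v) - 2 * dot(u, v)  # a^2 + b^2 - 2 u.v
print(lhs, rhs)   # both 20.0
```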

Friday, October 26, 2012

• Suggested problems for Section 6.1: 1, 5, 7, 8, 9-12, 13, 15-18, 20, 22, 24, 25, 26, 27, 28, 29, 31

Tuesday, October 23, 2012

• Suggested problems for Section 5.8: 1, 2, 3, 4, 5, 6
• Yesterday in class I used this notebook with illustrations of the power method.
• The first exam is on Thursday. Please bring your calculators, just in case.
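The power method itself is short enough to sketch outside the notebook. Here is a minimal Python version; the example matrix is my own, chosen so that the dominant eigenvalue ($3$, with eigenvector along $(1,1)$) is known in advance.

```python
import math

def power_method(A, x0, steps=100):
    """Power method sketch: repeatedly apply A and normalize. For a matrix
    with a dominant eigenvalue, the iterates converge to a dominant
    eigenvector, and the Rayleigh quotient x.(Ax) converges to the
    dominant eigenvalue."""
    x = x0[:]
    m = len(x)
    for _ in range(steps):
        y = [sum(A[i][j] * x[j] for j in range(m)) for i in range(m)]
        n = math.sqrt(sum(c * c for c in y))
        x = [c / n for c in y]
    Ax = [sum(A[i][j] * x[j] for j in range(m)) for i in range(m)]
    rayleigh = sum(a * b for a, b in zip(Ax, x))
    return rayleigh, x

# Example: eigenvalues 3 and 1, with eigenvectors (1, 1) and (1, -1).
lam, vec = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
print(round(lam, 6))   # 3.0
```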

Friday, October 19, 2012

• Suggested problems for Section 5.6: 1, 2, 3, 4, 5, 6, 9, 12, 13, 15
• Today in class I used this notebook with illustrations of several dynamical systems. Here is a pdf printout of the notebook.
• I was asked to do Exercise 27 in Section 5.5. I solved it in this notebook.

Wednesday, October 17, 2012

• I updated my notebook with classical orthogonal polynomials. I added some explanations of the commands. If you have any questions please ask. Here is a pdf printout of the notebook.

Tuesday, October 16, 2012

• Today in class I demonstrated how to find Chebyshev polynomials of the first kind using a matrix representation of a linear transformation covered in Section 5.4. What I did in class and a little more is at my web-page about Chebyshev polynomials.
• Here is the notebook that I used in class today. This notebook contains automated calculations of Chebyshev polynomials and several other so-called classical orthogonal polynomials: Legendre, Laguerre, Bessel and Hermite polynomials.
• Here is the notebook that I used on Monday. It contains calculations of a reflection matrix in $\mathbb R^3$.
• Next we are moving on to Section 5.5. Suggested problems for Section 5.5: 1-6 (Are these matrices in "The Book of Beautiful Matrices"?), 7-12, 13, 16, 17, 18, 21, 25, 26.
• The Book of Beautiful Matrices consists of two-by-two matrices whose entries and eigenvalues are integers between -9 and 9. I consider only matrices with a nonnegative top-left entry. In addition, I consider only matrices with relatively prime entries. To recover a matrix that is omitted in this way, multiply one of the given matrices by an integer; the eigenvalues get multiplied by the same integer, and the eigenvectors remain unchanged.
• I divided the Book into three volumes: Volume 1 contains the matrices with real distinct eigenvalues, Volume 2 the matrices with non-real eigenvalues (whose real and imaginary parts are integers between -9 and 9), and Volume 3 the matrices with a repeated eigenvalue. The eigenvalues and a corresponding eigenvector (and a root vector for a repeated eigenvalue) are given for each matrix. The three volumes in pdf format are here:
• There are 4292 matrices in Volume 1. Here you can find Volume 1 ordered by eigenvalues.
• There are 1164 matrices in Volume 2. Here you can find Volume 2 ordered by the complex eigenvalues.
• There are 270 matrices in Volume 3. Here you can find Volume 3 ordered by repeated eigenvalues.
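The membership test behind Volumes 1 and 3 is easy to state: a $2\times 2$ integer matrix has integer eigenvalues exactly when the discriminant of its characteristic polynomial $\lambda^2 - (\operatorname{tr} A)\lambda + \det A$ is a perfect square. Here is a Python sketch of that test; this is my own illustration, not the code used to produce the Book.

```python
import math

def integer_eigenvalues(a, b, c, d):
    """Return the eigenvalues of [[a, b], [c, d]] if both are integers,
    else None. The eigenvalues are (tr +- sqrt(tr^2 - 4 det)) / 2, so
    they are integers exactly when the discriminant is a perfect square.
    (The parity of tr + r then works out automatically, since
    tr^2 - r^2 = 4 det; the check below is just a safety net.)"""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc < 0:
        return None                      # non-real eigenvalues (Volume 2 territory)
    r = math.isqrt(disc)
    if r * r != disc or (tr + r) % 2 != 0:
        return None
    return ((tr + r) // 2, (tr - r) // 2)

print(integer_eigenvalues(2, 1, 1, 2))   # (3, 1)
print(integer_eigenvalues(0, 1, 1, 0))   # (1, -1)
print(integer_eigenvalues(1, 1, 1, 0))   # None (irrational eigenvalues)
```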

Friday, October 12, 2012

• Suggested problems for Section 5.4: 1, 3 - 13, 17, 19 - 23, 27, 28

Tuesday, October 9, 2012

• Assigned exercises for Section 5.3 are 1, 3, 5, 6, 7, 8, 15, 17, 18, 21, 22, 23, 26, 27, 28, 31.
• Here is the notebook that I used in class today.

Monday, October 8, 2012

• Notice that I changed Exercise 18 in the assigned exercises for Section 5.2 to bold font-weight. This means that it is an important exercise. It shows that an eigenvalue of a matrix can have algebraic multiplicity 2 while the corresponding eigenspace is one-dimensional. A much simpler example of this phenomenon is exhibited by the matrix $\displaystyle \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. The characteristic polynomial of this matrix is $p(\lambda) = \lambda^2$. Thus $0$ is an eigenvalue of algebraic multiplicity 2. However, the corresponding eigenspace is one-dimensional; it is spanned by the vector $\displaystyle \begin{bmatrix} 1 \\ 0 \end{bmatrix}$.
• I have now posted a different proof of Theorem 2 in Section 5.1 here. I hope that studying both proofs will make it easier for you to internalize this proof.
• Recall that the eigenvalues of a matrix $A$ are the roots of the characteristic equation of $A$; that is, the solutions of $\det(A-\lambda I) = 0$. For an arbitrary $n\times n$ matrix $A$ the expression $\det(A-\lambda I)$ is a polynomial of degree $n$. For example, for my favorite matrix $M = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}$ the characteristic polynomial is $\det(M-\lambda I) = 11 \lambda^{10} - \lambda^{11} = \lambda^{10} (11 - \lambda).$ The characteristic equation of $M$ is $\lambda^{10} (11 - \lambda) = 0$. This equation has two roots $\lambda = 0$ and $\lambda = 11$. The multiplicity of the root $0$ is 10 and the multiplicity of the root $11$ is $1$.
• It is important to note the following terminology related to eigenvalues. Let $\lambda_1$ be an eigenvalue of a matrix $A$.
• The multiplicity of $\lambda_1$ as a root of the characteristic equation $\det(A-\lambda I) = 0$ is called the algebraic multiplicity of the eigenvalue $\lambda_1$.
• The dimension of the eigenspace corresponding to the eigenvalue $\lambda_1$ is called the geometric multiplicity of the eigenvalue $\lambda_1$.
• The algebraic multiplicity of an eigenvalue is always greater than or equal to the geometric multiplicity of that eigenvalue.
• I illustrate the terminology introduced in the previous item with my favorite matrix $M$.
• For each of the eigenvalues of $M$ the algebraic multiplicity is equal to its geometric multiplicity. This is a remarkable property. We will learn in Section 5.3 that this property is equivalent to the fact that the matrix $M$ is diagonalizable.
• Now I prove the statement in the previous item. The following table lists the eigenvalues of $M$ and, below them, the corresponding eigenvectors:
 $\lambda = 11$: the all-ones eigenvector $(1, 1, \ldots, 1)^T \in {\mathbb R}^{11}$.
 $\lambda = 0$ (listed ten times): the ten eigenvectors $\vec{e}_1 - \vec{e}_k$ for $k = 2, \ldots, 11$; that is, the vectors whose first entry is $1$, whose $k$-th entry is $-1$, and whose remaining entries are $0$.
• The eleven eigenvectors listed in the above table are linearly independent. One way to see this is to form the matrix whose columns are those vectors: $P = \left[ \begin{array}{rrrrrrrrrrr} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \end{array} \right]$ and calculate the determinant of this matrix. The calculation is easier than one would expect. Recall that combining the rows does not change the determinant. So, we replace the first row in $P$ with the sum of all the rows. We get the following matrix $\left[ \begin{array}{rrrrrrrrrrr} 11 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 \end{array} \right]$ whose determinant is the same as the determinant of $P$. But, the determinant of the last matrix is easy, it is $11 \neq 0$. This proves the linear independence of the eigenvectors listed in the table.
• Now we know that $P$ is invertible. What is its inverse? Since $P$ has special structure, one can almost guess the inverse of $P$: $P^{-1} = \frac{1}{11} \left[ \begin{array}{rrrrrrrrrrr} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & -10 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & -10 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & -10 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & -10 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & -10 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & -10 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & -10 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -10 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -10 & 1 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -10 \end{array} \right]$
• Based on what we will learn in Section 5.3 the following identity is true $P^{-1}\, M\, P = \left[ \begin{array}{rrrrrrrrrrr} 11 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]$ The point here is that the last matrix is diagonal and it has the eigenvalues of $M$ on the diagonal.
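All of the eigenvector claims above are easy to verify mechanically. Here is a small Python check (plain lists, no libraries) that $M$ times the all-ones vector is $11$ times that vector, and that $M$ sends each of the ten eigenvectors for the eigenvalue $0$ to the zero vector:

```python
n = 11
M = [[1] * n for _ in range(n)]              # the all-ones matrix M above

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

ones = [1] * n
assert matvec(M, ones) == [11] * n           # M (1,...,1) = 11 (1,...,1)

# Each e_1 - e_k (k = 2,...,11) is an eigenvector for the eigenvalue 0.
for k in range(1, n):
    v = [1] + [0] * (n - 1)
    v[k] = -1
    assert matvec(M, v) == [0] * n

print("all eigenvector checks pass")
```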

Friday, October 5, 2012

• Assigned exercises for Section 5.2 are 1-8 (find corresponding eigenvectors as well), 9, 13, 15, 18, 19, 20, 21, 24, 25, 27.

Monday, October 1, 2012

• Today in class I demonstrated how to use Mathematica to solve Exercises 17 and 19 in Section 4.7. Here are the notebooks with Exercise 4.7.17 and Exercise 4.7.19. The files are called Exercise_4_7_17.nb and Exercise_4_7_19.nb. Right-click on the underlined phrase "Exercise 4.7.17" or "Exercise 4.7.19"; in the pop-up menu that appears, your browser will offer to save the file in your directory. Make sure that you save it with exactly the same name. After saving the file you can open it with Mathematica version 5.2. You need to find a campus computer with Mathematica v 5.2 installed on it (for example BH 209, BH 215 and many more campus computers). You will find Mathematica v 5.2 as follows (this sequence might differ on different campus computers):
Start -> Programs -> Mathematics -> Mathematica.
Open Mathematica first, then open the specific file that you want to read in Mathematica. You can execute the entire file with the following menu sequence (in Mathematica):
Kernel -> Evaluation -> Evaluate Notebook.
• You can find more information on how to use Mathematica on my Mathematica page.
• If you have problems running the files that I posted, please let me know. If you spend some time learning how to use Mathematica, you will enhance your understanding of the math that we are studying.
• We also have Mathematica v 8. It is only available in BH 215. Since Mathematica v 5.2 is more widely available, I decided to use it in this class. These two versions are not fully compatible, but the command structure is very similar, and version 8 will usually recognize the differences and correct them.
• Next we will study Section 5.1. Suggested problems for Section 5.1: 1, 3, 4, 5, 6, 8, 11, 15, 16, 17, 19, 20, 24-27, 29, 30, 31.
• A related Wikipedia link: Eigenvalue, eigenvector and eigenspace.
• Here are animations of different matrices in action. In each scene the navy blue vector is the image of the sea green vector under multiplication by a matrix $A$. For easier visualization of the action, the heads of the vectors leave traces. Just by looking at the movies you can guess what the matrix is in each movie. You can also see what its eigenvalues and eigenvectors are: whether an eigenvalue is positive, negative, zero, complex, ...
• Place the cursor over the image to start the animation.
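If you want to confirm a guess from one of the animations, the eigenvalues of a $2\times 2$ matrix follow directly from the characteristic polynomial $\lambda^2 - (\operatorname{tr} A)\lambda + \det A = 0$. A short Python sketch; the two example matrices are mine, chosen to show one non-real case and one real positive case:

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from the characteristic polynomial
    lambda^2 - tr*lambda + det = 0, with complex results allowed."""
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(tr * tr - 4 * det)
    return (tr + s) / 2, (tr - s) / 2

# Rotation by 90 degrees: non-real eigenvalues +-i; in the animation the
# image vector circles the original with no invariant direction.
print(eigenvalues_2x2(0, -1, 1, 0))    # 1j and -1j
# Diagonal stretch: real positive eigenvalues along the coordinate axes.
print(eigenvalues_2x2(2, 0, 0, 0.5))   # 2 and 0.5 (as complex numbers)
```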

Thursday, September 27, 2012

• The information sheet
• We will start with the review of Section 4.7 Change of Basis. Suggested problems are 2, 3, 4, 6, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20.