# Fall 2015 MATH 204: Introduction to Linear Algebra

Branko Ćurgus

Thursday, December 3, 2015

Wednesday, December 2, 2015

• Assigned exercises for Section 5.3 are 2, 3, 5, 8, 9, 12, 13, 16, 18, 20, 23, 24.
• Updated list of exercises for Section 5.2: 1-8, 11, 12, 14, 15 (in all of these problems you can find eigenvectors as well), and 9, 13, 18, 19, 20, 21, 24, 25, 27.

Saturday, November 21, 2015

• In this file you can find some problems in the form in which they could appear on the third exam.

Friday, November 20, 2015

• Towards the end of class today I introduced the concept of the characteristic equation for $2\!\times\!2$ matrices; see Section 5.2 on page 313. Related exercises: Section 5.2, 1-8 (in all of these problems you can find eigenvectors as well).
• I updated the list of topics that will be on the third exam.
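For a $2\!\times\!2$ matrix the characteristic equation is just a quadratic, so the eigenvalues can be computed directly from the trace and determinant. Here is a small illustrative sketch in Python (the example matrix is my own, not one from the book):

```python
# Characteristic equation of a 2x2 matrix A = [[a, b], [c, d]]:
# det(A - lambda*I) = lambda^2 - (a + d)*lambda + (a*d - b*c) = 0.
import math

def eigenvalues_2x2(a, b, c, d):
    """Solve lambda^2 - trace*lambda + det = 0 via the quadratic formula."""
    trace = a + d
    det = a * d - b * c
    disc = trace ** 2 - 4 * det
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    r = math.sqrt(disc)
    return ((trace + r) / 2, (trace - r) / 2)

# Example: A = [[4, 1], [2, 3]] has eigenvalues 5 and 2.
print(eigenvalues_2x2(4, 1, 2, 3))  # (5.0, 2.0)
```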

Thursday, November 19, 2015

• Read Section 5.1. Suggested problems for Section 5.1: 1, 3, 4, 5, 6, 8, 11, 15, 16, 17, 19, 20, 24-27, 29, 30, 31.
• A related Wikipedia link: Eigenvalue, eigenvector and eigenspace.
• Below are animations of different matrices in action. In each scene the navy blue vector is the image of the sea green vector under multiplication by a matrix $A$. For easier visualization of the action, the heads of the vectors leave traces.
• Just by looking at the movies you can guess the eigenvalues and eigenvectors of the featured matrix. In particular, it is easy to see whether an eigenvalue is positive, negative, zero, or complex. You can also approximately determine which matrix is featured in each movie.
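A guessed eigenvector can be checked by hand: $\mathbf v$ is an eigenvector of $A$ exactly when $A\mathbf v$ is a scalar multiple of $\mathbf v$. A rough numeric sketch of that check (the example matrix here is hypothetical, not one from the animations):

```python
# Check whether v is an eigenvector of A: A v must be a scalar multiple of v.

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def is_eigenvector(A, v, tol=1e-9):
    w = mat_vec(A, v)
    # read off the candidate eigenvalue from a nonzero entry of v
    k = next(i for i, x in enumerate(v) if abs(x) > tol)
    lam = w[k] / v[k]
    return all(abs(w[i] - lam * v[i]) <= tol for i in range(len(v))), lam

A = [[2, 1], [1, 2]]
print(is_eigenvector(A, [1, 1]))   # (True, 3.0)
print(is_eigenvector(A, [1, -1]))  # (True, 1.0)
print(is_eigenvector(A, [1, 0]))   # (False, 2.0) -- not an eigenvector
```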

Tuesday, November 17, 2015

• Suggested problems for Section 4.7: 4-10, 13, 14
• Here is an interesting problem about change of coordinates matrices.
• Problem. Consider two matrices $A$ and $B$ given by $A = \left[ \begin{array}{cccccc} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 4 & 5 & 6 & 7 & 8 \\ 3 & 6 & 6 & 7 & 8 & 9 \\ 1 & 2 & 3 & 3 & 3 & 3 \end{array} \right]$ and $B = \left[ \begin{array}{cccccc} 6 & 5 & 4 & 3 & 2 & 1 \\ 8 & 7 & 6 & 5 & 4 & 2 \\ 9 & 8 & 7 & 6 & 6 & 3 \\ 3 & 3 & 3 & 3 & 2 & 1 \end{array} \right]$ Notice that the columns of $B$ are just the columns of $A$ in reverse order. Hence $\operatorname{Col}(A) = \operatorname{Col}(B)$. Your tasks:
1. Row reduce matrices $A$ and $B$ to RREF.
2. Based on the respective RREFs determine the bases for $\operatorname{Col}(A)$ and $\operatorname{Col}(B)$. Denote by $\mathcal A$ the basis for $\operatorname{Col}(A)$ and by $\mathcal B$ the basis for $\operatorname{Col}(B)$.
3. Determine the change of coordinates matrices $\overset{\displaystyle{P}}{{\mathcal B} \leftarrow {\mathcal A}} \qquad \text{and} \qquad \overset{\displaystyle{P}}{{\mathcal A} \leftarrow {\mathcal B}}.$
4. Verify that these matrices are inverses of each other.
• Solution. Row reduction shows that $A = \left[ \begin{array}{cccccc} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 4 & 5 & 6 & 7 & 8 \\ 3 & 6 & 6 & 7 & 8 & 9 \\ 1 & 2 & 3 & 3 & 3 & 3 \end{array} \right] \sim \left[ \begin{array}{cccccc} 1 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & -1 & -2 \\ 0 & 0 & 0 & 1 & 2 & 3 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]$ and $B = \left[ \begin{array}{cccccc} 6 & 5 & 4 & 3 & 2 & 1 \\ 8 & 7 & 6 & 5 & 4 & 2 \\ 9 & 8 & 7 & 6 & 6 & 3 \\ 3 & 3 & 3 & 3 & 2 & 1 \end{array} \right] \sim \left[ \begin{array}{cccccc} 1 & 0 & -1 & -2 & 0 & 0 \\ 0 & 1 & 2 & 3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & \frac{1}{2} \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]$ Therefore $\mathcal A = \left\{\left[ \begin{array}{c} 1 \\ 2 \\ 3 \\ 1 \end{array} \right], \left[ \begin{array}{c} 3 \\ 5 \\ 6 \\ 3 \end{array} \right], \left[ \begin{array}{c} 4 \\ 6 \\ 7 \\ 3 \end{array} \right] \right\}$ and $\mathcal B = \left\{ \left[ \begin{array}{c} 6 \\ 8 \\ 9 \\ 3 \end{array} \right], \left[ \begin{array}{c} 5 \\ 7 \\ 8 \\ 3 \end{array} \right], \left[ \begin{array}{c} 2 \\ 4 \\ 6 \\ 2 \end{array} \right] \right\}.$ We read the matrix $\displaystyle \overset{\displaystyle{P}}{{\mathcal B} \leftarrow {\mathcal A}}$ from the RREF of $B$. This RREF has the coordinates of the basis vectors in $\mathcal A$ relative to the basis $\mathcal B$: $\overset{\displaystyle{P}}{{\mathcal B} \leftarrow {\mathcal A}} = \left[ \begin{array}{ccc} 0 & -2 & -1 \\ 0 & 3 & 2 \\ \frac{1}{2} & 0 & 0 \end{array} \right].$ We read the matrix $\displaystyle \overset{\displaystyle{P}}{{\mathcal A} \leftarrow {\mathcal B}}$ from the RREF of $A$. 
This RREF has the coordinates of the basis vectors in $\mathcal B$ relative to the basis $\mathcal A$: $\overset{\displaystyle{P}}{{\mathcal A} \leftarrow {\mathcal B}} = \left[ \begin{array}{ccc} 0 & 0 & 2 \\ -2 & -1 & 0 \\ 3 & 2 & 0 \end{array} \right].$ Now calculate $\left( \overset{\displaystyle{P}}{{\mathcal B} \leftarrow {\mathcal A}} \right) \left(\overset{\displaystyle{P}}{{\mathcal A} \leftarrow {\mathcal B}}\right) = \left[ \begin{array}{ccc} 0 & -2 & -1 \\ 0 & 3 & 2 \\ \frac{1}{2} & 0 & 0 \end{array} \right] \left[ \begin{array}{ccc} 0 & 0 & 2 \\ -2 & -1 & 0 \\ 3 & 2 & 0 \end{array} \right] = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right].$
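The final matrix product in the solution can be double-checked by a short computation. This sketch uses exact fractions so the entry $1/2$ causes no rounding issues:

```python
# Verify that the two change-of-coordinates matrices from the solution
# above multiply to the 3x3 identity matrix.
from fractions import Fraction as F

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

P_B_A = [[F(0), F(-2), F(-1)],
         [F(0), F(3),  F(2)],
         [F(1, 2), F(0), F(0)]]

P_A_B = [[F(0),  F(0),  F(2)],
         [F(-2), F(-1), F(0)],
         [F(3),  F(2),  F(0)]]

print(mat_mul(P_B_A, P_A_B))  # the 3x3 identity matrix
```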

Monday, November 16, 2015

• Today we solved problems 22 and 24 from Section 4.5.
• Suggested problems for Section 4.6: 3, 4, 5, 6, 7, 8, 9, 11, 13, 15, 17, 18, 23, 24, 27, 28, 29, 30 (there is a typo in this problem; the vector $\mathbf b$ should be in ${\mathbb R}^m$).

Friday, November 13, 2015

• Suggested problems for Section 4.5: 3, 6, 7, 8, 9, 10, 12, 13, 15, 18, 19, 20, 21, 22, 23, 24

Tuesday, November 10, 2015

• Suggested problems for Section 4.4: 3, 4, 7, 8, 9, 10, 11, 12, 13, 14, 27, 28

Monday, November 9, 2015

• We did Section 4.3: Linearly independent sets; bases today. Notice that I gave a different proof that the polynomials $1, t, t^2$ are linearly independent in the vector space ${\mathbb P}_2$. Similarly, you can give a different proof that $\cos t$ and $\sin t$ are linearly independent as vectors in the vector space of all continuous functions defined on $\mathbb R$. Suggested problems for Section 4.3: 3, 4, 5, 9, 10, 11, 13, 14, 15, 21, 22, 25, 33, 34
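One way to run the independence argument for $\cos t$ and $\sin t$: if $a\cos t + b\sin t = 0$ for all $t$, then evaluating at $t = 0$ and $t = \pi/2$ gives a $2\!\times\!2$ linear system whose coefficient matrix is invertible, forcing $a = b = 0$. A numeric sketch of that evaluation step:

```python
# If a*cos(t) + b*sin(t) = 0 for all t, sample at t = 0 and t = pi/2.
# The 2x2 coefficient matrix of the sampled equations has nonzero
# determinant, so the only solution is a = b = 0.
import math

t_values = [0.0, math.pi / 2]
M = [[math.cos(t), math.sin(t)] for t in t_values]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(det)  # 1.0 (up to rounding), hence only the trivial combination
```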

Thursday, November 5, 2015

• We did Section 4.2 today; suggested exercises: 1, 3, 7, 9, 12, 15, 17, 18, 19, 23, 24, 25, 26, 28, 31, 32, 33.

Friday, October 30, 2015

• We did Section 4.1 today. Recommended exercises: 1, 3, 5, 6, 7, 8, 9, 11, 13, 15, 16, 17, 18, 21, 22, 23, 24.
• I updated the list of topics that we covered so far, and those that will be on the exam tomorrow.
• In this file you can find some problems in the form in which they could appear on the second exam.

Thursday, October 29, 2015

• Suggested problems for Section 3.2: 5, 7, 9, 11, 16-20 (even), 21, 25, 31, 33, 34, 35, 40c
• Let E be an elementary matrix obtained from the identity matrix by switching two rows. The determinant of this matrix is -1. Here is a proof in four pictures.
• Below is a "click-by-click" proof. There are nine steps in this proof. I describe each step below.
1. This is the determinant that we want to calculate.
2. I emphasize that the $i$-th and $j$-th row in the identity matrix are interchanged.
3. We will calculate the $n\!\times\!n$ determinant by cofactor expansion along the $i$-th row.
4. Since the only nonzero entry in the $i$-th row is at the $j$-th place, the cofactor expansion equals $(-1)^{i+j}$ multiplied by an $(n-1)\times(n-1)$ determinant.
5. We will calculate the $(n-1)\!\times\!(n-1)$ determinant using cofactor expansion along the $(j-1)$-st row. Notice that the only nonzero entry in the $(j-1)$-st row is $1$, which is at the $i$-th position.
6. The previous $(n-1)\!\times\!(n-1)$ determinant calculates to $(-1)^{i+j-1}$ multiplied by the $(n-2)\!\times\!(n-2)$ determinant of the identity matrix.
7. The $(n-2)\!\times\!(n-2)$ determinant of the identity matrix calculates to $1$.
8. $(-1)^{i+j}(-1)^{i+j-1} = (-1)^{2i+2j-1}$.
9. $(-1)^{2i+2j-1} = -1$.

All entries left blank in the determinant below are zeros.
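The nine steps above can also be confirmed by brute force: build the row-swap elementary matrix and compute its determinant by cofactor expansion. A small sketch for a few choices of $n$, $i$, $j$:

```python
# Swapping two rows of the identity matrix gives an elementary matrix
# with determinant -1; check by cofactor expansion along the first row.

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def swap_elementary(n, i, j):
    E = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    E[i], E[j] = E[j], E[i]  # interchange the i-th and j-th rows of I_n
    return E

for n, i, j in [(3, 0, 2), (4, 1, 3), (5, 0, 4)]:
    print(det(swap_elementary(n, i, j)))  # -1 each time
```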

• Look at Wikipedia's Matrix page. It contains some stuff that we did and much that we didn't do.

Tuesday, October 27, 2015

• Suggested problems for Section 3.1: 1, 3, 5, 25-30, 12, 13, 37, 40, 41
• There are many ways that one can go about proving the Invertible Matrix Theorem. One way is presented in this diagram. Yesterday I presented a slight variation of this in class; see the board image. Please let me know if you do not understand the green and blue arrows. The proofs for the red arrows are either presented in the book or were done in class. If you cannot identify them, let me know.

Friday, October 23, 2015

• Today we started Section 2.3 Characterizations of invertible matrices. Reviewing Exercises 23, 24, 25 from Section 2.1 before reading this section is a good idea. Suggested problems for Section 2.3: 1, 3, 5, 8, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 26, 27, 33.
• I will summarize here what we did today. Consider the following exercise:
• Determine whether the matrix $A = \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ -1 & 1 & 1 \end{array}\right]$ is invertible.
• If $A$ is invertible find its inverse $A^{-1}$.
• To answer the first question in the exercise we use the implication

If RREF of $A$ is $I_3$, then $A$ is invertible.

This implication is proved in Theorem 7 in Section 2.2. This proof is important!
• So, we just row reduce $A$ to RREF and that will answer the first question in the exercise: \begin{align*} \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ -1 & 1 & 1 \end{array}\right] & \sim \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ 0 & 1 & 0 \end{array}\right] \\ & \sim \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{array}\right] \\ & \sim \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \\ & \sim \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \\ \end{align*}
• The row reduction above shows that $A$ is row equivalent to $I_3$. Therefore, by Theorem 7 in Section 2.2, $A$ is invertible.
• Now recall that each step in the row reduction can be achieved by multiplication by an elementary matrix.
| Step | Row operation | Elementary matrix | Inverse of the elementary matrix |
| --- | --- | --- | --- |
| 1st | The third row is replaced by the sum of the first row and the third row | $E_1 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right]$ | $E_1^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right]$ |
| 2nd | The third row and the second row are interchanged | $E_2 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right]$ | $E_2^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right]$ |
| 3rd | The third row is replaced by the sum of the third row and the second row multiplied by $(-2)$ | $E_3 = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{array}\right]$ | $E_3^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{array}\right]$ |
| 4th | The first row is replaced by the sum of the first row and the third row | $E_4 = \left[\!\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]$ | $E_4^{-1} = \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]$ |
• Next we use the elementary matrices $E_1$, $E_2$, $E_3$ and $E_4$ to reconstruct the row reduction above: \begin{align*} E_1 A & = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right] \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ -1 & 1 & 1 \end{array}\right] = \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ 0 & 1 & 0 \end{array}\right] \\ E_2 (E_1 A) & = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right] \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ 0 & 1 & 0 \end{array}\right] = \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0\\ 0 & 2 & 1 \end{array}\right] \\ E_3 (E_2 E_1 A) & =\left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{array}\right] \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0\\ 0 & 2 & 1 \end{array}\right] = \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array}\right] \\ E_4 (E_3 E_2 E_1 A) & = \left[\!\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array}\right] = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \end{align*}
• Because of the associativity of the matrix product we have $(E_4 E_3 E_2 E_1) A = I_3.$ Since we already know that $A$ is invertible, the last equality shows that $A^{-1} = E_4 E_3 E_2 E_1.$ Now calculate the product $E_4 E_3 E_2 E_1$: \begin{align*} E_1& = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right] \\ E_2 E_1 & = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{array}\right] = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right] \\ E_3 (E_2 E_1) & =\left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -2 & 1 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right] = \left[\begin{array}{rrr} 1 & 0 & 0 \\ 1 & 0 & 1\\ -2 & 1 & -2 \end{array}\right] \\ E_4 (E_3 E_2 E_1) & = \left[\!\begin{array}{rrr} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right] \left[\begin{array}{rrr} 1 & 0 & 0 \\ 1 & 0 & 1\\ -2 & 1 & -2 \end{array}\right] = \left[\begin{array}{rrr} -1 & 1 & -2 \\ 1 & 0 & 1\\ -2 & 1 & -2 \end{array}\right] \end{align*}
• Thus we calculated $A^{-1}= \left[\begin{array}{rrr} -1 & 1 & -2 \\ 1 & 0 & 1\\ -2 & 1 & -2 \end{array}\right].$
• A more direct way of calculating $A^{-1}$ is to row reduce the $3\times 6$ matrix $[A | I_3]$: \begin{align*} \left[\!\begin{array}{rrr|rrr} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 & 1 & 0 \\ -1 & 1 & 1 & 0 & 0 & 1 \end{array}\right] & \sim \left[\!\begin{array}{rrr|rrr} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \end{array}\right] \\ & \sim \left[\!\begin{array}{rrr|rrr} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 2 & 1 & 0 & 1 & 0 \end{array}\right] \\ & \sim \left[\!\begin{array}{rrr|rrr} 1 & 0 & -1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -2 & 1 & -2 \end{array}\right] \\ & \sim \left[\!\begin{array}{rrr|rrr} 1 & 0 & 0 & -1 & 1 & -2 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & -2 & 1 & -2 \end{array}\right] \\ \end{align*} Notice that in the above row reduction, the matrix $I_3$ is being transformed to $E_1$ at the 1st step, to $E_2 E_1$ at the 2nd step, to $E_3 (E_2 E_1)$ at the 3rd step and finally to $E_4 (E_3 E_2 E_1) = A^{-1}$ at the 4th step.
• The reason that I wrote all these detailed steps is to show that we obtained $A^{-1}$ as a product of elementary matrices. It is interesting to point out that at the same time we obtained $A$ as a product of elementary matrices. Recall that $(E_4 E_3 E_2 E_1) A = I_3.$ Multiplying the last equality consecutively by $E_4^{-1}$, $E_3^{-1}$, $E_2^{-1}$, $E_1^{-1}$ we get $A = E_1^{-1} E_2^{-1} E_3^{-1} E_4^{-1}.$ Thus, $A = \left[\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 2 & 1 \\ -1 & 1 & 1 \end{array}\right] = \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 2 & 1 \end{array}\right] \left[\!\begin{array}{rrr} 1 & 0 & -1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right]$ It is a good exercise in matrix arithmetic to confirm that the last equality is correct.
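The whole computation above is easy to re-check by machine. This sketch multiplies out $E_4 E_3 E_2 E_1$ and confirms that the product is $A^{-1}$:

```python
# Verify (E4 E3 E2 E1) A = I_3 using the matrices from the table above.

def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A  = [[1, 0, -1], [0, 2, 1], [-1, 1, 1]]
E1 = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]
E2 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
E3 = [[1, 0, 0], [0, 1, 0], [0, -2, 1]]
E4 = [[1, 0, 1], [0, 1, 0], [0, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

A_inv = mm(E4, mm(E3, mm(E2, E1)))
print(A_inv)               # [[-1, 1, -2], [1, 0, 1], [-2, 1, -2]]
print(mm(A_inv, A) == I3)  # True
```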
• After reading this post you should be able to solve an exercise stated like this: Consider the matrix $M = \left[\begin{array}{rrr} 3 & 3 & 2 \\ 3 & 2 & 1 \\ 2 & 1 & 0 \end{array}\right]$.
• Determine whether it is possible to write the matrix $M$ as a product of elementary matrices.
• If you claim that it is possible to write $M$ as a product of elementary matrices, then find elementary matrices whose product is $M$. If you claim that it is not possible to write $M$ as a product of elementary matrices, justify your answer.
• In exercises it is often convenient to have a matrix with integer entries whose inverse is also a matrix with integer entries. Such matrices are called unimodular matrices. I was surprised that there are a lot of such matrices. Here is a pdf file with all unimodular matrices with entries $-1,0,1,2,3$ whose inverses have entries among the ten digits and their opposites. I have omitted the matrices with 3 or more zeros. This pdf file is huge, 3119 pages with over 80000 matrices. If you ever need a unimodular matrix I hope you find it here.
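The unimodular property is easy to test: an integer matrix has an integer inverse exactly when its determinant is $1$ or $-1$ (by Cramer's rule, the inverse is the integer adjugate divided by the determinant). A quick check on the exercise matrix $M$ above:

```python
# An integer matrix is unimodular iff det = +-1; then the adjugate
# formula for the inverse produces only integer entries.

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

M = [[3, 3, 2], [3, 2, 1], [2, 1, 0]]
print(det3(M))  # 1, so M is unimodular and M^{-1} has integer entries
```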

Monday, October 19, 2015

• Today we started Section 2.2: The inverse of a matrix; suggested problems are: 1, 4, 5, 12, 13, 21, 22, 23, 24, 28, 32, 33, 34, 38.
• Please pay special attention to problems 22, 23, 24 from Section 2.1. In each of these problems you are asked to prove an implication. With any implication you should be aware of its converse and its contrapositive.
• Recall that the contrapositive of the implication $p \Rightarrow q$ is the implication $\neg q \Rightarrow \neg p$; where $\neg p$ stands for the negation of $p$ and $\neg q$ stands for the negation of $q$. Also recall that the contrapositive is equivalent to the original implication.
• Recall that the converse of the implication $p \Rightarrow q$ is the implication $q \Rightarrow p$. Also recall that the converse may be false even when the original implication is true.
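Both logic facts can be verified mechanically with a truth table over the four possible values of $(p, q)$; this little sketch does exactly that:

```python
# Truth-table check: p => q is equivalent to its contrapositive
# (not q) => (not p), but not to its converse q => p.
from itertools import product

def implies(p, q):
    return (not p) or q

rows = list(product([False, True], repeat=2))
contrapositive_ok = all(implies(p, q) == implies(not q, not p) for p, q in rows)
converse_ok = all(implies(p, q) == implies(q, p) for p, q in rows)
print(contrapositive_ok)  # True
print(converse_ok)        # False: they differ when exactly one of p, q holds
```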
• For Problem 22 in Section 2.1 write down the converse and answer whether it is true or false. Justify your answer.
• For Problem 24 in Section 2.1 write down the converse and answer whether it is true or false. Justify your answer. This problem is closely related to Problem 26 in Section 2.1.

Thursday, October 15, 2015

• Today we started Section 2.1: Matrix operations; suggested problems are: 1, 3, 5, 7, 9, 10, 11, 17, 20, 22, 23, 24, 25, 27, 28, 34

Friday, October 9, 2015

• Here is a list of topics that will be covered in this class. The topics that are not covered on this exam are in light gray.
• On exams I usually assign 4 problems. Often these problems have several parts (a), (b), ... In this file you can find some problems in the form in which they could appear on the exam.

Thursday, October 8, 2015

• Suggested problems for Section 1.9: 1, 3, 4, 5, 7, 8, 11, 12, 18, 19, 23, 25, 26, 27, 28, 29, 30

Tuesday, October 6, 2015

• We did Section 1.8 today. Suggested problems: 1-4, 12, 13-17, 19, 20, 25, 27, 28

Monday, October 5, 2015

• We did Section 1.7: Linear independence today. Suggested problems: 2, 4, 5, 8, 10, 11, 17, 21, 23, 24, 25, 26, 27, 28, 29, 32, 33, 34, 35, 36, 37, 38, 39, 40
• We skipped Section 1.6: Applications of Linear Systems. However, I talked about my favorite application of vectors: COLORS. I will demonstrate below how Mathematica represents colors. In other programs it is done similarly. For example, HTML uses the integers between 0 and 255, instead of the real numbers between 0 and 1 which are used in Mathematica.
• All colors are identified in Mathematica by a vector $\displaystyle \begin{bmatrix}x \\ y \\ z\end{bmatrix}$ with $0 \leq x \leq 1$, $0 \leq y \leq 1$, $0 \leq z \leq 1$. In other words, Mathematica identifies colors with the points in the unit cube in the $xyz$-space. In this setting the unit cube is called The Color Cube.
• Below is an image of the color cube with 27 colors emphasized.
• Some of the colors emphasized below have common names. For others I tried to find appropriate names.
• Here I adopt the following mathematical definitions of the adjectives "dark" and "light" for colors: For a specific COLOR we define the dark COLOR to be the color halfway between COLOR and BLACK, that is, the vector corresponding to COLOR scaled by $1/2$. For a specific COLOR we define the light COLOR to be the color halfway between COLOR and WHITE, that is, the sum of the vector $(1/2,1/2,1/2)$ and the vector corresponding to COLOR scaled by $1/2$.
• In this terminology maroon is just dark red, navy is dark blue, teal is dark cyan, purple is dark magenta, olive is dark yellow, gray is light black, or gray is dark white, salmon is light red, ultra pink is light magenta.
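The dark/light definitions above are just vector arithmetic on RGB triples in the unit cube; a minimal sketch:

```python
# dark(C) = C / 2;  light(C) = (1/2, 1/2, 1/2) + C / 2.
from fractions import Fraction as F

def dark(color):
    return tuple(F(x) / 2 for x in color)

def light(color):
    return tuple(F(1, 2) + F(x) / 2 for x in color)

RED = (1, 0, 0)
print(dark(RED))   # (1/2, 0, 0) -- maroon
print(light(RED))  # (1, 1/2, 1/2) -- salmon
```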
| Vector | Color |
| --- | --- |
| $\begin{bmatrix}0 \\ 0 \\ 0\end{bmatrix}$ | Black |
| $\begin{bmatrix}1/2 \\ 1/2 \\ 1/2\end{bmatrix}$ | Gray |
| $\begin{bmatrix}1 \\ 1 \\ 1\end{bmatrix}$ | White |
| $\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}$ | Red |
| $\begin{bmatrix}1/2 \\ 0 \\ 0\end{bmatrix}$ | Maroon |
| $\begin{bmatrix}1/2 \\ 1/2 \\ 0\end{bmatrix}$ | Olive |
| $\begin{bmatrix}1 \\ 1/2 \\ 0\end{bmatrix}$ | Orange |
| $\begin{bmatrix}1 \\ 1 \\ 0\end{bmatrix}$ | Yellow |
| $\begin{bmatrix}0 \\ 1 \\ 0\end{bmatrix}$ | Green |
| $\begin{bmatrix}0 \\ 1/2 \\ 0\end{bmatrix}$ | Dark Green |
| $\begin{bmatrix}1/2 \\ 1 \\ 0\end{bmatrix}$ | Chartreuse |
| $\begin{bmatrix}0 \\ 1/2 \\ 1/2\end{bmatrix}$ | Teal |
| $\begin{bmatrix}0 \\ 1 \\ 1/2\end{bmatrix}$ | Spring Green |
| $\begin{bmatrix}0 \\ 0 \\ 1\end{bmatrix}$ | Blue |
| $\begin{bmatrix}0 \\ 0 \\ 1/2\end{bmatrix}$ | Navy |
| $\begin{bmatrix}1/2 \\ 0 \\ 1/2\end{bmatrix}$ | Purple |
| $\begin{bmatrix}1 \\ 0 \\ 1\end{bmatrix}$ | Magenta |
| $\begin{bmatrix}0 \\ 1 \\ 1\end{bmatrix}$ | Cyan |
| $\begin{bmatrix}1/2 \\ 0 \\ 1\end{bmatrix}$ | Dark Violet |
| $\begin{bmatrix}0 \\ 1/2 \\ 1\end{bmatrix}$ | Sky Blue |
| $\begin{bmatrix}1/2 \\ 1/2 \\ 1\end{bmatrix}$ | Light Blue |
| $\begin{bmatrix}1 \\ 1/2 \\ 1\end{bmatrix}$ | Ultra Pink |
| $\begin{bmatrix}1/2 \\ 1 \\ 1\end{bmatrix}$ | Light Cyan |
| $\begin{bmatrix}1 \\ 0 \\ 1/2\end{bmatrix}$ | Magenta Red |
| $\begin{bmatrix}1 \\ 1/2 \\ 1/2\end{bmatrix}$ | Salmon |
| $\begin{bmatrix}1 \\ 1 \\ 1/2\end{bmatrix}$ | Light Yellow |
| $\begin{bmatrix}1/2 \\ 1 \\ 1/2\end{bmatrix}$ | Light Green |


• Thinking of colors as vectors helps us to understand a transition between two colors. Below I show three ways of interpreting a transition between Teal and Yellow.


Since Teal and Yellow are the heads of particular vectors in the Color Cube, to construct a transition I connected the heads with a line segment. Points on this line segment are the heads of special linear combinations of the vectors representing Teal and Yellow. As an exercise, write down the linear combinations that are used in the above transition.
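The linear combinations in question are convex combinations: $c(t) = (1-t)\,\text{Teal} + t\,\text{Yellow}$ with $t$ running from $0$ to $1$. A minimal sketch:

```python
# Points on the segment from Teal to Yellow in the Color Cube:
# c(t) = (1 - t) * TEAL + t * YELLOW, 0 <= t <= 1.

TEAL = (0.0, 0.5, 0.5)
YELLOW = (1.0, 1.0, 0.0)

def blend(t):
    return tuple((1 - t) * a + t * b for a, b in zip(TEAL, YELLOW))

for t in (0.0, 0.5, 1.0):
    print(t, blend(t))
# t = 0 gives Teal, t = 1 gives Yellow, t = 0.5 gives (0.5, 0.75, 0.25)
```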


In the above animation I used the colors from the line segment connecting Teal and Yellow to color the rectangles in the middle of the square.

Above is the unit circle colored using colors from the line segment connecting Teal and Yellow.

Friday, October 2, 2015

• Today we finished Section 1.5. We explained the relationship between the solution set of the homogeneous equation $A \mathbf x = \mathbf 0$ and the solution set of the nonhomogeneous equation $A \mathbf x = \mathbf b$; see Theorem 6 for details. Please recognize how this theorem is reflected when the solution set of $A \mathbf x = \mathbf b$ is written in parametric vector form. Suggested problems for Section 1.5: 1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 16, 19, 21, 23, 24, 26, 29, 32, 35, 37, 38, 39, 40.
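Theorem 6 says that every solution of $A\mathbf x = \mathbf b$ has the form $\mathbf p + \mathbf v$, where $\mathbf p$ is one particular solution and $\mathbf v$ solves $A\mathbf x = \mathbf 0$. A numeric sketch on a small system of my own (not one from the book):

```python
# If A p = b and A v = 0, then A (p + v) = b as well.

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 1], [0, 1, 1]]
p = [1, 1, 0]              # a particular solution
b = mat_vec(A, p)          # so b = [3, 1]
v = [1, -1, 1]             # solves A x = 0 (check: 1-2+1 = 0, -1+1 = 0)
x = [pi + vi for pi, vi in zip(p, v)]
print(mat_vec(A, x) == b)  # True: p + v also solves A x = b
```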

Thursday, October 1, 2015

• Today we lightly covered Section 1.5 which talks about writing solution sets of linear systems in parametric vector form. Suggested problems for Section 1.5: 1, 3, 5, 6, 7, 9, 11, 12, 13, 15, 16, 19, 21, 23, 24, 26, 29, 32, 35, 37, 38, 39, 40.
• To save time, today in class I used Mathematica to row reduce a matrix. Here is the simple Mathematica file that I created; it is called 20151001.nb. Right-click on the underlined word "Here"; in the pop-up menu that appears, your browser will offer to save the file in your directory. Make sure that you save it with exactly the same name. After saving the file you can open it with Mathematica 5.2. You will find Mathematica 5.2 on all campus computers in
Start -> All Programs -> Math Applications -> Mathematica.
Open Mathematica first, then open 20151001.nb from Mathematica.
• If you have problems using the files I post here, please let me know. If you spend some time learning how to use Mathematica, you will enhance your understanding of the math we are studying.
• We also have Mathematica version 8, but it is only available in BH 215. Since Mathematica 5.2 is more widely available, I decided to use it in this class. These two versions are not compatible, although the command structure is very similar; version 8 will usually recognize the differences and correct them.
• You can find more information on how to use Mathematica on my Mathematica v5.2 page and Mathematica v8 page.

Tuesday, September 29, 2015

• Today we finished Section 1.3: Vector Equations and started Section 1.4: Matrix Equation $A {\mathbf x} = \mathbf b$. Suggested problems for Section 1.4: 1, 5, 13, 14, 15, 16, 17-20, 21, 22, 23, 24, 25, 26, 35, 36

Monday, September 28, 2015

• Today we finished Section 1.2 and started Section 1.3.
• Suggested problems for Section 1.3: 1, 5, 9, 15, 17, 18, 19, 21-25, 32
• As I pointed out in class, a very good exercise in understanding RREF (reduced row echelon form) is to write down all $2\!\times\!3$ matrices which are in RREF. There are seven of them. Below I list the matrix with no leading entries, then all matrices with one leading entry, then the matrices with two leading entries. Those are all possibilities since each leading entry needs its own row and we are studying matrices with only two rows. \begin{align*} & \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \\ & \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix},\quad \begin{bmatrix} 0 & 1 & * \\ 0 & 0 & 0 \end{bmatrix},\quad \begin{bmatrix} 1 & * & * \\ 0 & 0 & 0 \end{bmatrix},\\ & \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & * & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 & * \\ 0 & 1 & * \end{bmatrix} \end{align*}
• Similarly, here are all possible $3\!\times\!4$ matrices in RREF (reduced row echelon form). There are fifteen of them. \begin{align*} & \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \\ & \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 0 & 0 & 1 & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},\quad \begin{bmatrix} 0 & 1 & * & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & * & * & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix},\\ & \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 0 & 1 & * & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & * & * & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & * & 0 & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 & * & * \\ 0 & 1 & * & * \\ 0 & 0 & 0 & 0 \end{bmatrix}, \\ & \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & * & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},\quad \begin{bmatrix} 1 & 0 & * & 0 \\ 0 & 1 & * & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 & 0 & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \end{bmatrix}, \end{align*}
• Another good exercise is to think of the above matrices as augmented matrices of systems of equations and state for each corresponding system whether it has: No Solutions (NS), Unique Solution (US), Infinitely Many Solutions (MS).
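The NS/US/MS decision depends only on where the pivots are: a pivot in the augmented column means no solutions; otherwise, a pivot in every variable column means a unique solution, and a free variable column means infinitely many. A sketch of that rule on concrete augmented RREFs:

```python
# Classify a system from the RREF of its augmented matrix [A | b].

def classify(rref):
    n_vars = len(rref[0]) - 1        # last column is the augmented column
    pivot_cols = set()
    for row in rref:
        for j, entry in enumerate(row):
            if entry != 0:
                pivot_cols.add(j)    # leftmost nonzero entry = leading entry
                break
    if n_vars in pivot_cols:
        return "NS"                  # pivot in the augmented column
    return "US" if len(pivot_cols) == n_vars else "MS"

print(classify([[1, 0, 2], [0, 1, 3]]))  # US
print(classify([[1, 4, 2], [0, 0, 0]]))  # MS
print(classify([[0, 1, 0], [0, 0, 1]]))  # NS
```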
• Another interesting exercise, but for Math 309, is to calculate how many $m\!\times\!n$ matrices are in RREF.
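If we count the starred patterns above (each $*$ pattern counted once), a pattern is completely determined by its set of pivot columns, so the count is $\sum_{k=0}^{\min(m,n)} \binom{n}{k}$. A quick check that this matches the lists above:

```python
# Number of m x n RREF patterns = sum of C(n, k) for k = 0..min(m, n),
# since choosing the pivot columns determines the pattern.
from math import comb

def count_rref_patterns(m, n):
    return sum(comb(n, k) for k in range(min(m, n) + 1))

print(count_rref_patterns(2, 3))  # 7, matching the 2x3 list
print(count_rref_patterns(3, 4))  # 15, matching the 3x4 list
```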

Friday, September 25, 2015

• Read Section 1.2. Suggested problems for Section 1.2: 3, 5, 6, 7, 8, 12, 17-31
• Some useful links:
• A matrix whose all entries are zero is called a zero matrix. A row of a matrix is said to be a zero row if all entries in that row are zero. The leftmost nonzero entry of a nonzero row is called a leading entry. The zeros preceding the leading entry are called leading zeros of a row. All entries of a zero row are leading zeros.
• This is PlanetMath's definition of row echelon form:
A matrix is said to be in row echelon form if each nonzero row has strictly more leading zeros than the previous row.
• This is a restatement of Wikipedia's definition of reduced row echelon form:
A nonzero matrix which is in row echelon form is said to be in reduced row echelon form if the leading entries of all nonzero rows are equal to 1 and this 1 is the only nonzero entry in its column.
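The two definitions above translate directly into a checking procedure; here is an illustrative sketch:

```python
# Row echelon form: each nonzero row has strictly more leading zeros than
# the previous row (zero rows must come last). Reduced: every leading
# entry is 1 and is the only nonzero entry in its column.

def leading_zeros(row):
    k = 0
    while k < len(row) and row[k] == 0:
        k += 1
    return k  # equals len(row) for a zero row

def is_rref(M):
    n = len(M[0])
    prev = -1
    seen_zero_row = False
    for i, row in enumerate(M):
        lz = leading_zeros(row)
        if lz == n:                 # zero row
            seen_zero_row = True
            continue
        if seen_zero_row or lz <= prev:
            return False            # not even in row echelon form
        if row[lz] != 1:
            return False            # leading entry must be 1
        if any(M[r][lz] != 0 for r in range(len(M)) if r != i):
            return False            # leading 1 must be alone in its column
        prev = lz
    return True

print(is_rref([[1, 0, 2], [0, 1, 3]]))  # True
print(is_rref([[1, 2, 0], [0, 0, 2]]))  # False: leading entry is 2, not 1
```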

Thursday, September 24, 2015

• The information sheet
• We will start with Section 1.1 Systems of Linear Equations. Suggested problems are 3, 5, 6, 7, 9, 11, 12, 16, 17, 18, 21, 22, 23, 24, 25, 27, 31, 33.