Math 402/204
2024-09-25
Definition 1 | Matrix
A matrix is a rectangular array of numbers arranged in rows and columns. A matrix with \(m\) rows and \(n\) columns is said to be of size \(m \times n\), and its entry in row \(i\) and column \(j\) is denoted \(a_{ij}\).
Example 1 | Examples of Matrices
Here are some examples of matrices:
\[\begin{align} & \begin{bmatrix} 1 & 2 \\ 3 & 0 \\ -1 & 4 \end{bmatrix} & & \begin{bmatrix} e & \pi & -\sqrt{2} \\ 0 & \frac{1}{2} & 1\\ 0 & 0 & 0 \end{bmatrix} & & \begin{bmatrix} 2 \\ 3 \end{bmatrix} \end{align}\]
Definition 2 | Zero matrix
A zero matrix has every entry equal to zero. It is written \(\textbf{0} = [0]\) or simply \(0 = [0]\), or explicitly
\[\begin{align} \textbf{0} = \begin{bmatrix} 0 & 0 &\cdots & 0 \\ 0 & 0 &\cdots & 0 \\ \vdots & \vdots &\ddots & \vdots \\ 0 & 0 &\cdots & 0 \\ \end{bmatrix} \end{align}\]
Definition 3 | Identity matrix
An identity matrix is a square matrix whose diagonal entries are all equal to one and whose off-diagonal entries are all equal to zero, and is denoted by \(I\) or \(\textbf{I}\).
\[\begin{align} \textbf{I} = \begin{bmatrix} 1 & 0 &\cdots & 0 \\ 0 & 1 &\cdots & 0 \\ \vdots & \vdots &\ddots & \vdots \\ 0 & 0 &\cdots & 1 \\ \end{bmatrix} \end{align}\]
Identity matrix Cont’d
People also use the notation \(\textbf{I} = [\delta_{ij}]\), where the Kronecker delta \(\delta_{ij}\) is defined by
\[\delta_{ij} = \begin{cases} 1 & \text{when } i = j \\ 0 & \text{otherwise} \end{cases}\]
Definition 4 | Diagonal matrix
A diagonal matrix is a square matrix in which all the entries off the main diagonal are zero.
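Here are some examples (illustrative choices; any square matrix whose off-diagonal entries are zero qualifies, and the diagonal entries themselves may even be zero):
\[\begin{align} & \begin{bmatrix} 2 & 0 \\ 0 & 5 \end{bmatrix} & & \begin{bmatrix} 1 & 0 & 0\\ 0 & -3 & 0\\ 0 & 0 & 0 \end{bmatrix} & & \begin{bmatrix} 7 \end{bmatrix} \end{align}\]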
Definition 5 | Row vector, Column vector and Transpose of a matrix
Row vector: A row vector of dimension \(p\) is a \(1 \times p\) matrix: \[\begin{align} u &= \begin{bmatrix} u_{1} & u_{2} & \cdots & u_{p} \\ \end{bmatrix} \end{align}\]
Column vector: A column vector of dimension \(n\) is an \(n \times 1\) matrix:
\[\begin{align} v &= \begin{bmatrix} v_{1} \\ v_{2} \\ \vdots \\ v_{n} \\ \end{bmatrix} \end{align}\]
Transpose: The transpose of a matrix \(\textbf{A}\), denoted \(\textbf{A}^{t}\), is obtained by interchanging its rows and columns, so that \((\textbf{A}^{t})_{ij} = a_{ji}\). In particular, the transpose of a column vector is a row vector, and vice versa.
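For instance, transposing the first matrix from Example 1 turns its rows into columns:
\[\begin{align} \begin{bmatrix} 1 & 2 \\ 3 & 0 \\ -1 & 4 \end{bmatrix}^{t} &= \begin{bmatrix} 1 & 3 & -1\\ 2 & 0 & 4 \end{bmatrix} \end{align}\]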
Example 2 | Transpose of a matrix
What is the transpose of the given matrix?
\(\color{forestgreen}{\textbf{Solution 2 |}}\)
Definition 6 | Symmetric matrix
If \(\textbf{A}^{t} = \textbf{A}\), or equivalently \(a_{ij} = a_{ji}\) for all \(i\) and \(j\), then the matrix \(\textbf{A}\) is said to be symmetric.
Of course, a symmetric matrix must be a square matrix.
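For example, the matrix
\[\begin{align} \begin{bmatrix} 1 & 4 & 5\\ 4 & 2 & 6\\ 5 & 6 & 3 \end{bmatrix} \end{align}\]
is symmetric: interchanging its rows and columns leaves it unchanged.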
Example 3 | Symmetric matrix
Determine whether the given matrix is symmetric.
\(\color{forestgreen}{\textbf{Solution 3 |}}\)
Definition 7 | Equal matrices
Two matrices are equal if they have the same size and all of their corresponding entries are equal.
Example 4 | Equal matrices
Consider the matrices
\[\begin{align} A &= \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 2\\ 2 & 3 & 7 \end{bmatrix} & B &= \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 2\\ 2 & 3 & x \end{bmatrix} \end{align}\]
Are matrices \(A\) and \(B\) equal?
\(\color{forestgreen}{\textbf{Solution 4 |}}\) The matrices have the same size, and every pair of corresponding entries agrees except possibly the \((3,3)\) entries. Hence \(A = B\) if and only if \(x = 7\).
Definition 8 | Adding and Subtracting matrices
The sum/difference of two matrices \(\textbf{A}\) and \(\textbf{B}\) of the same size is defined by the sum/difference of corresponding entries, i.e., \(\textbf{A} + \textbf{B} = [a_{ij} + b_{ij}]\) and \(\textbf{A} - \textbf{B} = [a_{ij} - b_{ij}]\).
The equation \(\textbf{A} = \textbf{B}\) is equivalent to \(\textbf{A} - \textbf{B} = \textbf{0}\).
Example 5 | Adding and Subtracting matrices
Consider the matrices
\[\begin{align} A &= \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 2\\ 2 & 3 & 7 \end{bmatrix} & B &= \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 2\\ 2 & 3 & 5 \end{bmatrix} \end{align}\]
\[\begin{align} C &= \begin{bmatrix} 1 & 2\\ 1 & 2 \end{bmatrix} \end{align}\]
Then find \(A+B\), \(A-B\), \(A+C\), and \(B-C\).
\(\color{forestgreen}{\textbf{Solution 5 |}}\)
\[\begin{align} A + B &= \begin{bmatrix} 2 & 0 & 2\\ 0 & 4 & 4\\ 4 & 6 & 12 \end{bmatrix} \end{align}\]
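Subtracting corresponding entries gives
\[\begin{align} A - B &= \begin{bmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 2 \end{bmatrix} \end{align}\]
Because \(C\) is a \(2 \times 2\) matrix while \(A\) and \(B\) are \(3 \times 3\), the expressions \(A + C\) and \(B - C\) are undefined.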
Definition 9 | Matrix Trace and Scalar Multiple
If \(A\) is any matrix and \(c\) is any scalar, then the product \(cA\) is the matrix obtained by multiplying each entry of the matrix \(A\) by \(c\). The matrix \(cA\) is said to be a scalar multiple of \(A\).
In matrix notation, if \(A = [a_{ij}]\), then \[(cA)_{ij} = c(A)_{ij} = ca_{ij}\]
If \(A\) is a square matrix, then the trace of A, denoted by \(tr(A)\), is defined to be the sum of the entries on the main diagonal of \(A\). The trace of \(A\) is undefined if \(A\) is not a square matrix.
Example 6 | Matrix Trace and Scalar Multiple
Consider the matrix
\[\begin{align} A &= \begin{bmatrix} 1 & 0 & 1\\ 0 & 2 & 2\\ 2 & 3 & 7 \end{bmatrix} \end{align}\] and scalar \(c = 3\).
Find \(cA\) and \(tr(cA)\).
\(\color{forestgreen}{\textbf{Solution 6 |}}\)
\[\begin{align} cA &= \begin{bmatrix} 3 & 0 & 3\\ 0 & 6 & 6\\ 6 & 9 & 21 \end{bmatrix} \end{align}\] \(tr(cA) = 3 + 6 + 21 = 30\)
Definition 10 | Dot product of two vectors
Dot product of two vectors: The dot product of two vectors of the same dimension is the sum of the products of their corresponding entries:
\[\textbf{u} \cdot \textbf{v} = u_{1}v_{1} + u_{2}v_{2} + \cdots + u_{n}v_{n} = \sum_{i=1}^{n} u_{i}v_{i}\]
The dot product is also called an inner product.
Example 7 | Vector Dot Product
\(\textbf{a.}\) For the vectors \(\textbf{u} = [1, 2, 3]\) and \(\textbf{v} = [4, 5, 6]\), find \(\textbf{u} \cdot \textbf{v}\).
\(\textbf{b.}\) For the vectors \[\begin{align} \textbf{u} &= \begin{bmatrix} 2 \\ -3 \\ -1 \end{bmatrix} & \textbf{v} &= \begin{bmatrix} -5 \\ -3 \\ -1 \end{bmatrix} \end{align}\] find \(\textbf{u} \cdot \textbf{v}\).
\(\color{forestgreen}{\textbf{Solution 7 |}}\)
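Applying the definition of the dot product entrywise:
\(\textbf{a.}\) \(\textbf{u} \cdot \textbf{v} = 1\cdot 4 + 2\cdot 5 + 3\cdot 6 = 4 + 10 + 18 = 32\).
\(\textbf{b.}\) \(\textbf{u} \cdot \textbf{v} = 2\cdot(-5) + (-3)\cdot(-3) + (-1)\cdot(-1) = -10 + 9 + 1 = 0\).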
Definition 11 | Amplitude (length) of a vector
The amplitude of a vector \(\textbf{u}\) of dimension \(n\) is defined as \[|\textbf{u}| = \sqrt{{u_{1}}^{2} + {u_{2}}^{2} + \cdots + {u_{n}}^{2}}\].
Sometimes, the amplitude is also called length, or Euclidean length, or magnitude.
If the Euclidean length of \(\textbf{u}\) is equal to one, we say that the \(\textbf{u}\) is a unit vector. If every element of \(\textbf{u}\) is zero, then we say that \(\textbf{u}\) is a zero vector.
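For example, \(\textbf{u} = (3, 4)\) has length \(|\textbf{u}| = \sqrt{3^{2} + 4^{2}} = 5\), so \(\textbf{u}\) is not a unit vector, but \(\frac{1}{5}\textbf{u} = (3/5, 4/5)\) is.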
Definition 12 | Orthogonal and Orthonormal Vectors
By the definition of dot product, we have \[|\textbf{u}|^{2} = \textbf{u} \cdot \textbf{u}\].
If \(\textbf{u}\cdot \textbf{v} = 0\), we say that \(\textbf{u}\) and \(\textbf{v}\) are orthogonal. Further, if \(\textbf{u}\cdot \textbf{v} = 0\) and \(|\textbf{u}| = |\textbf{v}| = 1\), then we say that \(\textbf{u}\) and \(\textbf{v}\) are orthonormal.
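For example, the standard basis vectors \(\textbf{e}_{1} = (1, 0)\) and \(\textbf{e}_{2} = (0, 1)\) are orthonormal: \(\textbf{e}_{1}\cdot\textbf{e}_{2} = 1\cdot 0 + 0\cdot 1 = 0\) and \(|\textbf{e}_{1}| = |\textbf{e}_{2}| = 1\).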
Example 8 | Numerical example of the dot product
Compute (or state why it’s impossible to compute) the following dot products:
a. \((1, 2, 3)\cdot(4,-3,2)\)
b. \((3, 6, 2)\cdot(-1, 5, 2, 1)\), and
c. \((v_{1}, v_{2}, \cdots,v_{n})\cdot\textbf{e}_{j}\), where \(\textbf{e}_{j}\) is the \(j\)th standard basis vector (a single \(1\) in position \(j\) and \(0\)s elsewhere) and \(1\le j\le n\).
Solutions | Numerical example of the dot product
a. \((1, 2, 3)\cdot(4,-3,2)\) \[\begin{align} &= 1\cdot 4 + 2 \cdot (-3) + 3 \cdot 2\\ &= 4 - 6 + 6 \\ &= 4 \\ \end{align}\]
b. \((3, 6, 2)\cdot(-1, 5, 2, 1)\) does not exist, since these vectors do not have the same number of entries.
c. For this dot product to make sense, we have to assume that the vector \(\textbf{e}_{j}\) has \(n\) entries (the same number of entries as \((v_{1}, v_{2}, \cdots , v_{n})\)). Then \((v_{1}, v_{2}, \cdots , v_{n})\cdot \textbf{e}_{j}\) \[\begin{align} &= 0v_{1} + \cdots + 0v_{j-1} + 1v_{j} + 0v_{j+1}\\ & + \cdots + 0v_{n}\\ &= v_{j} \\ \end{align}\]
Example 9 | Numerical examples of vector length
Compute the lengths of the following vectors:
a. \((2, -5, 4, 6)\),
b. \((\cos(\theta), \sin(\theta))\), and
c. the main diagonal of a cube with side length \(1\).
Solutions | Numerical examples of vector length
a. \[\begin{align} ||(2, -5, 4, 6)|| &= \sqrt{2^{2} + (-5)^{2} + 4^{2} + 6^{2}}\\ &= \sqrt{81} \\ &= 9 \\ \end{align}\]
b. \[\begin{align} ||(\cos(\theta), \sin(\theta))|| &= \sqrt{\cos^{2}(\theta) + \sin^{2}(\theta)}\\ &= \sqrt{1} \\ &= 1 \\ \end{align}\]
c. The cube with side length \(1\) can be positioned so that it has one vertex at \((0, 0, 0)\) and its opposite vertex at \((1, 1, 1)\).
The main diagonal of this cube is then the vector \(\textbf{v} = (1, 1, 1)\), which has length \(||\textbf{v}|| = \sqrt{1^{2} + 1^{2} + 1^{2}} = \sqrt{3}\).
Definition | Matrix Multiplication
Let \(A\) be an \(m \times n\) matrix and \(B\) an \(n \times p\) matrix; then the product \(AB\) is an \(m \times p\) matrix. The \((i,j)\) entry of \(AB\) is the dot product of the \(i\)th row vector of \(A\) with the \(j\)th column vector of \(B\), so that
\[\begin{align} (AB)_{ij} &= a_{i1}b_{1j} + a_{i2}b_{2j} + \dots + a_{in}b_{nj}\\ &= \sum_{k=1}^{n} a_{ik}b_{kj} \end{align}\]
Example 1 | Definition of matrix multiplication
For the following matrices, find \(AB\):
\[\begin{align} A &= \begin{bmatrix} 1 & 2 & 4\\ 2 & 6 & 0 \end{bmatrix} & B &= \begin{bmatrix} 4 & 1 & 4 & 3\\ 0 & -1 & 3 & 1\\ 2 & 7 & 5 & 2 \end{bmatrix} \end{align}\]
\(\color{forestgreen}{\textbf{Solution 1 |}}\)
\[\begin{align} AB &= \begin{bmatrix} 12 & \phantom{-}27 & 30 & 13\\ 8 & -4 & 26 & 12\\ \end{bmatrix} \end{align}\]
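For instance, the \((1,2)\) entry of \(AB\) is the dot product of row 1 of \(A\) with column 2 of \(B\):
\[(AB)_{12} = 1\cdot 1 + 2\cdot(-1) + 4\cdot 7 = 1 - 2 + 28 = 27\]
The other entries are computed in the same way.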
Matrix Multiplication | as Linear Combinations
If \(A_{1}, A_{2}, \cdots A_{r}\) are matrices of the same size, and if \(c_{1}, c_{2}, \cdots c_{r}\) are scalars, then an expression of the form
\[c_{1}A_{1} + c_{2}A_{2} + \cdots + c_{r}A_{r}\] with coefficients \(c_{1}, c_{2}, \cdots c_{r}\)
is called a linear combination of \(A_{1}, A_{2}, \cdots A_{r}\)
Example 2 | Linear Combinations
For the following matrices, find \(AB\):
\[\begin{align} A &= \begin{bmatrix} 1 & 2 & 4\\ 2 & 6 & 0 \end{bmatrix} & B &= \begin{bmatrix} 4 & 1 & 4 & 3\\ 0 & -1 & 3 & 1\\ 2 & 7 & 5 & 2 \end{bmatrix} \end{align}\]
\(\color{forestgreen}{\textbf{Solution 2 |}}\)
\[\begin{align} AB &= \begin{bmatrix} 12 & \phantom{-}27 & 30 & 13\\ 8 & -4 & 26 & 12\\ \end{bmatrix} \end{align}\]
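To connect this with the definition above: each column of \(AB\) is a linear combination of the columns of \(A\), with coefficients taken from the corresponding column of \(B\). For the first column of \(AB\),
\[\begin{align} 4\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 0\begin{bmatrix} 2 \\ 6 \end{bmatrix} + 2\begin{bmatrix} 4 \\ 0 \end{bmatrix} &= \begin{bmatrix} 12 \\ 8 \end{bmatrix} \end{align}\]
which is indeed the first column of \(AB\) computed above.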
Matrix Multiplication | Column-Wise Form
The \(j\)th column of \(AB\) is \(A\) times the \(j\)th column of \(B\); in other words, if \(\textbf{b}_{1}, \textbf{b}_{2}, \cdots, \textbf{b}_{p}\) are the columns of \(B\), then \(AB = [A\textbf{b}_{1} \;\; A\textbf{b}_{2} \;\; \cdots \;\; A\textbf{b}_{p}]\).
Example 3 | Column-Wise Form
Use the column-wise form to compute \(AB\) for \[\begin{align} A &= \begin{bmatrix} 1 & 2\\ 3 & 4 \end{bmatrix} & B &= \begin{bmatrix} -1 & 1 & 1\\ \phantom{-}0 & 1 & 0 \end{bmatrix} \end{align}\]
Solution 3 | Column-Wise Form
The columns of \(B\) are \[\begin{align} b_{1} &= \begin{bmatrix} -1 \\ 0 \end{bmatrix} & b_{2} &= \begin{bmatrix} 1\\ 1 \\ \end{bmatrix} & b_{3} &= \begin{bmatrix} 1\\ 0 \\ \end{bmatrix} \end{align}\]
Multiplying each of these columns by \(A\) gives \[\begin{align} A\textbf{b}_{1} &= \begin{bmatrix} -1 \\ -3 \end{bmatrix} & A\textbf{b}_{2} &= \begin{bmatrix} 3\\ 7 \\ \end{bmatrix} \end{align}\]
\[\begin{align} A\textbf{b}_{3} &= \begin{bmatrix} 1\\ 3 \\ \end{bmatrix} \end{align}\]
Finally, placing these columns into a matrix gives us exactly \(AB\): \[\begin{align} AB &= \begin{bmatrix} -1 & 3 & 1\\ -3 & 7 & 3\\ \end{bmatrix} \end{align}\]
Matrix multiplication | Column-Row Expansion
If \(\textbf{c}_{1}, \cdots, \textbf{c}_{n}\) are the columns of \(A\) and \(\textbf{r}_{1}, \cdots, \textbf{r}_{n}\) are the rows of \(B\), then the product can also be written as the column-row expansion \(AB = \textbf{c}_{1}\textbf{r}_{1} + \textbf{c}_{2}\textbf{r}_{2} + \cdots + \textbf{c}_{n}\textbf{r}_{n}\).
Example 4 | Column-Row Expansion
Use the column-row expansion to compute \(AB\) for \[\begin{align} A &= \begin{bmatrix} 1 & \phantom{-}3\\ 2 & -1 \end{bmatrix} & B &= \begin{bmatrix} \phantom{-}2 & 0 & 4\\ -3 & 5 & 1 \end{bmatrix} \end{align}\]
Solution | Example 4
The column vectors of \(A\) and the row vectors of \(B\) are, respectively, \[\begin{align} c_{1} &= \begin{bmatrix} 1 \\ 2 \end{bmatrix} & c_{2} &= \begin{bmatrix} \phantom{-}3\\ -1\\ \end{bmatrix} \end{align}\] \[\begin{align} r_{1} &= \begin{bmatrix} 2 & 0 & 4 \\ \end{bmatrix} & r_{2} &= \begin{bmatrix} -3 & 5 & 1\\ \end{bmatrix} \end{align}\]
\[\begin{align} AB &= \begin{bmatrix} 1 \\ 2 \end{bmatrix}\begin{bmatrix} 2 & 0 & 4 \\ \end{bmatrix} + \begin{bmatrix} 3\\ -1 \end{bmatrix}\begin{bmatrix} -3 & 5 & 1\\ \end{bmatrix} & \\ &= \begin{bmatrix} 2 & 0 & 4\\ 4 & 0 & 8 \end{bmatrix} + \begin{bmatrix} -9 & 15 & 3\\ 3 & -5 & -1 \end{bmatrix} \end{align}\]
\[\begin{align} AB &= \begin{bmatrix} -7 & \phantom{-}15 & 7 \\ \phantom{-}7 & -5 & 7 \end{bmatrix} \end{align}\]
Systems of Linear Equations | Definition
A linear equation in the variables \(x_{1}, \cdots, x_{n}\) is an equation that can be written in the form \[a_{1}x_{1} + a_{2}x_{2} + \cdots + a_{n}x_{n} = b\] where \(b\) and the coefficients \(a_{1}, \cdots, a_{n}\) are real or complex numbers, usually known in advance.
The subscript \(n\) may be any positive integer.
Example 5 | Linear equations
Are these equations \(4x_{1} - 5x_{2} + 2 = x_{1}\) and \(x_{2} = 2(\sqrt{6} - x_{1}) + x_{3}\) linear?
\(\color{forestgreen}{\textbf{Solution 5| Linear equations}}\)
Both equations are linear because they can be rearranged algebraically as:
\(3x_{1} - 5x_{2} = -2\) and \(2x_{1} + x_{2} - x_{3} = 2\sqrt{6}\).
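By contrast, equations such as \(x_{1}x_{2} = 2\) or \(x_{2} = \sqrt{x_{1}} - 6\) are not linear, because of the product \(x_{1}x_{2}\) in the first and the term \(\sqrt{x_{1}}\) in the second; neither can be written in the form \(a_{1}x_{1} + \cdots + a_{n}x_{n} = b\).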
Systems of Linear Equations | Definition
A system of linear equations (or a linear system) is a collection of one or more linear equations involving the same variables, say \(x_{1}, \cdots, x_{n}\). An example is:
\[\begin{align} 2x - y + 1.5z &= \phantom{-}8\\ x \quad \quad \, -4z &= -7 \end{align}\]
A solution of the system is a list \((s_{1}, \cdots, s_{n})\) of numbers that makes each equation a true statement when the values \(s_{1}, \cdots, s_{n}\) are substituted for \(x_{1}, \cdots, x_{n}\), respectively.
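For instance, \((x, y, z) = (1, -3, 2)\) is a solution of the system above, since \(2(1) - (-3) + 1.5(2) = 8\) and \(1 - 4(2) = -7\).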
Systems of Linear Equations | Definition
The set of all possible solutions is called the solution set of the linear system.
Two linear systems are called equivalent if they have the same solution set.
A system of linear equations has either no solution, exactly one solution, or infinitely many solutions.
A system of linear equations is said to be consistent if it has either one solution or infinitely many solutions; a system is inconsistent if it has no solution.
Systems of Linear Equations | Geometric Interpretation
For a system of two linear equations in two variables, each equation represents a line in the plane, and exactly one of three cases occurs:
The two lines have different slopes and hence intersect at a unique point, so the system has exactly one solution, as shown in Fig. 1(a).
The two lines are identical (one equation is a nonzero multiple of the other), so there are infinitely many solutions, as shown in Fig. 1(b).
The two lines are parallel (have the same slope) and do not intersect, so the system is inconsistent, as shown in Fig. 1(c).
Matrix Notation
The essential information of a linear system can be recorded compactly in a rectangular array called a matrix. Given the system
\[\begin{align} x - 2y \,\, + z &= \, \, 0\\ \quad \, 2y - 8z &= \, \, 8\\ 5x \quad \quad -5z &= 10\\ \end{align}\]
with the coefficients of each variable aligned in columns, the matrix
\[\begin{align} \begin{bmatrix} 1 & -2 & \phantom{-}1\\ 0 & \phantom{-}2 & -8\\ 5 & \phantom{-}0 & -5 \end{bmatrix} \end{align}\]
is called the coefficient matrix (or matrix of coefficients) of the system, and the matrix
\[\begin{align} \begin{bmatrix} 1 & -2 & \phantom{-}1 & 0\\ 0 & \phantom{-}2 & -8 & 8\\ 5 & \phantom{-}0 & -5 & 10 \end{bmatrix} \end{align}\]
is called the augmented matrix of the system.
Solving Linear Systems | Elementary Row Operations
Solve the above system using elementary row operations, which are of three types:
(Replacement) Replace one row by the sum of itself and a multiple of another row.
(Interchange) Interchange two rows.
(Scaling) Multiply all entries in a row by a nonzero constant.
\(\color{forestgreen}{\textbf{Solution | }}\) Row reducing the augmented matrix (one possible sequence of operations is sketched below) gives \(x = 1\), \(y = 0\), \(z = -1\). Substituting these values into the left side of each equation, the results agree with the right side of the original system, so \((1, 0, -1)\) is a solution of the system.
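One illustrative sequence of row operations (other orders work equally well): replace \(R_{3}\) by \(R_{3} - 5R_{1}\), scale \(R_{2}\) by \(\tfrac{1}{2}\), then replace \(R_{3}\) by \(R_{3} - 10R_{2}\):
\[\begin{align} \begin{bmatrix} 1 & -2 & \phantom{-}1 & 0\\ 0 & \phantom{-}2 & -8 & 8\\ 5 & \phantom{-}0 & -5 & 10 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 & \phantom{-}1 & \phantom{-}0\\ 0 & \phantom{-}1 & -4 & \phantom{-}4\\ 0 & 10 & -10 & 10 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & -2 & \phantom{-}1 & \phantom{-}0\\ 0 & \phantom{-}1 & -4 & \phantom{-}4\\ 0 & \phantom{-}0 & 30 & -30 \end{bmatrix} \end{align}\]
The last row gives \(30z = -30\), so \(z = -1\); back-substitution then yields \(y = 4 + 4z = 0\) and \(x = 2y - z = 1\).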