1. Basics

A square matrix A is invertible if and only if \[ det(A) \ne 0 \]

Given a matrix \[ A = \begin{pmatrix} a & b\\ c & d \end{pmatrix}, \] its inverse is \[ A^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \] provided that \(ad - bc \ne 0\).
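The formula is easy to check numerically; below is a minimal sketch using NumPy (my addition, with arbitrary entries), compared against the library inverse.

```python
import numpy as np

# Arbitrary 2x2 matrix with nonzero determinant.
a, b, c, d = 4.0, 7.0, 2.0, 6.0
A = np.array([[a, b], [c, d]])

det = a * d - b * c                          # ad - bc, must be nonzero
A_inv = (1.0 / det) * np.array([[d, -b], [-c, a]])

print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```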

1.1 Some basic operations

\[ (AB)^{-1} = B^{-1}A^{-1} \] (note that the order reverses). Generally \[ (A+B)^{-1} \ne A^{-1} + B^{-1}\]

\[ (cA)^{-1} = c^{-1}A^{-1} = \frac{1}{c} A^{-1} \]
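These identities can be confirmed numerically. A quick sketch, assuming NumPy and random test matrices (which are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
c = 2.5

# (AB)^{-1} = B^{-1} A^{-1}: the order reverses.
print(np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A)))  # True
# (cA)^{-1} = (1/c) A^{-1}.
print(np.allclose(np.linalg.inv(c * A), (1 / c) * np.linalg.inv(A)))           # True
# (A+B)^{-1} is generally NOT A^{-1} + B^{-1}.
print(np.allclose(np.linalg.inv(A + B), np.linalg.inv(A) + np.linalg.inv(B)))  # False
```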

1.2 Properties of transposes

\[ (A+B)^T = A^T + B^T \] \[ (AB)^T = B^T A^T \] \[ (A^T)^{-1} = (A^{-1})^T \]
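As with the inverse, the transpose of a product reverses the order. Another small NumPy sketch (my addition, arbitrary test matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.allclose((A + B).T, A.T + B.T))                    # True
print(np.allclose((A @ B).T, B.T @ A.T))                    # True: order reverses
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # True
```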

2. Elementary Row Operations

Augmented matrix (shown here as \([A \mid I]\), the form used when computing \(A^{-1}\) by row reduction): \[ \left(\begin{array}{ccc|ccc} a & b & c & 1 & 0 & 0\\ d & e & f & 0 & 1 & 0\\ g & h & i & 0 & 0 & 1 \end{array}\right)\]

Row echelon form: a matrix is in row echelon form if the following conditions are met:

  • All rows consisting entirely of zeros are at the bottom.
  • The leading (first nonzero) entry of each nonzero row lies strictly to the right of the leading entry of the row above it.

Reduced row echelon form: in addition to the conditions above,

  • The leading entry of every nonzero row is 1.
  • Each leading 1 is the only nonzero entry in its column.
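A reduced row echelon form can be computed symbolically; the sketch below uses SymPy's Matrix.rref (my choice of tool, with arbitrary entries) on an \([A \mid I]\) augmented matrix, so the right-hand block becomes \(A^{-1}\) when A is invertible:

```python
from sympy import Matrix

# [A | I] for an arbitrary invertible 3x3 matrix A.
M = Matrix([[2, 1, 1, 1, 0, 0],
            [1, 3, 2, 0, 1, 0],
            [1, 0, 0, 0, 0, 1]])

rref_M, pivots = M.rref()  # returns (RREF matrix, pivot column indices)
print(pivots)              # (0, 1, 2): the left block reduced to I
print(rref_M[:, 3:])       # the right block is now A^{-1}
```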

3. Determinant

3.1 Minor and cofactor

Let \(A = [a_{ij}]_{n \times n}\) be a square matrix. The minor of entry \(a_{ij}\) of A, denoted by \(M_{ij}\), is the determinant of the sub-matrix obtained by deleting the \(i\)th row and \(j\)th column. The cofactor of entry \(a_{ij}\) is \(C_{ij} = (-1)^{i+j} M_{ij}\).
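The definitions translate directly into code. A minimal NumPy sketch (my addition; `minor` and `cofactor` are hypothetical helper names, and indices are 0-based as is usual in Python):

```python
import numpy as np

def minor(A, i, j):
    # Determinant of A with row i and column j deleted (0-based indices).
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

def cofactor(A, i, j):
    return (-1) ** (i + j) * minor(A, i, j)

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [1.0, 0.0, 6.0]])
print(minor(A, 0, 0))     # 24.0: det of [[4, 5], [0, 6]]
print(cofactor(A, 0, 1))  # 5.0: (-1)^(0+1) * det of [[0, 5], [1, 6]]
```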

3.2 Laplace Expansion

Let A be an n by n square matrix. The determinant of A is equal to \[ det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + ... + a_{in}C_{in} \quad \text{for any row } i, \] or \[ det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + ... + a_{nj}C_{nj} \quad \text{for any column } j. \]
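The expansion gives a direct (though exponentially slow) recursive algorithm for the determinant. A teaching-only NumPy sketch of expansion along the first row (my addition):

```python
import numpy as np

def det_laplace(A):
    # Cofactor expansion along row 0; O(n!) work, for illustration only.
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        sub = np.delete(A[1:, :], j, axis=1)       # delete row 0 and column j
        total += (-1) ** j * A[0, j] * det_laplace(sub)
    return total

A = np.random.default_rng(2).standard_normal((4, 4))
print(np.isclose(det_laplace(A), np.linalg.det(A)))  # True
```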

In general, the determinant of any triangular matrix (one whose entries above or below the diagonal are all zero) is the product of its diagonal elements.
Likewise, the determinant of any matrix that has a zero row or a zero column is zero.

3.3 Elementary row operations (Gaussian elimination operations)

Let A be a square matrix. Suppose \(A \to B\) through interchanging two rows, \(A \to C\) through multiplying a row by a nonzero number k, and \(A \to D\) through adding a multiple k of one row to another row. Then \[ det(B) = -det(A), \qquad det(C) = k \, det(A), \qquad det(D) = det(A) \]
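These three effects are easy to see numerically. A small sketch, assuming NumPy and an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
k = 4.0

B = A[[1, 0, 2], :]               # interchange rows 0 and 1
C = A.copy(); C[0] *= k           # multiply row 0 by k
D = A.copy(); D[1] += k * A[0]    # add k times row 0 to row 1

dA = np.linalg.det(A)
print(np.isclose(np.linalg.det(B), -dA))     # True
print(np.isclose(np.linalg.det(C), k * dA))  # True
print(np.isclose(np.linalg.det(D), dA))      # True
```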

3.4 Properties of determinants

\[ det(AB) = det(A)det(B) \] \[ det(A^{-1}) = \frac{1}{det(A)} \]
* If det(A) = 0, A is called a singular matrix; otherwise, it is called a non-singular matrix.

\[ det(kA) = k^n det(A) \]
* Let A and B differ from each other in a single row (column). If matrix C is constructed by adding the two differing rows (columns) of A and B, and the rest of the rows (columns) are the same as those in A and B, then

\[ det(C) = det(A) + det(B) \]
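A quick numerical confirmation of the product, inverse, and scaling rules (a NumPy sketch of my own, with arbitrary matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
k = 2.0

print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))      # True
print(np.isclose(np.linalg.det(k * A), k ** n * np.linalg.det(A)))            # True
```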

4. Linear Equations

For the matrix equation Ax = b, how do we solve for x? (A minimal worked sketch follows this list.)
1). Write the augmented matrix [A|b].
2). Use Gaussian elimination to reduce [A|b] to its reduced row echelon form, say [A’|b’].
3). If A’ = I, then we simply have \(x = b’ = A^{-1}b\).
4). If A’ has a zero row but the corresponding entry of b’ is not zero, then the system has no solution.
5). If all the entries of b’ corresponding to the zero rows of A’ are zero, but the non-zero rows of A’ do not contain an I, then there are infinitely many solutions.
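The sketch below solves a small system both ways: directly with NumPy, and via the augmented-matrix reduction described above using SymPy (both tools are my choice; the system is arbitrary):

```python
import numpy as np
from sympy import Matrix

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Direct solve (uses an LU factorization internally; preferable to forming A^{-1}).
print(np.linalg.solve(A, b))               # [0.8 1.4]

# The same system via RREF of the augmented matrix [A|b], in exact arithmetic.
aug, pivots = Matrix([[2, 1, 3],
                      [1, 3, 5]]).rref()
print(aug)                                 # left block is I, last column is x = b'
```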

4.1 Cramer’s rule

Consider the system of linear equations represented by Ax = b, where A is an invertible n by n matrix. Define \(A_{i}\) to be the matrix A with column i replaced by the vector b. Then we have \[ x_{i} = \frac{det(A_{i})}{det(A)} \quad \text{for all } i = 1,2,...,n. \]
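Cramer's rule is impractical for large systems (one determinant per unknown) but simple to state in code. A NumPy sketch of my own; `cramer_solve` is a hypothetical helper name:

```python
import numpy as np

def cramer_solve(A, b):
    # Solve Ax = b by Cramer's rule; assumes det(A) != 0. Teaching only.
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b                        # replace column i with b
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))                    # [0.8 1.4]
print(np.linalg.solve(A, b))                 # same answer
```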

5. Eigenvalues and Eigenvectors

5.1 Definition

Given a square matrix A, if \(\lambda \in \mathbb{C}\) and a vector \(\xi \ne 0\) satisfy \[ A\xi = \lambda \xi, \] then \(\lambda\) is called an eigenvalue of matrix A and \(\xi\) a corresponding eigenvector.

Note that the eigenvalues of a matrix are uniquely determined, but the corresponding eigenvectors are not: any nonzero scalar multiple of an eigenvector is again an eigenvector.
An eigenvector represents a direction on which A acts purely by scaling, and the scaling factor is the eigenvalue \(\lambda\).
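The defining relation \(A\xi = \lambda\xi\) can be checked directly. A minimal NumPy sketch (my addition, with an arbitrary symmetric example matrix):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are eigenvectors

for lam, xi in zip(eigvals, eigvecs.T):
    # A only rescales the eigen-direction xi, by the factor lam.
    print(lam, np.allclose(A @ xi, lam * xi))   # True for each pair
```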

5.2 Characteristic equation

\[ |A - \lambda I| = 0 \] is called the characteristic equation of A, and the expression \(|A - \lambda I|\) is called the characteristic polynomial of A. Solving the characteristic equation yields the eigenvalues.
After obtaining the eigenvalues, we may proceed to find the corresponding eigenvectors: fixing a specific eigenvalue \(\lambda_{1}\), we can find the corresponding eigenvectors by solving \[ (A - \lambda_{1} I) \xi = 0 \]

algebraic multiplicity: the number of times the eigenvalue occurs as a root of the characteristic polynomial
geometric multiplicity: the dimension of the eigenvalue's eigenspace, i.e., the number of linearly independent eigenvectors associated with it
In general, for any eigenvalue, its geometric multiplicity \(\le\) algebraic multiplicity.
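The classic case of a strict inequality is a Jordan-type block. A SymPy sketch (my addition) that exposes both multiplicities:

```python
from sympy import Matrix, symbols

lam = symbols('lambda')

# Eigenvalue 2 is repeated, but there is only one independent eigenvector.
A = Matrix([[2, 1],
            [0, 2]])

print(A.charpoly(lam))                   # lambda**2 - 4*lambda + 4 = (lambda - 2)**2
for value, alg_mult, basis in A.eigenvects():
    print(value, alg_mult, len(basis))   # 2, 2, 1: geometric 1 < algebraic 2
```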

5.3 Some theorems about eigenvalues and eigenvectors

1). If \(\lambda\) is an eigenvalue of A with corresponding eigenvectors \(\xi_{1}, ..., \xi_{m}\), then
\(\lambda^{-1}\) (if \(\lambda \ne 0\) and A is invertible) is an eigenvalue of \(A^{-1}\) with the same corresponding eigenvectors
\(\lambda^{k}\) is an eigenvalue of \(A^{k}\) with the same corresponding eigenvectors
\(t \lambda\) is an eigenvalue of \(tA\) for \(t \in \mathbb{R}\) with the same corresponding eigenvectors
2). Trace: the sum of the entries on the main diagonal.
3). Let A be an n by n matrix with eigenvalues \(\lambda_{1}, \lambda_{2},...,\lambda_{n}\) (possibly repeated). Then \[ tr(A) = \lambda_{1}+ \lambda_{2}+...+\lambda_{n}, \qquad det(A) = \lambda_{1} \lambda_{2} \cdots \lambda_{n}. \] (A numerical check follows below.)
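A quick NumPy check of 3), with an arbitrary real matrix (whose eigenvalues may be complex, so the comparisons are done in complex arithmetic):

```python
import numpy as np

A = np.random.default_rng(5).standard_normal((4, 4))
eigvals = np.linalg.eigvals(A)          # possibly complex for a general real matrix

print(np.isclose(np.trace(A), eigvals.sum()))        # True
print(np.isclose(np.linalg.det(A), eigvals.prod()))  # True
```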

4). If a matrix B is such that \(B = P^{-1}AP\) for some invertible matrix P (not necessarily the eigenvector matrix), then B has the same eigenvalues as A. Furthermore, tr(B) = tr(A) and det(A) = det(B).
In this case, B and A are called similar matrices. The transformation of A to B by the operation \(P^{-1}(\cdot)P\) is called a similarity transform.

5). A square matrix A is said to be diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that \(A = PDP^{-1}\). In other words, \(P^{-1}AP\) is a diagonal matrix. The matrix P is called a transition matrix. Note that if a repeated eigenvalue has fewer linearly independent eigenvectors than its algebraic multiplicity, then the matrix is not diagonalizable. (See the sketch below.)
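A minimal NumPy sketch of 5), assuming a matrix with distinct eigenvalues so that P is invertible:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)     # P's columns are the eigenvectors
D = np.diag(eigvals)

# With independent eigenvectors, P is invertible and A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```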

6). Let A be an n by n matrix. Then A is diagonalizable if and only if it has n linearly independent eigenvectors \(\xi_{1},\xi_{2},..., \xi_{n}\). That is, every eigenvalue has geometric multiplicity equal to its algebraic multiplicity.

7). Suppose A is diagonalizable, then

  • \(A^{T}\), \(A^{-1}\) (if it exists), and \(A^{k}\) are all diagonalizable
  • Any matrix similar to A is diagonalizable

8). A square matrix P is an orthogonal matrix if and only if \[ P^T P = I \] Let A be a real square matrix. Then A is orthogonally diagonalizable if and only if A is symmetric (the spectral theorem).
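For real symmetric matrices, NumPy's eigh returns an orthonormal eigenbasis directly; a short sketch of my own, with an arbitrary symmetric matrix:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # symmetric

eigvals, P = np.linalg.eigh(S)        # eigh is specialized for symmetric matrices
print(np.allclose(P.T @ P, np.eye(3)))             # True: P is orthogonal
print(np.allclose(S, P @ np.diag(eigvals) @ P.T))  # True: S = P D P^T
```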