For a special type of square matrix \(A\), we get \(A^T A = A A^T\). Under what conditions could this be true? (Hint: the identity matrix \(I\) is an example of such a matrix.)
In general, such matrices are called normal matrices, and they can be diagonalized by a unitary matrix, i.e. they can be expressed in the form \(A = U^{-1} D U\), where \(U U^{-1} = I\), \(U^{-1} = U^H\), and \(D\) is a diagonal matrix, one whose only non-zero entries fall on the diagonal. This is a result of the spectral theorem. Assuming the above conditions hold, in \(\mathbb{R}^{n \times n}\) the Hermitian operator simplifies to the transpose of the matrix, so \(U^{-1} = U^T\), and the condition simplifies to requiring that the matrix \(A\) be symmetric, i.e. we set out to prove that \(A^T = A\). Since \(D\) is a diagonal matrix, \(D = D^T\), and since \(U^{-1} = U^T\), we also have \((U^{-1})^T = U\). Therefore
\[
A^T = (U^{-1} D U)^T = U^T D^T (U^{-1})^T = U^{-1} D U = A.
\]
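As a quick numerical illustration (not part of the proof), a small R check can confirm that a symmetric matrix commutes with its transpose and that its eigendecomposition recovers it. The specific matrix below is an arbitrary example of my own choosing:

```{r}
# Illustrative check (arbitrary example matrix, not from the assignment):
# a real symmetric matrix A satisfies t(A) %*% A == A %*% t(A),
# and the spectral theorem gives A = U^{-1} D U with an orthogonal U.
A <- matrix(c(2, 1, 0,
              1, 3, 1,
              0, 1, 2), nrow = 3, byrow = TRUE)

all.equal(t(A) %*% A, A %*% t(A))    # TRUE: A is normal

e <- eigen(A)
U <- t(e$vectors)                    # rows of U are eigenvectors, so U^{-1} = t(U)
D <- diag(e$values)
all.equal(A, t(U) %*% D %*% U)       # TRUE: A is reconstructed as U^{-1} D U
```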
Matrix factorization is a very important problem. There are supercomputers built just to do matrix factorization. Every second you are on an airplane, matrices are being factorized. Radars that track flights use a technique called Kalman filtering. At the heart of Kalman filtering is a matrix factorization operation. Kalman filters solve linear systems of equations when they track your flight using radar. Write an R function to factorize a square matrix A into LU or LDU, whichever you prefer. Please submit your response in an R Markdown document using our class naming convention, e.g. LFulton_Assignment2_PS2.png. You don’t have to worry about permuting rows of A, and you can assume that A is less than 5x5 if you need to hard-code any variables in your code. If you are doing the entire assignment in R, then please submit only one markdown document for both problems.
I will utilize the Doolittle method to perform the LU decomposition. This method assumes that a square matrix can be decomposed into the form \(A = LU\), where \(L\) is a lower triangular matrix and \(U\) is an upper triangular matrix.
For example:
\[ \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ l_{2,1} & 1 \\ \end{bmatrix} \begin{bmatrix} u_{1,1} & u_{1,2} \\ 0 & u_{2,2}\\ \end{bmatrix} = \begin{bmatrix} u_{1,1} & u_{1,2} \\ l_{2,1} \cdot u_{1,1} & l_{2,1} \cdot u_{1,2} + u_{2,2} \\ \end{bmatrix} \]
\[ u_{1,1} = a_{1,1} \; , \; u_{1,2} = a_{1,2} \] \[ l_{2,1} = \frac{a_{2,1}}{u_{1,1}} \; , \; u_{2,2} = a_{2,2} - l_{2,1} \cdot u_{1,2} \]
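To make the \(2 \times 2\) case concrete, here is a small numeric illustration of these formulas in R (the matrix is an arbitrary example, not one given in the assignment):

```{r}
# Arbitrary 2x2 example to illustrate the formulas above
A <- matrix(c(4, 3,
              6, 3), nrow = 2, byrow = TRUE)

u11 <- A[1, 1]               # u_{1,1} = a_{1,1} = 4
u12 <- A[1, 2]               # u_{1,2} = a_{1,2} = 3
l21 <- A[2, 1] / u11         # l_{2,1} = a_{2,1} / u_{1,1} = 1.5
u22 <- A[2, 2] - l21 * u12   # u_{2,2} = a_{2,2} - l_{2,1} * u_{1,2} = -1.5

L <- matrix(c(1, 0, l21, 1), nrow = 2, byrow = TRUE)
U <- matrix(c(u11, u12, 0, u22), nrow = 2, byrow = TRUE)
all.equal(A, L %*% U)        # TRUE
```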
It is possible to work backwards to determine values for \(u_{i,j}\) and \(l_{i,j}\). This generalizes to the following:
\[ u_{i,j} = \begin{cases} a_{1,j} & , i = 1 \\ a_{i,j} - \sum_{k=1}^{i-1} l_{i,k} u_{k,j} & , i > 1 \end{cases} \quad \text{for } j \ge i \]
N.B. In the code below, since we are initializing the lower and upper matrices to the \(0\) matrix, it is sufficient to code only the second case for all values of \(j\) for the upper triangular matrix, because the sum is empty when \(i = 1\).
\[ l_{i,j} = \begin{cases} 1 & , i = j \\ \dfrac{a_{i,j} - \sum_{k=1}^{j-1} l_{i,k} u_{k,j}}{u_{j,j}} & , i > j \end{cases} \]
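A minimal sketch of a function following these recurrences is shown below. The function name `lu_decompose` and the returned list structure are my own illustrative choices, and the sketch assumes no row pivoting is required (as the assignment allows):

```{r}
# Sketch of a Doolittle LU factorization (no pivoting), following the
# recurrences above. L and U start as 0 matrices; the empty sums handle i = 1.
lu_decompose <- function(A) {
  n <- nrow(A)
  L <- matrix(0, n, n)
  U <- matrix(0, n, n)

  for (i in 1:n) {
    # Row i of U: u_{i,j} = a_{i,j} - sum_{k<i} l_{i,k} u_{k,j}
    for (j in i:n) {
      U[i, j] <- A[i, j] - sum(L[i, seq_len(i - 1)] * U[seq_len(i - 1), j])
    }
    L[i, i] <- 1   # unit diagonal of L (Doolittle convention)
    # Column i of L: l_{j,i} = (a_{j,i} - sum_{k<i} l_{j,k} u_{k,i}) / u_{i,i}
    if (i < n) {
      for (j in (i + 1):n) {
        L[j, i] <- (A[j, i] - sum(L[j, seq_len(i - 1)] * U[seq_len(i - 1), i])) / U[i, i]
      }
    }
  }
  list(L = L, U = U)
}

# Quick check on an arbitrary 3x3 example
A <- matrix(c(2, -1, 1,
              4,  3, 1,
              6,  1, 5), nrow = 3, byrow = TRUE)
res <- lu_decompose(A)
all.equal(A, res$L %*% res$U)   # TRUE if the recurrences are applied correctly
```

Using `seq_len(i - 1)` makes the sums empty vectors when \(i = 1\), so the first row of \(U\) reduces to \(u_{1,j} = a_{1,j}\) without a separate case.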