PROBLEM SET 1

(1) Show that A^T A does not equal A A^T in general. (Proof and demonstration.)

Per the course text, matrix multiplication is not commutative, and thus the order of the operands affects the resulting product.

Proof:

If we take a general 2x2 matrix and its transpose,

A   = [ a11  a12 ]        A^T = [ a11  a21 ]
      [ a21  a22 ]              [ a12  a22 ]

then the two products are

AA^T  = [ a11^2 + a12^2        a11*a21 + a12*a22 ]
        [ a21*a11 + a22*a12    a21^2 + a22^2     ]

A^T A = [ a11^2 + a21^2        a11*a12 + a21*a22 ]
        [ a12*a11 + a22*a21    a12^2 + a22^2     ]

Comparing the (1,1) entries alone: a11^2 + a12^2 equals a11^2 + a21^2 only when a12^2 = a21^2, which does not hold for an arbitrary A. Thus AA^T and A^T A are NOT equal in general.

Demonstration:

#Define A and its transpose by hand (equivalently, AT <- t(A))
A  <- cbind(c(1,-1,6), c(-3,0,2), c(0,-8,6))
AT <- cbind(c(1,-3,0), c(-1,0,-8), c(6,2,6))

AAT_product <- A %*% AT
ATA_product <- AT %*% A

print(AAT_product)
##      [,1] [,2] [,3]
## [1,]   10   -1    0
## [2,]   -1   65  -54
## [3,]    0  -54   76
print(ATA_product)
##      [,1] [,2] [,3]
## [1,]   38    9   44
## [2,]    9   13   12
## [3,]   44   12  100

We can see from the result that AAT_product does not equal ATA_product.
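
If we want a programmatic check rather than comparing the printed matrices by eye, a minimal sketch using base R's identical():

#Compare the two products element by element; expected to return FALSE,
#since corresponding entries differ (e.g. 10 vs 38 in position [1,1])
identical(AAT_product, ATA_product)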

(2) For a special type of square matrix A, we get A^T A = A A^T. Under what conditions could this be true? (Hint: The Identity matrix I is an example of such a matrix.)

A sufficient condition is that A is symmetric, so that the transpose equals the matrix itself (A = A^T). In that case both products, AA^T and A^T A, reduce to the matrix multiplied by itself (A squared), so they must be equal. For example:

#Notice that A = AT
A <- cbind(c(1,0,2), c(0,1,3), c(2,3,1))
AT <- cbind(c(1,0,2), c(0,1,3),c(2,3,1))

AAT_product <- A %*% AT
ATA_product <- AT %*% A

print(AAT_product)
##      [,1] [,2] [,3]
## [1,]    5    6    4
## [2,]    6   10    6
## [3,]    4    6   14
print(ATA_product)
##      [,1] [,2] [,3]
## [1,]    5    6    4
## [2,]    6   10    6
## [3,]    4    6   14

Notice that the two products are equal matrices, and thus AAT_product = ATA_product for the provided case.
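
Since A = A^T here, both products are simply A multiplied by itself. A minimal sketch confirming that both match the matrix squared:

A_squared <- A %*% A
identical(A_squared, AAT_product)   #Expected TRUE: AA^T reduces to A squared
identical(A_squared, ATA_product)   #Expected TRUE: A^T A is the same matrix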

PROBLEM SET 2

Write an R function to factorize a square matrix A into LU or LDU, whichever you prefer.

I've elected to perform LU decomposition: finding two matrices, L and U, which are lower and upper triangular, respectively, and whose product LU equals A.

U will be found through the row reduction of A, and L will be found by keeping 1s along its diagonal and recording each elimination multiplier in the corresponding lower-triangular position as we row reduce A (i.e. if we subtract 2*R1 from R2 to zero out the (2,1) entry, then element (2,1) of L will be 2).

lu_soln <- function(A){
  
  #It's assumed that we'll receive a square matrix smaller than 5x5 whose pivots are nonzero (no row swaps/pivoting are performed).
  
  U <- A #Set U, upper triangular, equal to A prior to row reducing
  n <- dim(A)[1] #Assign matrix dimensions, being that it's square it's nxn
  L <- diag(n) #Initialize L, lower triangular, as the identity matrix of size n
  
  #We row reduce using nested for loops, updating L[i,j] with the multiplier values and U[i,] with the row reductions.
  #i is for row navigation (i.e. for a 3x3 matrix, we go to rows 2 and 3).
  #j is for column navigation (i.e. for a 3x3 matrix, we go to columns 1 and 2).
  for(i in 2:n){ 
    for(j in 1:(i-1)){
      L[i,j] <- U[i,j] / U[j,j]          #Multiplier that zeroes out U[i,j]; requires a nonzero pivot U[j,j]
      U[i,] <- U[i,] - L[i,j] * U[j,]    #Apply the row operation to U
    }
  }
  #Show the product of L and U to verify that it equals A
  check_A <- L %*% U
  print(check_A)
  
  #Return matrices L, U in list form.
  return(list(L, U))

}

A <- cbind(c(1,3,1), c(-1,1,0), c(2,0,1))
lu_soln(A)
##      [,1] [,2] [,3]
## [1,]    1   -1    2
## [2,]    3    1    0
## [3,]    1    0    1
## [[1]]
##      [,1] [,2] [,3]
## [1,]    1 0.00    0
## [2,]    3 1.00    0
## [3,]    1 0.25    1
## 
## [[2]]
##      [,1] [,2] [,3]
## [1,]    1   -1  2.0
## [2,]    0    4 -6.0
## [3,]    0    0  0.5

1st output matrix: verification of A as the product of LU
2nd output matrix: L
3rd output matrix: U

As can be observed above, our matrix factorization via LU decomposition of A, a 3x3 matrix, was successful: the product of the returned L and U reproduces A.
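
As an additional sanity check, a minimal sketch (the object names below are illustrative) that verifies the triangular structure of the returned factors and that their product reproduces A:

factors <- lu_soln(A)   #Note: the function also prints the L %*% U check when called
L_fact <- factors[[1]]
U_fact <- factors[[2]]

all(L_fact[upper.tri(L_fact)] == 0)        #Expected TRUE: L is lower triangular
all(U_fact[lower.tri(U_fact)] == 0)        #Expected TRUE: U is upper triangular
isTRUE(all.equal(L_fact %*% U_fact, A))    #Expected TRUE: the factorization reproduces A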