Problem Set 1

In this problem, we'll verify using R that the SVD and eigendecomposition are related as worked out in the weekly module. Given the 2 × 3 matrix \(A\)

\[A = \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{bmatrix}\]

write code in R to compute \(X = AA^T\) and \(Y = A^TA\). Then, compute the eigenvalues and eigenvectors of \(X\) and \(Y\) using the built-in commands in R. Next, compute the left-singular vectors, singular values, and right-singular vectors of \(A\) using the svd command. Examine the two sets of singular vectors and show that they are indeed eigenvectors of \(X\) and \(Y\). In addition, show that the two non-zero eigenvalues of \(X\) and \(Y\) (the third eigenvalue of \(Y\) will be very close to zero, if not exactly zero) are the same and are the squares of the non-zero singular values of \(A\). Your code should compute all of these vectors and scalars and store them in variables.

Compute \(X = AA^T\) and \(Y = A^TA\)

A <- matrix(c(1, 2, 3, -1, 0, 4), nrow = 2, byrow = TRUE)
#Compute the transpose of a matrix
my_transpose <- function(A) {
  #Create a temp matrix with the reversed dimensions (n x m) for the transpose
  #(avoid naming it T, which masks R's shorthand for TRUE)
  A_t <- matrix(0, nrow = ncol(A), ncol = nrow(A))
  #Copy each entry A[i, j] into position [j, i] of the transpose
  for (i in 1:nrow(A)) {
    for (j in 1:ncol(A)) {
      A_t[j, i] <- A[i, j]
    }
  }
  return(A_t)
}

#Calculate the transpose of A
A_t <- my_transpose(A)
A_t
##      [,1] [,2]
## [1,]    1   -1
## [2,]    2    0
## [3,]    3    4
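
As a quick sanity check (an addition beyond the assignment prompt), the hand-rolled transpose can be compared against base R's built-in t():

#my_transpose should agree with base R's t()
all.equal(my_transpose(A), t(A))
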
#Compute X and Y by multiplying A by its transpose
X <- A %*% A_t
X
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
Y <- A_t %*% A
Y
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25
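
Both products are symmetric by construction (e.g. \((AA^T)^T = AA^T\)), which is what guarantees real eigenvalues and mutually orthogonal eigenvectors in the next step; a quick check:

#X and Y are symmetric, so eigen() returns real eigenvalues and
#orthogonal eigenvectors
isSymmetric(X)
isSymmetric(Y)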

Compute the eigenvalues and eigenvectors of \(X\) and \(Y\)

X_e <- eigen(X)
#Show the eigenvalues and eigenvectors of X
X_e$values
## [1] 26.601802  4.398198
X_e$vectors
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
Y_e <- eigen(Y)
#Show the eigenvalues and eigenvectors of Y
Y_e$values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
Y_e$vectors
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001
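
Since \(A\) has rank 2, \(Y = A^TA\) is a rank-2 3 × 3 matrix, so its third eigenvalue is zero in exact arithmetic; zapsmall() rounds away the floating-point residue:

#The third eigenvalue of Y is zero up to floating-point error
zapsmall(Y_e$values)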

Compute the left-singular vectors, singular values, and right-singular vectors of \(A\)

#Singular Value Decomposition of A
A_svd <- svd(A)
#Vector containing the singular values of A, sorted in decreasing order
A_svd$d
## [1] 5.157693 2.097188
#matrix whose columns contain the left singular vectors of A
A_svd$u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
#matrix whose columns contain the right singular vectors of A
A_svd$v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824

Show that the singular vectors are indeed eigenvectors of \(X\) and \(Y\)

#The eigenvectors of X match the left singular vectors of A (up to sign)
X_e$vectors 
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
A_svd$u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
#The first two eigenvectors of Y match the right singular vectors of A
#(up to sign); the third eigenvector spans the null space of A
Y_e$vectors
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001
A_svd$v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824
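
Eigenvectors are only determined up to sign (and the signs returned can differ across platforms), so a more robust check than comparing columns by eye is to verify the defining relations \(Xu_i = \sigma_i^2 u_i\) and \(Yv_i = \sigma_i^2 v_i\) directly:

#Verify X u = sigma^2 u for each left singular vector u
all.equal(X %*% A_svd$u, A_svd$u %*% diag(A_svd$d^2))
#Verify Y v = sigma^2 v for each right singular vector v
all.equal(Y %*% A_svd$v, A_svd$v %*% diag(A_svd$d^2))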

Show that the two non-zero eigenvalues of \(X\) and \(Y\) are the same and are the squares of the non-zero singular values of \(A\)

X_e$values
## [1] 26.601802  4.398198
Y_e$values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
#Squares of the non-zero singular values of A
A_svd$d^2
## [1] 26.601802  4.398198
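
A programmatic comparison (an addition beyond the printed output) confirms the match to within floating-point tolerance:

#Non-zero eigenvalues of X and Y equal the squared singular values of A
all.equal(X_e$values, A_svd$d^2)
all.equal(Y_e$values[1:2], A_svd$d^2)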

Problem Set 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature: \(B = myinverse(A)\), where \(A\) is a matrix, \(B\) is its inverse, and \(A \times B = I\). The off-diagonal elements of \(I\) should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable, but the function myinverse should be correct and must use the co-factors and determinant of \(A\) to compute the inverse.

\[A^{-1} = \frac{C^T}{\det(A)}\]
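
Here \(C\) is the cofactor matrix of \(A\): each entry \(C_{ij} = (-1)^{i+j} M_{ij}\), where the minor \(M_{ij}\) is the determinant of the submatrix formed by deleting row \(i\) and column \(j\) of \(A\), and \(C^T\) is the adjugate of \(A\).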

Compute the inverse of a well-conditioned full-rank square matrix using co-factors

#Find the cofactor matrix of a square matrix
cofact <- function(a) {
  #Temp matrix the same size as a to hold the cofactors
  cf <- a
  #Iterate through the rows and columns of the square matrix
  for (i in 1:dim(a)[1]) {
    for (j in 1:dim(a)[2]) {
      #The (i,j) cofactor is (-1)^(i+j) times the determinant of the minor
      #formed by deleting row i and column j; drop = FALSE keeps the minor
      #a matrix even when a is 2 x 2
      cf[i, j] <- (-1)^(i + j) * det(a[-i, -j, drop = FALSE])
    }
  }
  return(cf)
}
#Function to compute the inverse of a full-rank square matrix
myinverse <- function(a) {
  det_a <- det(a)
  cofact_a <- cofact(a)
  #The adjugate of a is the transpose of its cofactor matrix
  adj <- t(cofact_a)
  #A^(-1) = adjugate / determinant
  b <- adj / det_a
  return(b)
}
#Define a full-rank square matrix
A <- matrix(c(2,4,1,2,-5,3,-4,1,2), nrow = 3, byrow = TRUE)
#Compute the inverse of A
B <- myinverse(A)
#Show that A times its inverse is the identity matrix
A%*%B
##               [,1]         [,2]         [,3]
## [1,]  1.000000e+00 0.000000e+00 5.551115e-17
## [2,]  0.000000e+00 1.000000e+00 1.110223e-16
## [3,] -5.551115e-17 5.551115e-17 1.000000e+00
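
As a final cross-check (not required by the prompt), the result can be compared against base R's solve(), which computes the inverse via LU factorization:

#myinverse should agree with base R's built-in inverse
all.equal(B, solve(A))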