Problem Set 1

In this problem, we’ll verify using R that the SVD and eigenvalues are related as worked out in the weekly module. Given the 2x3 matrix A

\[A=\begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{bmatrix}\]

write code in R to compute \(X=A{ A }^{ T }\) and \(Y={ A }^{ T }A\). Then, compute the eigenvalues and eigenvectors of X and Y using the built-in command in R.

Then, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command. Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. In addition, show that the two non-zero eigenvalues of both X and Y (the third eigenvalue of Y will be very close to zero, if not zero) are the same and are the squares of the non-zero singular values of A.

Your code should compute all these vectors and scalars and store them in variables. Please add enough comments in your code to show me how to interpret your steps.

Answer:

(a) Compute \(A{ A }^{ T }\) and \({ A }^{ T }A\) and their eigenvalues and eigenvectors.

# Create matrix A
A <- matrix(c(1,2,3,-1,0,4), byrow=T, nrow=2, ncol=3)
A
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4
# Transpose matrix A
A_trans <- t(A)

# Compute A x transpose of A (AAT)
X <- A%*%A_trans
X
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
# Compute transpose of A x A
Y <- A_trans%*%A
Y
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25
# Compute eigenvalue and eigenvector of matrix X
X_eigenvalue <- round(eigen(X)$values, digits=2)
X_eigenvalue
## [1] 26.6  4.4
X_eigenvector <- round(eigen(X)$vectors, digits=2)
X_eigenvector
##      [,1]  [,2]
## [1,] 0.66 -0.75
## [2,] 0.75  0.66
# Compute eigenvalue and eigenvector of matrix Y
Y_eigenvalue <- round(eigen(Y)$values, digits=2)
Y_eigenvalue
## [1] 26.6  4.4  0.0
Y_eigenvector <- round(eigen(Y)$vectors, digits=2)
Y_eigenvector
##       [,1]  [,2]  [,3]
## [1,] -0.02 -0.67  0.74
## [2,]  0.25 -0.72 -0.65
## [3,]  0.97  0.18  0.18

(b) Compute the left-singular vectors, singular values, and right-singular vectors of A.

# Compute the SVD of A
A_svd <- svd(A, nu=nrow(A), nv=ncol(A))

# Get the left-singular vectors of A
round(A_svd$u, digits=2)
##       [,1]  [,2]
## [1,] -0.66 -0.75
## [2,] -0.75  0.66
# Get the singular values of A
round(A_svd$d, digits=2)
## [1] 5.16 2.10
# Get the right-singular vectors of A
round(A_svd$v, digits=2)
##       [,1]  [,2]  [,3]
## [1,]  0.02 -0.67 -0.74
## [2,] -0.25 -0.72  0.65
## [3,] -0.97  0.18 -0.18

(c) Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. Then show that the two non-zero eigenvalues of both X and Y (the third eigenvalue of Y will be very close to zero, if not zero) are the same and are the squares of the non-zero singular values of A.

From the computations above, we can see that the eigenvectors of X and Y match the two sets of singular vectors element by element, except for the sign (+/-) of some of the elements.

To confirm this, we use the eigenvectors of X as the left-singular vectors U and the eigenvectors of Y as the right-singular vectors V, and then compute the product \(U\Sigma { V }^{ T }\), where \(\Sigma\) is the 2x3 diagonal matrix of the singular values. This recovers the original matrix A, as demonstrated below. Therefore, the signs (+/-) of the elements in the vectors returned by the two built-in functions, eigen() and svd(), are arbitrary and do not matter.

# Create a 2x3 diagonal matrix of the (rounded) singular values (5.2, 2.1)
sv_diag_mtx <- matrix(c(5.2,0,0,0,2.1,0), byrow=T, nrow=2, ncol=3)

# Show that the eigenvectors of AAT and ATA act as the two sets of singular vectors
round(X_eigenvector%*%sv_diag_mtx%*%t(Y_eigenvector), digits=0)
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4
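
As an additional direct check (a sketch using only objects already computed above): if the singular vectors are eigenvectors of X and Y, then for each non-zero singular value \(\sigma_i\) we should have \(Xu_i={ \sigma }_{ i }^{ 2 }u_i\) and \(Yv_i={ \sigma }_{ i }^{ 2 }v_i\), so the differences below should be (near) zero matrices.

# Verify the eigenvector relations X u = sigma^2 u and Y v = sigma^2 v
sigma_sq <- A_svd$d^2
round(X%*%A_svd$u - A_svd$u%*%diag(sigma_sq), digits=6)
round(Y%*%A_svd$v[,1:2] - A_svd$v[,1:2]%*%diag(sigma_sq), digits=6)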

From the result below, we can see that the squares of the non-zero singular values of A are the same as the non-zero eigenvalues of both X and Y, and that the third eigenvalue of Y is zero.

# Squares of the non-zero singular values of A
round((A_svd$d)^2, digits = 1)
## [1] 26.6  4.4
# Eigenvalues of X
X_eigenvalue
## [1] 26.6  4.4
# Eigenvalues of Y
Y_eigenvalue
## [1] 26.6  4.4  0.0
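
As a compact check without rounding (a sketch; eigen() and svd() both return their values in decreasing order, so an elementwise comparison is valid):

# all.equal() tolerates tiny floating-point differences; both calls
# are expected to return TRUE
all.equal(eigen(X)$values, (A_svd$d)^2)
all.equal(eigen(Y)$values[1:2], (A_svd$d)^2)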

Problem Set 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature:

B = myinverse(A)

where A is a matrix and B is its inverse, so that \(A\times B=I\). The off-diagonal elements of I should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable, but the function myinverse should be correct and must use the co-factors and determinant of A to compute the inverse.

Answer: The function below produces the inverse of a given matrix, provided the matrix is square and its determinant is non-zero.
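
For reference, the identity the function implements is the classical adjugate formula

\[{ A }^{ -1 }=\frac { 1 }{ \det(A) } { C }^{ T }\]

where \(C\) is the co-factor matrix with entries \({ C }_{ ij }={ (-1) }^{ i+j }{ M }_{ ij }\), and \({ M }_{ ij }\) is the minor of A obtained by deleting row i and column j.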

Function

myinverse <- function(A) {

  # Check that the matrix is square
  if(length(unique(dim(A))) != 1){
    return(noquote("Square matrix is required."))
  }

  # Check that the inverse exists (non-zero determinant)
  if(round(det(A), digits=3) == 0){
    return(noquote("Inverse doesn't exist."))
  }

  # Create a matrix with the same dimensions as A to hold the co-factors
  cofactors <- A

  # Compute each co-factor: the determinant of the minor obtained by
  # deleting row i and column j, multiplied by the sign (-1)^(i+j)
  for(i in 1:dim(A)[1]){
    for(j in 1:dim(A)[2]){
      cofactors[i,j] <- det(A[-i,-j])*(-1)^(i+j)
    }
  }

  # Recover the determinant of A from the co-factors: expanding along the
  # first row, det(A) is the dot product of A[1,] with the first column
  # of the transposed co-factor matrix (the adjugate)
  dtm <- as.numeric(A[1,]%*%t(cofactors)[,1])

  # The inverse is the adjugate (transposed co-factor matrix) divided by det(A)
  A_inverse <- t(cofactors)/dtm

  return(A_inverse)
}

Testing

4x3 Non-Square Matrix: This test shows that the myinverse function returns a message telling the user that a square matrix is required for the inverse computation.

y <-  matrix(c(3,0,2,2,0,-2,0,1,1,3,7,9), byrow=T, nrow=4, ncol=3)

myinverse(y)
## [1] Square matrix is required.

Determinant Equal to Zero: This test shows that the myinverse function returns a message telling the user that the input matrix has a zero determinant and therefore its inverse doesn’t exist.

z <-  matrix(c(1,2,3,4,5,6,7,8,9), byrow=T, nrow=3, ncol=3)

myinverse(z)
## [1] Inverse doesn't exist.

3x3 Square Matrix: This test shows that the myinverse function computes the correct inverse for a matrix that meets both conditions: it is square and its determinant is non-zero. Multiplying the original matrix by its inverse produces the identity matrix.

A <- matrix(c(3,0,2,2,0,-2,0,1,1), byrow=T, nrow=3, ncol=3)

# Inverse Matrix
myinverse(A)
##      [,1] [,2] [,3]
## [1,]  0.2  0.2    0
## [2,] -0.2  0.3    1
## [3,]  0.2 -0.3    0
# Identity Matrix
round(A%*%myinverse(A), digits=6)
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1
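
As a spot check of a single entry (a sketch using the 3x3 matrix A above), the co-factor \({ C }_{ 11 }\) can be computed by hand and compared against the corresponding entry of the inverse.

# Co-factor C[1,1]: delete row 1 and column 1, take the determinant of
# the remaining 2x2 minor, and apply the sign (-1)^(1+1)
C11 <- det(A[-1,-1])*(-1)^(1+1)   # minor [[0,-2],[1,1]] has determinant 2
C11/det(A)                        # 2/10 = 0.2, matching myinverse(A)[1,1]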

4x4 Square Matrix: As with the 3x3 case, this test shows that the myinverse function computes the correct inverse for a square matrix with non-zero determinant, and that the product of the matrix and its inverse is the identity matrix.

A <- matrix(c(1,3,-1,-3,-2,-9,2,-6,-2,0,4,26,-3,-9,7,2), byrow=T, nrow=4, ncol=4)

# Inverse Matrix
myinverse(A)
##            [,1]      [,2]  [,3] [,4]
## [1,] -147.00000 -66.00000 -33.5   17
## [2,]   35.33333  15.66667   8.0   -4
## [3,]  -15.00000  -7.00000  -3.5    2
## [4,]   -9.00000  -4.00000  -2.0    1
# Identity Matrix
round(A%*%myinverse(A), digits=6)
##      [,1] [,2] [,3] [,4]
## [1,]    1    0    0    0
## [2,]    0    1    0    0
## [3,]    0    0    1    0
## [4,]    0    0    0    1
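
Finally, a sanity check not required by the problem (a sketch): the result can be cross-checked against base R’s solve(), which inverts a matrix via LU decomposition rather than co-factors, so the difference should be a (near) zero matrix.

# Any non-zero entries here reflect only floating-point rounding
round(myinverse(A) - solve(A), digits=6)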