Homework Assignment 4

CUNY 605 Fundamentals of Computational Mathematics.

1. Problem Set 1

In this problem, we will verify using R that the SVD and eigenvalues are related as worked out in the weekly module. We are given the 2 x 3 matrix A:

\[ A = \left[\begin{array}{rrr} 1 & 2 & 3 \\ -1 & 0 & 4 \\ \end{array} \right]\]

Write code in R to compute \(X = AA^T\) and \(Y = A^TA\). Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R.

# Create Matrix A Variable
A <- matrix(c(1,-1,2,0,3,4), ncol = 3, byrow = FALSE)
print("Matrix A: ")
## [1] "Matrix A: "
A
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4
# Compute X = AA^T
# First define A-Transpose
A_transpose <- t(A)
print("A Transpose: ")
## [1] "A Transpose: "
A_transpose
##      [,1] [,2]
## [1,]    1   -1
## [2,]    2    0
## [3,]    3    4
print("A %*% A_transpose: ")
## [1] "A %*% A_transpose: "
X <- A %*% A_transpose
X
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
# Compute Y = A^T %*% A
print("A_transpose %*% A: ")
## [1] "A_transpose %*% A: "
Y <- A_transpose %*% A
Y
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25

Then, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command.

Recall the formula for the singular value decomposition, \(M = U\Sigma V^*\), where M is an m x n matrix, the columns of U are the left-singular vectors, \(\Sigma\) is a diagonal matrix containing the singular values, and the columns of V (the rows of \(V^*\)) are the right-singular vectors.

# Using the svd command
SVD <- svd(A)
print("SVD Values (U = Left, V = Right, and D = singular values): ")
## [1] "SVD Values (U = Left, V = Right, and D = singular values): "
SVD
## $d
## [1] 5.157693 2.097188
## 
## $u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
## 
## $v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824
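
As a quick sanity check (a minimal sketch, not part of the required output), the factors returned by svd() should multiply back together to reconstruct A, i.e. \(A = U\Sigma V^T\) since A is real:

# Sanity check (sketch): reconstruct A from its SVD factors
A_reconstructed <- SVD$u %*% diag(SVD$d) %*% t(SVD$v)
all.equal(A, A_reconstructed)  # should be TRUE up to numerical tolerance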

Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y.

First, recall diagonalization:

For a square matrix with linearly independent eigenvectors, \(A = PDP^{-1}\), where P is a matrix whose columns are the eigenvectors of A and D is a diagonal matrix with the eigenvalues of A along the diagonal.

Again, recall that \(X = AA^T = UDU^T\), where U is the matrix of eigenvectors of X and D is a diagonal matrix whose diagonal entries are the eigenvalues of X. Below are the results for both matrices X and Y. Note that the singular values of A are obtained by taking the square roots of these eigenvalues.

# Matrix X
print("Find the Eigenvalues of X (AA^T): ")
## [1] "Find the Eigenvalues of X (AA^T): "
eigenX <- eigen(X)
eigenX$values
## [1] 26.601802  4.398198
print("Find its corresponding Eigenvectors: ")
## [1] "Find its corresponding Eigenvectors: "
eigenX$vectors
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
print("Find the singular values: ")
## [1] "Find the singular values: "
singular_X <- sqrt(eigenX$values)
singular_X
## [1] 5.157693 2.097188
# As a side note, let's check whether the singular values obtained here equal the singular values from the svd() function.
print("Do the singular values obtained here equal the singular values from the svd() function? ")
## [1] "Do the singular values obtained here equal the singular values from the svd() function? "
SVD$d
## [1] 5.157693 2.097188
# Matrix Y
print("Find the Eigenvalues of Y (A^T*A): ")
## [1] "Find the Eigenvalues of Y (A^T*A): "
eigenY <- eigen(Y)
eigenY$values
## [1] 2.660180e+01 4.398198e+00 1.058981e-16
print("Find its corresponding Eigenvectors: ")
## [1] "Find its corresponding Eigenvectors: "
eigenY$vectors
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001
print("Find the singular values: ")
## [1] "Find the singular values: "
singular_Y <- sqrt(eigenY$values)
singular_Y
## [1] 5.157693e+00 2.097188e+00 1.029068e-08
# As a side note, let's check whether the singular values obtained here equal the singular values from the svd() function.
print("Do the singular values obtained here equal the singular values from the svd() function? ")
## [1] "Do the singular values obtained here equal the singular values from the svd() function? "
SVD$d
## [1] 5.157693 2.097188

As you can see, we confirmed that the singular vectors of A are indeed eigenvectors of X and Y. (Keep in mind that some of the vectors differ by a sign; that is, they are multiplied by a scalar of -1. These are still equivalent eigenvectors, since scaling an eigenvector by a nonzero constant yields another eigenvector for the same eigenvalue.)

In addition, the two non-zero eigenvalues of X and Y are the same (the third eigenvalue of Y is essentially zero, up to numerical precision), and they are the squares of the non-zero singular values of A, as seen above.
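
As a compact numeric check (a sketch using the objects already computed above), we can compare the two factorizations directly. Since eigenvectors are only determined up to sign, we compare absolute values:

# Sketch: compare eigen() results against svd() results
all.equal(SVD$d, sqrt(eigenX$values))               # singular values of A
all.equal(abs(SVD$u), abs(eigenX$vectors))          # left-singular vectors vs eigenvectors of X
all.equal(abs(SVD$v), abs(eigenY$vectors[, 1:2]))   # right-singular vectors vs eigenvectors of Y
# Diagonalization check: X = U D U^T with D = diag of eigenvalues of X
all.equal(X, eigenX$vectors %*% diag(eigenX$values) %*% t(eigenX$vectors))
# Each comparison should return TRUE up to numerical tolerance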

2. Problem Set 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature: B = myinverse(A), where A is a matrix, B is its inverse, and AxB = I. The off-diagonal elements of I should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable, but the function myinverse should be correct and must use co-factors and the determinant of A to compute the inverse. Let’s refer to the handout given in the weekly modules.

1.2. Inverse of a matrix. Let’s define a co-factor matrix whose elements are co-factors corresponding to a single row and column of the parent matrix A. For instance if you have a 3x3 matrix A defined as

\[A = \left[\begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{array} \right]\]

Then the co-factor matrix corresponding to this is defined as

\[C = \left[\begin{array}{rrr} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \\ \end{array} \right]\]

where each of the \(C_{ij}\) represents the corresponding co-factor of A. Let us take the product of A and \(C^T\), the transpose of C. We get,

\[\left[\begin{array}{rrr} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{array} \right] * \left[\begin{array}{rrr} C_{11} & C_{21} & C_{31} \\ C_{12} & C_{22} & C_{32} \\ C_{13} & C_{23} & C_{33} \\ \end{array} \right] = \left[\begin{array}{rrr} det(A) & 0 & 0 \\ 0 & det(A) & 0 \\ 0 & 0 & det(A) \\ \end{array} \right]\]

We can easily verify that the diagonal elements of this product are det(A). For instance, row 1 of A times column 1 of \(C^T\) is \(a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} = det(A)\), by the definition of the determinant.

Now, we can see that \(A^{-1} = C^{T}/det(A)\). So, we have a simple formula for constructing the inverse of a full-rank matrix. We compute its determinant and all of its co-factors. We take the transpose of the co-factor matrix and divide it by the determinant of the matrix to get the inverse.
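
As a small worked example, applying this construction to a generic 2 x 2 matrix recovers the familiar formula for the 2 x 2 inverse:

\[A = \left[\begin{array}{rr} a & b \\ c & d \\ \end{array} \right], \quad C = \left[\begin{array}{rr} d & -c \\ -b & a \\ \end{array} \right], \quad A^{-1} = \frac{C^T}{det(A)} = \frac{1}{ad - bc}\left[\begin{array}{rr} d & -b \\ -c & a \\ \end{array} \right]\]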

Now to the code:

Z <- matrix(c(1,0,1,2,4,0,3,5,6),ncol=3)

myinverse <- function(A){
  
  # Loading libraries
  if(!require(generalCorr))install.packages("generalCorr")
  require(generalCorr)
  
  # Check dimensions of the matrix
  nrow0 <- nrow(A)
  ncol0 <- ncol(A)
  
  # Check that matrix A is square. If not, stop the function.
  if (nrow0 != ncol0){
    stop("Please input a square matrix.")
  }
  
  # Build the cofactor matrix C. cofactor(A, r, c) from generalCorr returns the submatrix of A
  # with row r and column c deleted; applying det() and the sign (-1)^(r+c) gives the cofactor C[r,c].
  num_of_elements <- nrow0 * ncol0
  C <- matrix(rep(0, num_of_elements), ncol = ncol0)
  
  for (r in 1:nrow0){
    for (c in 1:ncol0){
      C[r,c] <- (-1)^(r+c)*det(cofactor(A,r,c))
    }
  }
  
  # Now we need the transpose of cofactor matrix C
  C_transpose <- t(C)
  
  # The product of A and C_transpose equals det(A) times the identity matrix
  det_A_matrix <- A %*% C_transpose
  
  # First, obtain the det(A) value. Given that the det(A) is the same on the diagonal, we'll choose element 1,1 from the matrix
  det_A <- det_A_matrix[1,1]
  
  # To calculate the inverse, B(inverse of A) = A^(-1) = C^(T)/det(A)
  B <- C_transpose/det_A
  
  # Print the results to the screen
  print("Matrix A: ")
  print(A)
  
  print("Cofactor Matrix of A: ")
  print(C)
  
  print("Transposed matrix of Cofactor Matrix of A: ")
  print(C_transpose)
  
  print("Determinant of A: ")
  print(det_A)
  
  print("Inverse matrix of A: ")
  print(B)
  
  print("A %*% B (its inverse): ")
  print(A %*% B)
  
  return(B)
  
}

B <- myinverse(Z)
## Loading required package: generalCorr
## Loading required package: np
## Nonparametric Kernel Methods for Mixed Datatypes (version 0.60-3)
## [vignette("np_faq",package="np") provides answers to frequently asked questions]
## [vignette("np",package="np") an overview]
## [vignette("entropy_np",package="np") an overview of entropy-based methods]
## Loading required package: xtable
## Loading required package: meboot
## Loading required package: dynlm
## Loading required package: zoo
## 
## Attaching package: 'zoo'
## The following objects are masked from 'package:base':
## 
##     as.Date, as.Date.numeric
## Loading required package: nlme
## Loading required package: psych
## [1] "Matrix A: "
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]    0    4    5
## [3,]    1    0    6
## [1] "Cofactor Matrix of A: "
##      [,1] [,2] [,3]
## [1,]   24    5   -4
## [2,]  -12    3    2
## [3,]   -2   -5    4
## [1] "Transposed matrix of Cofactor Matrix of A: "
##      [,1] [,2] [,3]
## [1,]   24  -12   -2
## [2,]    5    3   -5
## [3,]   -4    2    4
## [1] "Determinant of A: "
## [1] 22
## [1] "Inverse matrix of A: "
##            [,1]        [,2]        [,3]
## [1,]  1.0909091 -0.54545455 -0.09090909
## [2,]  0.2272727  0.13636364 -0.22727273
## [3,] -0.1818182  0.09090909  0.18181818
## [1] "A %*% B (its inverse): "
##               [,1]         [,2]         [,3]
## [1,]  1.000000e+00 5.551115e-17 1.110223e-16
## [2,] -2.220446e-16 1.000000e+00 2.220446e-16
## [3,] -4.440892e-16 0.000000e+00 1.000000e+00
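
For reference, the same cofactor/adjugate construction can be written in base R by deleting one row and one column with negative indexing, avoiding the generalCorr dependency. This is only a minimal sketch (my_inverse_base is a made-up helper name), not a replacement for the required myinverse():

# Minimal base-R sketch of the same cofactor/adjugate construction.
# my_inverse_base is a hypothetical name and is not part of the assignment.
my_inverse_base <- function(A) {
  n <- nrow(A)
  stopifnot(n == ncol(A))                       # square matrix required
  C <- matrix(0, n, n)
  for (r in 1:n) {
    for (c in 1:n) {
      # Delete row r and column c, then take the determinant of the minor.
      C[r, c] <- (-1)^(r + c) * det(A[-r, -c, drop = FALSE])
    }
  }
  t(C) / det(A)                                 # A^(-1) = C^T / det(A)
}
# Should reproduce the result of myinverse(Z) above.
my_inverse_base(Z)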

So as you can see in the above example, we passed in a matrix and obtained its inverse. When we multiplied matrix A by matrix B (its inverse), we obtained an identity matrix, as shown above. To confirm that the matrix returned by myinverse() is truly the inverse of A, let’s check it against the inv() function from the matlib package.

library(matlib)
## 
## Attaching package: 'matlib'
## The following objects are masked from 'package:generalCorr':
## 
##     cofactor, minor
## The following object is masked from 'package:psych':
## 
##     tr
print("Inverse matrix from myinverse() function: ")
## [1] "Inverse matrix from myinverse() function: "
print(B)
##            [,1]        [,2]        [,3]
## [1,]  1.0909091 -0.54545455 -0.09090909
## [2,]  0.2272727  0.13636364 -0.22727273
## [3,] -0.1818182  0.09090909  0.18181818
print("Inverse of matrix A using the inv() function: ")
## [1] "Inverse of matrix A using the inv() function: "
inv(Z)
##            [,1]        [,2]        [,3]
## [1,]  1.0909091 -0.54545455 -0.09090909
## [2,]  0.2272727  0.13636364 -0.22727273
## [3,] -0.1818182  0.09090909  0.18181818
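
Rather than just eyeballing the two printouts, we can also compare them programmatically (a quick sketch):

# Sketch: the two inverses should agree up to floating-point tolerance
all.equal(B, inv(Z))         # should be TRUE
all.equal(Z %*% B, diag(3))  # Z %*% B should be the identity, up to tolerance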

As you can see, the code works well.