# ASSIGNMENT 4 - IS 605 FUNDAMENTALS OF COMPUTATIONAL MATHEMATICS - 2014

1. Problem Set 1

In this problem, we'll verify using R that SVD and eigenvalues are related as worked out in the weekly module. Given the 2 × 3 matrix A \[A =\begin{Bmatrix} 1 & 2 & 3\\ -1 & 0 & 4\\ \end{Bmatrix}\]

Write code in R to compute \(X = AA^T\) and \(Y = A^TA\).

# Define the 2x3 matrix A (entries supplied row by row)
A = matrix(c(1, 2, 3,
             -1, 0, 4), nrow = 2, byrow = TRUE)


A
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4
# X = A A^T (2 x 2)
X = A %*% t(A)
X
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
# Y = A^T A (3 x 3)
Y = t(A) %*% A
Y
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25

Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R.

Eigenvalues

# Eigenvalues of X and Y via the built-in eigen() command
lambda_x = eigen(X)$values
lambda_x
## [1] 26.601802  4.398198
lambda_y = eigen(Y)$values
lambda_y
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
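
The third eigenvalue of Y is numerically zero, reflecting that \(Y = A^TA\) has the same rank as A, namely 2. A quick check (using the Matrix package, which is also used in Problem Set 2):

# Y is 3 x 3 but only rank 2, so one eigenvalue is (numerically) zero
Matrix::rankMatrix(Y)[1]
## [1] 2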

Eigenvectors

# Eigenvectors of X and Y (stored as columns of the returned matrices)
s_x = eigen(X)$vectors
s_x
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
s_y = eigen(Y)$vectors
s_y
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001

Then, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command.

# svd() returns the singular values (d), left-singular vectors (u),
# and right-singular vectors (v) of A
svd_a = svd(A)
svd_a
## $d
## [1] 5.157693 2.097188
## 
## $u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
## 
## $v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824

Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. In addition, show that the two non-zero eigenvalues of X and Y are the same (the 3rd eigenvalue of Y will be very close to zero, if not exactly zero) and are the squares of the non-zero singular values of A.

In conclusion, following the above calculations, it is apparent that U contains the eigenvectors of X (up to sign), that is \[U = S_x =\begin{Bmatrix} -0.6576043 & -0.7533635\\ -0.7533635 & 0.6576043\\ \end{Bmatrix}\]

Note also that V contains the first two eigenvectors of Y. \[V = S_{y,1:2} =\begin{Bmatrix} 0.01856629 & -0.6727903\\ -0.25499937 & -0.7184510\\ -0.96676296 & 0.1765824\\ \end{Bmatrix}\]

In both cases above, the first columns, corresponding to the largest eigenvalue of X and Y respectively, point in the opposite direction in U and V compared with the eigen() output. The vectors are nevertheless equivalent: an eigenvector scaled by -1 is still an eigenvector of the same eigenvalue, and the sign flip does not affect orthonormality.
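
As a quick numerical check of this sign ambiguity (a small sketch reusing the variables computed above), the columns agree once the signs are ignored:

# Columns of u and v match the eigenvectors of X and Y up to a factor
# of -1, so compare absolute values
all.equal(abs(svd_a$u), abs(s_x))
## [1] TRUE
all.equal(abs(svd_a$v), abs(s_y[, 1:2]))
## [1] TRUE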

Finally, the non-zero eigenvalues of Y are equal to the eigenvalues of X, and both are the squares of the singular values d: \[\Lambda_x = \Lambda_{y,1:2} =\begin{Bmatrix} 26.601802 & 0\\ 0 & 4.398198\\ \end{Bmatrix}= \Sigma^2\]

Note that: \[\Sigma =\begin{Bmatrix} 5.157693 & 0\\ 0 & 2.097188\\ \end{Bmatrix}\]
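
Verifying this numerically with the variables above:

# The eigenvalues of X, and the non-zero eigenvalues of Y, equal the
# squared singular values of A
all.equal(lambda_x, svd_a$d^2)
## [1] TRUE
all.equal(lambda_y[1:2], svd_a$d^2)
## [1] TRUE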

Your code should compute all these vectors and scalars and store them in variables. Please add enough comments in your code to show me how to interpret your steps.

2. Problem Set 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature:

B = myinverse(A)

Small numerical precision errors are acceptable but the function myinverse should be correct and must use co-factors and determinant of A to compute the inverse.
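
For reference, the construction the function below implements is the classical adjugate (cofactor) formula: \[C_{ij} = (-1)^{i+j} M_{ij}, \qquad A^{-1} = \frac{1}{\det(A)} C^T\] where \(M_{ij}\) is the minor of A obtained by deleting row i and column j, and \(C^T\) (the transpose of the cofactor matrix) is the adjugate of A.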

myinverse <- function(A) {
  # Confirm that the matrix is square and full rank (rankMatrix comes
  # from the Matrix package)
  if (nrow(A) == ncol(A) && Matrix::rankMatrix(A)[1] == nrow(A)) {
    size = nrow(A)
    C = matrix(nrow = size, ncol = size)
    for (i in 1:size) {
      for (j in 1:size) {
        # Minor: delete row i and column j; drop = FALSE keeps the
        # result a matrix so det() also works for 2 x 2 inputs
        M = A[-i, -j, drop = FALSE]
        # Cofactor: the signed determinant of the minor
        C[i, j] = (-1)^(i + j) * det(M)
      }
    }
    # Inverse is the adjugate (transpose of the cofactor matrix) divided by det(A)
    inversed = t(C) / det(A)
  } else {
    inversed = "Is not invertible"
  }
  return(inversed)
}
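
As a quick sanity check beyond the assigned test matrix below, the drop = FALSE subsetting keeps each minor a matrix, so the function also handles the 2 × 2 case (this small example matrix is purely illustrative):

A2 = matrix(c(2, 1, 1, 3), nrow = 2)
myinverse(A2)
##      [,1] [,2]
## [1,]  0.6 -0.2
## [2,] -0.2  0.4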

A is a matrix and B is its inverse, so that A×B = I. The off-diagonal elements of I should be close to zero, if not exactly zero; likewise, the diagonal elements should be close to 1, if not exactly 1.

Test Matrix A

\[A = \begin{Bmatrix} 1 & 4 & 1 & 2\\ 2 & 3 & 3 & 1\\ 4 & 3 & 1 & 7\\ 3 & 1 & 8 & 1\\ \end{Bmatrix}\]

# Build the 4x4 test matrix (entries supplied column-major, R's default)
A = matrix(c(1,2,4,3, 4,3,3,1, 1,3,1,8, 2,1,7,1), nrow = 4)
A
##      [,1] [,2] [,3] [,4]
## [1,]    1    4    1    2
## [2,]    2    3    3    1
## [3,]    4    3    1    7
## [4,]    3    1    8    1
B = myinverse(A)
B
##            [,1]       [,2]        [,3]        [,4]
## [1,] -0.9285714  1.2142857  0.14285714 -0.35714286
## [2,]  0.2142857  0.1428571 -0.07142857 -0.07142857
## [3,]  0.2714286 -0.3857143 -0.05714286  0.24285714
## [4,]  0.4000000 -0.7000000  0.10000000  0.20000000

Confirm that the results from myinverse() match those from R's built-in solve function.

# A %*% B should be the identity matrix, up to rounding error
round(A %*% B, 5)
##      [,1] [,2] [,3] [,4]
## [1,]    1    0    0    0
## [2,]    0    1    0    0
## [3,]    0    0    1    0
## [4,]    0    0    0    1
identical(round(B, 5), round(solve(A), 5))
## [1] TRUE

Or, comparing element-wise:

B = myinverse(A)
C = solve(A)

round(B, 5) == round(C, 5)
##      [,1] [,2] [,3] [,4]
## [1,] TRUE TRUE TRUE TRUE
## [2,] TRUE TRUE TRUE TRUE
## [3,] TRUE TRUE TRUE TRUE
## [4,] TRUE TRUE TRUE TRUE
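
Alternatively, all.equal() compares the two matrices with a numeric tolerance and avoids the explicit rounding (a minor variation on the checks above):

all.equal(B, C)
## [1] TRUE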

Please submit PS1 and PS2 in an R-markdown document with your first initial and last name.