Problem 1

In this problem, we’ll verify in R that the SVD and eigendecomposition are related as worked out in the weekly module. Given the 2 x 3 matrix A below, write code in R to compute \(X = AA^T\) and \(Y = A^TA\). Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R.

\[A=\begin{bmatrix}1&2&3\\-1&0&4\end{bmatrix}\]

Then, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command. Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. In addition, the two non-zero eigenvalues of both X and Y (the third eigenvalue of Y will be very close to zero, if not exactly zero) are the same and are the squares of the non-zero singular values of A.

Your code should compute all these vectors and scalars and store them in variables. Please add enough comments in your code to show me how to interpret your steps.

Compute X and Y

A <- matrix(c(1, 2, 3, -1, 0, 4), nrow = 2, byrow = TRUE)  # the 2 x 3 matrix A, filled by row
X <- A %*% t(A)  # X = A A^T, a 2 x 2 symmetric matrix
Y <- t(A) %*% A  # Y = A^T A, a 3 x 3 symmetric matrix

Compute Eigenvectors & Eigenvalues of X and Y

eigX <- eigen(X)  # eigenvalues and eigenvectors of X
eigY <- eigen(Y)  # eigenvalues and eigenvectors of Y

Compute the Singular Value Decomposition of A

singA <- svd(A)  # d = singular values, u = left-singular vectors, v = right-singular vectors

Compare values

comp1 <- cbind(singA$u, eigX$vectors)  # left-singular vectors beside the eigenvectors of X
colnames(comp1) <- c('SVDu1', 'SVDu2', 'EVX1=u1', 'EVX2=u2')
knitr::kable(comp1)
|      SVDu1|      SVDu2|    EVX1=u1|    EVX2=u2|
|----------:|----------:|----------:|----------:|
| -0.6576043| -0.7533635|  0.6576043| -0.7533635|
| -0.7533635|  0.6576043|  0.7533635|  0.6576043|
comp2 <- cbind(singA$v, eigY$vectors)  # right-singular vectors beside the eigenvectors of Y
comp2 <- comp2[, 1:4]  # drop Y's third eigenvector, whose eigenvalue is ~0
colnames(comp2) <- c('SVDv1', 'SVDv2', 'EVY1=v1', 'EVY2=v2')
knitr::kable(comp2)
|      SVDv1|      SVDv2|    EVY1=v1|    EVY2=v2|
|----------:|----------:|----------:|----------:|
|  0.0185663| -0.6727903| -0.0185663| -0.6727903|
| -0.2549994| -0.7184510|  0.2549994| -0.7184510|
| -0.9667630|  0.1765824|  0.9667630|  0.1765824|

We can see from the tables that the singular vectors obtained using the svd command match the eigenvectors of X and Y up to sign. Eigenvectors are only determined up to a scalar multiple, and since both eigen and svd return unit-length vectors, corresponding columns can differ only by a factor of -1; the component ratios are otherwise identical.
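The problem also asks us to show that the non-zero eigenvalues of X and Y are the squares of the singular values of A; a short check using the variables above:

# The squared singular values should equal the eigenvalues of X
# and the first two eigenvalues of Y (its third eigenvalue is ~0)
singA$d^2                               # squares of the singular values
eigX$values                             # eigenvalues of X
eigY$values                             # eigenvalues of Y (last entry ~ 0)
all.equal(singA$d^2, eigX$values)       # expected TRUE
all.equal(singA$d^2, eigY$values[1:2])  # expected TRUE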

Problem 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant.

Your function should have the following signature: \(B = myinverse(A)\) where A is a matrix, B is its inverse, and \(A \times B = I\). The off-diagonal elements of I should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable, but the function myinverse should be correct and must use co-factors and the determinant of A to compute the inverse.
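Concretely, each co-factor is a signed minor, and the inverse is the transposed co-factor matrix (the adjugate) divided by the determinant:

\[C_{ij} = (-1)^{i+j}\,M_{ij}, \qquad A^{-1} = \frac{1}{\det(A)}\,C^{T}\]

where \(M_{ij}\) is the determinant of the submatrix obtained by deleting row \(i\) and column \(j\) of A. The function below follows this formula.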

MyInverse Function

myinverse <- function(A)
{
  n <- nrow(A)  # A is assumed to be square and full rank
  cofA <- matrix(NA, nrow = n, ncol = n)  # shell matrix to hold the co-factors
  for (i in 1:n)  # looping through rows
  {
    for (j in 1:n)  # looping through columns
    {
      subA <- A[-i, -j, drop = FALSE]  # minor submatrix: delete row i and column j
      cofA[i, j] <- (-1)^(i + j) * det(subA)  # co-factor = signed minor (built-in det)
    }
  }
  detA <- det(A)  # determinant of A
  if (detA == 0) stop("A is singular; the inverse does not exist.")
  invA <- t(cofA) / detA  # inverse = transpose of co-factor matrix over determinant
  return(invA)
}

Testing with

\[ \begin{bmatrix} 2 & 3 & -1 \\ 2 & -1 & 2 \\ -2 & -1 & 3 \end{bmatrix} \]

tMatrix <- matrix(c(2,3,-1,2,-1,2,-2,-1,3), nrow=3, ncol=3, byrow = TRUE)
invMatrix <- myinverse(tMatrix)
invMatrix
##            [,1]       [,2]       [,3]
## [1,] 0.03571429  0.2857143 -0.1785714
## [2,] 0.35714286 -0.1428571  0.2142857
## [3,] 0.14285714  0.1428571  0.2857143

We can verify against R’s built-in solve():

solve(tMatrix)
##            [,1]       [,2]       [,3]
## [1,] 0.03571429  0.2857143 -0.1785714
## [2,] 0.35714286 -0.1428571  0.2142857
## [3,] 0.14285714  0.1428571  0.2857143
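
Finally, as the problem statement requires, the product \(A \times B\) should be numerically close to the identity matrix; a quick check:

# tMatrix %*% invMatrix should be the 3 x 3 identity up to floating-point error
round(tMatrix %*% invMatrix, 10)
all.equal(tMatrix %*% invMatrix, diag(3))  # expected TRUE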