Problem Set 1

In this problem, we’ll use R to verify that the SVD (Singular Value Decomposition) and the eigendecomposition are related:

Given a 2 x 3 matrix \(A\): \[ A = \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{bmatrix} \]
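
Before computing anything, recall why these objects should be related: if \(A = U \Sigma V^T\) is the SVD of \(A\), then

\[ A A^T = U \Sigma \Sigma^T U^T, \qquad A^T A = V \Sigma^T \Sigma V^T, \]

so the columns of \(U\) are eigenvectors of \(A A^T\), the columns of \(V\) are eigenvectors of \(A^T A\), and the squared singular values are the corresponding (nonzero) eigenvalues. This is exactly what the computations below check.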

(A <- matrix(c(1,2,3,-1,0,4),byrow=T, nrow=2))
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4

Write code in R to compute \(X = A A^T\) and \(Y = A^T A\). Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R.

First, compute \(X = A A^T\) and \(Y = A^T A\)

(X <- A %*% t(A))
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
(Y <- t(A) %*% A)
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25

Now, compute the eigenvalues and eigenvectors of X and Y using the built-in eigen() command:

(eX <- eigen(X))
## $values
## [1] 26.601802  4.398198
## 
## $vectors
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
(eY <- eigen(Y))
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
## 
## $vectors
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001

Compute the left-singular vectors, singular values, and right-singular vectors of A using the svd() command.

#The svd() function computes the singular values and the left and right singular vectors of A.
(svdA <- svd(A))
## $d
## [1] 5.157693 2.097188
## 
## $u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
## 
## $v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824
(svdA$d)^2
## [1] 26.601802  4.398198

Notice that the output above gives us the singular values 5.157693 and 2.097188, the left singular vectors in $u, and the right singular vectors in $v.
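
As a quick sanity check, we can multiply the factors back together and confirm we recover \(A\). This is a minimal sketch using only the svdA object computed above (the name A_rebuilt is just illustrative); all.equal() should report TRUE up to floating-point tolerance.

#Reconstruct A from its SVD factors; should equal A up to floating-point error
A_rebuilt <- svdA$u %*% diag(svdA$d) %*% t(svdA$v)
all.equal(A_rebuilt, A)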

Let’s now compare the eigenvalues and eigenvectors with the svd results.

#Compare the left singular vectors in $u to the eigenvectors of X
(eX$vectors)
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
(svdA$u)
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
round(abs(eX$vectors), 4) == round(abs(svdA$u), 4)
##      [,1] [,2]
## [1,] TRUE TRUE
## [2,] TRUE TRUE

Notice that these vectors agree up to sign: the first column of $u is the first eigenvector of X multiplied by the scalar -1. Since an eigenvector is only determined up to a nonzero scalar multiple, a sign flip leaves the eigenspace unchanged. Hence, the left singular vectors of A are also eigenvectors of X.
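
A sharper comparison than rounding absolute values is to flip each eigenvector column so its sign matches the corresponding column of $u, then compare directly. This is a sketch in base R (the helper variable signs is introduced here just for illustration); the column-wise dot products give the relative signs (+1 or -1).

#Align the sign of each eigenvector column with the matching SVD column, then compare
signs <- sign(colSums(eX$vectors * svdA$u))
all.equal(eX$vectors %*% diag(signs), svdA$u)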

Similarly, let’s compare the eigenvectors of Y with the right singular vectors:

(eY$vectors)
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001
(svdA$v)
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824
#drop the eigenvector for the (numerically) zero eigenvalue before comparing
round(abs(eY$vectors[, -3]), 4) == round(abs(svdA$v), 4)
##      [,1] [,2]
## [1,] TRUE TRUE
## [2,] TRUE TRUE
## [3,] TRUE TRUE
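
The third eigenvector of \(Y\), which pairs with the numerically zero eigenvalue, has no counterpart in $v: it spans the null space of \(A\). A quick check (a sketch using the objects already computed) is to apply \(A\) to it and confirm the result is essentially the zero vector:

#The eigenvector for the zero eigenvalue lies in the null space of A,
#so multiplying it by A should give ~0 up to floating-point error
A %*% eY$vectors[, 3]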

Let’s now compare the eigenvalues with the squares of the singular values:

(eX$values)
## [1] 26.601802  4.398198
(eY$values)
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
(svdA$d^2)
## [1] 26.601802  4.398198
round(eX$values, 4) == round(svdA$d^2, 4)
## [1] TRUE TRUE
#Now compare the eigenvalues of Y, dropping the numerically zero eigenvalue
round(eY$values[-3], 4) == round(svdA$d^2, 4)
## [1] TRUE TRUE

In the above, notice that the nonzero eigenvalues of \(X\), the nonzero eigenvalues of \(Y\), and the squares of the singular values are all the same. The third eigenvalue of \(Y\) is zero up to floating-point error (about 1e-16), reflecting the fact that \(A\) has rank 2.
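
Instead of rounding, a tolerance-based check such as all.equal() makes the comparison explicit. A brief sketch using the objects computed above:

#Tolerance-based comparisons; both should report TRUE up to floating-point error
all.equal(eX$values, svdA$d^2)
all.equal(eY$values[1:2], svdA$d^2)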

Problem Set 2

Write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors.
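
The function below uses the classical adjugate (co-factor) formula: for an invertible \(n \times n\) matrix \(A\),

\[ A^{-1} = \frac{1}{\det(A)} C^T, \qquad C_{ij} = (-1)^{i+j} M_{ij}, \]

where the minor \(M_{ij}\) is the determinant of the submatrix obtained by deleting row \(i\) and column \(j\) of \(A\).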

#Computes the inverse of a well-conditioned, full-rank square matrix.
#Inputs: A = Matrix to be inverted.
#Output: Inverse of A.
myinverse <- function(A) {

  #A must be square to be invertible
  if (nrow(A) != ncol(A)) {
    stop('Non-square matrix!')
  }

  #If the determinant is numerically zero, the matrix is not invertible
  if (abs(det(A)) < .Machine$double.eps) {
    stop('Non-invertible matrix!')
  }

  n <- nrow(A)

  #Declare the co-factor matrix
  C <- matrix(0, nrow = n, ncol = n)

  #Compute the cofactors:
  #loop through the rows and columns of A, delete the current row and column,
  #and take the determinant of the remaining minor, with the proper sign.
  #drop = FALSE keeps a 1 x 1 minor as a matrix, so det() also works for 2 x 2 inputs.
  for (i in 1:n) {
    for (j in 1:n) {
      C[i, j] <- (-1)^(i + j) * det(A[-i, -j, drop = FALSE])
    }
  }

  #Inverse of A = t(C) / det(A), i.e., the adjugate divided by the determinant
  return(t(C) / det(A))
}

Client calls:

set.seed(40)
(A <- matrix(sample(10:20, 9, replace=TRUE),byrow=T, nrow =3))
##      [,1] [,2] [,3]
## [1,]   17   19   17
## [2,]   11   12   15
## [3,]   12   16   14
(B <- round(myinverse(A),2))
##       [,1]  [,2]  [,3]
## [1,]  0.39 -0.03 -0.44
## [2,] -0.14 -0.18  0.37
## [3,] -0.17  0.24  0.03
#using the built-in inverse function 'solve'
(sB <- round(solve(A),2))
##       [,1]  [,2]  [,3]
## [1,]  0.39 -0.03 -0.44
## [2,] -0.14 -0.18  0.37
## [3,] -0.17  0.24  0.03
#compare both
(B == sB)
##      [,1] [,2] [,3]
## [1,] TRUE TRUE TRUE
## [2,] TRUE TRUE TRUE
## [3,] TRUE TRUE TRUE
#verify that AB = I
round(A %*% B)
##      [,1] [,2] [,3]
## [1,]    1    0    0
## [2,]    0    1    0
## [3,]    0    0    1
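
Since both inverses above were rounded to two decimals, a tighter check (a sketch using the unrounded result) is to measure how far the product is from the identity:

#Maximum absolute deviation of A %*% myinverse(A) from the identity matrix;
#this should be on the order of machine precision for a well-conditioned A
max(abs(A %*% myinverse(A) - diag(nrow(A))))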