Data605 Assignment 4

Problem Set 1

Given a 3 × 3 matrix: \[A = \begin{bmatrix}-1 & 1 & 5 \\1 & 3 & 0 \\0 & -2 & 1\end{bmatrix}\] Write code in R to compute \(X = AA^T\) and \(Y = A^TA\). Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R.

# Define A, filling column by column
A <- matrix(c(-1,1,0,1,3,-2,5,0,1), 3, 3)
A
##      [,1] [,2] [,3]
## [1,]   -1    1    5
## [2,]    1    3    0
## [3,]    0   -2    1
# Compute X and Y:

x <- A %*% t(A)   # X = A A^T
x
##      [,1] [,2] [,3]
## [1,]   27    2    3
## [2,]    2   10   -6
## [3,]    3   -6    5
y <- t(A) %*% A   # Y = A^T A
y
##      [,1] [,2] [,3]
## [1,]    2    2   -5
## [2,]    2   14    3
## [3,]   -5    3   26
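# X and Y are symmetric by construction (X = A A^T, Y = A^T A), so their
# eigenvalues are real and their eigenvectors are orthogonal.
# Quick sanity check -- both calls should return TRUE:
isSymmetric(x)
isSymmetric(y)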
# Compute eigenvalues and eigenvectors of X and Y
eigen_x <- eigen(x)
eigen_x
## $values
## [1] 27.4907376 14.0000000  0.5092624
## 
## $vectors
##            [,1]       [,2]       [,3]
## [1,] 0.99086430  0.0000000  0.1348627
## [2,] 0.07480836 -0.8320503 -0.5496326
## [3,] 0.11221254  0.5547002 -0.8244489
eigen_y <- eigen(y)
eigen_y
## $values
## [1] 27.4907376 14.0000000  0.5092624
## 
## $vectors
##            [,1]       [,2]       [,3]
## [1,] -0.1747144  0.2223748  0.9591790
## [2,]  0.1889822  0.9636241 -0.1889822
## [3,]  0.9663129 -0.1482499  0.2103839

Now, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd() command.

svd_A <- svd(A)
svd_A   # d = singular values, u = left-singular vectors, v = right-singular vectors
## $d
## [1] 5.2431610 3.7416574 0.7136263
## 
## $u
##             [,1]          [,2]       [,3]
## [1,] -0.99086430 -7.771561e-16  0.1348627
## [2,] -0.07480836  8.320503e-01 -0.5496326
## [3,] -0.11221254 -5.547002e-01 -0.8244489
## 
## $v
##            [,1]       [,2]       [,3]
## [1,]  0.1747144  0.2223748 -0.9591790
## [2,] -0.1889822  0.9636241  0.1889822
## [3,] -0.9663129 -0.1482499 -0.2103839

Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y.

# Compare the left-singular vectors (u) to the eigenvectors of X
eigen_x$vectors
##            [,1]       [,2]       [,3]
## [1,] 0.99086430  0.0000000  0.1348627
## [2,] 0.07480836 -0.8320503 -0.5496326
## [3,] 0.11221254  0.5547002 -0.8244489
svd_A$u
##             [,1]          [,2]       [,3]
## [1,] -0.99086430 -7.771561e-16  0.1348627
## [2,] -0.07480836  8.320503e-01 -0.5496326
## [3,] -0.11221254 -5.547002e-01 -0.8244489
round(abs(eigen_x$vectors), 4) == round(abs(svd_A$u), 4) # compare magnitudes; signs may differ
##      [,1] [,2] [,3]
## [1,] TRUE TRUE TRUE
## [2,] TRUE TRUE TRUE
## [3,] TRUE TRUE TRUE

The two sets of vectors agree up to sign: some columns differ by a factor of -1, but any non-zero scalar multiple of an eigenvector is still an eigenvector, so the left-singular vectors are indeed eigenvectors of X. The same comparison is repeated below for the eigenvectors of Y and the right-singular vectors.

eigen_y$vectors   # eigenvectors of Y
##            [,1]       [,2]       [,3]
## [1,] -0.1747144  0.2223748  0.9591790
## [2,]  0.1889822  0.9636241 -0.1889822
## [3,]  0.9663129 -0.1482499  0.2103839
svd_A$v           # right-singular vectors of A
##            [,1]       [,2]       [,3]
## [1,]  0.1747144  0.2223748 -0.9591790
## [2,] -0.1889822  0.9636241  0.1889822
## [3,] -0.9663129 -0.1482499 -0.2103839
round(abs(eigen_y$vectors), 4) == round(abs(svd_A$v), 4) # compare magnitudes; signs may differ
##      [,1] [,2] [,3]
## [1,] TRUE TRUE TRUE
## [2,] TRUE TRUE TRUE
## [3,] TRUE TRUE TRUE
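
Beyond comparing magnitudes, a more direct check (a short sketch using the objects already computed above) is to verify the eigenvector equation itself: each column \(u_i\) of svd_A$u should satisfy \(Xu_i = d_i^2u_i\), and each column \(v_i\) of svd_A$v should satisfy \(Yv_i = d_i^2v_i\).

# Both differences should be ~0, up to floating-point error
max(abs(x %*% svd_A$u - svd_A$u %*% diag(svd_A$d^2)))
max(abs(y %*% svd_A$v - svd_A$v %*% diag(svd_A$d^2)))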

The non-zero eigenvalues of X and Y are identical, and they are the squares of the non-zero singular values of A.

# all.equal() tests near-equality within a numerical tolerance
all.equal(eigen_x$values, (svd_A$d)^2)
## [1] TRUE
all.equal(eigen_y$values, (svd_A$d)^2)
## [1] TRUE
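
One way to see why this holds is to substitute the SVD \(A = U\Sigma V^T\) into the definitions of X and Y: \[X = AA^T = U\Sigma V^TV\Sigma U^T = U\Sigma^2U^T, \qquad Y = A^TA = V\Sigma U^TU\Sigma V^T = V\Sigma^2V^T.\] Because \(U\) and \(V\) are orthogonal and \(\Sigma^2\) is diagonal, the columns of \(U\) are eigenvectors of X, the columns of \(V\) are eigenvectors of Y, and the shared eigenvalues are the squared singular values.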

Problem Set 2

Write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. You may use built-in R commands to compute the determinant. Your function should have the following signature: B = myinverse(A), where A is a matrix, B is its inverse, and \(A \times B = I\).
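
The function below relies on the classical adjugate (cofactor) formula: if \(M_{ij}\) denotes the determinant of the submatrix obtained by deleting row \(i\) and column \(j\) of A, the cofactor is \(C_{ij} = (-1)^{i+j}M_{ij}\), and \[A^{-1} = \frac{1}{\det(A)}C^T.\]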

A square matrix is full rank if all of its rows and columns are linearly independent; equivalently, its determinant is non-zero, which is what the function checks before inverting.
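
Full rank can also be checked directly from the QR decomposition before attempting the inversion. A minimal sketch, using a small hypothetical matrix M that is not part of the assignment:

# A square matrix is full rank when its rank equals its dimension
M <- matrix(c(2, 0, 1, 1, 3, 0, 0, -1, 4), 3, 3)   # hypothetical example
qr(M)$rank == nrow(M)   # TRUE for a full-rank (invertible) matrix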

myinverse <- function(A) {

  # Check that A is square
  if (nrow(A) != ncol(A)) {
    stop('matrix must be square')
  }

  # Check that A is invertible: a full-rank square matrix has a non-zero determinant
  if (det(A) == 0) {
    stop('matrix is non-invertible')
  }

  n <- nrow(A)

  # Cofactor matrix, filled entry by entry
  co_m <- matrix(0, nrow = n, ncol = n)

  # Each cofactor is the signed determinant of the minor obtained by
  # deleting row i and column j (drop = FALSE keeps the minor a matrix)
  for (i in 1:n) {
    for (j in 1:n) {
      co_m[i, j] <- (-1)^(i + j) * det(A[-i, -j, drop = FALSE])
    }
  }

  # Inverse of A = t(co_m) / det(A) -- the transposed cofactor matrix (adjugate)
  # divided by the determinant
  return(t(co_m) / det(A))
}

# Try it out 
A <- matrix(c(1,9,18,1,2,3,-5,-4,3), 3, 3)
A
##      [,1] [,2] [,3]
## [1,]    1    1   -5
## [2,]    9    2   -4
## [3,]   18    3    3
B <- myinverse(A)
B
##       [,1]       [,2]       [,3]
## [1,] -0.50  0.5000000 -0.1666667
## [2,]  2.75 -2.5833333  1.1388889
## [3,]  0.25 -0.4166667  0.1944444
# Check against solve(), R's built-in inverse for square matrices
s_A <- solve(A)
round(s_A,2) == round(B,2)
##      [,1] [,2] [,3]
## [1,] TRUE TRUE TRUE
## [2,] TRUE TRUE TRUE
## [3,] TRUE TRUE TRUE
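
Finally, the problem statement requires \(A \times B = I\); a direct check of the product (rounded to suppress floating-point noise) should reproduce the 3 × 3 identity matrix:

# A %*% B should be the identity matrix up to floating-point error
round(A %*% B, 10)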