Assignment 4

Problem Set 1

In this problem, we’ll verify using R that the SVD and eigenvalues are related as worked out in the weekly module. Given the 2 × 3 matrix A \[ A = \begin{bmatrix} 1 & 2 & 3\\ -1 & 0 & 4\\ \end{bmatrix} \] write code in R to compute \[X = AA^T\] and \[Y = A^TA\]. Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R.

#set up our matrix A
A = matrix(c(1,2,3,-1,0,4), nrow = 2, byrow = TRUE)
#find the transpose of A

rows <- dim(A)[1] #rows is the number of rows in A, in this case 2
cols <- dim(A)[2] #cols is the number of columns, which is 3
A.trans <- matrix(A, nrow = cols, ncol = rows, byrow = TRUE) #refilling A's column-major data by rows into a cols x rows matrix effectively switches the rows and columns, giving us A transpose
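As a quick sanity check, we can compare the hand-built transpose against R’s built-in t() function:

all.equal(A.trans, t(A)) #should be TRUE if the reshaping trick worked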

AAT <- function(A, B){
  n <- nrow(A) #the result is n x n, where n is the number of rows of A
  X <- matrix(NA, nrow = n, ncol = n)
    for(i in 1:n){
        for(j in 1:n){
            X[i,j] <- sum(A[i,]*B[,j]) #dot product of row i of A with column j of B
        }
  }
  return(X)
}
#the function above takes two matrices A and B, creates an empty n x n matrix, and fills entry [i,j] with the dot product of row i of A and column j of B; called as AAT(A, A.trans), it is essentially computing A%*%A.trans

X <- AAT(A, A.trans)
X
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
#to check, we run the built-in A%*%A.trans and confirm it matches

A%*%A.trans
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
#at first the output contained NAs; the fix was setting ncol = rows (rather than ncol = cols) when creating the empty matrix, so that its dimensions match the result of A%*%A.trans
ATA <- function(A, B){
  n <- nrow(A) #here A is passed A.trans, so the result is 3 x 3
  Y <- matrix(NA, nrow = n, ncol = n)
    for(i in 1:n){
        for(j in 1:n){
            Y[i,j] <- sum(A[i,]*B[,j]) #dot product of row i of A with column j of B
        }
  }
  return(Y)
}
#this does the same as above but for A.trans%*%A; we check below to ensure that we get the same resulting matrix

Y <- ATA(A.trans, A)
Y
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25
A.trans%*%A
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25
#We have shown that our matrix multiplication function and the built-in multiplication function give the same result
#finding the eigenvalues and eigenvectors of X
x.eigen <- eigen(X)
x.val <- x.eigen$values
x.vect <- x.eigen$vectors

#finding the eigenvalues and eigenvectors of Y
y.eigen <- eigen(Y)
y.val <- y.eigen$values
y.vect <- y.eigen$vectors

Then, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command. Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. In addition, the two non-zero eigenvalues (the 3rd value will be very close to zero, if not zero) of both X and Y are the same and are squares of the non-zero singular values of A. Your code should compute all these vectors and scalars and store them in variables.

A.svd <- svd(A)
A.svd$d #singular values (the diagonal of the Sigma matrix)
## [1] 5.157693 2.097188
LS <- A.svd$u #left singular vectors
LS[,1] <- -LS[,1] #flip the sign of the first column so it lines up with the eigenvector output
RS <- svd(A, nv = 3)$v #right singular vectors; nv = 3 requests the full 3-column V
RS <- -RS #flip signs to compare with the eigenvectors of Y
y.vect
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001
RS
##             [,1]       [,2]       [,3]
## [1,] -0.01856629  0.6727903  0.7396003
## [2,]  0.25499937  0.7184510 -0.6471502
## [3,]  0.96676296 -0.1765824  0.1849001
all.equal(y.vect, RS)
## [1] "Mean relative difference: 0.7159335"
#My calculated A transpose matches the built-in result, as do X and Y, yet all.equal reports a difference here. This is not a mistake: eigenvectors and singular vectors are only defined up to sign, and y.vect and RS differ only in the sign of the second column, so they describe the same eigenspaces.
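One way to confirm the two sets agree once the arbitrary signs are ignored is to compare them column by column in absolute value:

all.equal(abs(y.vect), abs(RS)) #the columns match up to a factor of -1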
x.vect
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
LS
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
all.equal(x.eigen$vectors, LS) #https://stat.ethz.ch/pipermail/r-help/2012-June/315408.html
## [1] TRUE
#when tested with all.equal, we see that our two matrices have equal values
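The assignment also asks us to confirm that the non-zero eigenvalues of X and Y equal the squares of the singular values of A. A short check using the variables already defined above:

x.val #eigenvalues of X
A.svd$d^2 #squared singular values of A
all.equal(x.val, A.svd$d^2) #the two eigenvalues of X are the squared singular values
all.equal(y.val[1:2], A.svd$d^2) #the first two eigenvalues of Y match as well
round(y.val[3], 10) #the third eigenvalue of Y is (numerically) zero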

Please add enough comments in your code to show me how to interpret your steps.

Problem Set 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature: B = myinverse(A) where A is a matrix and B is its inverse and AB = I. The off-diagonal elements of I should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable but the function myinverse should be correct and must use co-factors and determinant of A to compute the inverse.

Please submit PS1 and PS2 in an R-markdown document with your first initial and last name.

Just some things to keep in mind:

+ The co-factor for entry (i, j) is the determinant of the sub-matrix generated when we remove row i and column j of A.
+ We alternate signs across the co-factor matrix, multiplying by (-1)^(i+j).
+ The determinant of A cannot be 0, otherwise the inverse does not exist.
+ Matrix A must be square.
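In formula form, with \(M_{ij}\) denoting the sub-matrix of A obtained by deleting row i and column j, the co-factor matrix and the inverse are

\[ C_{ij} = (-1)^{i+j}\det(M_{ij}), \qquad A^{-1} = \frac{1}{\det(A)}\,C^{T} \]

and this is exactly what the function below implements.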

myinverse <- function(A){
  #check first that A is square
  row.A = nrow(A)
  col.A = ncol(A)

  if (row.A != col.A) {stop("Matrix is not square")}
  #check to make sure that the determinant is not 0
  if (det(A) == 0) {stop("Determinant is 0. Inverse does not exist")}

  coeff <- matrix(NA, row.A, col.A)
  for(i in 1:row.A) {
       for(j in 1:col.A){
          coeff[i, j] <- (-1)^(i+j) * det(A[-i, -j]) #looping through to build the co-factor matrix
          }
     }
  B <- t(coeff) / det(A) #this is from A^-1 = C^T / det(A), where C is the co-factor matrix (coeff)
  return(B)
}

We would like to check that our function works.

A = matrix(c(1,2,3,-1,0,4), nrow = 2, byrow = TRUE)
myinverse(A) #we expect the "not square" error, since this A is 2 x 3
A = matrix(c(-1, 0, -1, 0, 1, 0, 1, 0, 1), nrow = 3, byrow = TRUE)
myinverse(A) #this matrix is singular (det = 0), so we expect the determinant error
A =  round(matrix(rnorm(9),3, 3),2)
myinverse(A) #check with a randomly generated matrix
##            [,1]        [,2]       [,3]
## [1,] -40.450225 -30.0876500 -13.860932
## [2,]   8.873290   6.9623156   3.342931
## [3,]   1.862563   0.8153489   1.031926
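To round out the check, we can confirm that multiplying A by the computed inverse gives (numerically) the identity matrix, and that the result agrees with R’s built-in solve():

B <- myinverse(A)
round(A %*% B, 10) #should be the 3 x 3 identity, up to small numerical error
all.equal(B, solve(A)) #agreement with the built-in inverse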