Problem Set # 1

In this problem, we’ll verify using R that SVD and Eigenvalues are related as worked out in the weekly module.

Given a 2 × 3 matrix A, \(A\quad =\quad \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{bmatrix}\)

A <- matrix(c(1,2,3, -1, 0, 4), nrow = 2, ncol = 3, byrow = TRUE)

# Display matrix A 

A
##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4

We will now compute X and Y as follows: \(X\quad =\quad A\cdot { A }^{ T }\) and \(Y\quad =\quad { A }^{ T }\cdot A\)

X <- A%*%t(A)
Y <- t(A)%*%A

# Display X and Y
X
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
Y
##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25

We are now going to compute the eigenvalues and eigenvectors of X and Y. This will be done with R's built-in eigen function.

# Eigen Values and Vectors for X
eigen(X)$values
## [1] 26.601802  4.398198
eigen(X)$vectors
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043
# Eigen Values and Vectors for Y
eigen(Y)$values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
eigen(Y)$vectors
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001

On examination, the two eigenvalues of X are the same as the first two eigenvalues of Y, while the third eigenvalue of Y is numerically zero. Taking the square roots of these shared eigenvalues gives the singular values of A, denoted s1 and s2.

s1 <- sqrt(eigen(X)$values[1])
s2 <- sqrt(eigen(X)$values[2])
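
As a quick numerical check (a small sketch using the X and Y computed above), the eigenvalues of X should agree with the first two eigenvalues of Y, and the remaining eigenvalue of Y should vanish to within rounding:

# Difference between the eigenvalues of X and the first two eigenvalues of Y;
# both entries should round to zero
round(eigen(X)$values - eigen(Y)$values[1:2], 12)

# The third eigenvalue of Y should also round to zero
round(eigen(Y)$values[3], 12)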

Using these singular values, we will build the 2 × 3 diagonal matrix S.

S <- diag(c(s1, s2), nrow = 2, ncol = 3)
S
##          [,1]     [,2] [,3]
## [1,] 5.157693 0.000000    0
## [2,] 0.000000 2.097188    0

Now, using the built-in svd function in R, we will decompose the matrix A.

# Singular Values, equivalent to s1 and s2 calculated above
d <- svd(A)$d
d
## [1] 5.157693 2.097188
# Left-Singular vectors
u <- svd(A)$u
u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
# Right-Singular vectors
v <- svd(A)$v
v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824

We want to show that U is composed of eigenvectors of X, so we will compare the eigenvectors of X with the left-singular vectors.

x1 <- eigen(X)$vectors[,1]
x2 <- eigen(X)$vectors[,2]

u1 <- u[,1]
u2 <- u[,2]

round(x1 - (-1*u1),12)
## [1] 0 0
round(x2 - u2, 12)
## [1] 0 0

From the result of the operations, it is clear that the first column vector of U (u1) is -1 times the first eigenvector of X. Since an eigenvector is only determined up to a sign (any nonzero scalar multiple of an eigenvector is still an eigenvector), u1 is an eigenvector of X. The second column vector of U (u2) is equal to the second eigenvector of X.
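
Equivalently (a minimal check using the X, u1, and u2 defined above), u1 and u2 should satisfy the eigenvalue equation for X directly, which does not depend on the sign convention:

# X %*% u1 should equal the first eigenvalue of X times u1 (and similarly for u2);
# both differences should round to zero
round(X %*% u1 - eigen(X)$values[1] * u1, 12)
round(X %*% u2 - eigen(X)$values[2] * u2, 12)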

Similarly, we will compare the eigenvectors of Y with the right-singular vectors.

y1 <- eigen(Y)$vectors[,1]
y2 <- eigen(Y)$vectors[,2]

v1 <- v[,1]
v2 <- v[,2]

round(y1 - (-1*v1), 12)
## [1] 0 0 0
round(y2 - v2,12)
## [1] 0 0 0

From the result of the operations, we can conclude that the first column vector of V (v1) is -1 times the first eigenvector of Y, and the second column vector of V (v2) is equal to the second eigenvector of Y.
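
To tie the pieces together, a minimal sketch (using the u, d, and v returned by svd above) reconstructs A from its factors; the difference from the original A should be numerically zero:

# Rebuild A as U %*% diag(d) %*% t(V); the result should match A up to rounding
round(u %*% diag(d) %*% t(v) - A, 12)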

Problem Set # 2

Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors.
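
The procedure amounts to the standard co-factor (adjugate) formula, which the function below implements: \({ A }^{ -1 }\quad =\quad \frac { 1 }{ \det(A) } { C }^{ T }\), where \({ C }_{ ij }\quad =\quad { (-1) }^{ i+j }\det({ M }_{ ij })\) and \({ M }_{ ij }\) is the submatrix of A obtained by deleting row i and column j.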

myinverse <- function(A){
  
  ### This function computes the inverse of a well-conditioned full-rank square matrix using co-factors.
  ### Hence no input validation is done.
  ### The built-in determinant function in R is used to compute each co-factor.
  
  # copy input matrix 
  tp_A <- A
  
  # Identify dimension of A 
  a_row <- nrow(A)

  # Create an NA filled matrix with same dimension as A, denoted C
  C <- matrix(data=NA,nrow=a_row,ncol=a_row)
  
  for (i in 1:a_row){
    # For each row, loop through the columns, form the corresponding submatrix M_ij
    # (deleting row i and column j), and store the signed co-factor in C[i,j]
    for (j in 1:a_row){
      M_ij <- tp_A[-i, -j, drop = FALSE]
      C[i,j] <- (-1)^(i+j) * det(M_ij)
    }   # end of for loop for columns of A
  }     # end of for loop for rows of A
  
  # find determinant of A and store it in d_A
  d_A <- det(tp_A)
  
  # Calculate inverse of A as 1/det(A)*t(C), denoted inv_A
  if (d_A != 0){
    inv_A <- t(C)/d_A
  }
  return(inv_A)
}

#Testing Function 
C <- matrix(c(1,2,4,2,-1,3,4,0,1), nrow=3, ncol=3, byrow = TRUE)
inv_C <- myinverse(C)

inv_C
##             [,1]        [,2]       [,3]
## [1,] -0.02857143 -0.05714286  0.2857143
## [2,]  0.28571429 -0.42857143  0.1428571
## [3,]  0.11428571  0.22857143 -0.1428571
solve(C)
##             [,1]        [,2]       [,3]
## [1,] -0.02857143 -0.05714286  0.2857143
## [2,]  0.28571429 -0.42857143  0.1428571
## [3,]  0.11428571  0.22857143 -0.1428571
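
The result agrees with R's built-in solve function. As one further check (a small sketch using the inv_C and C objects above), multiplying the computed inverse by C should return the 3 × 3 identity matrix:

# inv_C %*% C should be (numerically) the identity matrix
round(inv_C %*% C, 12)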