In this problem, we'll verify using R that the SVD and eigendecomposition are related as worked out in the weekly module. Given the 2 × 3 matrix A:
\[A=\left[ \begin{matrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{matrix} \right]\] we will (1) write code in R to compute \(X=A{ A }^{ T }\) and \(Y={ A }^{ T }A\); (2) compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R; (3) compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command; (4) examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y; and (5) show that the two non-zero eigenvalues of both X and Y (the third eigenvalue of Y will be very close to zero, if not zero) are the same and are the squares of the non-zero singular values of A.
Your code should compute all these vectors and scalars and store them in variables. Please add enough comments in your code to show me how to interpret your steps.
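As a reminder of why these objects are related (from the weekly module): if \(A = U\Sigma { V }^{ T }\) is the SVD of A, then, since U and V are orthogonal,
\[X = A{ A }^{ T } = U\Sigma { V }^{ T }V{ \Sigma }^{ T }{ U }^{ T } = U(\Sigma { \Sigma }^{ T }){ U }^{ T },\qquad Y = { A }^{ T }A = V({ \Sigma }^{ T }\Sigma ){ V }^{ T },\]
so the columns of U are eigenvectors of X, the columns of V are eigenvectors of Y, and the non-zero eigenvalues of both are the squared singular values \({ \sigma }_{ i }^{ 2 }\).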
# Define the 2 x 3 matrix A
A = matrix(c(1,2,3,-1,0,4), nrow=2, ncol=3, byrow=TRUE)
First, we compute the transpose of A.
# Transpose of A
(AT = t(A))
## [,1] [,2]
## [1,] 1 -1
## [2,] 2 0
## [3,] 3 4
Now, we can calculate \(X=A{ A }^{ T }\) and \(Y={ A }^{ T }A\)
# Calculating X
(X <- A%*%AT)
## [,1] [,2]
## [1,] 14 11
## [2,] 11 17
# Calculating Y
(Y <- AT%*%A)
## [,1] [,2] [,3]
## [1,] 2 2 -1
## [2,] 2 4 6
## [3,] -1 6 25
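Note that X and Y are symmetric by construction, since \({ (A{ A }^{ T }) }^{ T }=A{ A }^{ T }\) and \({ ({ A }^{ T }A) }^{ T }={ A }^{ T }A\); this guarantees real eigenvalues and orthogonal eigenvectors. A quick optional check (not required by the assignment):
# Both of these return TRUE for the matrices computed above
isSymmetric(X)
isSymmetric(Y)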
Using R's built-in eigen() function, we can compute the eigenvalues and eigenvectors of both matrices.
# Calculating the eigenvalues and eigenvectors of X
(eigenX <- eigen(X))
## eigen() decomposition
## $values
## [1] 26.601802 4.398198
##
## $vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
# Calculating the eigenvalues and eigenvectors of Y
(eigenY <- eigen(Y))
## eigen() decomposition
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
##
## $vectors
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
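The third eigenvalue of Y is on the order of \({ 10 }^{ -16 }\), i.e. zero up to floating-point error. This is expected: Y is 3 × 3 but has the same rank as A, which is 2, so Y must have exactly one zero eigenvalue. An optional confirmation:
# The rank of A is 2, so the 3 x 3 matrix Y has one zero eigenvalue
qr(A)$rank   # returns 2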
Next, we build the matrix of singular values. The singular values of A are the square roots of the non-zero eigenvalues of X (not the eigenvalues themselves), placed on the diagonal of a 2 × 3 matrix.
# Computing the singular values: square roots of the eigenvalues of X
(S <- diag(sqrt(eigenX$values), nrow = 2, ncol = 3))
## [,1] [,2] [,3]
## [1,] 5.157693 0.000000 0
## [2,] 0.000000 2.097188 0
# Singular value decomposition of A (computed once, components extracted below)
A_svd <- svd(A)
# Singular values of A
(d <- A_svd$d)
## [1] 5.157693 2.097188
# Left-singular vectors of A
(u <- A_svd$u)
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
# Right-singular vectors of A
(v <- A_svd$v)
## [,1] [,2]
## [1,] 0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296 0.1765824
# Eigenvectors of X and the left-singular vectors of A
X1 <- eigenX$vectors[,1]
X2 <- eigenX$vectors[,2]
U1 <- u[,1]
U2 <- u[,2]
# Eigenvectors are unique only up to sign, so allow a sign flip
(round(X1 - (-1*U1),12))
## [1] 0 0
(round(X2 - U2, 12))
## [1] 0 0
# Eigenvectors of Y and the right-singular vectors of A
Y1 <- eigenY$vectors[,1]
Y2 <- eigenY$vectors[,2]
V1 <- v[,1]
V2 <- v[,2]
(round(Y1 - (-1*V1),12))
## [1] 0 0 0
(round(Y2 - V2, 12))
## [1] 0 0 0
These results show that the first left-singular vector of A equals the first eigenvector of X up to a sign flip (eigenvectors and singular vectors are unique only up to sign), while the second left-singular vector equals the second eigenvector of X exactly; the same holds for the right-singular vectors of A and the eigenvectors of Y. So the singular vectors of A are indeed eigenvectors of X and Y.
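To complete the verification (requirement 5), we can also check numerically that the squares of the singular values equal the non-zero eigenvalues of X and Y. A minimal check using the objects defined above; both differences should be zero up to floating-point precision:
# d^2 should match the eigenvalues of X and the first two eigenvalues of Y
round(d^2 - eigenX$values, 12)       # expect: 0 0
round(d^2 - eigenY$values[1:2], 12)  # expect: 0 0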
Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature:
B = myinverse(A)
where A is a matrix and B is its inverse, such that A×B = I, the identity matrix. The off-diagonal elements of I should be close to zero, if not zero; likewise, the diagonal elements should be close to 1, if not 1. Small numerical-precision errors are acceptable, but the function myinverse should be correct and must use the co-factors and determinant of A to compute the inverse.
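As a quick recap of the method: the co-factor of entry \((i,j)\) is \({ C }_{ ij }={ (-1) }^{ i+j }{ M }_{ ij }\), where \({ M }_{ ij }\) is the determinant of the minor obtained by deleting row i and column j of A. The inverse is then the transposed co-factor matrix (the adjugate) divided by the determinant:
\[{ A }^{ -1 }=\frac { 1 }{ \det(A) } { C }^{ T }\]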
NOTE: I use Q as the function argument name so it does not clash with the matrix A defined above.
cofactors <- function(Q) {
  # Co-factor C[i,j] = (-1)^(i+j) * det(minor M[i,j])
  C <- Q
  for (i in 1:nrow(Q)) {
    for (j in 1:ncol(Q)) {
      # drop = FALSE keeps the minor a matrix even when it is 1 x 1,
      # so the function also works for 2 x 2 inputs
      C[i, j] <- det(Q[-i, -j, drop = FALSE]) * (-1)^(i + j)
    }
  }
  return(C)
}

myinverse <- function(Q) {
  # Inverse = adjugate / determinant, where the adjugate is the
  # transpose of the co-factor matrix
  Q_adjugate <- t(cofactors(Q))
  return(Q_adjugate / det(Q))
}
A sample 5x5 matrix is created to test the above function.
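The code that produced the output below was not echoed; a sketch that reproduces it, with the printed sample matrix hard-coded and M and B as assumed variable names, is:
# Sample matrix (hard-coded to match the printed output below)
M <- matrix(c(0, 3, 2, 8, 4,
              0, 4, 4, 8, 4,
              2, 8, 3, 8, 0,
              2, 6, 6, 8, 5,
              1, 7, 2, 2, 2), nrow = 5, byrow = TRUE)
print("Sample Matrix"); M
print("The Co-Factor of the Sample"); cofactors(M)
print("The inverse of the Sample"); (B <- myinverse(M))
print("Confirming multiplying the Inverse by the Matrix results in identity pattern")
M %*% B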
## [1] "Sample Matrix"
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0 3 2 8 4
## [2,] 0 4 4 8 4
## [3,] 2 8 3 8 0
## [4,] 2 6 6 8 5
## [5,] 1 7 2 2 2
## [1] "The Co-Factor of the Sample"
## [,1] [,2] [,3] [,4] [,5]
## [1,] 320 -24 -416 100 240
## [2,] -782 64 396 -17 -212
## [3,] 72 16 -8 76 -160
## [4,] 424 -96 48 -28 104
## [5,] -136 160 -80 -96 112
## [1] "The inverse of the Sample"
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.37383178 -0.91355140 0.084112150 0.49532710 -0.15887850
## [2,] -0.02803738 0.07476636 0.018691589 -0.11214953 0.18691589
## [3,] -0.48598131 0.46261682 -0.009345794 0.05607477 -0.09345794
## [4,] 0.11682243 -0.01985981 0.088785047 -0.03271028 -0.11214953
## [5,] 0.28037383 -0.24766355 -0.186915888 0.12149533 0.13084112
## [1] "Confirming multiplying the Inverse by the Matrix results in identity pattern"
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.000000e+00 1.110223e-15 -5.551115e-17 2.164935e-15 8.326673e-16
## [2,] -5.551115e-17 1.000000e+00 -5.551115e-17 -2.220446e-16 -1.110223e-16
## [3,] 0.000000e+00 1.776357e-15 1.000000e+00 3.275158e-15 1.609823e-15
## [4,] -1.387779e-16 -5.551115e-16 -2.775558e-16 1.000000e+00 -1.387779e-16
## [5,] 5.551115e-17 2.220446e-16 4.996004e-16 2.775558e-16 1.000000e+00
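As an extra sanity check (beyond the assignment's required output), the co-factor-based inverse can be compared against R's built-in solve(); assuming M holds the sample matrix above, the difference should be zero up to numerical precision:
# myinverse() should agree with solve() up to floating-point error
round(myinverse(M) - solve(M), 12)   # expect a 5 x 5 matrix of zeros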