In this problem, we’ll verify using R that the SVD and eigenvalues are related as worked out in the weekly module. Given the \(2 \times 3\) matrix \(\textbf{A}\)
\[ A = \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{bmatrix} \]
write code in R to compute \(X = AA^{T}\) and \(Y = A^{T}A\). Then, compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R. Then, compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command. Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. In addition, show that the two non-zero eigenvalues (the 3rd value will be very close to zero, if not zero) of both X and Y are the same and are squares of the non-zero singular values of A. Your code should compute all these vectors and scalars and store them in variables. Please add enough comments in your code to show me how to interpret your steps.
First we will use R to get the transpose. This is done fairly simply by hand, but with more complicated matrices it’s better to let the computer do the work.
A <- matrix(c(1,2,3,-1,0,4), nrow = 2, ncol = 3, byrow = TRUE) # build A row by row as a 2 x 3 matrix
A
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] -1 0 4
A_T <- t(A) # transpose of A
A_T
## [,1] [,2]
## [1,] 1 -1
## [2,] 2 0
## [3,] 3 4
Next we compute X and Y.
X <- A %*% A_T # X = A A^T, a 2 x 2 matrix
X
## [,1] [,2]
## [1,] 14 11
## [2,] 11 17
Y <- A_T %*% A # Y = A^T A, a 3 x 3 matrix
Y
## [,1] [,2] [,3]
## [1,] 2 2 -1
## [2,] 2 4 6
## [3,] -1 6 25
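Both products are symmetric by construction, since \((AA^{T})^{T} = AA^{T}\) and likewise for \(A^{T}A\); this will matter below when we look at the orthogonality of their eigenvectors. A quick check (both calls should return TRUE):
isSymmetric(X) # X = A A^T is symmetric
isSymmetric(Y) # Y = A^T A is symmetric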
Now we get the eigenvalues and eigenvectors of X.
eigenX <- eigen(X) # eigendecomposition of X
eigenX
## eigen() decomposition
## $values
## [1] 26.601802 4.398198
##
## $vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
eVecX <- eigenX$vectors # eigenvectors of X (as columns)
eValX <- eigenX$values  # eigenvalues of X
R’s eigenvectors can look a little more complicated than necessary because they are scaled to unit length; here, however, the ratio of the components, \(0.7533635/0.6576043 \approx 1.146\), is not a simple rational number, so these eigenvectors cannot be written much more cleanly.
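As a one-line confirmation that eigen() scales each eigenvector to unit length (using the eVecX defined above; both entries should be 1):
colSums(eVecX^2) # squared length of each eigenvector column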
The eigenvalues and eigenvectors of Y are:
eigenY <- eigen(Y)
eigenY
## eigen() decomposition
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
##
## $vectors
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
eVecY <- eigenY$vectors # eigenvectors of Y (as columns)
eValY <- eigenY$values  # eigenvalues of Y
The third eigenvalue is effectively zero; the \(1.06 \times 10^{-16}\) is just rounding error. Rescaling the corresponding eigenvector by its last entry, it can be written
\[ \begin{bmatrix} 4 \\ -3.5 \\ 1 \end{bmatrix} \]
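As a quick sanity check (v3 is just a helper name for the rescaled vector), this vector should be sent to the zero vector by both A and Y, which is exactly what an eigenvector for the eigenvalue 0 must do:
v3 <- c(4, -3.5, 1) # rescaled third eigenvector of Y
A %*% v3            # should be (0, 0): v3 is in the null space of A
Y %*% v3            # should be (approximately) (0, 0, 0)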
Dividing through the other two eigenvectors does not produce rational entries, so I will leave them as they are. Note that, translating from scientific notation, the other two eigenvalues are 26.60180 and 4.398198, the same as those of X, as mentioned in the problem text.
eValX
## [1] 26.601802 4.398198
eValY
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
Note that X and Y are both symmetric matrices, so their eigenvectors ought to be orthogonal; since eigen() also normalizes them to unit length, eVecX and eVecY are orthogonal matrices. The eigenvectors are the columns of those matrices, but an orthogonal matrix has orthonormal rows exactly when it has orthonormal columns, so the row-wise dot products below check the same conditions.
eVecX[1,] %*% eVecX[1,] # unit length: should be 1
## [,1]
## [1,] 1
eVecX[1,] %*% eVecX[2,] # orthogonality: should be 0
## [,1]
## [1,] 0
eVecY[1,] %*% eVecY[3,] # should be (essentially) 0
## [,1]
## [1,] 5.551115e-17
eVecX[2,] %*% eVecX[2,] # should be 1
## [,1]
## [1,] 1
eVecY[2,] %*% eVecY[3,] # should be (essentially) 0
## [,1]
## [1,] -4.163336e-17
eVecY[3,] %*% eVecY[3,] # should be 1
## [,1]
## [1,] 1
Within R’s rounding error, values on the order of \(10^{-17}\) are effectively zero, so these conditions hold.
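Instead of checking pairs one at a time, all of the orthonormality conditions can be verified at once: a matrix \(V\) with orthonormal columns satisfies \(V^{T}V = I\). A compact check using the objects above (rounding suppresses the \(10^{-17}\)-level noise):
round(crossprod(eVecX), 12) # t(eVecX) %*% eVecX, should print the 2 x 2 identity
round(crossprod(eVecY), 12) # should print the 3 x 3 identity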
We will now compute the singular values and left-singular vectors of A. For a side-by-side comparison, I’ll include the eigenvectors of X. The left-hand part of the decomposition is requested in svd() by setting nu = nrow(A) = 2 and nv = 0.
left_sing <- svd(A, nu = 2, nv = 0) # d = singular values, u = left-singular vectors
left_sing
## $d
## [1] 5.157693 2.097188
##
## $u
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
eVecX
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
The first column differs by a factor of \(-1\) (singular vectors, like eigenvectors, are only determined up to sign), but they are otherwise the same vectors.
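We can also confirm directly that the columns of \(U\) are eigenvectors of X, i.e. that \(XU = U\Lambda\) where \(\Lambda\) is the diagonal matrix of eigenvalues. A short check using the objects already defined (the result should be a 2 x 2 matrix of values at rounding-error level):
X %*% left_sing$u - left_sing$u %*% diag(eValX) # should be essentially the zero matrix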
We will now compute the right-singular vectors of A. For a side-by-side comparison, I’ll include the eigenvectors of Y. The right-hand part of the decomposition is requested by setting nu = 0 and nv = ncol(A) = 3.
right_sing <- svd(A, nu = 0, nv = 3) # d = singular values, v = right-singular vectors
right_sing
## $d
## [1] 5.157693 2.097188
##
## $v
## [,1] [,2] [,3]
## [1,] 0.01856629 -0.6727903 -0.7396003
## [2,] -0.25499937 -0.7184510 0.6471502
## [3,] -0.96676296 0.1765824 -0.1849001
eVecY
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
Again, two of the columns differ by a factor of \(-1\), but they are otherwise the same.
This also means that the singular vectors form an orthonormal set, just like the eigenvectors.
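The same check works on the right: \(YV = V\Lambda\), where the third eigenvalue is (numerically) zero because A has rank 2. A sketch using the objects above:
Y %*% right_sing$v - right_sing$v %*% diag(c(right_sing$d^2, 0)) # should be essentially the zero matrix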
As for the eigenvalues:
(right_sing$d)^2
## [1] 26.601802 4.398198
eValX
## [1] 26.601802 4.398198
(left_sing$d)^2
## [1] 26.601802 4.398198
eValY
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
As stated in the problem text, the non-zero eigenvalues of X and Y are the same, and they are the squares of the non-zero singular values of A.
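A compact numerical confirmation (all.equal compares up to floating-point tolerance; both calls should return TRUE):
all.equal((left_sing$d)^2, eValX)        # squared singular values vs eigenvalues of X
all.equal((right_sing$d)^2, eValY[1:2])  # and vs the non-zero eigenvalues of Y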
Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature:
B = myinverse(A)
where A is a matrix and B is its inverse and \(A \times B = I\). The off-diagonal elements of I should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable but the function myinverse should be correct and must use co-factors and determinant of A to compute the inverse. Please submit PS1 and PS2 in an R-markdown document with your first initial and last name.
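For reference, a minimal statement of the cofactor construction the function below implements: the cofactor of entry \((i, j)\) is \(C_{ij} = (-1)^{i+j} M_{ij}\), where \(M_{ij}\) is the determinant of the submatrix of A with row \(i\) and column \(j\) deleted, and the inverse is the transposed cofactor matrix (the adjugate) divided by the determinant:
\[ A^{-1} = \frac{1}{\det(A)}\, C^{T} \]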
# Make a random 5 x 5 test matrix first (different every time; a singular draw is possible but unlikely):
A <- matrix(sample(0:9, 25, replace = TRUE), nrow = 5, ncol = 5)
myinverse <- function(M){
  # Validate the input: M must be a square matrix with a non-zero determinant.
  # && short-circuits, so det() is only attempted once we know M is square.
  if(is.matrix(M) && nrow(M) == ncol(M) && det(M) != 0){
    print("Calculating Inverse of inputted Matrix:")
    detM <- det(M)
    C <- matrix(0, nrow = nrow(M), ncol = ncol(M)) # will hold the adjugate of M
    for(i in 1:nrow(M)){
      for(j in 1:ncol(M)){
        # Cofactor of entry (i, j): the signed determinant of the minor of M with
        # row i and column j removed. Assigning it to C[j, i] (indices flipped)
        # builds the transposed cofactor matrix (the adjugate) directly,
        # which saves calculating the transpose later.
        C[j, i] <- (-1)^(i + j) * det(M[-i, -j])
      } # for j
    } # for i
    invM <- C/detM # inverse = adjugate / determinant, per the procedure in the reading
    return(invM)
  } else {
    # Stop with an informative error instead of returning an undefined object.
    stop("Invalid input: M should be a square matrix with a non-zero determinant")
  }
} # myinverse
A
## [,1] [,2] [,3] [,4] [,5]
## [1,] 7 3 9 4 7
## [2,] 9 7 3 8 9
## [3,] 4 0 3 0 9
## [4,] 4 0 7 0 7
## [5,] 0 4 8 7 7
B = myinverse(A)
## [1] "Calculating Inverse of inputted Matrix:"
B
## [,1] [,2] [,3] [,4] [,5]
## [1,] 0.46850394 -0.13779528 0.27559055 -0.5354331 -0.11023622
## [2,] -1.88845144 0.96719160 -1.60104987 2.7296588 -0.02624672
## [3,] -0.08923885 0.02624672 -0.21916010 0.3162730 0.02099738
## [4,] 1.35958005 -0.63517060 1.10367454 -2.0538058 0.09186352
## [5,] -0.17847769 0.05249344 0.06167979 0.1325459 0.04199475
I <- A %*% B # should be (numerically) the 5 x 5 identity matrix
I
## [,1] [,2] [,3] [,4] [,5]
## [1,] 1.000000e+00 -2.109424e-15 -5.329071e-15 -9.436896e-15 3.885781e-16
## [2,] 6.661338e-16 1.000000e+00 -1.598721e-14 -1.665335e-14 6.661338e-16
## [3,] 1.998401e-15 0.000000e+00 1.000000e+00 -2.220446e-15 3.330669e-16
## [4,] 2.220446e-15 5.551115e-17 2.664535e-15 1.000000e+00 3.330669e-16
## [5,] 2.220446e-16 -2.553513e-15 -1.332268e-14 -9.436896e-15 1.000000e+00
As you can see, the off-diagonal entries are on the order of \(10^{-14}\) to \(10^{-17}\), which is zero up to rounding error in R, and the main-diagonal entries are 1 or very close to 1. The function is also reasonably robust: the if-else check makes sure the correct data type is entered and that the matrix meets the conditions to be invertible, namely that it is square with a non-zero determinant.
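As one last sanity check (not required by the problem statement), we can compare the result against R’s built-in solve(); the largest absolute difference should also be at rounding-error level:
max(abs(B - solve(A))) # difference between the cofactor-based inverse and solve(A)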