DATA605 ASSIGNMENT 4
1 Problem Set 1
In this problem, we’ll verify using R that the SVD and eigenvalues are related as worked out in the weekly module. Given the \(2 \times 3\) matrix \(A\) \[A=\begin{bmatrix}1 & 2 & 3\\ -1 & 0 & 4\end{bmatrix}\qquad(1)\]
write code in R to compute \(X=AA^{T}\) and \(Y=A^{T}A\). Then, compute the eigenvalues and eigenvectors of \(X\) and \(Y\) using the built-in commands in R.
Then, compute the left-singular vectors, singular values, and right-singular vectors of \(A\) using the svd command. Examine the two sets of singular vectors and show that they are indeed eigenvectors of \(X\) and \(Y\). In addition, show that the two non-zero eigenvalues (the third eigenvalue of \(Y\) will be very close to zero, if not zero) of both \(X\) and \(Y\) are the same and are the squares of the non-zero singular values of \(A\).
Your code should compute all these vectors and scalars and store them in variables. Please add enough comments in your code to show me how to interpret your steps.
Answer:
1.1 Step 1: Define Matrix \(A\):
Define variable \(A=\begin{bmatrix}1 & 2 & 3\\ -1 & 0 & 4\end{bmatrix}\).
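The chunk that produced the labelled output below is not reproduced in this rendering; a minimal sketch that yields it is:
# Define A as a 2 x 3 matrix, filled by row
A <- matrix(c( 1, 2, 3,
              -1, 0, 4), nrow = 2, byrow = TRUE)
list('Matrix A:' = A)   # the list() wrapper just labels the printed result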
## $`Matrix A:`
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] -1 0 4
1.2 Step 2: Compute \(X=AA^{T}\)
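A sketch of the corresponding chunk (assuming A as defined in Step 1):
# X = A %*% t(A) is a 2 x 2 matrix
X <- A %*% t(A)
list('Matrix X:' = X)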
## $`Matrix X:`
## [,1] [,2]
## [1,] 14 11
## [2,] 11 17
1.3 Step 3: Compute \(Y=A^{T}A\)
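Similarly, a sketch for \(Y\):
# Y = t(A) %*% A is a 3 x 3 matrix
Y <- t(A) %*% A
list('Matrix Y:' = Y)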
## $`Matrix Y:`
## [,1] [,2] [,3]
## [1,] 2 2 -1
## [2,] 2 4 6
## [3,] -1 6 25
1.4 Step 4: Compute eigenvalues & eigenvectors of \(X\)
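The decomposition below comes from R’s built-in eigen() command; a sketch of the call:
# eigenvalues and eigenvectors of X
eigen(X)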
## eigen() decomposition
## $values
## [1] 26.601802 4.398198
##
## $vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
1.5 Step 5: Compute eigenvalues & eigenvectors of \(Y\)
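Again using eigen(), this time on \(Y\):
# eigenvalues and eigenvectors of Y; the third eigenvalue is numerically ~0
eigen(Y)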
## eigen() decomposition
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
##
## $vectors
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
1.6 Step 6: Compute SVD of \(A\)
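The output below is the full SVD; a sketch of the call (nu and nv request all left- and right-singular vectors, which is why \(v\) is \(3 \times 3\)):
# SVD of A: d = singular values, u = left-singular vectors, v = right-singular vectors
svd(A, nu = nrow(A), nv = ncol(A))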
## $d
## [1] 5.157693 2.097188
##
## $u
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
##
## $v
## [,1] [,2] [,3]
## [1,] 0.01856629 -0.6727903 -0.7396003
## [2,] -0.25499937 -0.7184510 0.6471502
## [3,] -0.96676296 0.1765824 -0.1849001
1.7 Step 7: Comparison
Examine the two sets of singular vectors and show that they are indeed eigenvectors of \(X\) and \(Y\).
For any eigenvalue \(\lambda\), an eigenvector is determined only up to a non-zero scalar: any vector spanning the eigenspace \(E_{\lambda}\) qualifies. Therefore, although the singular vectors returned by svd(A) and the eigenvectors returned by eigen(X) and eigen(Y) may look different, it suffices to check whether each singular vector is a scalar multiple of the corresponding eigenvector.
- Get all singular vectors of matrix \(A\) and all eigenvectors of \(X\) and \(Y\).
# Full SVD of A: all left- and right-singular vectors
A_svd <- svd(A, nu = nrow(A), nv = ncol(A))
U <- A_svd$u                     # left-singular vectors (2 x 2)
V <- A_svd$v                     # right-singular vectors (3 x 3)
X_eig_vec <- eigen(X)$vectors    # eigenvectors of X = A %*% t(A)
Y_eig_vec <- eigen(Y)$vectors    # eigenvectors of Y = t(A) %*% A
- Create a function to check whether each column of the singular-vector matrix of \(A\) is a scalar multiple of the corresponding column of the eigenvector matrix of \(X\) or \(Y\).
check_if_eig_vec_identical <- function(sing_vec, eigen_vec){
  result <- logical(ncol(eigen_vec))
  # For each column, compute the element-wise ratio of the eigenvector to the
  # singular vector; an (approximately) constant ratio means one column is a
  # scalar multiple of the other
  for(i in 1:ncol(eigen_vec)){
    ratio <- eigen_vec[, i] / sing_vec[, i]
    result[i] <- diff(range(ratio)) < 1e-8
  }
  description <- paste0('The singular vectors of ',
                        deparse(substitute(sing_vec)),
                        ' and the eigenvectors of ',
                        deparse(substitute(eigen_vec)))
  return(ifelse(all(result),
                paste0(description, ' are the same.'),
                paste0(description, ' are different.')))
}
- Check whether the left-singular vectors in \(U\) and the eigenvectors of \(X\) are the same.
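The calls behind the output below are not shown; a sketch, using the objects defined above:
U                                          # left-singular vectors of A
X_eig_vec                                  # eigenvectors of X
check_if_eig_vec_identical(U, X_eig_vec)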
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
## [1] "The singlular vectors of U and the eigenvectors of X_sing_vec are the same."
- Check whether the right-singular vectors in \(V\) and the eigenvectors of \(Y\) are the same.
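Analogously for \(V\) and \(Y\):
V                                          # right-singular vectors of A
Y_eig_vec                                  # eigenvectors of Y
check_if_eig_vec_identical(V, Y_eig_vec)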
## [,1] [,2] [,3]
## [1,] 0.01856629 -0.6727903 -0.7396003
## [2,] -0.25499937 -0.7184510 0.6471502
## [3,] -0.96676296 0.1765824 -0.1849001
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
## [1] "The singlular vectors of V and the eigenvectors of Y_sing_vec are the same."
2 Problem Set 2
Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors. In order to compute the co-factors, you may use built-in commands to compute the determinant. Your function should have the following signature:
\(B = myinverse(A)\)
where A is a matrix and B is its inverse and \(A\times B = I\). The off-diagonal elements of I should be close to zero, if not zero. Likewise, the diagonal elements should be close to 1, if not 1. Small numerical precision errors are acceptable but the function myinverse should be correct and must use co-factors and determinant of A to compute the inverse.
Please submit PS1 and PS2 in an R-markdown document with your first initial and last name.
2.1 Part 1: Create function myinverse
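Recall the co-factor construction that the function below implements: the co-factor is \(C_{ij}=(-1)^{i+j}M_{ij}\), where \(M_{ij}\) is the determinant of the submatrix of \(A\) obtained by deleting row \(i\) and column \(j\), and \[A^{-1}=\frac{1}{\det(A)}\,C^{T}.\]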
myinverse <- function(x){
  # Error handler: require a square matrix with non-zero determinant
  stopifnot(is.matrix(x), nrow(x) == ncol(x), det(x) != 0)
  # Create co-factor matrix c
  c <- x
  for(i in 1:nrow(x)){
    for(j in 1:ncol(x)){
      # co-factor C[i,j] = (-1)^(i+j) * determinant of the submatrix of x
      # with row i and column j removed
      c[i,j] <- (-1)^(i+j) * det(matrix(x[-i,-j], nrow(x)-1, ncol(x)-1))
    }
  }
  # inverse = adjugate / determinant = t(c)/det(x)
  return(t(c)/det(x))
}
2.2 Part 2: Validation
Create a function to test the custom function myinverse.
check_myinverse <- function(n){
  A <- matrix(rnorm(n^2) * n, n, n)   # random n x n matrix
  B <- myinverse(A)                   # inverse computed via co-factors
  return(list('Random Matrix A' = A,
              'myinverse(A)' = B,
              'myinverse(A)%*%A' = round(B %*% A, 10)))
}
Test the function myinverse with a random square matrix that has at most 10 columns.
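The chunk that generated the output below is not shown; one call consistent with the requirement draws the dimension at random (judging by the \(3 \times 3\) output, the draw here was 3):
# test myinverse() on a random square matrix of dimension between 2 and 10
check_myinverse(sample(2:10, 1))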
## $`Random Matrix A`
## [,1] [,2] [,3]
## [1,] -1.635754 4.100158 1.2039130
## [2,] 3.316234 6.305496 0.7926307
## [3,] -5.380913 2.015045 3.8725837
##
## $`myinverse(A)`
## [,1] [,2] [,3]
## [1,] -0.3895726 0.229637089 0.07410921
## [2,] 0.2920333 -0.002450583 -0.09028604
## [3,] -0.6932623 0.320353346 0.40817855
##
## $`myinverse(A)%*%A`
## [,1] [,2] [,3]
## [1,] 1 0 0
## [2,] 0 1 0
## [3,] 0 0 1