Inverse of a Matrix / Singular Value Decomposition
Problem Set #1
Given a 2 × 3 matrix A: \[A = \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 4 \end{bmatrix}\]
write code in R to compute \(\mathbf{X} = \mathbf{AA^T}\) and \(\mathbf{Y} = \mathbf{A^TA}\). Then, compute the eigenvalues and eigenvectors of \(\mathbf{X}\) and \(\mathbf{Y}\) using the built-in commands in R.
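A minimal sketch of the R code assumed to produce the output below; the object names A, X, and Y are choices carried through the rest of the write-up:
#define A as a 2 x 3 matrix, filled by row
A <- matrix( c( 1, 2, 3,
                -1, 0, 4 ), nrow = 2, byrow = TRUE )
#X = A %*% t(A) is 2 x 2, Y = t(A) %*% A is 3 x 3
X <- A %*% t( A )
Y <- t( A ) %*% A
print( A )
print( X )
print( Y )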
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] -1 0 4
## [,1] [,2]
## [1,] 14 11
## [2,] 11 17
## [,1] [,2] [,3]
## [1,] 2 2 -1
## [2,] 2 4 6
## [3,] -1 6 25
Compute the eigenvalues and eigenvectors of X and Y using the built-in commands in R:
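A sketch of the eigen-decomposition calls assumed to produce the output below; the names Xeig and Yeig match those used later in the comparison code:
Xeig <- eigen( X ) #eigenvalues and eigenvectors of X
Yeig <- eigen( Y ) #eigenvalues and eigenvectors of Y
print( 'eigenvalues of X:' )
print( Xeig['values'] )
print( 'eigenvalues of Y:' )
print( Yeig['values'] )
print( 'eigenvectors of X:' )
print( Xeig['vectors'] )
print( 'eigenvectors of Y:' )
print( Yeig['vectors'] )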
## [1] "eigenvalues of X:"
## $values
## [1] 26.601802 4.398198
## [1] "eigenvalues of Y:"
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
## [1] "eigenvectors of X:"
## $vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
## [1] "eigenvectors of Y:"
## $vectors
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
Compute the left-singular vectors, singular values, and right-singular vectors of A using the svd command:
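A sketch of the svd call assumed to produce the output below; the name components is the one referenced later when comparing against the eigenvectors:
components <- svd( A ) #d: singular values, u: left singular vectors, v: right singular vectors
print( 'singular values of A' )
print( components['d'] )
print( 'left singular vectors of A' )
print( components['u'] )
print( 'right singular vectors of A' )
print( components['v'] )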
## [1] "singular values of A"
## $d
## [1] 5.157693 2.097188
## [1] "left singular vectors of A"
## $u
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
## [1] "right singular vectors of A"
## $v
## [,1] [,2]
## [1,] 0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296 0.1765824
Examine the two sets of singular vectors and show that they are indeed eigenvectors of X and Y. In addition, show that the two non-zero eigenvalues of both X and Y (the third eigenvalue of Y will be very close to zero, if not exactly zero) are the same and are the squares of the non-zero singular values of A.
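This is a direct consequence of the SVD: if \(A = U\Sigma V^T\), then
\[\mathbf{X} = \mathbf{AA^T} = U\Sigma\Sigma^T U^T \quad \text{and} \quad \mathbf{Y} = \mathbf{A^TA} = V\Sigma^T\Sigma V^T,\]
so the columns of \(U\) and \(V\) are eigenvectors of \(\mathbf{X}\) and \(\mathbf{Y}\) (possibly up to a sign flip), with eigenvalues equal to the squared singular values. The code below checks this numerically: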
#will use the arsenal library's comparedf
library( arsenal )
#recast as dataframes and use comparedf
#get the summaries and assign them to variables
cmp_Xu <- summary( comparedf( as.data.frame( Xeig['vectors'] ), as.data.frame( components['u'] ) ) )
cmp_Yv <- summary( comparedf( as.data.frame( Yeig['vectors'] ), as.data.frame( components['v'] ) ) )
#we are only interested in the differences between the values in the dataframes
numdiffs_Xu <- n.diffs( cmp_Xu ) #determines how many differences were found between values
numdiffs_Yv <- n.diffs( cmp_Yv )
paste( 'Num differences between the eigenvectors of X and the left singular vectors of A: ', numdiffs_Xu )
## [1] "Num differences between the eigenvectors of X and the left singular vectors of A: 0"
paste( 'Num differences between the eigenvectors of Y and the right singular vectors of A: ', numdiffs_Yv )
## [1] "Num differences between the eigenvectors of Y and the right singular vectors of A: 0"
#comparing the eigenvalues of X, Y and the singular values of A
numdigits <- 10
test1 <- round( Xeig['values'][[1]][1], numdigits ) == round( Yeig['values'][[1]][1], numdigits )
test2 <- round( Xeig['values'][[1]][2], numdigits ) == round( Yeig['values'][[1]][2], numdigits )
#comparing the eigenvalues of X to the squares of the singular values of A
test3 <- round( Xeig['values'][[1]][1], numdigits ) == round( components['d'][[1]][1]^2, numdigits )
test4 <- round( Xeig['values'][[1]][2], numdigits ) == round( components['d'][[1]][2]^2, numdigits )
if ( test1 == TRUE & test2 == TRUE )
  print( 'The 2 nonzero eigenvalues of X & Y are equal' )
## [1] "The 2 nonzero eigenvalues of X & Y are equal"
if ( test3 == TRUE & test4 == TRUE )
  paste( 'The 2 eigenvalues of X are equal to the squares of the singular values of A (it follows that this holds for the non-zero eigenvalues of Y as well)' )
## [1] "The 2 eigenvalues of X are equal to the squares of the singular values of A (it follows that this holds for the non-zero eigenvalues of Y as well)"
Problem Set #2
Write a function in R to compute the inverse of a well-conditioned, full-rank square matrix using cofactors.
The inverse matrix will be calculated with these steps (ref #2), summarized in the formula after this list:
- calculate the matrix of minors
- find the matrix of cofactors
- find the adjugate
- multiply by 1/Determinant
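In matrix form, the steps above amount to the standard cofactor identity, stated here for reference:
\[M^{-1} = \frac{1}{\det(M)}\,\operatorname{adj}(M) = \frac{1}{\det(M)}\,C^{T}, \qquad C_{ij} = (-1)^{i+j}\det\!\left(M_{(-i,-j)}\right),\]
where \(M_{(-i,-j)}\) denotes \(M\) with row \(i\) and column \(j\) removed (the minor of entry \((i,j)\)).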
#function to compute the cofactor matrix of a square matrix M
findCofactor = function( M ){
  cofactor_M <- M
  stopifnot( length( unique( dim( M ) ) ) == 1 ) #make sure M is square
  sizeM <- dim( M )[1]
  for( i in 1:sizeM ){
    for( j in 1:sizeM ){
      cofactor_M[i,j] <- det( M[-i,-j] ) * (-1)^(i+j)
      #STEP1 the first term (det(M[-i,-j])) is the minor of entry (i,j)
      #STEP2 the second term ((-1)^(i+j)) alternates the sign to give the cofactor
    }
  }
  return( cofactor_M )
}
myinverse = function( M ){
  cofactor <- findCofactor( M )
  T_cofactor <- t( cofactor ) #STEP3 take the transpose of the cofactor matrix to find the adjugate (adjoint)
  det_M <- det( M )
  M_inverse <- T_cofactor/det_M #STEP4 multiply by 1/det(M) to find the inverse
  return( M_inverse )
}
## [,1] [,2] [,3]
## [1,] 1 5 5
## [2,] 3 3 1
## [3,] 7 5 4
## [,1] [,2] [,3]
## [1,] -0.1458333 -0.1041667 0.2083333
## [2,] 0.1041667 0.6458333 -0.2916667
## [3,] 0.1250000 -0.6250000 0.2500000
## [,1] [,2] [,3]
## [1,] 1 0 0
## [2,] 0 1 0
## [3,] 0 0 1
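For completeness, a sketch of the test code that could generate the output above; the 3 × 3 test matrix is read off the first block of output, and the variable names and the final rounding are assumptions:
#test on a well-conditioned, full-rank 3 x 3 matrix
M <- matrix( c( 1, 5, 5,
                3, 3, 1,
                7, 5, 4 ), nrow = 3, byrow = TRUE )
M_inv <- myinverse( M )
print( M )
print( M_inv )
print( round( M %*% M_inv ) ) #should recover the 3 x 3 identity matrix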