Starting with matrix A as described in the assignment.
A<- matrix(c(1,2,3,-1,0,4), byrow=T, nrow=2)
A
## [,1] [,2] [,3]
## [1,] 1 2 3
## [2,] -1 0 4
Computing X and Y, where \(X=AA^{T}\) and \(Y=A^{T}A\):
X<- A%*%t(A)
Y<- t(A)%*%A
Using the built-in eigen() function in R to compute the eigendecompositions. Showing the eigenvectors and eigenvalues of X first.
E_x<-eigen(X)
E_y<-eigen(Y)
E_x
## eigen() decomposition
## $values
## [1] 26.601802 4.398198
##
## $vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
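As a quick sanity check (a small sketch, not part of the original output), we can verify the defining relation \(Xv=\lambda v\) for the leading eigenpair of X:
all.equal(drop(X %*% E_x$vectors[,1]), E_x$values[1] * E_x$vectors[,1]) # expected TRUE, up to floating-point tolerance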
And the eigendecomposition of Y:
E_y
## eigen() decomposition
## $values
## [1] 2.660180e+01 4.398198e+00 5.465713e-17
##
## $vectors
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
Indeed, one of the eigenvalues of \(Y\) is essentially zero (on the order of \(10^{-17}\)), as expected since \(A\) has rank 2.
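As a quick check (a small sketch using base R's qr(), not shown in the original output), the rank of A confirms this:
qr(A)$rank # expected to be 2, so the 3x3 matrix Y = t(A) %*% A must have a zero eigenvalue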
Now using the svd() function on A:
A_svd <- svd(A)  # store under a new name to avoid masking base::svd
A_svd
## $d
## [1] 5.157693 2.097188
##
## $u
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
##
## $v
## [,1] [,2]
## [1,] 0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296 0.1765824
Let’s compare these outputs to show that the columns of A_svd$v are eigenvectors of Y.
A_svd$v
## [,1] [,2]
## [1,] 0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296 0.1765824
E_y$vectors[,1:2]
## [,1] [,2]
## [1,] -0.01856629 -0.6727903
## [2,] 0.25499937 -0.7184510
## [3,] 0.96676296 0.1765824
So the values are the same, but the signs differ in places; this is expected because eigenvectors (and singular vectors) are only determined up to a sign flip.
A_svd$u
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
E_x$vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
The same holds for the eigenvectors of X and the columns of A_svd$u.
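To make the “same up to sign” observation concrete, here is a small sketch; the helper same_up_to_sign is hypothetical and not part of the assignment:
# hypothetical helper: TRUE if two vectors are equal or differ only by a sign flip
same_up_to_sign <- function(u, v) {
  isTRUE(all.equal(u, v)) || isTRUE(all.equal(u, -v))
}
sapply(1:2, function(k) same_up_to_sign(A_svd$v[, k], E_y$vectors[, k])) # columns of V vs eigenvectors of Y
sapply(1:2, function(k) same_up_to_sign(A_svd$u[, k], E_x$vectors[, k])) # columns of U vs eigenvectors of X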
Now let’s relate the eigenvalues of X and Y to the singular values of A: the singular values should equal the square roots of the non-zero eigenvalues.
E_x$values
## [1] 26.601802 4.398198
E_y$values[1:2]
## [1] 26.601802 4.398198
So we see that the non-zero eigenvalues of X and Y are the same. I’ll take the square root of these values and compare them to A_svd$d.
sqrt_root_E_x<-sqrt(E_x$values)
sqrt_root_E_x
## [1] 5.157693 2.097188
A_svd$d
## [1] 5.157693 2.097188
Indeed we see they are the same. Great.
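As one more sanity check (a small sketch, not part of the original output), we can rebuild A from its factors, since \(A=UDV^{T}\):
A_rebuilt <- A_svd$u %*% diag(A_svd$d) %*% t(A_svd$v)
all.equal(A_rebuilt, A) # expected TRUE, up to floating-point tolerance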
Building a function to compute the inverse of a matrix.
This implementation requires that the matrix be square and full rank. First, I make a function that builds the cofactor matrix. Second, I write the inverse function, which returns the inverse of a matrix by calling the cofactor function.
make.cofactors <- function(mat) {
  cofact <- mat
  for (i in 1:nrow(mat)) {
    for (j in 1:ncol(mat)) {
      # cofactor: signed determinant of the minor with row i and column j removed;
      # drop = FALSE keeps 1x1 minors as matrices so det() also works for 2x2 inputs
      cofact[i, j] <- det(mat[-i, -j, drop = FALSE]) * (-1)^(i + j)
    }
  }
  return(cofact)
}
Once we have the cofactor matrix \(C\), the inverse follows from \(A^{-1}=\frac{1}{\det(A)}C^{T}\) (the transpose of the cofactor matrix is the adjugate):
myinverse <- function(mat) {
  # transpose of the cofactor matrix (the adjugate) divided by the determinant
  C_t <- t(make.cofactors(mat))
  return(C_t / det(mat))
}
A<- matrix(c(2,6,1,-3,0,5,5,4,-7), byrow=T, nrow=3)
myinverse(A)
## [,1] [,2] [,3]
## [1,] 0.7142857 -1.6428571 -1.0714286
## [2,] -0.1428571 0.6785714 0.4642857
## [3,] 0.4285714 -0.7857143 -0.6428571
solve(A)
## [,1] [,2] [,3]
## [1,] 0.7142857 -1.6428571 -1.0714286
## [2,] -0.1428571 0.6785714 0.4642857
## [3,] 0.4285714 -0.7857143 -0.6428571
And we see that the two results are the same. Great.
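As a final check (a small sketch), the product of the computed inverse with A should be the 3x3 identity, up to rounding:
round(myinverse(A) %*% A, 10) # expected to be (numerically) the 3x3 identity matrix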