Given a rectangular matrix \(A\)
\[\begin{bmatrix} 1 & 2 & 3\\ -1 & 0 & 4 \end{bmatrix}\]
compute \(X = AA^T\) and \(Y = A^TA\). Then compute the eigenvalues of \(X\) and \(Y\).
A <- matrix(c(1, -1, 2, 0, 3, 4), nrow = 2)
X <- A%*%t(A)
Y <- t(A)%*%A
(Ex <- eigen(X))
## $values
## [1] 26.601802 4.398198
##
## $vectors
## [,1] [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635 0.6576043
(Ey <- eigen(Y))
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
##
## $vectors
## [,1] [,2] [,3]
## [1,] -0.01856629 -0.6727903 0.7396003
## [2,] 0.25499937 -0.7184510 -0.6471502
## [3,] 0.96676296 0.1765824 0.1849001
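Note that the two nonzero eigenvalues of \(Y\) agree with the eigenvalues of \(X\), as expected: \(AA^T\) and \(A^TA\) always share the same nonzero eigenvalues. A quick check of this (a sketch, assuming Ex and Ey are still in the workspace) should return TRUE:
all.equal(Ex$values, Ey$values[1:2])
## [1] TRUE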
Compute the left-singular vectors, singular values, and right-singular vectors of \(A\), and compare the left-singular and right-singular vectors with the eigenvectors of \(X\) and \(Y\), respectively. In addition, compare the eigenvalues of \(X\) and \(Y\) with the squares of the singular values.
# Force u to be m x m and v to be n x n. By default svd() returns only
# min(nrow(A), ncol(A)) = 2 columns for each, since A has at most two nonzero
# singular values.
(sing <- svd(A, nu = nrow(A), nv = ncol(A)))
## $d
## [1] 5.157693 2.097188
##
## $u
## [,1] [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635 0.6576043
##
## $v
## [,1] [,2] [,3]
## [1,] 0.01856629 -0.6727903 -0.7396003
## [2,] -0.25499937 -0.7184510 0.6471502
## [3,] -0.96676296 0.1765824 -0.1849001
At first glance, \(E_X\) resembles \(u\) and \(E_Y\) resembles \(v\) in magnitude, but not always in direction: the first column of each (and the third column of \(E_Y\)) is flipped relative to \(u\) and \(v\), respectively. This is simply how R happened to solve for the eigenvectors; an eigenvector is only defined up to a scalar multiple, here \(-1\). Taking the absolute value of these matrices removes this legitimate sign difference and lets us compare the magnitudes directly. Because the third eigenvalue of \(Y\) is numerically zero (about \(10^{-16}\)), eigen() returns three eigenvalues while svd() returns only two singular values, so only the first two eigenvalues are compared with the squared singular values; the message Numeric: lengths (2, 3) differ would be produced otherwise. all.equal is used to ignore the small rounding errors that exist between the computations.
all.equal(abs(Ex$vectors), abs(sing$u))
## [1] TRUE
all.equal(abs(Ey$vectors), abs(sing$v))
## [1] TRUE
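The sign flip can also be confirmed directly: for example, the first column of \(E_X\) is \(-1\) times the first left-singular vector (a sketch, assuming Ex and sing are still defined):
all.equal(Ex$vectors[, 1], -sing$u[, 1])
## [1] TRUE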
# Compare the eigenvalues of X and Y with the element-wise squares of the
# singular values, sing$d^2.
all.equal(Ex$values, sing$d^2)
## [1] TRUE
all.equal(Ey$values[1:2], sing$d^2)
## [1] TRUE
We see that the left-singular and right-singular vectors match the eigenvectors of \(X\) and \(Y\), and that the nonzero eigenvalues of both equal the squares of the singular values.
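As a final sanity check, the factorization \(A = U\Sigma V^T\) can be verified directly. A minimal sketch, assuming A and sing are still in the workspace; since \(A\) has rank 2, only the first two right-singular vectors are needed:
# Rebuild A from its SVD
all.equal(A, sing$u %*% diag(sing$d) %*% t(sing$v[, 1:2]))
## [1] TRUE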
Using the procedure outlined in section 1 of the weekly handout, write a function to compute the inverse of a well-conditioned full-rank square matrix using co-factors.
# User-defined function
myinverse <- function(A) {
  # C will be the co-factor matrix
  C <- matrix(0, nrow = nrow(A), ncol = ncol(A))
  for (i in 1:nrow(A)) {
    for (j in 1:ncol(A)) {
      # Cij is the determinant of the submatrix Mij, negated as necessary.
      # drop = FALSE keeps the submatrix a matrix even when it is 1 x 1,
      # so the function also works for 2 x 2 inputs.
      C[i, j] <- ifelse((i + j) %% 2 == 0, 1, -1) * det(A[-i, -j, drop = FALSE])
    }
  }
  # The inverse is the transpose of the co-factor matrix (the adjugate)
  # divided by the determinant.
  return(t(C) / det(A))
}
# MAIN
A <- matrix(c(7, 0, -3, 2, 3, 4, 1, -1, -2), nrow = 3)
(B <- myinverse(A))
## [,1] [,2] [,3]
## [1,] -2 8 -5
## [2,] 3 -11 7
## [3,] 9 -34 21
A%*%B
## [,1] [,2] [,3]
## [1,] 1.000000e+00 -2.131628e-14 -1.065814e-14
## [2,] -1.776357e-15 1.000000e+00 0.000000e+00
## [3,] -3.552714e-15 1.421085e-14 1.000000e+00
all.equal(A%*%B, diag(nrow = nrow(A)))
## [1] TRUE
As mentioned previously, all.equal ignores minor differences attributable to round-off error. As shown above, the off-diagonal elements of \(AB\) are very close to 0 and the main-diagonal elements are essentially 1, so \(AB\) equals the identity matrix \(I\) up to rounding.
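For comparison, the same result can be checked against R's built-in solve(), which computes the inverse directly (a quick sketch, assuming A and B are still defined):
all.equal(B, solve(A))
## [1] TRUE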