Data 605 - Show that the Khan Academy and textbook eigenspace methods produce the same result in EE.C19.

Heather Geiger

Not sure if anyone else noticed, but the Khan Academy video (https://www.khanacademy.org/math/linear-algebra/alternate-bases/eigen-everything/v/linear-algebra-finding-eigenvectors-and-eigenspaces-example) uses a slightly different method to get the eigenspace than the textbook does.

However, I will show here that both approaches result in the same eigenvector.

My understanding was that the textbook proposes solving the following equation:

(original_matrix - (lambda*identity_matrix)) x eigenvector = 0

The video, meanwhile, reverses the order of subtraction, like so:

((lambda * identity_matrix) - original_matrix) x eigenvector = 0
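As a quick preview of why the two forms must agree (my own check, not something shown in the video or the textbook): the second matrix is just -1 times the first, so any vector one of them sends to zero, the other does too. A minimal sketch in R, using my own variable names A and I2:

A <- matrix(c(-1, 2, -6, 6), byrow = TRUE, nrow = 2, ncol = 2)  # same matrix used below
I2 <- diag(2)                                                   # 2x2 identity matrix
lambda <- 2
# (lambda*I2 - A) equals -1 * (A - lambda*I2), so the two have the same null space
all.equal(lambda * I2 - A, -(A - lambda * I2))                  # should return TRUE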

Here is the textbook matrix for which we are to find the eigenspace.

original_matrix <- matrix(c(-1,2,-6,6),byrow=TRUE,nrow=2,ncol=2)

original_matrix
##      [,1] [,2]
## [1,]   -1    2
## [2,]   -6    6

Solving the characteristic polynomial, det(original_matrix - lambda*identity_matrix) = lambda^2 - 5*lambda + 6 = (lambda - 2)*(lambda - 3) = 0, you get lambda = 2 and lambda = 3. In this example, I will focus on lambda = 2.
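As a sanity check of my own (not part of either method), R's built-in eigen() function should report the same two eigenvalues:

eigen(original_matrix)$values  # should return 3 and 2, since eigen() sorts eigenvalues in decreasing order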

Now, let’s go over the Khan Academy way first.

Let’s get ((lambda * identity_matrix) - original_matrix).

lambda <- 2
identity_matrix <- matrix(c(1,0,0,1),byrow=TRUE,nrow=2,ncol=2)

lambda_times_identity_matrix <- lambda*identity_matrix

lambda_times_id_matrix_minus_original <- lambda_times_identity_matrix - original_matrix

lambda_times_id_matrix_minus_original
##      [,1] [,2]
## [1,]    3   -2
## [2,]    6   -4

Row-reduce to get all zeroes in the second row by subtracting 2 times the first row from the second.

lambda_times_id_matrix_minus_original[2,] <- lambda_times_id_matrix_minus_original[2,] - 2*lambda_times_id_matrix_minus_original[1,]

lambda_times_id_matrix_minus_original
##      [,1] [,2]
## [1,]    3   -2
## [2,]    0    0

The remaining nonzero row says 3x - 2y = 0, i.e. y = (3/2)x, so taking x = 2 gives the eigenvector c(2,3). Show that this eigenvector gives a product of 0 doing it the Khan Academy way.

lambda_times_id_matrix_minus_original %*% c(2,3)
##      [,1]
## [1,]    0
## [2,]    0
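As an extra check I added myself (not part of either walkthrough), multiplying the original matrix by c(2,3) should simply scale it by the eigenvalue 2:

original_matrix %*% c(2,3)  # should return the column (4, 6), which is 2 * c(2,3)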

What about the textbook way?

original_minus_lambda_times_id_matrix <- original_matrix - lambda*identity_matrix

original_minus_lambda_times_id_matrix
##      [,1] [,2]
## [1,]   -3    2
## [2,]   -6    4

Row-reduce to get all zeroes in the second row again, subtracting 2 times the first row from the second.

original_minus_lambda_times_id_matrix[2,] <- original_minus_lambda_times_id_matrix[2,] - 2*original_minus_lambda_times_id_matrix[1,]

original_minus_lambda_times_id_matrix
##      [,1] [,2]
## [1,]   -3    2
## [2,]    0    0

Here the remaining nonzero row says -3x + 2y = 0, which is the same relationship as before. Show that the eigenvector c(2,3) gives a product of 0 doing it the textbook way.

original_minus_lambda_times_id_matrix %*% c(2,3)
##      [,1]
## [1,]    0
## [2,]    0

In summary, I was initially confused by the textbook notation because its order of subtraction differs from the Khan Academy video. However, (original_matrix - lambda*identity_matrix) and (lambda*identity_matrix - original_matrix) differ only by a factor of -1, so they have exactly the same null space. It really does not matter which way you order the subtraction; either way you end up with the same eigenspace and the same eigenvector c(2,3).
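For one last cross-check of my own (not from the textbook or the video), R's eigen() should report an eigenvector for lambda = 2 that is proportional to c(2,3). Rescaling by the first component makes the direction easy to read off:

v <- eigen(original_matrix)$vectors[, 2]  # eigenvalues come out in decreasing order (3, 2), so column 2 goes with lambda = 2
v / v[1]                                  # should be c(1, 1.5), i.e. proportional to c(2,3)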