Linear Algebra
Informatics Engineering (Teknik Informatika), UIN Maulana Malik Ibrahim Malang
Lalu Egiq Fahalik Anggara (220605110066)
by Prof. Dr. Suhartono, M.Kom
Matrix multiplication
Let A be an r × c matrix and B a c × t matrix, written in terms of its columns as B = [b1 : b2 : · · · : bt]. The product AB is the r × t matrix given by:

AB = A[b1 : b2 : · · · : bt] = [Ab1 : Ab2 : · · · : Abt]
A <- matrix(c(1, 3, 2, 2, 8, 9), ncol = 2)
B <- matrix(c(5, 8, 4, 2), ncol = 2)
A %*% B
## [,1] [,2]
## [1,] 21 8
## [2,] 79 28
## [3,] 82 26
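As a quick check of the column-by-column description above (a small sketch, not part of the original handout), each column of the product is A times the corresponding column of B:
# Column-by-column view of the product: the j-th column of AB is A times the j-th column of B
cbind(A %*% B[, 1], A %*% B[, 2])  # identical to A %*% B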
Vectors as matrices
One can regard a column vector of length r as an r × 1 matrix and a row vector of length c as a 1 × c matrix.
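A small illustration of the two viewpoints in R (my addition, not from the handout):
v <- matrix(c(1, 2, 3), nrow = 3)  # a length-3 column vector as a 3 x 1 matrix
w <- matrix(c(1, 2, 3), nrow = 1)  # the same numbers as a 1 x 3 row vector
dim(v)  # 3 1
dim(w)  # 1 3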
Some special matrices
– An n × n matrix is a square matrix.
– A matrix A is symmetric if A = Aᵀ.
– A matrix with 0 in all entries is the 0–matrix and is often written simply as 0.
– A matrix consisting of 1s in all entries is often written J.
– A square matrix with 0 in all off–diagonal entries and elements d1, d2, . . . , dn on the diagonal is called a diagonal matrix and is often written diag{d1, d2, . . . , dn}.
– A diagonal matrix with 1s on the diagonal is called the identity matrix and is denoted I. The identity matrix satisfies IA = AI = A.
#0-matrix and 1-matrix
matrix(0, nrow = 2, ncol = 3)
## [,1] [,2] [,3]
## [1,] 0 0 0
## [2,] 0 0 0
matrix(1, nrow = 2, ncol = 3)
## [,1] [,2] [,3]
## [1,] 1 1 1
## [2,] 1 1 1
#Diagonal matrix and identity matrix
diag(c(1, 2, 3))
## [,1] [,2] [,3]
## [1,] 1 0 0
## [2,] 0 2 0
## [3,] 0 0 3
diag(1, 3)
## [,1] [,2] [,3]
## [1,] 1 0 0
## [2,] 0 1 0
## [3,] 0 0 1
#Note what happens when diag is applied to a matrix:
diag(diag(c(1, 2, 3)))
## [1] 1 2 3
diag(A)
## [1] 1 8
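The code above covers the 0–matrix, J and diagonal matrices; as a small extra sketch (my addition), symmetry can be checked with t(), and IA = AI = A can be verified directly for the 3 × 2 matrix A defined earlier:
S <- matrix(c(1, 2, 2, 3), ncol = 2)
all(S == t(S))             # TRUE: S equals its transpose, so S is symmetric
all(diag(3) %*% A == A)    # TRUE: IA = A (I must be 3 x 3 here)
all(A %*% diag(2) == A)    # TRUE: AI = A (I must be 2 x 2 here)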
Inverse of matrices
In general, the inverse of an n × n matrix A is the matrix B (also n × n) which, when multiplied with A, gives the identity matrix I. That is, AB = BA = I. One says that B is A's inverse and writes B = A⁻¹. Likewise, A is B's inverse.
Some facts about inverse matrices are:
– Only square matrices can have an inverse, but not all square matrices have an inverse.
– When the inverse exists, it is unique.
– Finding the inverse of a large matrix A is numerically complicated (but computers do it for us). Matrix inversion is discussed in more detail in the section "Inverting an n × n matrix" below.
#Finding the inverse of a matrix in R is done using the solve() function:
A <- matrix(c(1, 3, 2, 4), ncol = 2, byrow = TRUE)
A
## [,1] [,2]
## [1,] 1 3
## [2,] 2 4
B <- solve(A)
B
## [,1] [,2]
## [1,] -2 1.5
## [2,] 1 -0.5
A %*% B
## [,1] [,2]
## [1,] 1 0
## [2,] 0 1
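Not every square matrix has an inverse. A brief sketch (my addition, not in the handout) of a singular 2 × 2 matrix, which solve() refuses to invert:
S <- matrix(c(1, 2, 2, 4), ncol = 2)  # second column is twice the first
det(S)                                # 0, so S has no inverse
# solve(S) would stop with an error here, since S is singular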
Solving systems of linear equations
Matrices are closely related to systems of linear equations. Consider the two equations

x1 + 3x2 = 7
2x1 + 4x2 = 10

In matrix form this is Ax = b, where A is the 2 × 2 coefficient matrix, x = (x1, x2)ᵀ and b = (7, 10)ᵀ; when A is invertible the solution is x = A⁻¹b. Regarding each equation as a straight line in the plane, there are 3 possible cases of solutions to the system:
1. Exactly one solution – when the lines intersect in one point.
2. No solutions – when the lines are parallel but not identical.
3. Infinitely many solutions – when the lines coincide.
A <- matrix(c(1, 2, 3, 4), ncol = 2)
b <- c(7, 10)
x <- solve(A) %*% b
x
## [,1]
## [1,] 1
## [2,] 2
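As a side note (my addition), R can also solve Ax = b directly with a two-argument call to solve(), which is the usual idiom and avoids forming the inverse explicitly:
solve(A, b)  # same solution (1, 2), computed without explicitly inverting A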
Inverting an n × n matrix*
In the following we illustrate one frequently applied method for matrix inversion: Gauss–Jordan elimination, i.e. row reduction of the matrix augmented with the identity. Many computer programs, including R's solve(), rely on closely related elimination techniques for finding the inverse of an n × n matrix.
#Consider the matrix A:
A <- matrix(c(2, 2, 3, 3, 5, 9, 5, 6, 7), ncol = 3)
A
## [,1] [,2] [,3]
## [1,] 2 3 5
## [2,] 2 5 6
## [3,] 3 9 7
#We want to find the matrix B = A⁻¹. To start, we append to A the identity matrix and call the result AB:
AB <- cbind(A, diag(c(1, 1, 1)))
AB
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 2 3 5 1 0 0
## [2,] 2 5 6 0 1 0
## [3,] 3 9 7 0 0 1
On a matrix we allow ourselves to do the following three operations (sometimes called elementary row operations) as often as we want:
1. Multiply a row by a non-zero constant.
2. Add a multiple of one row to another row.
3. Interchange two rows.
The aim is to perform such operations on AB in a way such that one ends up with a 3 × 6 matrix which has the identity matrix in the three leftmost columns. The three rightmost columns will then contain B = A⁻¹. Recall that writing e.g. AB[1,] extracts the entire first row of AB.
• First, we make sure that AB[1,1]=1. Then we subtract a constant times the first row from the second to obtain AB[2,1]=0, and similarly for the third row:
AB[1, ] <- AB[1, ]/AB[1, 1]
AB[2, ] <- AB[2, ] - 2 * AB[1, ]
AB[3, ] <- AB[3, ] - 3 * AB[1, ]
AB
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 1.5 2.5 0.5 0 0
## [2,] 0 2.0 1.0 -1.0 1 0
## [3,] 0 4.5 -0.5 -1.5 0 1
Next we ensure that AB[2,2]=1. Afterwards we subtract a constant times the second row from the third to obtain that AB[3,2]=0:
AB[2, ] <- AB[2, ]/AB[2, 2]
AB[3, ] <- AB[3, ] - 4.5 * AB[2, ]
Now we rescale the third row such that AB[3,3]=1:
AB[3, ] <- AB[3, ]/AB[3, 3]
AB
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 1.5 2.5 0.5000000 0.0000000 0.0000000
## [2,] 0 1.0 0.5 -0.5000000 0.5000000 0.0000000
## [3,] 0 0.0 1.0 -0.2727273 0.8181818 -0.3636364
#Then AB has zeros below the main diagonal.
#We then work our way up to obtain that AB has zeros above the main diagonal:
AB[2, ] <- AB[2, ] - 0.5 * AB[3, ]
AB[1, ] <- AB[1, ] - 2.5 * AB[3, ]
AB
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 1.5 0 1.1818182 -2.04545455 0.9090909
## [2,] 0 1.0 0 -0.3636364 0.09090909 0.1818182
## [3,] 0 0.0 1 -0.2727273 0.81818182 -0.3636364
AB[1, ] <- AB[1, ] - 1.5 * AB[2, ]
AB
## [,1] [,2] [,3] [,4] [,5] [,6]
## [1,] 1 0 0 1.7272727 -2.18181818 0.6363636
## [2,] 0 1 0 -0.3636364 0.09090909 0.1818182
## [3,] 0 0 1 -0.2727273 0.81818182 -0.3636364
Now we extract the three rightmost columns of AB into the matrix B. We claim that B is the inverse of A, and this can be verified by a simple matrix multiplication:
B <- AB[, 4:6]
A %*% B
## [,1] [,2] [,3]
## [1,] 1.000000e+00 0.000000e+00 0.000000e+00
## [2,] -4.440892e-16 1.000000e+00 4.440892e-16
## [3,] -2.220446e-16 8.881784e-16 1.000000e+00
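The off-diagonal entries of order 1e-16 are floating-point rounding noise. As a final check (my addition, not in the handout), the hand-computed B agrees with what solve() returns:
all.equal(B, solve(A))  # should be TRUE: equal up to rounding error
round(A %*% B, 12)      # the tiny off-diagonal terms vanish after rounding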
Least squares
Consider the table of pairs (xi, yi) below:

x   1.00   2.00   3.00   4.00   5.00
y   3.70   4.20   4.90   5.70   6.00

Collect the y values in a vector y = (3.70, 4.20, 4.90, 5.70, 6.00)ᵀ and let X be the 5 × 2 matrix whose first column consists of 1s and whose second column holds the x values, corresponding to the straight-line model yi ≈ β1 + β2 xi with β = (β1, β2)ᵀ.
The first question is: Can we find a vector β such that y = Xβ? The answer is clearly no, because that would require the points to lie exactly on a straight line. A more modest question is: Can we find a vector β̂ such that Xβ̂ is in a sense "as close to y as possible"? The answer is yes. The task is to find β̂ such that the length of the residual vector

e = y − Xβ

is as small as possible. The solution is

β̂ = (XᵀX)⁻¹ Xᵀ y
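A short sketch of the computation in R (my addition; the design matrix X with an intercept column follows the straight-line model described above):
x <- c(1, 2, 3, 4, 5)
y <- c(3.7, 4.2, 4.9, 5.7, 6.0)
X <- cbind(1, x)                              # 5 x 2 design matrix: intercept column and x
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y  # (XᵀX)⁻¹ Xᵀ y
beta_hat                                      # approximately (3.07, 0.61)
For comparison, R's built-in lm(y ~ x) fits the same least-squares line and returns the same coefficients.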
A neat little exercise – from a bird's perspective
On a sunny day, two tables are standing in an English country garden. On each table birds of an unknown species are sitting, having the time of their lives. A bird from the first table says to those on the second table: "Hi – if one of you comes to our table then there will be the same number of us on each table". "Yeah, right", says a bird from the second table, "but if one of you comes to our table, then we will be twice as many on our table as on yours".
Question: How many birds are on each table? More specifically,
• Write up two equations with two unknowns.
• Solve these equations using the methods you have learned from linear algebra.
• Simply finding the solution by trial–and–error is considered cheating.
Reference: https://www.math.uh.edu/~jmorgan/Math6397/day13/LinearAlgebraR-Handout.pdf