## Task 1: Geometric Transformation of Shapes Using Matrix Multiplication

Context: In computer graphics and data visualization, geometric transformations are fundamental. These transformations, such as translation, scaling, rotation, and reflection, can be applied to shapes to manipulate their appearance.

Task: Create a simple shape (like a square or triangle) using point plots in R. Implement R code to apply different transformations (scaling, rotation, reflection) to the shape by left-multiplying each point vector by a transformation matrix. Demonstrate these transformations through animated plots.

a: Create a Shape: Define a simple shape.

# ggplot2 provides the plotting functions used throughout this task
library(ggplot2)

# Define triangle shape with points (one vertex per row)
triangle <- matrix(c(0, 0, 1, 0, 0.5, 1), ncol=2, byrow=TRUE)

# Function to plot shape with the specified color and title
plot_shape <- function(points, title){
  df <- as.data.frame(points)
  colnames(df) <- c("x", "y")
  ggplot(df, aes(x, y)) + 
    geom_polygon(fill="lightpink", color="deeppink", alpha=0.7) +  # Set fill to lightpink and color to deeppink
    ggtitle(title) + 
    coord_fixed() +
    xlim(-2, 2) + ylim(-2, 2)
}

# Original shape
plot_shape(triangle, "Original Triangle")

b: Apply Transformations:

  • Scaling: Enlarge or shrink the shape.
# Because each point is stored as a row, a transformation is applied by
# right-multiplying the point matrix; for a symmetric matrix like this
# uniform scaling, that is identical to left-multiplying column vectors.
scaling_matrix <- matrix(c(2, 0, 0, 2), nrow=2)
scaled_triangle <- triangle %*% scaling_matrix

ggplot(as.data.frame(scaled_triangle), aes(V1, V2)) + 
  geom_polygon(fill="lightpink", color="deeppink") + 
  coord_fixed() + ggtitle("Scaled Triangle")

  • Rotation: Rotate the shape by a given angle.
theta <- pi / 4  # 45 degrees
# Filled column-by-column, this produces the transpose of the usual rotation
# matrix R(theta); right-multiplying the row-point matrix by t(R) is
# equivalent to left-multiplying each column point vector by R(theta).
rotation_matrix <- matrix(c(cos(theta), -sin(theta), sin(theta), cos(theta)), nrow=2)
rotated_triangle <- triangle %*% rotation_matrix

ggplot(as.data.frame(rotated_triangle), aes(V1, V2)) + 
  geom_polygon(fill="lightpink", color="deeppink") + 
  coord_fixed() + ggtitle("Rotated Triangle by 45 Degrees")

  • Reflection: Reflect the shape across an axis.
# diag(1, -1) negates the y-coordinate, reflecting the shape across the x-axis
reflection_matrix <- matrix(c(1, 0, 0, -1), nrow=2)
reflected_triangle <- triangle %*% reflection_matrix

ggplot(as.data.frame(reflected_triangle), aes(V1, V2)) + 
  geom_polygon(fill="lightpink", color="deeppink") + 
  coord_fixed() + ggtitle("Reflected Triangle Across X-axis")

  • Animate Transformations: Use a loop to incrementally change the transformation matrix and visualize the effect on the shape over time.
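No specific animation framework is required; the sketch below assumes only the triangle matrix defined above and uses a plain base-R loop that redraws the shape for a sequence of rotation angles (packages such as gganimate offer smoother alternatives).

# Redraw the rotated triangle for a sequence of angles; each iteration
# builds the rotation matrix for the current angle and replots the shape
angles <- seq(0, 2 * pi, length.out = 60)
for (theta in angles) {
  R <- matrix(c(cos(theta), -sin(theta), sin(theta), cos(theta)), nrow = 2)
  frame <- triangle %*% R   # rows are points, so right-multiply (see the rotation note above)
  plot(NA, xlim = c(-2, 2), ylim = c(-2, 2), asp = 1,
       xlab = "x", ylab = "y", main = "Animated Rotation")
  polygon(frame[, 1], frame[, 2], col = "lightpink", border = "deeppink")
  Sys.sleep(0.05)           # brief pause so each frame is visible
}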

## Task 2: Matrix Properties and Decomposition

Proofs

a.

  • Prove that \(AB\neq BA\) in general, i.e., that matrix multiplication is not commutative (a single counterexample suffices).

\[ A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ \end{pmatrix} \ \ B = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} \]

\[ AB = \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \\ \end{pmatrix} \]

\[ BA = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \\ \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \\ \end{pmatrix} \]

\(\therefore AB\neq BA\)

b.

  • Prove that \(A^t A\) is always symmetric.

Definition SYM, Symmetric Matrix (Beezer, A First Course in Linear Algebra, p. 186): a matrix \(M\) is symmetric if \(M = M^t\). So the claim to prove is \((A^t A)^t = A^t A\).

Theorem MMT, Matrix Multiplication and Transposes (Beezer, p. 166): \((AB)^t = B^t A^t\), \(\therefore (A^t A)^t = A^t (A^t)^t\).

Theorem TT, Transpose of a Transpose (Beezer, p. 168): \((A^t)^t = A\), \(\therefore A^t (A^t)^t = A^t A\).

Chaining the two steps gives \((A^t A)^t = A^t A\): \(A^t A\) equals its own transpose, so it is symmetric.
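A quick numerical illustration in R (an arbitrary random matrix; an illustration, not part of the proof):

set.seed(42)                       # make the example reproducible
M <- matrix(rnorm(12), nrow = 4)   # arbitrary 4 x 3 matrix
isSymmetric(t(M) %*% M)            # TRUE: M^t M equals its own transpose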

c.

  • Prove that the determinant of \(A^T A\) is non-negative

  • \(A^t A\) is square: if \(A\) is \(m \times n\), then \(A^t A\) has dimensions \((n \times m)(m \times n) = n \times n\).

  • The proof above shows that \(A^t A\) is always symmetric.

  • Symmetric matrices have real eigenvalues.

  • The determinant of a matrix is the product of its eigenvalues.

  • For any non-zero vector \(x \in R^n\), \(x^t(A^t A)x = (Ax)^t(Ax) = \|Ax\|^2\).

  • \(\|Ax\|^2\) is the squared Euclidean norm (length) of the vector \(Ax\), which is always non-negative.

  • So \(A^t A\) is positive semi-definite, which forces its eigenvalues to be non-negative: any eigenvalue \(\lambda\) of \(A^t A\) with eigenvector \(x\) satisfies \(x^t(A^t A)x = \lambda x^t x\).

  • Given

    • \(x^t(A^t A)x = \|Ax\|^2 \geq 0\), and
    • \(x^t x = \|x\|^2 > 0\) for non-zero \(x\),

    each eigenvalue \(\lambda = \|Ax\|^2 / \|x\|^2\) is non-negative.

\(\therefore\) every eigenvalue of \(A^t A\) is non-negative, and since the determinant is their product, \(\det(A^t A) \geq 0\).
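A quick numerical spot-check in R (an arbitrary random matrix, for illustration only):

set.seed(7)                        # reproducible example
M <- matrix(rnorm(20), nrow = 5)   # arbitrary 5 x 4 matrix
det(t(M) %*% M)                    # non-negative, as the proof guarantees
eigen(t(M) %*% M)$values           # every eigenvalue >= 0 (up to rounding)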

d. Singular Value Decomposition (SVD) and Image Compression

Context: Every time you upload a photo to social media, algorithms often compress your image to reduce file size without significantly degrading quality. One of the key techniques used in this process is Singular Value Decomposition (SVD), which is another form of matrix factorization.

Task: Write an R function that performs Singular Value Decomposition (SVD) on a grayscale image (which can be represented as a matrix). Use this decomposition to compress the image by keeping only the top k singular values and their corresponding vectors. Demonstrate the effect of different values of k on the compressed image’s quality. You can choose any open-access grayscale image that is appropriate for a professional program.

Instructions:

  • Read an Image: Convert a grayscale image into a matrix.
library(imager)  # provides load.image() and grayscale()

# Use the full path to the image if it is in a different folder
img <- load.image("C:\\Users\\Admin\\Desktop\\IMG_2150-gray.jpg")
gray <- grayscale(img)
gray_matrix <- as.matrix(gray)

# Print the dimensions to confirm it's a matrix
print(dim(gray_matrix))
## [1] 1290 1590
plot(gray, main = "Grayscale Image")

  • Perform SVD: Factorize the image matrix \(A\) into \(U \Sigma V^T\) using R’s built-in svd() function.
gray_svd <- svd(gray_matrix)
gray_U<- gray_svd$u
gray_Sigma<- gray_svd$d
gray_Vt<- t(gray_svd$v)
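As a sanity check, multiplying the three factors back together should reproduce the original matrix up to floating-point error:

# Largest absolute reconstruction error; should be near zero
max(abs(gray_U %*% diag(gray_Sigma) %*% gray_Vt - gray_matrix))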
  • Compress the Image: Reconstruct the image using only the top k singular values and vectors.
gray_reconstruct <- function(U, Sigma, Vt, k){
  U_k <- U[, 1:k, drop = FALSE]           # first k left singular vectors
  Sigma_k <- diag(Sigma[1:k], nrow = k)   # top k singular values
  Vt_k <- Vt[1:k, , drop = FALSE]         # first k ROWS of V^T
  
  img_reconstructed <- U_k %*% Sigma_k %*% Vt_k  # rank-k approximation
  
  return(img_reconstructed)
}
# gray_reconstruct(gray_U, gray_Sigma, gray_Vt, k)
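A common rule of thumb for choosing \(k\) is the fraction of the total “energy” (sum of squared singular values) retained, computed below for the \(k\) values plotted next; a rank-\(k\) reconstruction also needs only about \(k(m+n+1)\) stored numbers instead of \(mn\).

# Fraction of total energy captured by the top-k singular values
sapply(c(20, 40, 60), function(k) sum(gray_Sigma[1:k]^2) / sum(gray_Sigma^2))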
  • Visualize the Result: Plot the original image alongside the compressed versions for various values of k (e.g., k = 5, 20, 50).
k_values <- c(20, 40, 60)  # Values of k for compression

# Set up plotting area
par(mfrow = c(1, length(k_values) + 1)) # Arrange plots

# Visualize the original image
image(gray_matrix, col=gray.colors(256),  main = "Original Image", axes = FALSE)

# Visualize reconstructed images for different k values
for (k in k_values) {
  img_reconstructed <- gray_reconstruct(gray_U, gray_Sigma, gray_Vt, k)
    # Ensure the image matrix is properly scaled and in the correct range
  img_reconstructed <- pmax(pmin(img_reconstructed, 1), 0)  # Clip values to [0, 1] range
  
  # Plot the reconstructed image
  image(img_reconstructed, col=gray.colors(256), main = paste("Reconstructed k =", k), axes = FALSE)
}

rm(list = ls())  # clear the workspace before Task 3

## Task 3: Matrix Rank, Properties, and Eigenspace

a. Determine the Rank of the Given Matrix:

Find the rank of the matrix \(A\). Explain what the rank tells us about the linear independence of the rows and columns of matrix \(A\). Identify if there are any linear dependencies among the rows or columns.

\[ A = \begin{pmatrix} 1 & 4 & 1 & 3\\ -2 & -3 & 4 & 1 \\ 5 & 6 & 2 & 8 \\ -1 & -2 & 3 & 7\\ \end{pmatrix} \]

rank_matrix <- matrix(c(1, 4, 1, 3, -2, -3, 4, 1, 5, 6, 2, 8, -1, -2, 3, 7), nrow = 4, byrow = TRUE)
print(rank_matrix)
##      [,1] [,2] [,3] [,4]
## [1,]    1    4    1    3
## [2,]   -2   -3    4    1
## [3,]    5    6    2    8
## [4,]   -1   -2    3    7

The reduced row echelon form of this matrix is

library(pracma)  # provides rref() for reduced row echelon form
rank_matrix_echelon <- rref(rank_matrix)
print(rank_matrix_echelon)
##      [,1] [,2] [,3] [,4]
## [1,]    1    0    0    0
## [2,]    0    1    0    0
## [3,]    0    0    1    0
## [4,]    0    0    0    1
qr(rank_matrix)[2]
## $rank
## [1] 4

Rank is the number of linearly independent rows (equivalently, columns) of a matrix. Matrix \(A\) has rank 4, so all four rows and all four columns are linearly independent: there are no linear dependencies. \(\therefore\) No row or column can be written as a linear combination of the others.
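For a square matrix, full rank is equivalent to a non-zero determinant, which gives a second quick check:

det(rank_matrix)   # non-zero, consistent with rank 4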

b. Matrix Rank Boundaries:

b1.

  • Given an \(m\times n\) matrix where \(m > n\), determine the maximum and minimum possible rank, assuming that the matrix is non-zero.

Assumption:

\[ A \neq \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0\\ \end{pmatrix} \]

Given the assumption that the matrix is non-zero, the minimum possible rank is 1 (only the zero matrix has rank 0). The maximum rank is the smaller of \(m\) and \(n\) (the number of rows and columns); since \(m > n\), the maximum rank is \(n\). \(\therefore\) the rank lies between \(1\) and \(n\), inclusive.
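A small illustration of both bounds for a \(4 \times 3\) matrix (\(m = 4 > n = 3\)):

low  <- matrix(1, nrow = 4, ncol = 3)   # every column identical: rank 1
high <- rbind(diag(3), c(1, 1, 1))      # three independent columns: rank 3
qr(low)$rank    # 1, the minimum for a non-zero matrix
qr(high)$rank   # 3 = n, the maximum when m > n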

c.

  • Prove that the rank of a matrix equals the dimension of its row space (or column space). Provide an example to illustrate the concept.

  • Row rank of a matrix: the maximum number of linearly independent rows.
  • Column rank of a matrix: the maximum number of linearly independent columns.
  • Row space of a matrix: the vector space spanned by its rows.
  • Column space of a matrix: the vector space spanned by its columns.
  • Dimension of a vector space: the number of vectors in a basis for the space (i.e., the number of linearly independent vectors that span it).

Proof Strategy: Show Rank is equal to the dimension of both row space and column space.

Step 1: Create a Matrix of Rank 1

Proof_Matrix <- matrix(c(1, 2, 3, 4, 2, 4, 6, 8, 3, 6, 9, 12), nrow = 3, ncol = 4, byrow = TRUE)

Proof_Matrix
##      [,1] [,2] [,3] [,4]
## [1,]    1    2    3    4
## [2,]    2    4    6    8
## [3,]    3    6    9   12
Proof_rank <- qr(Proof_Matrix)$rank
Proof_rank
## [1] 1

Step 2: Show that the row space and the column space each have dimension 1, because the rows (and the columns) are all scalar multiples of one another.

# In the SVD A = U %*% diag(d) %*% t(V), the first r columns of U span the
# column space of A and the first r columns of V span the row space (r = rank)
svd_A <- svd(Proof_Matrix)
column_space_basis <- svd_A$u[, 1:Proof_rank]
print("Column space basis vectors:")
## [1] "Column space basis vectors:"
print(column_space_basis)
## [1] 0.2672612 0.5345225 0.8017837
row_space_basis <- svd_A$v[, 1:Proof_rank]
print("Row space basis vectors:")
## [1] "Row space basis vectors:"
print(row_space_basis)
## [1] 0.1825742 0.3651484 0.5477226 0.7302967

Both bases consist of a single vector, so the row space and the column space each have dimension 1, matching the rank of 1. (In general, \(\text{rank}(A) = \text{rank}(A^t)\), so the row space and the column space always share the same dimension: the rank.)
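The same conclusion falls out of row reduction: elementary row operations preserve the row space, and the non-zero rows of the RREF form a basis for it, so counting them reproduces the rank (rref() from pracma, loaded earlier):

rref(Proof_Matrix)                              # a single non-zero row
sum(rowSums(abs(rref(Proof_Matrix))) > 1e-10)   # 1, matching the rank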

d. Rank and Row Reduction:

  • Determine the rank of matrix \(B\). Perform a row reduction on matrix \(B\) and describe how it helps in finding the rank. Discuss any special properties of matrix \(B\) (e.g., is it a rank-deficient matrix?).

\[ B = \begin{pmatrix} 2 & 5 & 7\\ 4 & 10 & 14 \\ 1 & 2.5 & 3.5 \\ \end{pmatrix} \]

B<-matrix(c(2, 5, 7, 4, 10, 14, 1, 2.5, 3.5), nrow = 3, byrow = TRUE)

The rank of B is

B_rank<-qr(B)$rank
print(B_rank)
## [1] 1
rref_B<-rref(B)
print(rref_B)
##      [,1] [,2] [,3]
## [1,]    1  2.5  3.5
## [2,]    0  0.0  0.0
## [3,]    0  0.0  0.0

The rank is 1: the RREF has a single non-zero row. Row reduction makes this easy to see, and because \(\text{rank} < \text{number of rows}\), \(B\) is rank-deficient, i.e., it contains linear dependencies. There is only one direction in the row (and column) space, not three: rows 2 and 3 are scalar multiples of row 1 (\(2\times\) and \(0.5\times\)), and likewise the second and third columns are \(2.5\) and \(3.5\) times the first.
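The dependencies can be verified directly with element-wise ratios:

B[2, ] / B[1, ]   # all 2.0: row 2 = 2   * row 1
B[3, ] / B[1, ]   # all 0.5: row 3 = 0.5 * row 1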

e. Compute the Eigenvalues and Eigenvectors:

Find the eigenvalues and eigenvectors of the matrix \(A\). Write out the characteristic polynomial and show your solution step by step. After finding the eigenvalues and eigenvectors, verify that the eigenvectors are linearly independent. If they are not, explain why.

\[ A = \begin{pmatrix} 3 & 1 & 2\\ 0 & 5 & 4 \\ 0 & 0 & 2 \\ \end{pmatrix} \]

For the eigenvalues and the characteristic polynomial, use

\[ det(A - \lambda I) = \left| \begin{matrix} 3- \lambda & 1 & 2\\ 0 & 5- \lambda & 4 \\ 0 & 0 & 2- \lambda \\ \end{matrix} \right| \]

The characteristic polynomial is \(\det(A- \lambda I)=0 \rightarrow (3- \lambda)(5- \lambda)(2- \lambda)=0\), which expands to \(\lambda^3 - 10 \lambda^2 +31 \lambda - 30 = 0\). The eigenvalues are \(\lambda_1=3\), \(\lambda_2=5\), and \(\lambda_3=2\).

A<-matrix(c(3,1,2,0,5,4,0,0,2), nrow = 3, byrow = TRUE)
print(A)
##      [,1] [,2] [,3]
## [1,]    3    1    2
## [2,]    0    5    4
## [3,]    0    0    2
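As a quick check of the hand calculation, R's eigen() returns the same eigenvalues (sorted in decreasing order):

eigen(A)$values   # expected: 5 3 2, the roots found above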

The eigenvectors are:

Eigen_matrix <- eigen(A)$vectors
print(Eigen_matrix)
##           [,1] [,2]       [,3]
## [1,] 0.4472136    1 -0.3713907
## [2,] 0.8944272    0 -0.7427814
## [3,] 0.0000000    0  0.5570860

If the eigenvectors are linearly independent, the determinant of Eigen_matrix will be non-zero.

det_Eigen_matrix <-det(Eigen_matrix)
cat("Determinant of the matrix of eigenvectors:\n")
## Determinant of the matrix of eigenvectors:
print(det_Eigen_matrix)
## [1] -0.4982729
# Check if the eigenvectors are linearly independent, using a small
# tolerance since det() is computed in floating point
if (abs(det_Eigen_matrix) > 1e-10) {
  cat("The eigenvectors are linearly independent.\n")
} else {
  cat("The eigenvectors are NOT linearly independent.\n")
}
## The eigenvectors are linearly independent.

f. Diagonalization of Matrix:

  • Determine if matrix \(A\) can be diagonalized. If it can, find the diagonal matrix and the matrix of eigenvectors that diagonalizes \(A\).

If matrix \(A\) can be diagonalized, then the eigenvector matrix \(P\) times the diagonal matrix of eigenvalues \(D\) times the inverse of \(P\) should equal \(A\), i.e., \(A = PDP^{-1}\).

P<-Eigen_matrix
D<-diag(eigen(A)$values)
P_inv<-solve(P)
A_diag<- P %*% D %*% P_inv
cat("Matrix P (eigenvectors):\n")
## Matrix P (eigenvectors):
print(P)
##           [,1] [,2]       [,3]
## [1,] 0.4472136    1 -0.3713907
## [2,] 0.8944272    0 -0.7427814
## [3,] 0.0000000    0  0.5570860
cat("Diagonal matrix D (eigenvalues):\n")
## Diagonal matrix D (eigenvalues):
print(D)
##      [,1] [,2] [,3]
## [1,]    5    0    0
## [2,]    0    3    0
## [3,]    0    0    2
cat("Matrix P inverse (eigenvectors inverse):\n")
## Matrix P inverse (eigenvectors inverse):
print(P_inv)
##      [,1]      [,2]     [,3]
## [1,]    0  1.118034 1.490712
## [2,]    1 -0.500000 0.000000
## [3,]    0  0.000000 1.795055
cat("Matrix A reconstructed from P, D, and P^-1:\n")
## Matrix A reconstructed from P, D, and P^-1:
print(A_diag)
##      [,1] [,2] [,3]
## [1,]    3    1    2
## [2,]    0    5    4
## [3,]    0    0    2
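A final numerical check that the factorization reproduces \(A\) up to floating-point error:

all.equal(A, P %*% D %*% P_inv)   # TRUE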

So \(A\) can be diagonalized as \(A = PDP^{-1}\) using the matrices above. This is expected: \(A\) has three distinct eigenvalues, so its eigenvectors are linearly independent, which is exactly the condition for diagonalizability.