##
## Attaching package: 'EBImage'
## The following objects are masked from 'package:OpenImageR':
##
## readImage, writeImage
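The masking message above comes from attaching the image packages; the setup chunk itself is not shown. A plausible set of library calls covering the functions used below (readJPEG/writeJPEG from jpeg, imageShow from OpenImageR, resize from EBImage, which masks OpenImageR's readImage/writeImage) would be:
# Assumed setup, not shown in the original chunk
library(jpeg)       # readJPEG, writeJPEG
library(OpenImageR)  # imageShow
library(EBImage)     # resize (masks OpenImageR's readImage, writeImage)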
We are given data consisting of 17 images of shoes, each of dimension 1200 × 2500 pixels. We will first load the data into variables.
files_dir <- "/Users/karmagyatso/Documents/cunySps/data605/wk-4/jpg/"
num <- length(list.files(files_dir))
file_images <- list.files(path = files_dir, pattern = "\\.jpg")[1:num]
# saves the image file names as a list
First, we read one image and store it in the shoe_jpg variable, then create an empty data matrix. A loop fills the matrix with the RGB vectors of each image.
shoe_jpg <- readJPEG(paste0(files_dir, file_images[1]))
data <- matrix(0, num, prod(dim(shoe_jpg)))
for (i in 1:num){
  im <- readJPEG(paste0(files_dir, file_images[i]))
  r <- as.vector(im[,,1])  # red channel
  g <- as.vector(im[,,2])  # green channel
  b <- as.vector(im[,,3])  # blue channel
  data[i,] <- c(r, g, b)
}
Create a dataframe with the transpose of the data matrix.
shoes_df = as.data.frame(t(data))
We look at the cumulative proportion of variance explained by the eigenvalues to decide how many components to keep.
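The eigenvalues and eigenvectors used below come from an eigendecomposition step that is not shown in the code above; a minimal sketch of the assumed step, scaling the pixel data and decomposing the 17 × 17 correlation matrix of the images (producing the scaled_shoes, eigenvalues, and eigenvectors objects referenced later), is:
# Assumed step: scale the image columns and eigendecompose their correlation matrix
scaled_shoes <- scale(shoes_df, center = TRUE, scale = TRUE)
Sigma <- cor(scaled_shoes)
shoes_eigen <- eigen(Sigma)
eigenvalues <- shoes_eigen$values
eigenvectors <- shoes_eigen$vectors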
cum_var = cumsum(eigenvalues) / sum(eigenvalues)
cum_var
## [1] 0.6833138 0.7824740 0.8353528 0.8629270 0.8825040 0.8996099 0.9144723
## [8] 0.9271856 0.9374462 0.9472860 0.9561859 0.9647964 0.9732571 0.9804242
## [15] 0.9874038 0.9941511 1.0000000
thres <- min(which(cum_var > .80))
thres
## [1] 3
A plot of the cumulative variance shows how quickly the components capture the total variability.
plot(1:num, y = cum_var, type = "b")
length(cum_var)
## [1] 17
We create a diagonal matrix from the first three eigenvalues (those up to the threshold), taking the inverse square root of each and dividing by sqrt(n − 1); this is our scaling matrix. We then multiply the scaled data by the first three eigenvectors and by that diagonal matrix to obtain the eigenimages.
scaling_shoes <- diag(eigenvalues[1:thres]^(-1/2)) / sqrt(nrow(scaled_shoes) - 1)
eigen_shoes <- scaled_shoes %*% eigenvectors[, 1:thres] %*% scaling_shoes
eigenimage <- array(eigen_shoes[, 3], dim(shoe_jpg))
imageShow(eigenimage)
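To inspect all of the retained eigenshoes rather than just the third, the same reshaping can be applied to each column of eigen_shoes; a short sketch using the objects defined above:
# Display each of the `thres` eigenshoes in turn
for (j in 1:thres) {
  imageShow(array(eigen_shoes[, j], dim(shoe_jpg)))
}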
height <- 1200; width <- 2500; scale <- 20

plot_jpeg <- function(path, add = FALSE) {
  jpg <- readJPEG(path, native = TRUE)  # read the file
  res <- dim(jpg)[2:1]                  # get the resolution, [x, y]
  if (!add)                             # initialize an empty plot area if add == FALSE
    plot(1, 1, xlim = c(1, res[1]), ylim = c(1, res[2]), asp = 1, type = 'n',
         xaxs = 'i', yaxs = 'i', xaxt = 'n', yaxt = 'n', xlab = '', ylab = '', bty = 'n')
  rasterImage(jpg, 1, 1, res[1], res[2])
}
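As a quick usage example, the helper can be tried on the first original file, using the path variables defined earlier:
# Example usage: display the first original shoe image
plot_jpeg(paste0(files_dir, file_images[1]))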
# array holding all images, downscaled by `scale` in each spatial dimension
image_array <- array(0, dim = c(length(file_images), height/scale, width/scale, 3))
for (i in 1:num){
  temp <- resize(readJPEG(paste0(files_dir, file_images[i])), height/scale, width/scale)
  image_array[i,,,] <- array(temp, dim = c(1, height/scale, width/scale, 3))
}
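A quick sanity check on the resized array; with 1200 × 2500 originals and scale = 20, the spatial dimensions should come out to 60 × 125:
dim(image_array)  # expected: 17 60 125 3, assuming 1200 x 2500 originals and scale = 20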
We create an empty matrix, flat, and populate it with the RGB vectors of each resized image.
flat <- matrix(0, num, prod(dim(image_array)[2:4]))
for (i in 1:num) {
  r <- as.vector(image_array[i,,,1])  # red channel
  g <- as.vector(image_array[i,,,2])  # green channel
  b <- as.vector(image_array[i,,,3])  # blue channel
  flat[i,] <- c(r, g, b)
}
shoes=as.data.frame(t(flat))
We use par(mfrow = ...) to divide the plotting frame into the desired grid, and mai to set the margin sizes in inches, in the order c(bottom, left, top, right).
par(mfrow = c(3, 3))
par(mai = c(.3, .3, .3, .3))
for (i in 1:num){  # plot each resized image in the grid
  plot_jpeg(writeJPEG(image_array[i,,,]))
}
Eigenimages are the eigenvectors of the covariance (or correlation) matrix of a set of images, and they make it possible to identify images through dimensionality reduction. Each image is unrolled into its red, green, and blue vectors, which are stacked into a matrix; the covariance matrix of that data is computed, and its eigenvectors, reshaped back into image form, are the eigenimages.
Because images of shoes contain a large number of pixels, processing them directly is computationally inefficient. The matrices are therefore reduced to a lower-dimensional representation that still captures most of the variability in the set of images.
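As an illustration of this reduction, an approximation of a single image can be rebuilt from only the retained components; a hedged sketch using the full-resolution objects defined earlier (scaled_shoes, eigenvectors, thres, shoe_jpg):
# Rank-`thres` approximation: project the scaled images onto the first
# `thres` eigenvectors and map back
V_k <- eigenvectors[, 1:thres]                    # 17 x thres
approx_shoes <- scaled_shoes %*% V_k %*% t(V_k)   # rank-`thres` approximation of the data
recon <- array(approx_shoes[, 1], dim(shoe_jpg))  # first image, reshaped
recon <- (recon - min(recon)) / (max(recon) - min(recon))  # rescale to [0, 1] for display
imageShow(recon)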