For a DSI assignment, our group collected selfies for the QS project. However, as a first-semester student with no professional coding background, I had no idea how to analyse them. There are many great websites offering online tools to extract emotion from photos, but with more than 600 selfies, uploading them and recording the results one by one is simply not practical.
Then I found this blog: Face API in R – Microsoft Cognitive Services, which explains how to set up and use the Azure Face API. With the help of that blog, and inspired by my STDS group assignment (great thanks to Yunseok!), I finally wrote my first R loop for this. I hope it helps others who are also struggling.
In this post, I introduce two face APIs for batch-extracting emotion data from photos.
First, let’s load the libraries we need for the project.
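The loops below only call functions from httr (POST, upload_file, content, add_headers) and dplyr (the pipe, select, mutate), so a minimal sketch of this step, assuming no other packages are needed for your own project, is:
# httr provides POST(), upload_file(), content() and add_headers() for the API calls
library(httr)
# dplyr provides the pipe and select()/mutate() used to tidy the results
library(dplyr)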
For the Azure API, click here to register for an account. To use the API, we also need to link the account to a credit card.
A free subscription includes 30,000 transactions per month, limited to 20 calls per minute.
Azure Face API Pricing
Then, set up the request URL so the API returns the smile and emotion attributes.
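A sketch of what this URL can look like; the australiaeast region prefix is an assumption and should be replaced with the endpoint of your own Face resource:
# Detect endpoint of the Face API; swap in the region of your own resource
face_api_url <- paste0(
  "https://australiaeast.api.cognitive.microsoft.com/face/v1.0/detect",
  "?returnFaceAttributes=smile,emotion"
)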
Assign the API subscription key.
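For example (the key below is a placeholder; copy the real one from the Keys and Endpoint page of your Face resource in the Azure portal):
# Placeholder subscription key from the Azure portal
api_key <- "your_azure_face_api_key"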
Read the CSV that contains all the photo names in the working directory.
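The loop below indexes photo_list[i, 1] and loops over nrow(photo_list), so the photo names need to end up in the first column of a data frame. Assuming a one-column CSV with no header row (the file name photo_list.csv is hypothetical), this can be as simple as:
# Each row of the CSV is the file name of one selfie in the working directory
photo_list <- read.csv("photo_list.csv", header = FALSE, stringsAsFactors = FALSE)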
Use the following code to batch-extract the emotion data.
# Create an empty dataframe
output <- data.frame()
for (i in 1:nrow(photo_list)){
# Error handling
tryCatch({
# Get the photo with photo list
body_image <- upload_file(photo_list[i, 1])
# Use POST method to get result from API
result <- POST(face_api_url,
body = body_image,
add_headers(.headers = c("Content-Type" = "application/octet-stream",
"Ocp-Apim-Subscription-Key" = api_key)))
# Store result of the image to temp in data frame
temp <- as.data.frame(content(result))
temp$photo <- photo_list[i, 1]
# Combine result with others
output <- rbind(output, temp)
# Pause 3 seconds per call, since the limit is 20 calls per minute
# Otherwise the API returns errors after 20 calls
Sys.sleep(3)
},
# Error handling
error = function(e) {
writeLines(sprintf("Error at index : %d, error : %s", i, e))
}
)
}
# Drop and reorder columns
output <- output %>%
select(-faceId, -starts_with("faceRectangle")) %>%
select(photo, everything())
# Rename columns
colnames(output) <- c("photo", "smile", "anger", "contempt", "disgust", "fear", "happiness", "neutral", "sadness", "surprise")

The first few rows of the output from the Azure Face API:

| photo | smile | anger | contempt | disgust | fear | happiness | neutral | sadness | surprise |
|---|---|---|---|---|---|---|---|---|---|
| Jun_20200402_1.JPEG | 0 | 0 | 0.008 | 0 | 0 | 0 | 0.975 | 0.016 | 0 |
| Jun_20200402_2.JPEG | 0 | 0 | 0.000 | 0 | 0 | 0 | 0.999 | 0.001 | 0 |
| Jun_20200402_3.JPEG | 0 | 0 | 0.000 | 0 | 0 | 0 | 0.999 | 0.000 | 0 |
| Jun_20200403_1.JPEG | 0 | 0 | 0.000 | 0 | 0 | 0 | 0.999 | 0.001 | 0 |
| Jun_20200403_2.JPEG | 0 | 0 | 0.000 | 0 | 0 | 0 | 0.999 | 0.000 | 0 |
Well, it seems I wasn’t very expressive when taking these selfies. Although the API works well enough, the results are not very useful for analysis.
The F.A.C.E. API by Sightcorp was mentioned by Yeonsoo in the DSI discussion forum. After testing with their demo, I found it performs much better than the Azure one on these less expressive selfies. Click here to register on the website.
F.A.C.E. API by Sightcorp
The free trial only lasts 2 weeks and includes 5,000 calls per month, limited to 1 call per second.
Same as before: set up the request URL.
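A sketch, assuming the detection endpoint published by Sightcorp at the time of writing; check the URL shown in your own dashboard:
# Detection endpoint of the F.A.C.E. API (verify against your Sightcorp dashboard)
face_api_url <- "https://api-face.sightcorp.com/api/detect/"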
Assign the app key for this API.
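Again, the value below is a placeholder; use the app key from your Sightcorp account:
# Placeholder app key; the API expects it as the app_key form field
face_api_key <- "your_sightcorp_app_key"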
Use the following code to batch-extract the emotion data.
# Create a new dataframe
photo_output <- data.frame()
for (i in 1:nrow(photo_list)){
# Error handling
tryCatch({
# Get the photo with photo list
body_image <- upload_file(photo_list[i, 1])
body_api <- list('app_key' = face_api_key, 'img' = body_image)
# Use POST method to get result from API & Use content to convert result into list
json_pkg <- POST(face_api_url, body = body_api)
json_content <- content(json_pkg)
# Extract essential features (mood, gender, age, emotions, etc) in a new list
temp <- c(mood = json_content$people[[1]]$mood, json_content$people[[1]]$emotions)
# Convert list to data frame
temp <- as.data.frame(temp) %>%
# Add photo name to the data frame
mutate(photo_id = photo_list[i, 1]) %>%
# Reorder columns
select(photo_id, everything())
# Combine result of this photo with others
photo_output <- rbind(photo_output, temp)
# Pause 1 second per call, since the limit is 1 call per second
Sys.sleep(1)
},
# Error handling
error = function(e) {
writeLines(sprintf("Error at index : %d, error : %s", i, e))
}
)
}
The final output from F.A.C.E. API by Sightcorp:
| photo_id | mood | sadness | disgust | anger | surprise | fear | happiness |
|---|---|---|---|---|---|---|---|
| Jun_20200402_1.JPEG | 3 | 33 | 5 | 1 | 21 | 26 | 1 |
| Jun_20200402_2.JPEG | 39 | 13 | 17 | 20 | 4 | 13 | 1 |
| Jun_20200402_3.JPEG | 21 | 27 | 9 | 1 | 26 | 26 | 1 |
| Jun_20200403_1.JPEG | 19 | 26 | 4 | 2 | 16 | 10 | 2 |
| Jun_20200403_2.JPEG | 45 | 8 | 22 | 4 | 2 | 7 | 51 |
Azure Face API
Pros: The free subscription includes 30,000 calls per month.
Cons: Not sensitive to people with less expressive faces.

F.A.C.E. API by Sightcorp
Pros: Provides more insights related to emotion.
Cons: The free trial only lasts 2 weeks.