Article publish date: March 12, 2018

Summary of Article

This article explores how Apple's FaceID worked as of 2018 and reverse-engineers it, building a working version with Keras and an Xbox Kinect. The article is essentially broken into two sections: how FaceID works, and how we can implement it ourselves. Ultimately, the author succeeded in creating his own proof of concept of the FaceID unlocking mechanism, based on Apple's FaceID whitepaper.

How does FaceID Work?

One might assume that FaceID is simply a binary classification problem: either the face in front of the camera belongs to an authorized user, or it doesn't. However, this article points out that the underlying algorithms do much more, and in a much different way than expected. One of Apple's main selling points for FaceID was that it adapts to your face even as your appearance changes over time.

knitr::include_graphics('faceidchange.gif')
FaceID Adapts Over Time

If FaceID were implemented as a plain binary classifier, each time new images of the user came in, the whole model would have to be retrained from scratch, which is computationally expensive and extremely slow. The author theorizes that Apple instead uses a Siamese-like convolutional neural network: a network that maps each face image to a numeric embedding, so that the distance between two embeddings measures how similar the two faces are.

This way, Apple can pre-train the model on their end and ship it to devices with no further training required. With their vast resources, they can even train the FaceID model to handle edge cases (twins, masks, etc.). Classification with this model is then simply checking whether the face in front of the phone is similar enough to the profile images stored on the phone.
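
To make this concrete, below is a minimal sketch of the Siamese-style comparison in R, using the keras R interface to match the code later in this post. The tiny CNN, the 64x64 RGB-D input shape, and the 128-dimensional embedding are all illustrative assumptions, not Apple's actual design; training, which would push same-person pairs together and different-person pairs apart, is omitted.

library(keras)

# Shared embedding network: maps a 64x64 RGB-D image (4 channels) to a
# 128-dimensional vector. All sizes here are illustrative assumptions.
embed_net <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = 'relu',
                input_shape = c(64, 64, 4)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = 'relu') %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 128)

# The SAME network embeds both images (shared weights).
img_a <- array(runif(64 * 64 * 4), dim = c(1, 64, 64, 4))  # stand-in images
img_b <- array(runif(64 * 64 * 4), dim = c(1, 64, 64, 4))
emb_a <- predict(embed_net, img_a)
emb_b <- predict(embed_net, img_b)

# Euclidean distance between embeddings: small means same face, large means
# different faces. Unlocking compares this distance against a threshold.
face_distance <- sqrt(sum((emb_a - emb_b)^2))

Enrolling a new user then reduces to storing their embedding, and adapting to appearance changes means updating the stored embeddings, not retraining the network.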

FaceID in Python

The author used the deep learning library Keras to implement his own version of FaceID in Python. Before designing any part of the model, however, he had to collect face-image data, and luckily he found an RGB-D face dataset. Using the SqueezeNet CNN architecture as a base model, he tweaked it to take two images as input and output the distance between them: images of the same person should produce a low distance, while images of different people should produce a high distance. If the distance is under some threshold, the model signals "unlocked." After training, the author visualized the learned embeddings using t-SNE (t-distributed stochastic neighbor embedding) and PCA (principal component analysis).
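
The post doesn't reproduce those plots, but the visualization step looks roughly like the following sketch. The Rtsne package, the 128-dimensional embeddings, and the random stand-in data are all assumptions here, not the author's actual code.

library(Rtsne)

# Stand-in for the trained model's output: 200 embeddings of 128 dimensions,
# 20 images for each of 10 people.
set.seed(42)
embeddings <- matrix(rnorm(200 * 128), nrow = 200)
person <- factor(rep(1:10, each = 20))

# PCA: linear projection onto the first two principal components.
pca <- prcomp(embeddings)
plot(pca$x[, 1:2], col = person, pch = 19, main = 'PCA of face embeddings')

# t-SNE: nonlinear projection that tends to cluster same-person images.
tsne <- Rtsne(embeddings, perplexity = 15)
plot(tsne$Y, col = person, pch = 19, main = 't-SNE of face embeddings')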

Ultimately, the model output distances of around 0.4 for images of the same person and about 1.0 for images of different people. Success!
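
Given that separation, the unlock decision is just a threshold check. The 0.7 cutoff below is an assumed midpoint between the two reported distances, not a value from the article.

# Distances reported by the author: ~0.4 for the same person, ~1.0 otherwise.
is_unlocked <- function(distance, threshold = 0.7) {
  distance < threshold
}

is_unlocked(0.4)  # TRUE: close enough to the stored profile image
is_unlocked(1.0)  # FALSE: the phone stays locked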

About the Author

Norman Di Palo is a relatively active author on Medium whose interests include deep learning and robotics.

You can find the source code of his FaceID implementation in this GitHub repo.

Random Plots of R Data

HairEyeColor Plot

library(reshape2)
library(ggplot2)

# Melt the 3-way HairEyeColor table into long format (Hair, Eye, Sex, value).
# Calling reshape2::melt() explicitly avoids the deprecated redirect through
# data.table's melt generic.
m <- reshape2::melt(HairEyeColor)

# Dodged bar chart of counts by hair and eye color, faceted by sex.
ggplot(m) +
  geom_col(aes(x = Hair, y = value, fill = Eye), position = 'dodge') +
  facet_grid(Sex ~ .)

Beaver Plot
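
A minimal sketch of a beaver plot, assuming the built-in beaver1 dataset (body temperature of one beaver sampled every 10 minutes) is the data being plotted.

library(ggplot2)

# Body temperature over successive 10-minute readings, with points coloured
# by whether the beaver was active outside its retreat.
beaver <- transform(beaver1, obs = seq_len(nrow(beaver1)))
ggplot(beaver, aes(x = obs, y = temp)) +
  geom_line() +
  geom_point(aes(colour = factor(activ))) +
  labs(x = 'Observation', y = 'Body temperature (C)', colour = 'Active')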

Sleep Datatable with DT
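
A minimal sketch rendering the built-in sleep dataset as an interactive table with the DT package.

library(DT)

# Interactive, searchable table of the built-in sleep dataset
# (extra sleep gained by ten patients under two drugs).
datatable(sleep)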