
Date article was published: April 7, 2016

Summary of Article

Machine learning is one of the hottest topics in software development and data science, and companies are putting big money behind it. But what happens when a machine learning algorithm has a racial bias? Unfortunately, law enforcement agencies across the nation are using biased facial-recognition systems to identify passersby in public, in real time, in hopes of spotting wanted suspects and criminals. The facial-recognition software they use was trained on predominantly white training sets, which makes it more accurate at identifying white individuals and much less accurate at identifying non-Caucasian faces. This has especially impacted the African American community and has resulted in false arrests. Unsurprisingly, when similar facial-recognition systems from different countries were compared, each country's system was found to be most accurate at identifying individuals of that country's majority ethnicity. The larger ethical dilemma, however, stems from US law enforcement's lack of transparency about its facial-recognition systems and its refusal to submit them to the National Institute of Standards and Technology (NIST) for accuracy and bias testing. Refusing to submit the software to NIST reflects poor judgment and suggests the agencies value saving face over actually improving the system.

Author Information

CLARE GARVIE is a law fellow at the Center on Privacy & Technology at Georgetown Law.


JONATHAN FRANKLE is the staff technologist at the Center on Privacy & Technology at Georgetown Law.


About The Atlantic

The Atlantic is an American magazine and publisher that provides daily coverage and analysis of breaking news, politics and international affairs, education, technology, health, science, and culture. The Executive Editor of the website is Adrienne LaFrance and the Editor-in-Chief is Jeffrey Goldberg.


What Do I Think?

I find this article interesting because it shines a light on how, even though we usually think of racial bias as something only people can have and of code as a cold, separate entity, the two can easily become intertwined. While an algorithm may not have political views or opinions of its own, it can easily become biased through the training set it is fed, as the sketch below illustrates. It is important to keep this in mind in order to maintain integrity in the data science community.
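To make that point concrete, here is a minimal, purely illustrative Python sketch (not from the article, and not the software discussed in it): a classifier trained mostly on one synthetic group scores noticeably worse on an underrepresented group whose features are distributed differently. All names, numbers, and data here are invented for illustration only.

```python
# Illustrative only: shows how a skewed training set can yield unequal
# per-group accuracy. The data is synthetic; no real system is modeled.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n samples per class for one group; `shift` moves its feature distribution."""
    X = np.vstack([rng.normal(shift, 1.0, (n, 5)),
                   rng.normal(shift + 1.5, 1.0, (n, 5))])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A dominates the training data; group B is underrepresented and
# has a different feature distribution.
Xa, ya = make_group(n=1000, shift=0.0)
Xb, yb = make_group(n=50, shift=3.0)

X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])
group = np.array(["A"] * len(ya) + ["B"] * len(yb))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Per-group accuracy: the underrepresented group typically scores far lower,
# even though overall accuracy looks fine.
for g in ("A", "B"):
    mask = g_te == g
    print(f"group {g}: accuracy = {model.score(X_te[mask], y_te[mask]):.2f}")
```

Running this, group A's accuracy is high while group B's hovers near chance, which is the same pattern the article describes: the model is only as fair as the data it was trained on, and measuring accuracy per group (as NIST testing would) is what exposes the gap.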

Furthermore, law enforcement should be more transparent and submit its software to NIST for review, because the potential impact of the bias far outweighs whatever face the agencies are trying to save. This is a lesson that rings true for all data and computer scientists: own up to your mistakes, even in the seemingly black-and-white world of numbers, so you can continually improve your product.