Mitigating the Harm of Recommender Systems

Read one or more of the articles below and consider how to counter the radicalizing effects of recommender systems or ways to prevent algorithmic discrimination.

The Problem

Recommender systems rely on user information to decide what to recommend. That information is drawn from user profiles and depends heavily on patterns in search and purchase behavior. If, for example, an individual searches for a kitchen scale on Amazon, they might be shown rolling papers and packaging bags because similar users tend to purchase those items together. These unintended recommendations can arise in any domain, so the developers of such systems must proactively search for, manage, and mitigate their harmful effects.
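
To make the kitchen-scale example concrete, here is a minimal sketch of the kind of co-purchase scoring described above: items are suggested purely because they frequently appear in the same baskets as the queried item. The function names (build_co_purchase_counts, recommend) and the toy purchase histories are illustrative assumptions, not code from any particular system or from the articles.

    # Minimal sketch of "users who bought X also bought Y" scoring.
    from collections import Counter
    from itertools import combinations

    def build_co_purchase_counts(purchase_histories):
        """Count how often each ordered pair of items appears in the same basket."""
        counts = Counter()
        for basket in purchase_histories:
            for a, b in combinations(set(basket), 2):
                counts[(a, b)] += 1
                counts[(b, a)] += 1
        return counts

    def recommend(item, counts, top_n=3):
        """Return the items most frequently bought alongside `item`."""
        scores = Counter({other: n for (anchor, other), n in counts.items() if anchor == item})
        return [other for other, _ in scores.most_common(top_n)]

    histories = [
        ["kitchen scale", "rolling papers", "packaging bags"],
        ["kitchen scale", "rolling papers"],
        ["kitchen scale", "measuring cups"],
    ]
    counts = build_co_purchase_counts(histories)
    print(recommend("kitchen scale", counts))
    # "rolling papers" ranks first, followed by the other co-purchased items.
    # A harmless query yields problematic suggestions solely because of what
    # similar baskets happened to contain.

Nothing in this logic inspects what the items actually are, which is exactly why harmful associations surface unless developers add explicit safeguards.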

The Solutions

Left to regulate themselves, recommender systems tend to absorb biased and discriminatory patterns unless they are explicitly constrained not to. Therefore, many of the current solutions involve active human involvement as a preventative measure. In A Better Recommendation System, Renée DiResta provides a few examples of efforts underway to mitigate these harmful effects:

  • Project Redirect - As the name implies, this effort focuses on redirecting individuals who search for terrorist videos on YouTube toward content intended to de-radicalize them. One of the most challenging aspects of the project is that the curated alternatives were deliberately chosen not to explicitly reject the terrorist messages. Instead, the content delivers de-radicalizing messages from real people: some on the front lines, some celebrities, some public figures. This keeps the messages relatable, respected, and understandable.
  • Fact Checking - Fake news now seems to propagate across social networks faster than accurate information, and recommendation systems are no exception. Unless they are explicitly designed to fact-check, they can end up recommending content that is widely viewed and searched for but false. YouTube is countering this by linking directly to Wikipedia articles to fight conspiracy theories. Users can still post content, but viewers are given context indicating that what they are watching may not be accurate.
  • Blacklists - This involves maintaining lists of terms, topics, products, or authors that are excluded from recommendation results. For example, a “Do Not Amplify” list might include known fake Twitter accounts (see the sketch after this list).
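
The last two interventions can be thought of as post-processing on top of whatever the underlying model produces. The sketch below shows one way a “Do Not Amplify” list and a redirect table might be applied to raw recommendations; the names DO_NOT_AMPLIFY, REDIRECT_CONTENT, and filter_and_redirect, along with all the placeholder entries, are hypothetical and not taken from the article.

    # Minimal sketch: drop blacklisted sources, then prepend curated
    # counter-content for flagged queries (in the spirit of the Redirect effort).

    DO_NOT_AMPLIFY = {"known_fake_account_1", "known_fake_account_2"}

    REDIRECT_CONTENT = {
        "flagged_extremist_query": [
            "curated_deradicalizing_video_1",
            "curated_deradicalizing_video_2",
        ],
    }

    def filter_and_redirect(query, raw_recommendations):
        """Apply the blacklist, then add curated content for flagged queries."""
        cleaned = [item for item in raw_recommendations if item not in DO_NOT_AMPLIFY]
        return REDIRECT_CONTENT.get(query, []) + cleaned

    recs = ["known_fake_account_1", "regular_channel_a", "regular_channel_b"]
    print(filter_and_redirect("flagged_extremist_query", recs))
    # ['curated_deradicalizing_video_1', 'curated_deradicalizing_video_2',
    #  'regular_channel_a', 'regular_channel_b']

The hard part, of course, is not the filtering step itself but curating the lists and counter-content, which is why these approaches all depend on sustained human involvement.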

Conclusions

Ultimately, when designing a recommendation system, it’s important to recognize that, just like humans, these systems are susceptible to false, inaccurate, and radicalizing information. Rather than take a reactive approach, the designers of such systems must proactively search for, identify, and prevent the harm that could result.