Mitigating the Harm of Recommender Systems

Objective

Read one or more of the articles below and consider how to counter the radicalizing effects of recommender systems or how to prevent algorithmic discrimination.

Up Next: A Better Recommendation System

YouTube, the Great Radicalizer

Social Influence Bias in Recommender Systems: A Methodology for Learning, Analyzing, and Mitigating Bias in Ratings

  • Today, recommendation engines are perhaps the biggest threat to societal cohesion on the internet—and, as a result, one of the biggest threats to societal cohesion in the offline world, too. The recommendation engines we engage with are broken in ways that have grave consequences: amplified conspiracy theories, gamified news, nonsense infiltrating mainstream discourse, misinformed voters. Recommendation engines have become The Great Polarizer.

  • According to Social Influence Bias in Recommender Systems, social influence bias can be significant in recommender systems, and this bias can be substantially reduced with machine learning. To apply the methodology to other recommender systems, key questions for future work are how to extend the approach to large item inventories and how much training data is required in such cases. One idea is to cluster or classify items into a small number of representative categories and train a model for each category (a sketch of this idea appears after this list).

  • According to Algorithmic bias detection and mitigation, algorithms are harnessing volumes of macro- and micro-data to influence decisions affecting people in a range of tasks, from making movie recommendations to helping banks determine the creditworthiness of individuals.

  • First, all detection approaches should begin with careful handling of users' sensitive information, including data that identify a person's membership in a federally protected group (e.g., race, gender). In some cases, operators of algorithms may also worry about a person's membership in some other group that is susceptible to unfair outcomes. An example would be college admissions officers worrying about an algorithm's exclusion of applicants from lower-income or rural areas; these individuals may not be federally protected, but they are still susceptible to certain harms (e.g., financial hardship).

  • In the former case, systemic bias against protected classes can lead to collective, disparate impacts, which may have a basis for legally cognizable harms, such as the denial of credit, online racial profiling, or massive surveillance.[23] In the latter case, the outputs of the algorithm may produce unequal outcomes or unequal error rates for different groups, but they may not violate legal prohibitions if there was no intent to discriminate (a sketch of how such group-level disparities might be measured appears below).
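
The per-category idea from Social Influence Bias in Recommender Systems could be prototyped along the following lines. This is a minimal sketch, not the paper's actual methodology: the synthetic item features, the herded_ratings and followup_ratings arrays, and the choice of KMeans plus ridge regression are all assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Assumed data (synthetic): item feature vectors, ratings entered after users
# saw the crowd's median (potentially herded), and follow-up ratings collected
# without that social signal, treated here as the less-biased target.
n_items, n_features = 500, 8
item_features = rng.normal(size=(n_items, n_features))
herded_ratings = rng.uniform(1, 5, size=n_items)
followup_ratings = np.clip(herded_ratings + rng.normal(scale=0.4, size=n_items), 1, 5)

# Step 1: group a large item inventory into a small number of representative
# categories, so the per-model training-data requirement stays manageable.
n_categories = 5
kmeans = KMeans(n_clusters=n_categories, n_init=10, random_state=0).fit(item_features)
categories = kmeans.labels_

# Step 2: train one correction model per category, mapping socially
# influenced ratings toward the follow-up (less biased) ratings.
models = {}
for c in range(n_categories):
    mask = categories == c
    models[c] = Ridge(alpha=1.0).fit(
        herded_ratings[mask].reshape(-1, 1), followup_ratings[mask]
    )

def debias(rating, item_vector):
    """Correct a new rating using the model of its item's category."""
    category = int(kmeans.predict(item_vector.reshape(1, -1))[0])
    return float(models[category].predict([[rating]])[0])

print(debias(4.5, item_features[0]))
```

Clustering keeps the number of models small even for large inventories, which speaks directly to the open question about how much training data such an extension would need.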
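
For the bias-detection points above, a minimal audit might compare selection rates and error rates across groups while keeping the sensitive attribute out of the model's inputs and using it only for evaluation. This sketch uses made-up column names (group, y_true, y_pred) and a tiny hand-built table; the 0.8 "four-fifths" ratio is a common screening heuristic, not a legal standard.

```python
import pandas as pd

# Assumed audit table (illustrative values): the sensitive attribute is kept
# only for auditing, not as a model feature.
audit = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1, 0, 1, 0, 1, 0, 1, 0],   # actual outcome (e.g., loan repaid)
    "y_pred": [1, 0, 1, 1, 1, 0, 0, 0],   # algorithm's decision
})

# Selection rate per group: how often each group receives the favorable outcome.
selection = audit.groupby("group")["y_pred"].mean()

# Disparate impact ratio: lowest selection rate over highest. Values well below
# 1.0 (commonly below the 0.8 "four-fifths" heuristic) flag a collective,
# disparate impact worth investigating.
di_ratio = selection.min() / selection.max()

# Unequal error rates: false positive and false negative rates per group.
def error_rates(df):
    negatives = max((df["y_true"] == 0).sum(), 1)
    positives = max((df["y_true"] == 1).sum(), 1)
    fpr = ((df["y_pred"] == 1) & (df["y_true"] == 0)).sum() / negatives
    fnr = ((df["y_pred"] == 0) & (df["y_true"] == 1)).sum() / positives
    return pd.Series({"false_positive_rate": fpr, "false_negative_rate": fnr})

print("selection rates:\n", selection)
print("disparate impact ratio:", round(di_ratio, 2))
print(audit.groupby("group")[["y_true", "y_pred"]].apply(error_rates))
```

Neither number establishes intent to discriminate; they are screening signals that tell an operator where to look more closely.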