Read one or more of the articles below and consider how to counter the radicalizing effects of recommender systems or how to prevent algorithmic discrimination.
Recommender systems rely on user information to decide which products to recommend. That information comes from user profiles and leans heavily on patterns in search and purchase behavior. If, for example, an individual searches for a kitchen scale on Amazon, they might be shown rolling papers and packaging bags because similar users tend to purchase those items together. Such unintended associations can surface in any domain, so developers of these systems must actively look for and mitigate their harmful effects.
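As an illustration, here is a minimal sketch of that co-purchase logic in Python, using item-to-item co-occurrence counts. All names and data are hypothetical, and a production system like Amazon's is far more sophisticated; the point is only that the raw co-purchase signal has no notion of which pairings are appropriate.

```python
from collections import Counter, defaultdict

# Toy purchase histories (hypothetical): user -> items bought.
purchases = {
    "u1": ["kitchen scale", "rolling papers", "packaging bags"],
    "u2": ["kitchen scale", "rolling papers"],
    "u3": ["kitchen scale", "mixing bowl"],
    "u4": ["mixing bowl", "measuring cups"],
}

def co_occurrence_counts(histories):
    """Count how often each pair of items appears in the same basket."""
    counts = defaultdict(Counter)
    for items in histories.values():
        for a in items:
            for b in items:
                if a != b:
                    counts[a][b] += 1
    return counts

def recommend(item, counts, k=2):
    """Return the k items most often co-purchased with `item`."""
    return [other for other, _ in counts[item].most_common(k)]

counts = co_occurrence_counts(purchases)
print(recommend("kitchen scale", counts))
# ['rolling papers', 'packaging bags']: the co-purchase signal alone
# surfaces these items, with no notion of whether the pairing is
# appropriate for the shopper.
```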
Self-regulated recommender systems have been shown to absorb biased and discriminatory behavior unless they are explicitly constrained not to. Therefore, many current mitigation strategies involve active human oversight as a preventative measure. In "A Better Recommendation System," Renée DiResta provides a few examples of efforts being made to mitigate these harmful effects:
Ultimately, when designing a recommendation system, it's important to recognize that, just like humans, these systems are subject to false, misleading, and radicalizing information. Rather than take a reactive approach, the designers of such systems must proactively search for, identify, and prevent the harm that could result, as the sketch below illustrates.
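To make that proactive stance concrete, the sketch below continues the hypothetical example above: every recommendation is filtered through a human-curated blocklist of harmful pairings before it is served. The blocklist and all names are invented for illustration; this is just one simple form of the human involvement described earlier.

```python
from collections import Counter

# Continuing the hypothetical example above: co-purchase counts for
# one source item, as produced by co_occurrence_counts().
scale_counts = Counter(
    {"rolling papers": 2, "packaging bags": 1, "mixing bowl": 1}
)

# Hypothetical blocklist maintained by human reviewers: source item ->
# items that must never be recommended alongside it.
BLOCKED_PAIRINGS = {
    "kitchen scale": {"rolling papers", "packaging bags"},
}

def safe_recommend(item, ranked_counts, k=2):
    """Drop human-flagged pairings before serving recommendations."""
    blocked = BLOCKED_PAIRINGS.get(item, set())
    ranked = [other for other, _ in ranked_counts.most_common()]
    return [other for other in ranked if other not in blocked][:k]

print(safe_recommend("kitchen scale", scale_counts))
# ['mixing bowl']: the flagged pairings are suppressed before the
# recommendation is ever shown, rather than after harm is reported.
```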