Mitigating the Harm of Recommender Systems
How to counter the radicalizing effects of recommender systems and prevent algorithmic discrimination.
Threats of Recommender Systems?
Recommendation engines are perhaps the biggest threat to societal cohesion on the internet and offline.
The recommendation engines we engage with are broken in ways that have grave consequences: amplified conspiracy theories, gamified news, nonsense infiltrating mainstream discourse, misinformed voters. Recommendation engines have become The Great Polarizer.
YouTube’s algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with, or to incendiary content in general.
YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.
How can we make the internet’s recommendation engines more ethical?
The systems don’t actually understand the content; they simply return what they predict will keep us engaged. That’s because their primary function is to help achieve one or two specific key performance indicators (KPIs) chosen by the company.
As extreme, polarizing, and sensational content continues to rise to the top, it’s increasingly obvious that curatorial algorithms need to be tempered with additional oversight, and reweighted to consider what they’re serving up.
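To make the reweighting idea concrete, here is a toy sketch of the gap between ranking purely by an engagement KPI and tempering that score with an oversight signal. Every field and weight below (the `predicted_watch_time` KPI, the `borderline_score` classifier output, the `PENALTY` constant) is an assumption for illustration, not any platform’s actual scoring.

```python
# Toy candidate pool: an engagement KPI plus a hypothetical oversight signal
# (e.g., a classifier's estimate that the content is borderline/polarizing).
candidates = [
    {"id": "video_a", "predicted_watch_time": 9.1, "borderline_score": 0.8},
    {"id": "video_b", "predicted_watch_time": 7.4, "borderline_score": 0.1},
    {"id": "video_c", "predicted_watch_time": 6.5, "borderline_score": 0.0},
]

PENALTY = 5.0  # hypothetical weight on the oversight signal

def kpi_score(item):
    """Baseline ranker: optimize the engagement KPI alone."""
    return item["predicted_watch_time"]

def tempered_score(item):
    """Reweighted ranker: engagement minus a penalty for flagged content."""
    return kpi_score(item) - PENALTY * item["borderline_score"]

# The most engaging (and most borderline) video tops the first list
# but drops to the bottom of the second.
print([i["id"] for i in sorted(candidates, key=kpi_score, reverse=True)])
print([i["id"] for i in sorted(candidates, key=tempered_score, reverse=True)])
```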
The Redirect Method, an effort by Google Jigsaw, redirects certain types of users who search YouTube for terrorist videos (people who appear to be motivated by more than mere curiosity). Rather than offering up more violent content, that recommendation system does the opposite: it points users to content intended to de-radicalize them. The project has been underway around violent extremism for a few years, which means that YouTube has been aware of the conceptual problem, and of how much power its recommender systems wield, for some time now.
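The core mechanic can be sketched in a few lines. The flagged queries, counter-content IDs, and exact-match rule below are invented placeholders; Jigsaw’s real targeting uses far richer signals than a keyword list.

```python
# Hypothetical redirect layer placed in front of a normal recommender.
FLAGGED_QUERIES = {"extremist recruiting video", "martyrdom propaganda"}  # made up
COUNTER_PLAYLIST = ["testimony_of_a_defector", "imam_debunks_propaganda"]  # made up

def recommend(query, default_recommender):
    """Serve de-radicalizing content for flagged intent; otherwise defer to
    the usual recommendation pipeline."""
    if query.lower().strip() in FLAGGED_QUERIES:
        return COUNTER_PLAYLIST
    return default_recommender(query)

# Example usage with a stand-in default recommender:
print(recommend("martyrdom propaganda", lambda q: ["regular_result_1"]))
print(recommend("cat videos", lambda q: ["regular_result_1"]))
```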
Another mitigation is implementing choice architecture: presenting information or products to people in a manner that takes individual or societal welfare into account while preserving consumer choice.
Methodology for Learning, Analyzing, and Mitigating Bias in Ratings?
The article proposes a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of social influence bias in recommender systems.
Learning phase: a baseline dataset is established with an initial set of participants who rate each item twice, once before seeing the median rating and again after seeing it.
Analysis phase: a new non-parametric significance test based on the Wilcoxon statistic quantifies the extent of social influence bias in this data. If the bias is significant, the methodology proceeds to a Mitigation phase.
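A minimal sketch of the Learning and Analysis phases, using simulated paired ratings and scipy’s standard Wilcoxon signed-rank test as a stand-in for the paper’s custom test:

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated Learning-phase data (assumed, for illustration): each participant
# rates an item before seeing the median rating, then again after seeing it.
# The "after" ratings drift toward the shown median of 3.0, mimicking
# social influence bias.
rng = np.random.default_rng(42)
before = rng.uniform(1, 5, size=200)
after = np.clip(before + 0.4 * (3.0 - before) + rng.normal(0, 0.3, size=200), 1, 5)

# Analysis phase: test whether the paired before/after differences are
# significant. The paper defines its own non-parametric test built on the
# Wilcoxon statistic; the standard signed-rank test stands in for it here.
stat, p_value = wilcoxon(before, after)
print(f"Wilcoxon statistic = {stat:.1f}, p-value = {p_value:.2e}")
if p_value < 0.05:
    print("Significant social influence bias -> proceed to the Mitigation phase.")
```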
Mitigation phase: mathematical models are constructed from this data using polynomial regression and the Bayesian Information Criterion (BIC), then inverted to produce a filter that reduces the effect of social influence bias.
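Continuing the simulated example above, a sketch of the Mitigation phase: fit polynomials of increasing degree to the before-to-after relationship, select the degree with the lowest BIC, and numerically invert the fitted model to estimate the pre-influence rating. The BIC formula and grid-based inversion are standard choices assumed here, not necessarily the paper’s exact construction.

```python
import numpy as np

# Reuses the simulated `before`/`after` arrays from the Analysis-phase sketch.

def bic(y, y_hat, n_params):
    """Bayesian Information Criterion for a least-squares fit."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Fit polynomials of degree 1..4 mapping pre-influence ratings to observed
# post-influence ratings; keep the degree with the lowest BIC.
fits = {}
for degree in range(1, 5):
    coeffs = np.polyfit(before, after, degree)
    fits[degree] = (coeffs, bic(after, np.polyval(coeffs, before), degree + 1))
best_degree, (best_coeffs, _) = min(fits.items(), key=lambda kv: kv[1][1])

def debias(observed_rating, grid=np.linspace(1, 5, 401)):
    """Invert the bias model: find the pre-influence rating whose predicted
    post-influence value is closest to the observed (biased) rating."""
    predicted = np.polyval(best_coeffs, grid)
    return grid[np.argmin(np.abs(predicted - observed_rating))]

print(f"Selected degree: {best_degree}")
print(f"Observed 3.4 -> debiased estimate {debias(3.4):.2f}")
```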
Conclusion: These results suggest that social influence bias can be significant in recommender systems and that this bias can be substantially reduced with machine learning. To apply the methodology to other recommender systems, key questions for future work are how to extend the approach to large item inventories and how much training data is required in such cases. One idea is to cluster or classify items into a small number of representative categories and train a model for each category. The authors believe that selecting an optimal set of items for training in this context may be posed as a submodular maximization problem.