Mitigating the Harm of Recommender Systems
Read one or more of the articles below and consider how to counter the radicalizing effects of recommender systems or ways to prevent algorithmic discrimination.
Renee Diresta, Wired.com (2018): Up Next: A Better Recommendation System
Zeynep Tufekci, The New York Times (2018): YouTube, the Great Radicalizer
Sanjay Krishnan, Jay Patel, Michael J. Franklin, Ken Goldberg (n/a): Social Influence Bias in Recommender Systems: A Methodology for Learning, Analyzing, and Mitigating Bias in Ratings
Recommender systems work by doing precisely what they are ‘designed’ to do. The minds of one or more human beings are usually behind the algorithmic design and instantiation of recommender engines. As humans, our thought patterns are guided by our experiences and backgrounds, cultural or otherwise. Thus, to some extent, we are all biased in one way or another as individuals. These individual biases can be subtly incorporated into algorithmic design, leading to results that may be considered unethical. Hence, in considering ways to prevent algorithmic discrimination, it is prudent to start from the perspective of the human designers.
One approach suggested in the literature is diversity-in-design. In explaining this approach, Turner-Lee et al. argue that operators of algorithms should consider the role of diversity within their work teams and training data, and the level of cultural sensitivity within their decision-making processes. Employing diversity in the design of algorithms upfront can help anticipate and avoid harmful discriminatory effects on certain protected groups, especially racial and ethnic minorities. While the immediate consequences of bias in any single interaction may be small, the sheer quantity of digital interactions and inferences can amount to a new form of systemic bias. The authors explain that operators of algorithms should not discount the possibility or prevalence of bias; they should seek to have a diverse workforce developing the algorithm, integrate inclusive spaces within their products, or employ “diversity-in-design,” in which deliberate and transparent actions are taken to ensure that cultural biases and stereotypes are addressed upfront and appropriately. Building inclusivity into the algorithm’s design can help vet its cultural sensitivity toward various groups and help companies avoid algorithmic outcomes that would otherwise be litigious and embarrassing (Turner-Lee et al., 2019). These findings are especially critical for HR-related recommendation algorithms.
Turner-Lee et al. also suggest that operators of algorithms regularly audit for bias. Audits prompt the review of both input data and output decisions, and when done by a third-party evaluator, they can provide insight into the algorithm’s behavior. Developing a regular and thorough audit of the data collected for the algorithmic operation, along with responses from developers, civil society, and others impacted by the algorithm, will better detect, and possibly deter, biases. A simple version of such an output-side audit is sketched below.
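To make the audit idea concrete, here is a minimal sketch (my own illustration, not a method from Turner-Lee et al.) of one output check an auditor might run: compare how often a recommender surfaces items to, or recommends candidates from, each protected group, and flag large gaps using the familiar “four-fifths” rule of thumb. The group labels and the audit_log structure are assumptions made for illustration.

```python
from collections import defaultdict

# Hypothetical audit log: each record is (protected_group, was_recommended).
# In a real audit these records would be drawn from the system's input data
# and its logged output decisions.
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def recommendation_rates(log):
    """Share of records in each group that received a recommendation."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in log:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    return {group: rec / total for group, (rec, total) in counts.items()}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by the highest; values well below 1.0
    (e.g. under the 0.8 'four-fifths' rule of thumb) warrant human review."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else float("nan")

rates = recommendation_rates(audit_log)
print(rates)                          # e.g. {'group_a': 0.75, 'group_b': 0.25}
print(disparate_impact_ratio(rates))  # 0.33... -> flag for closer review
```

A check like this does not prove or disprove discrimination on its own; it is one signal a third-party auditor could combine with a review of the training data and developer responses, as described above.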
Turner-Lee, N., Resnick, P., & Barton, G. (2019, May 22). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/