Read one or more of the articles below and consider how to counter the radicalizing effects of recommender systems or ways to prevent algorithmic discrimination.
I have read this article: Zeynep Tufekci, "YouTube, the Great Radicalizer," The New York Times (2018).
In this article, the author discusses YouTube's recommender system. The algorithm optimizes for watch time: the more time people spend on the site, the more ads they view, and the more money the company makes. YouTube says that in 2012 it changed its algorithm from driving views to driving watch time, and that in 2018 it modified the algorithm again to recommend more "trusted journalistic organizations" after it promoted conspiracy theories in the wake of the Parkland shooting (and again in 2019). Still, Guillaume Chaslot, a former Google engineer, is concerned that people "have no idea what the algorithm is doing."
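To make the watch-time objective concrete, here is a minimal sketch in Python of what such a ranking step could look like. This is my own illustration, not YouTube's actual code: the Video class, the predicted_watch_seconds values, and the video names are all made up, and a real system would use a learned model over user history rather than fixed per-video numbers.

```python
# Minimal sketch of a watch-time-optimizing ranker (hypothetical, not
# YouTube's actual system: a real recommender uses a learned model, not
# a fixed per-video estimate).
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    predicted_watch_seconds: float  # the model's guess at how long this user will watch


def rank_by_watch_time(candidates: list[Video], k: int = 3) -> list[Video]:
    # The objective is time on site (and therefore ad impressions),
    # not accuracy, trustworthiness, or user well-being.
    return sorted(candidates, key=lambda v: v.predicted_watch_seconds, reverse=True)[:k]


if __name__ == "__main__":
    candidates = [
        Video("calm-explainer", 120.0),
        Video("news-clip", 90.0),
        # Sensational content often keeps people watching longer, so a
        # pure watch-time objective tends to push it to the top.
        Video("sensational-conspiracy", 540.0),
    ]
    for video in rank_by_watch_time(candidates):
        print(video.video_id, video.predicted_watch_seconds)
```

The point of the sketch is that nothing in the objective penalizes misleading content; if it holds attention, it ranks first.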
Chaslot developed AlgoTransparency as a way to expose this kind of algorithmic discrimination. His team built the program to identify the videos YouTube's recommendation algorithm most often recommends for a given search. The purpose of the project is to inform citizens about the mechanisms behind the algorithms that determine and shape our access to information. I think it could help counter radicalizing effects: once users know that a recommender engine can subject them to algorithmic discrimination, they become aware of the mechanism and can better protect themselves from being misled.
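The core idea, as I understand it, is an audit by repeated crawling: start from videos for a search term, follow the recommendations, and count which videos come up most often. Below is a toy sketch of that idea, not AlgoTransparency's actual code; get_recommendations() is a hypothetical stand-in for scraping YouTube's "up next" list, and the toy graph and video names are invented.

```python
# Sketch of an AlgoTransparency-style audit: starting from a seed video,
# repeatedly follow recommendations and count which videos the algorithm
# surfaces most often. get_recommendations() is a hypothetical stand-in
# for scraping YouTube's "up next" list, which this sketch omits.
from __future__ import annotations

import random
from collections import Counter


def get_recommendations(video_id: str) -> list[str]:
    # Hypothetical placeholder: a toy recommendation graph in which
    # conspiracy videos mostly recommend more conspiracy videos.
    toy_graph = {
        "seed-news": ["mainstream-1", "conspiracy-1"],
        "mainstream-1": ["mainstream-2", "conspiracy-1"],
        "mainstream-2": ["mainstream-1", "conspiracy-1"],
        "conspiracy-1": ["conspiracy-2"],
        "conspiracy-2": ["conspiracy-1"],
    }
    return toy_graph.get(video_id, [])


def audit(seed: str, walks: int = 1000, depth: int = 5) -> Counter[str]:
    # Random-walk the recommendation graph and count how often each video
    # shows up; the most frequent videos are what the algorithm "pushes".
    counts: Counter[str] = Counter()
    for _ in range(walks):
        current = seed
        for _ in range(depth):
            recs = get_recommendations(current)
            if not recs:
                break
            current = random.choice(recs)
            counts[current] += 1
    return counts


if __name__ == "__main__":
    for video_id, count in audit("seed-news").most_common(5):
        print(video_id, count)
```

Even in this toy version, the conspiracy videos dominate the counts because they form a loop that the walk rarely leaves, which mirrors the kind of concentration AlgoTransparency is designed to make visible.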
References:
Zeynep Tufekci, "YouTube, the Great Radicalizer," The New York Times (2018).
PCMag, "Does YouTube's Algorithm Lead to Radicalization?": https://www.pcmag.com/news/does-youtubes-algorithm-lead-to-radicalization
AlgoTransparency: https://algotransparency.org/?date=10-06-2020&keyword=