Mitigating the Harm of Recommender Systems
Read one or more of the articles below and consider how to counter the radicalizing effects of recommender systems or ways to prevent algorithmic discrimination.
The article I decided to read was Renee DiResta, Wired.com (2018): "Up Next: A Better Recommendation System." One of the challenges of taking an algorithmic approach to human problems and needs is that the nuances of the human point of view get diluted or disappear; that is the trap of "data-driven decisions." People tend to forget that these are support tools for decision making, yet nowadays they are used without much questioning. Another issue is the complexity of the algorithms themselves: techniques such as SVD and neural networks present a challenge of explainability, meaning the steps (not in a mathematical fashion) by which the model arrived at conclusion X or Y, as the small sketch below illustrates.
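A minimal sketch of that explainability gap, using a toy ratings matrix of my own invention (not from the article): the SVD factors predict scores easily, but the latent dimensions carry no built-in human meaning.

```python
import numpy as np

# Hypothetical user-item ratings matrix (0 = not rated)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# Truncated SVD: keep the top-2 latent factors
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
user_factors = U[:, :k] * s[:k]   # each user as a point in latent space
item_factors = Vt[:k, :].T        # each item as a point in latent space

# The predicted score is just a dot product of latent vectors. The number is
# easy to compute, but the factors are abstract directions in the data:
# there is no built-in "because you liked X" explanation behind it.
predicted = user_factors[0] @ item_factors[2]
print(f"Predicted interest of user 0 in item 2: {predicted:.2f}")
print("Latent factors for user 0:", np.round(user_factors[0], 2))
```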
One of the critical elements for preventing the kind of problems the article describes is multidisciplinary teams (not only IT and data science) involved from the design through the implementation and evaluation of these models, so that soft or non-technical considerations are taken into account.
Bias detection from the algorithm's perspective is becoming more developed and producing better results, but it still will not identify every specific issue in the actual output, so human review must be maintained as a constant, as sketched below.
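A minimal sketch of that idea (the category names and the 0.4 threshold are my own assumptions, not from the article): an automated check measures how skewed a recommendation slate is, but it only flags the case for a human reviewer rather than deciding on its own.

```python
from collections import Counter

def exposure_by_category(recommendations):
    """Fraction of recommended items per content category."""
    counts = Counter(item["category"] for item in recommendations)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def flag_for_human_review(recommendations, category, max_share=0.4):
    """Flag the slate if one category dominates beyond a chosen threshold.

    The cutoff is arbitrary; the metric only detects a symptom, and a person
    still has to judge whether the skew is actually harmful.
    """
    share = exposure_by_category(recommendations).get(category, 0.0)
    return share > max_share, share

# Hypothetical recommendation slate
slate = [
    {"id": 1, "category": "conspiracy"},
    {"id": 2, "category": "news"},
    {"id": 3, "category": "conspiracy"},
    {"id": 4, "category": "sports"},
    {"id": 5, "category": "conspiracy"},
]

needs_review, share = flag_for_human_review(slate, "conspiracy")
print(f"Conspiracy share: {share:.0%} -> human review needed: {needs_review}")
```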
Pinterest and Google rely on automation for recommendation and have been trying to review content manually, but the reality is that, with the huge amount of content generated every second, manual review will never be an efficient way to prevent the radicalizing effect of a recommender. However, combining AI and neural networks with the recommender model lets us greatly expand the features we can identify and choose to include or exclude, and automated curation through computer vision, context analysis, and similar techniques might be a better vehicle for solving these problems, not without the warning that we might be creating new ones. A rough sketch of that curation step follows.
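A minimal sketch of automated curation under my own assumptions (the classifier here is a keyword placeholder standing in for a real computer-vision or context-analysis model, and the function names are hypothetical): candidates are screened by automated checks before the recommender is allowed to rank them.

```python
from typing import Callable, Dict, List

def looks_borderline(item: Dict) -> bool:
    # Placeholder for a trained image/text classifier; here just a keyword check.
    return any(word in item["title"].lower() for word in ("miracle cure", "hoax"))

def curate_candidates(candidates: List[Dict],
                      checks: List[Callable[[Dict], bool]]) -> List[Dict]:
    """Drop candidates that any automated check flags, before ranking."""
    return [c for c in candidates if not any(check(c) for check in checks)]

candidates = [
    {"id": 10, "title": "Local sports roundup"},
    {"id": 11, "title": "This miracle cure doctors hate"},
    {"id": 12, "title": "Weather forecast for the weekend"},
]

safe_pool = curate_candidates(candidates, checks=[looks_borderline])
print([c["id"] for c in safe_pool])  # -> [10, 12]
```

The design choice mirrors the point above: the automated checks shrink the problem to a scale humans can audit, but they inherit their own blind spots, which is where new problems could appear.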