As more systems and sectors are driven by predictive analytics, there is increasing awareness of the possibility and pitfalls of algorithmic discrimination. In what ways do you think Recommender Systems reinforce human bias? Reflecting on the techniques we have covered, do you think recommender systems reinforce or help to prevent unethical targeting or customer segmentation? Please provide one or more examples to support your arguments.

In many of the recommender systems I have built in this course so far, the predicted ratings come from a model trained on the ratings of other users. This is great at picking up common trends among the user base, but it also absorbs the implicit (and sometimes explicit) biases of that user base. Part of this stems from the assumption of always targeting an “average” user, which can have the effect of reinforcing or implying stereotypes. As a personal example, it’s not uncommon for me to get ads in Spanish even though I use the app in English, presumably because many people in my zip code do use Spanish, and those are good enough odds for the advertiser. In the linked talk by Evan Estola, he gives an example of a person buying a kitchen scale on Amazon and receiving recommendations that were all items associated with drug dealing. Those items had been purchased together often enough that the recommender was simply doing its job: surfacing the items bought together more frequently than anything else. By their very nature, then, recommender systems are likely to reinforce targeting and segmentation that can become unethical or even harmful.
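To make that concrete, here is a minimal, hypothetical sketch of the kind of item-to-item co-occurrence logic behind the kitchen-scale example. The basket data and item names are invented for illustration, and simple pair counting is only one of many ways such associations could be computed.

```python
# A minimal sketch (not any real system's implementation) of an item-to-item
# recommender built on co-purchase counts. Whatever the majority of past
# buyers paired an item with gets recommended, with no notion of whether the
# association is appropriate for the current user.
from collections import Counter
from itertools import combinations

# Hypothetical purchase baskets; the item names are illustrative only.
baskets = [
    ["kitchen scale", "small plastic bags", "vacuum sealer"],
    ["kitchen scale", "small plastic bags"],
    ["kitchen scale", "small plastic bags", "vacuum sealer"],
    ["kitchen scale", "measuring cups"],
]

# Count how often each pair of items appears in the same basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(set(basket)), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Return the k items most often purchased alongside `item`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("kitchen scale"))
# ['small plastic bags', 'vacuum sealer'] -- the majority association wins,
# regardless of why this particular user bought a kitchen scale.
```

The point of the sketch is that nothing in the logic is malicious; the harmful output falls out of optimizing for the most common association in the historical data.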

Other examples later in the talk showed the potential for harm from stereotype reinforcement observed in other recommenders. Part of the blame lies with the nature of advertising, where targeting and stereotyping can be financially beneficial, so it is not surprising that advertisers rely on attributes such as race or gender in the models that decide which advertisements people see. On some level it is even necessary that they do, but there is a difference between using those attributes for products or groups that are explicitly for a given gender or race and letting them leak into every recommendation through implicit correlations. As a white male, a good recommender would not show me ads highlighting products for women (unless, of course, the advertisement is specifically targeted at men, e.g., a gift for your wife or mother) or a support group for a different race. As Estola mentioned, gender did have a role in the recommender model, but only at this explicit level, so only women would be recommended groups for women in tech, and so forth (a rough sketch of that design appears below). The key is to eliminate the more implicit biases that come from centering recommendations on averages and common associations, such as showing fewer tech groups in general to women, or showing more advertisements for resources for dealing with a criminal record to Black women. The recommender seems to be doing its job, but it does so at the cost of stereotyping, reinforcing stereotypes through the absence of other advertisements that could be more relevant (or serendipitous) to the individual.

As a math teacher whose students are primarily women, stereotype threat is something I constantly observe and try to combat. It does not seem like a stretch to think that this could be reinforced in a recommender’s algorithm, for example by returning simpler results based on gender when asked about mathematical topics.
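As a rough illustration of the explicit-only use of gender described above, the sketch below keeps gender out of the learned relevance score entirely and applies it only as an eligibility filter for groups that are expressly restricted. The group names, scores, and the `women_only` flag are hypothetical and do not reflect any real system’s data or schema.

```python
# A minimal sketch, assuming a design like the one Estola describes (not any
# actual production system): the sensitive attribute is never fed into the
# relevance model; it is consulted only as an explicit eligibility rule for
# groups that state such a restriction themselves.
from dataclasses import dataclass

@dataclass
class Group:
    name: str
    relevance: float          # score from a model that never sees gender
    women_only: bool = False  # explicit, stated restriction on the group itself

def recommend(groups, user_gender, k=3):
    """Rank by relevance; apply gender only as an explicit eligibility rule."""
    eligible = [g for g in groups if not g.women_only or user_gender == "woman"]
    return sorted(eligible, key=lambda g: g.relevance, reverse=True)[:k]

# Illustrative groups only.
groups = [
    Group("Python Developers", 0.9),
    Group("Women in Tech", 0.85, women_only=True),
    Group("Data Science Meetup", 0.8),
]

print([g.name for g in recommend(groups, "woman")])
print([g.name for g in recommend(groups, "man")])
# A woman sees the women-in-tech group; a man simply does not. Crucially, the
# relevance of tech groups is never lowered for anyone based on gender.
```

The design choice this is meant to highlight is that the sensitive attribute acts as a hard, visible rule rather than a hidden weight, so it cannot quietly depress the ranking of whole categories of recommendations for one group of users.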