Research Discussion Assignment 3

As more systems and sectors are driven by predictive analytics, there is increasing awareness of the possibility and pitfalls of algorithmic discrimination.

In what ways do you think Recommender Systems reinforce human bias?

Recommendation systems can reinforce human biases because their similarity measures are inherently built to group individuals together.

Content filtering - recommendations are generated from attributes of the user or the item. For example, user content could include age, sex, and location, while movie content could include actors, director, and genre.
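
As a concrete illustration, here is a minimal content-filtering sketch in Python; the movies, their genre flags, and the user's history are all invented for the example:

```python
import numpy as np

# movie feature vectors: [sci-fi, romance, horror, comedy] -- values invented
movies = {
    "Alien":         np.array([1, 0, 1, 0]),
    "The Notebook":  np.array([0, 1, 0, 0]),
    "Event Horizon": np.array([1, 0, 1, 0]),
    "Superbad":      np.array([0, 0, 0, 1]),
}

def cosine(a, b):
    # cosine similarity; small epsilon guards against zero-norm vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

liked = ["Alien"]  # the user's viewing history
profile = np.mean([movies[m] for m in liked], axis=0)  # user taste profile

# recommend the unseen movie whose features best match the profile
unseen = [m for m in movies if m not in liked]
best = max(unseen, key=lambda m: cosine(profile, movies[m]))
print(best)  # -> "Event Horizon": same genre flags as "Alien"
```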

Collaborative filtering - recommendations are generated from a user's history. For example, if two users liked the same movies, user A can be recommended a movie that user B enjoyed but user A has not yet seen. This method is usually more accurate than content filtering but suffers from the cold-start problem when user data is initially sparse.
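
A minimal user-based collaborative-filtering sketch, again with an invented ratings matrix, might look like this:

```python
import numpy as np

# rows: users A, B, C; columns: movies m1..m4; 0 = not yet rated (invented)
ratings = np.array([
    [5, 4, 0, 0],  # user A
    [5, 5, 4, 1],  # user B
    [1, 0, 2, 5],  # user C
])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

a = ratings[0]
# find A's nearest neighbour among the other users
sims = [cosine(a, ratings[u]) for u in (1, 2)]
neighbour = ratings[1 + int(np.argmax(sims))]  # -> user B

# recommend the unseen movie that the neighbour rated highest
unseen = np.where(a == 0)[0]
rec = unseen[np.argmax(neighbour[unseen])]
print(f"recommend movie index {rec}")  # -> 2 (m3), which B rated 4
```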

When looking for similarities, human biases can be reinforced by majority-based recommendations. For example, if a majority of engineers are men, a recommender system may recommend engineering literature to men with higher probability than to women.
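
A toy example makes this feedback loop visible. The purchase counts below are fabricated, but they show how a naive "most popular within each group" policy replays the historical skew:

```python
from collections import Counter

# fabricated, deliberately skewed purchase history of (gender, book) pairs
history = ([("M", "engineering")] * 80 + [("M", "fiction")] * 20 +
           [("F", "engineering")] * 10 + [("F", "fiction")] * 90)

def top_item_for(group):
    # naive policy: recommend the most popular item within the group
    counts = Counter(book for g, book in history if g == group)
    return counts.most_common(1)[0][0]

print(top_item_for("M"))  # engineering -- men keep seeing engineering books
print(top_item_for("F"))  # fiction -- women are rarely shown them at all
```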

Real-world examples of this kind of reinforcement include:

- algorithms screening applicants for colleges;
- algorithms screening individuals for jobs;
- ad targeting, where people with black-sounding names are more likely to be served ads referencing criminal records;
- a service like Meetup, where an algorithm might infer that women are less likely to be interested in data-related groups; the algorithm needs to be explicitly controlled so it does not make that inference.

Reflecting on the techniques we have covered, do you think recommender systems reinforce or help to prevent unethical targeting or customer segmentation?

I believe recommender systems, if controlled properly, can produce recommendations with minimal biases, or at the very least minimal potentially offensive biases. In the case mentioned above, where engineering books are recommended to men over women, we may be able to control this bias by assigning weights that protect an under-represented feature (in this case gender) from skewing the recommendations.
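
One sketch of such a control, using a made-up correction heuristic rather than any standard debiasing method, is to rescale item scores by how under-exposed each group has been historically:

```python
def debiased_score(base_score, group, exposure):
    """Scale an item's score so historical exposure imbalance is offset.

    exposure[group] = share of this item's past impressions shown to the group.
    The correction pushes every group toward equal exposure.
    """
    target = 1.0 / len(exposure)                # equal share per group
    correction = target / max(exposure[group], 1e-9)
    return base_score * correction

# engineering books were historically shown 80/20 to men vs. women (invented)
exposure = {"M": 0.8, "F": 0.2}
print(debiased_score(0.9, "M", exposure))  # 0.9 * 0.625 = 0.5625
print(debiased_score(0.7, "F", exposure))  # 0.7 * 2.5   = 1.75 -> now ranks first
```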

Please provide one or more examples to support your arguments.

Other examples of recommender systems that may prove to be biased and potentially detrimental include algorithms screening applicants for colleges. To sift through the vast number of Ivy League applications, a recommender system may be deployed to favour applicants who are similar to previously accepted ones. However, this leads to biases that do not account for demographic changes in America, so some way to control for those biases would be crucial.

Similarly, algorithms screening individuals for jobs should be careful not to rely on recommender systems that, perhaps unintentionally but unlawfully, discard applicants due to factors such as race, gender, name, or location.