As more systems and sectors are driven by predictive analytics, there is increasing awareness of the possibility and pitfalls of algorithmic discrimination.
In what ways do you think recommender systems reinforce human bias?
Reflecting on the techniques we have covered, do you think recommender systems reinforce or help to prevent unethical targeting or customer segmentation?
Please provide one or more examples to support your arguments.
A few resources:
Evan Estola (2016): When Recommendations Systems Go Bad; MLconf SEA 2016
Rishabh Jain (2016): When Recommendation Systems Go Bad
Moritz Hardt, Eric Price, Nathan Srebro (2016): Equality of Opportunity in Supervised Learning
With thousands of companies and millions of products, the sheer number of choices in one’s daily life can be overwhelming.
Algorithms are becoming more autonomous and more powerful. How can an individual be expected to sort through all the options available to them when looking for new music on Spotify, or pick their next read from the near-endless selection of books on Amazon?
No ordinary individual could possibly spend that much time finding things they might enjoy.
Enter recommendation algorithms. But along with narrowing down the choices users have, these algorithms also shape the decisions users make. A doctor may not question an algorithm that flags signs of cancer, and a soldier may not question an order to fire on a location the algorithm recommends.
Recommendation algorithms come in all shapes and sizes, from collaborative filtering and matrix factorization to clustering and deep learning.
Each system is complex in its own way and differs in how it takes in and processes data, but they all share one key similarity: purpose. Recommendation algorithms are made to recommend! Companies employ them to turn user data into a list of suggested content or products, with the goal of increasing user retention.
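To make that concrete, here is a minimal, purely illustrative sketch of one such technique, matrix factorization over a toy rating matrix; the data, factor count and learning rate are invented for the example and do not reflect any company’s production setup.

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated). Purely illustrative data.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

n_users, n_items = R.shape
k = 2                                         # number of latent factors
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))  # user factor matrix
Q = rng.normal(scale=0.1, size=(n_items, k))  # item factor matrix

lr, reg = 0.01, 0.02
for _ in range(2000):                         # plain SGD over the observed ratings
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * P[u] - reg * Q[i])

scores = P @ Q.T                              # predicted rating for every user-item pair
scores[R > 0] = -np.inf                       # mask items the user has already rated
for u in range(n_users):
    print(f"user {u}: recommend item {scores[u].argmax()}")
```

The same factor matrices also make the feedback loop discussed below easy to see: whatever the user clicks next becomes another nonzero entry in R the next time the model is retrained.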
Netflix’s algorithms surface shows a user might like in order to win screen time, and Amazon surfaces products a user might buy in order to squeeze a little more cash from their wallet. These algorithms are incredibly useful tools for companies to give users what they want while also helping users avoid being overwhelmed by too many choices. Sounds like a win-win, but is that really the case?
Recommendation algorithms were created by companies such as Facebook, YouTube, Netflix and Amazon for the purpose of helping people make decisions. An array of options is recommended, and the user makes a choice that is then fed back as new knowledge to train the algorithm, without factoring in that the choice was in fact an output shown by the algorithm.
This creates a feedback loop, where the output of the algorithm becomes part of its input. As expected, recommendations similar to the choice that was made are shown.
This leaves us with a chicken-or-egg dilemma: Did you click on something because you were inherently interested in it, or did you click on it because you were recommended it? The answer, according to research by Allison Chaney and her colleagues, lies somewhere in between.
But the vast majority of algorithms do not understand the distinction, which results in similar recommendations inadvertently reinforcing the popularity of already-popular content. Gradually, this separates users into filter bubbles or ideological echo chambers where differing viewpoints are discarded.
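A toy simulation of this loop, my own construction rather than the model from Chaney’s paper, shows how feeding clicks straight back into a popularity-based recommender concentrates exposure on a handful of items purely by chance.

```python
import numpy as np

rng = np.random.default_rng(42)
n_items = 20
clicks = np.ones(n_items)            # start every item with the same pseudo-count

for _ in range(10_000):
    # Recommend in proportion to observed popularity ...
    probs = clicks / clicks.sum()
    shown = rng.choice(n_items, p=probs)
    # ... and users mostly accept what they are shown,
    # so the output of the recommender becomes part of its input.
    if rng.random() < 0.8:
        clicks[shown] += 1

top5_share = np.sort(clicks)[-5:].sum() / clicks.sum()
print(f"top 5 of {n_items} items receive {top5_share:.0%} of all clicks")
```

Even though every item starts out identical, the rich-get-richer dynamic leaves a few items with most of the clicks, which is exactly the popularity reinforcement described above.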
Recommender systems aim to provide users with personalized product and information offerings. They take a user’s personal characteristics and past behavior into account to generate a list of items tailored to that user’s tastes.
Although very successful, these systems raise the concern that they might lead to a self-reinforcing pattern of narrowing exposure and a shift in the user’s interests. These problems are often called the “echo chamber” and the “filter bubble”, and they reinforce human bias, both conscious and unconscious.
“As users within these bubbles interact with the confounded algorithms, they are being encouraged to behave the way the algorithm thinks they will behave, which is similar to those who have behaved like them in the past,” says Chaney. The longer someone’s been active on a platform, the stronger these effects can be.
Some believe that the algorithms merely promote divisive behaviours already seen in the physical world. In fact, a 2015 Facebook research study concluded that user self-selection was to blame for the type of content seen in the news feed.
While that’s true, it’s only part of the story. Options can become increasingly narrow, and user choices can be restricted to increasingly extreme content. That is the effect of a phenomenon known as algorithmic confounding, a finding at the heart of research published in October by Allison Chaney, Brandon Stewart and Barbara Engelhardt at Princeton University.
Social media has been effective at showing us what we want to see, but it has also helped create echo chambers at a large scale.
Recent events have sparked research into breaking open these information bubbles from different perspectives.
Whether it’s allegations of ethnic cleansing in Myanmar, anti-Muslim violence in Sri Lanka or the “gilets jaunes” protests in France, it is clear that social media platforms are helping spread divisive messages online at an alarming rate and potentially fueling offline violence.
But the debate is about whether these platforms are an essential cause, without which these events could not have happened, or merely reflect real-world tensions.
One of the most common applications of machine learning today is in recommendation algorithms.
While these algorithms offer a great deal of convenience, they have some undesirable side effects. You’ve probably heard of them before: filter bubbles and echo chambers.
Despite past studies showing that users will tolerate lower levels of accuracy to gain the benefit of diverse recommendations, developers still have a disincentive to design their algorithms that way. “It is always easier to ‘be right’ by recommending safe choices,” Konstan says.
Recommendation systems built primarily on machine learning algorithms depend on data. Because that data is updated in real time from the systems’ own outputs, the data the model trains on can itself become biased.
This is due to how recommendations are generated: if users accept the suggestions the algorithm offers, their tastes are guided toward what the data indicates they should like, and when their data is in turn used to propose items to other users, these suggestions compound. If companies trust autonomous algorithms with no checks on the data or the resulting suggestions, the recommendations can circle in on themselves and become biased. Adding human involvement to the pipeline makes it possible to monitor for this kind of overfitting to the system’s own feedback.
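As a sketch of what such a check could look like, one simple monitor measures how unevenly exposure is spread across items and flags drift for human review; the impression counts and the alert threshold below are hypothetical, not values from any real system.

```python
import numpy as np

def gini(exposure: np.ndarray) -> float:
    """Gini coefficient of item exposure: 0 = perfectly even, 1 = all on one item."""
    x = np.sort(exposure.astype(float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical daily impression counts per item, e.g. pulled from serving logs.
impressions = np.array([9000, 4200, 800, 310, 95, 40, 12, 5, 2, 1])

score = gini(impressions)
print(f"exposure Gini = {score:.2f}")
if score > 0.6:  # threshold chosen for illustration only
    print("ALERT: recommendations are concentrating; route to human review")
```

Tracking a metric like this over time is one concrete way for the humans in the pipeline to notice the feedback loop before it closes in on itself.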
DeepMind researchers published a paper last week, titled Degenerate Feedback Loops in Recommender Systems.
In the paper, the researchers provide a new theoretical analysis that examines both the role of user dynamics and the behavior of recommender systems, disentangling the echo chamber effect from the filter bubble effect.
They ran simulations of five different recommendation algorithms, which placed different degrees of priority on accurately predicting exactly what the user was interested in over randomly promoting new content. The algorithms that prioritized accuracy more highly, they found, led to much faster system degeneracy.
In other words, the best way to combat filter bubbles or echo chambers is to design the algorithms to be more exploratory, showing you things that are less certain to capture your interest. Expanding the overall pool of information from which the recommendations are drawn can also help.
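A sketch of that idea, not DeepMind’s exact method: an epsilon-greedy re-ranker that occasionally swaps one of the model’s top picks for a random item from the wider catalogue. The item names and parameter values are invented for the example.

```python
import random

def rerank_with_exploration(ranked_items, catalogue, k=10, epsilon=0.2, seed=None):
    """Return k items: mostly the model's top picks, with a fraction replaced
    by random items from the full catalogue to encourage exploration."""
    rng = random.Random(seed)
    candidates = [item for item in catalogue if item not in ranked_items[:k]]
    slate = []
    for item in ranked_items[:k]:
        if candidates and rng.random() < epsilon:
            slate.append(candidates.pop(rng.randrange(len(candidates))))
        else:
            slate.append(item)
    return slate

# Usage with made-up item ids:
model_ranking = [f"item_{i}" for i in range(100)]    # best-first according to the model
catalogue = [f"item_{i}" for i in range(1000)]       # the full inventory
print(rerank_with_exploration(model_ranking, catalogue, k=5, epsilon=0.4, seed=1))
```

Raising epsilon trades short-term accuracy for diversity, which is exactly the tension Konstan describes: the safest slate is the least exploratory one.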
Approaching filter bubbles and echo chambers as interactive systems involving humans, rather than as mere machine-learning simulations, would help mitigate and contain the bias.
One potential method for resolving these issues is to put a person in the middle. This solution combines human expertise with computer efficiency, primarily keeping a check on the suggestions that end up being shown to the end user. Breaking the feedback loop might also mean mimicking the ways humans discover items of interest offline: through friends and family, expert advisers, happy accidents or serendipitous chance.
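As a hedged sketch of what that person in the middle could look like in code, recommendations whose projected reach crosses a threshold are held for editorial sign-off before being shown; the class, fields and threshold below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item_id: str
    score: float
    projected_reach: int      # how many users would be shown this item

REVIEW_THRESHOLD = 50_000     # hypothetical reach above which a human must sign off

def route(rec: Recommendation, approved_by_editor: bool = False) -> str:
    """Decide whether a recommendation ships automatically or waits for review."""
    if rec.projected_reach < REVIEW_THRESHOLD:
        return "ship"
    return "ship" if approved_by_editor else "hold for human review"

print(route(Recommendation("item_42", 0.91, projected_reach=1_200)))
print(route(Recommendation("item_7", 0.88, projected_reach=250_000)))
```

The point is not the specific rule but the shape of the pipeline: the algorithm proposes, and a person gets a chance to review before the loop closes.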
As artificially intelligent systems draw inspiration from human intelligence, we may end up with more enjoyable and safer social media platforms.