"Mitigating the Hidden Side Effects of Recommendation Systems"

This discussion is loosely based on an article that appeared in MIT Sloan Management Review by Gediminas Adomavicius, Jesse Bockstedt, Shawn P. Curley, Jingjing Zhang, and Sam Ransbotham, Nov. 13, 2018.

References:

(1) https://wearesocial.com/blog/2020/01/digital-2020-3-8-billion-people-use-social-media

(2) A. Sharma, J.M. Hofman, and D.J. Watts, “Estimating the Causal Impact of Recommendation Systems From Observational Data,” Proceedings of the Sixteenth ACM Conference on Economics and Computation (Portland, Oregon, June 15-19, 2015): 453-470.

(3) https://www.wired.com/story/creating-ethical-recommendation-engines/

(4) https://en.wikipedia.org/wiki/Choice_architecture

(5) https://goldberg.berkeley.edu/pubs/sanjay-recsys-v10.pdf

(6) https://link.springer.com/content/pdf/10.1007%2Fs11257-011-9112-x.pdf

(7) A.M. Rashid, I. Albert, D. Cosley, S.K. Lam, S.M. McNee, J.A. Konstan, and J. Riedl, “Getting to Know You: Learning New User Preferences in Recommender Systems,” Proceedings of the 7th International Conference on Intelligent User Interfaces (IUI ’02) (New York: ACM, 2002): 127-134.

(8) https://sloanreview.mit.edu/article/the-hidden-side-effects-of-recommendation-systems/

Introduction

The headline is: “Consumers and businesses should be aware of potential decision-making biases introduced by online recommendations.” But let’s take a step back and understand why it is critically important that we as consumers be aware of, and educate ourselves about, the potential evils of recommender systems. Multiple studies[1] have shown that as the digital age has gained a stronger foothold in our lives and e-commerce activity has grown, we, the busy consumers of everything digital, have become ever more reliant on recommender systems to make our lives easier: from purchasing goods online to consuming and disseminating information. Even our social and educational fabric is knitted and shaped by digital mediums. We are truly living in a digital age, where our digital presence has become part of our human existence.


Discussion I: The Ills of Recommender Systems for Consumers

As with many other new technologies, digital recommendations are a potential source of bad consequences: recommendations do more than merely reflect our preferences; they actually shape them. If this sounds like science fiction, it is not. Recommendation systems have the potential to fuel biases and affect sales in unexpected ways. Several findings[2] have important implications for recommendation engine design in any setting where retailers use recommendation algorithms to improve customer experience and drive sales. One example is a reversion-to-the-mean phenomenon, an effect of groupthink or herd mentality: when users are told the norm, such as the average or median rating, most tend to rate toward it. This has profound implications for buyers and sellers alike, as diversity in recommendations becomes curtailed. The recommendation system starts to learn from these defective signals, which beget more defective signals, and so on; you get the point.
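
To make this feedback loop concrete, below is a minimal, purely illustrative Python sketch (a toy model of my own, not taken from the cited studies): each round, ratings are pulled partway toward the displayed average, the system recomputes the average from those anchored ratings, and the spread of opinion steadily collapses. The function name and the anchor_weight parameter are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_anchoring(n_users=1000, n_rounds=20, anchor_weight=0.3):
        """Toy model: users nudge their ratings toward the displayed average,
        and the displayed average is then recomputed from those nudged ratings."""
        prefs = rng.normal(loc=3.5, scale=1.0, size=n_users)  # private 1-5 star tastes
        displayed_avg = prefs.mean()
        spread = [prefs.std()]
        for _ in range(n_rounds):
            # Each submitted rating is pulled partway toward the displayed norm.
            submitted = (1 - anchor_weight) * prefs + anchor_weight * displayed_avg
            displayed_avg = submitted.mean()  # the system "learns" from anchored signals
            prefs = submitted                 # and users internalize the norm next round
            spread.append(prefs.std())
        return spread

    print(simulate_anchoring())  # the standard deviation shrinks round after round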

For consumers, recommendation engines have a potential dark side: they can manipulate our preferences in ways we do not realize. After all, the details underlying recommendation algorithms are far from transparent. Faulty recommendation engines that inaccurately estimate consumers’ true preferences can discourage potential buyers from paying for certain items and push them toward others, regardless of the likelihood of actual fit.


Discussion II: Consequences of Recommender Systems for Retailers

Both over- and underestimations are problems not only for consumers but for sellers as well. Inflated ratings, as alluded to in the previous section, induce consumers to buy products they might not otherwise consider and can leave them disappointed by unmet expectations. Deflated ratings potentially turn consumers away from products they might otherwise have purchased. Mistakes hurt both ways.

And the effects persist beyond dissatisfaction with a single purchase; they multiply over time. After consumers experience a product, their feedback influences future personalized predictions. Biased feedback can contaminate the system and lead to a vicious cycle of biases, the online retail equivalent of an echo chamber. Designers can also get an artificially inflated view of prediction accuracy, compromising their recommender systems’ ability to learn and improve.
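
As a rough illustration of the inflated-accuracy problem (again a toy sketch built on assumed numbers, not the article’s analysis): if the predictions shown to users also anchor the feedback those users give, the system ends up grading itself against data it partly generated, and its measured error looks better than its true error.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed setup: an imperfect model, and user feedback that is partly anchored
    # to the predictions the users were shown.
    true_ratings = rng.normal(3.5, 1.0, size=5000).clip(1, 5)
    predictions = true_ratings + rng.normal(0.0, 0.8, size=5000)  # real error ~0.8 stars
    anchor_weight = 0.4
    observed_feedback = (1 - anchor_weight) * true_ratings + anchor_weight * predictions

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    print("RMSE vs. true preferences:  ", rmse(predictions, true_ratings))
    print("RMSE vs. anchored feedback: ", rmse(predictions, observed_feedback))  # looks better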


Discussion III: Potential Measures for Mitigating “Recommender Systems Gone Bad”

What can we do about bad recommendations, whether ill-intended or accidental? According to an article by Renée DiResta in Wired [3], we should build recommendation systems driven not solely by profit but by ethics. What she means is that designers and engineers at these tech companies should make a paradigm shift in their design approach and introduce something called “choice architecture”[4]. Choice architecture sounds grand, but it is simply the design of the different ways in which choices can be presented to consumers, and the impact of that presentation on consumer decision-making: a way of presenting information or products that takes individual or societal welfare into account while preserving consumer choice.
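
One possible, purely hypothetical lever of choice architecture in a recommender context: keep every option available, but order the list by a blend of predicted relevance and a welfare-oriented score (for example, source reliability or content diversity) rather than by relevance alone. The weighting and scores below are invented to illustrate the idea; neither [3] nor [4] prescribes this exact scheme.

    from typing import List, Tuple

    def rerank_with_welfare(candidates: List[Tuple[str, float, float]],
                            welfare_weight: float = 0.3) -> List[str]:
        """Order items by a blend of predicted relevance and a welfare score,
        preserving every choice but changing how the choices are presented."""
        scored = [(name, (1 - welfare_weight) * relevance + welfare_weight * welfare)
                  for name, relevance, welfare in candidates]
        return [name for name, _ in sorted(scored, key=lambda kv: kv[1], reverse=True)]

    # (item, predicted relevance, welfare score) -- all values are made up.
    candidates = [
        ("sensational_clip", 0.95, 0.10),
        ("balanced_explainer", 0.80, 0.90),
        ("niche_documentary", 0.60, 0.85),
    ]
    print(rerank_with_welfare(candidates))  # the balanced explainer now leads the list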

From a user experience perspective, another remedy is to give users more control over recommendations, including increasing the transparency of recommendations and incorporating their context[6]. Furthermore, recommender systems can and should resolve the conflict between benefit to a particular user and benefit to the community as a whole. One resolution, suggested by Rashid et al.[7], is to provide end users with more choices, along with information about how their choices will provide value to the community at large; a toy sketch of that idea follows below.
Recommender systems must also have trust built in, a kind of “negentropy”: order and structure designed into them to prevent harms such as invasion of end users’ privacy. This must be first and foremost in the design phase of any algorithm.
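
Returning to Rashid et al.’s suggestion, one concrete way to surface community value is a popularity-times-entropy style heuristic for choosing which items to ask users about: items that are both widely known and divisive yield the most informative answers for everyone. The sketch below is a loose paraphrase of that idea; the exact scoring in [7] may differ.

    import math
    from collections import Counter

    def item_informativeness(ratings_by_item):
        """Score items by (log-)popularity times rating entropy, in the spirit of
        Rashid et al. [7]: popular, divisive items are the most useful to ask about."""
        scores = {}
        for item, ratings in ratings_by_item.items():
            total = len(ratings)
            counts = Counter(ratings)
            entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
            scores[item] = math.log(1 + total) * entropy
        return sorted(scores, key=scores.get, reverse=True)

    example = {
        "widely_loved":   [5, 5, 5, 5, 4, 5, 5],     # popular but tells us little
        "polarizing_hit": [1, 5, 2, 5, 1, 4, 5, 2],  # popular and divisive: ranked first
        "obscure":        [3, 4],                    # too little data to be useful
    }
    print(item_informativeness(example))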

Lastly, a technical approach would be to proactively reduce social influence biases, or herd thinking, by introducing innovations in algorithms and user interface design, together with human oversight. In their study “Social Influence Bias in Recommender Systems: A Methodology for Learning, Analyzing, and Mitigating Bias in Ratings”[5], a group of researchers from Berkeley present an actual bias-mitigation model to correct for over- and under-recommendations. The result is to de-amplify over-radicalization and temper the snowballing ill effects of recommendations.
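
A rough sketch of the general idea behind such a bias-mitigation model, under my own simplifying assumptions rather than the exact procedure in [5]: collect pairs of ratings given with and without the community median visible, fit a simple regression that predicts the unbiased rating from the anchored rating and the median the user saw, and apply that learned correction to new ratings before they feed back into the recommender.

    import numpy as np

    rng = np.random.default_rng(2)

    # Assumed training signal (e.g. from an A/B-style experiment): the rating a user
    # gives with the median visible vs. the rating they would give without it.
    shown_median = rng.integers(1, 6, size=2000).astype(float)
    unbiased = rng.normal(3.2, 1.0, size=2000).clip(1, 5)
    anchored = 0.75 * unbiased + 0.25 * shown_median + rng.normal(0, 0.1, size=2000)

    # Fit a low-degree model that predicts the unbiased rating from the anchored
    # rating and the median the user saw (including an interaction term).
    X = np.column_stack([np.ones_like(anchored), anchored, shown_median,
                         anchored * shown_median])
    coef, *_ = np.linalg.lstsq(X, unbiased, rcond=None)

    def debias(rating, median_seen):
        """Apply the learned correction to a new, possibly anchored rating."""
        x = np.array([1.0, rating, median_seen, rating * median_seen])
        return float(np.clip(x @ coef, 1, 5))

    print(debias(4.5, median_seen=5.0))  # pulled back down: part of the 4.5 was the anchor
    print(debias(4.5, median_seen=2.0))  # nudged up: the user rated high despite a low anchor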


Conclusions


The need to rethink the ethics of recommendation engines is gaining critical mass as algorithms gain more power and responsibility in our daily lives. The state of affairs is unacceptable but changeable. We as a society must first educate ourselves about the inner workings of these recommender systems. Only with more knowledge in tow can we engage in the discussions and demands that lead to better, more beneficial recommender systems for us the consumers, as opposed to systems that asymmetrically benefit the profiteers alone. We must not be naive enough to think that all AI endeavors are for our benefit. There are nefarious recommendation systems designed solely for profit, with no intention of serving the common good. Let’s be serious about the bottom line of the corporations behind these recommender systems.

From a technical standpoint, recommender systems can substantially reduce social influence biases with machine learning algorithms that actually catch them in the first place and apply self-corrections to the calculated outputs. These sorts of algorithms can be applied to recommender systems to prevent the rapid propagation of disinformation, radicalized propaganda, and artificially inflated reviews, which otherwise create a self-reinforcing cycle that only serves to distort the recommender system itself.