Describe how you think it works (content-based, collaborative filtering, etc.). Does the technique deliver a good experience or are the recommendations off-target?
The IMDB rating system works by allowing users to cast votes on individual titles (movies or TV shows), which are then aggregated into a single rating. It is not the only tool IMDB uses to gauge public opinion on a title: IMDB also displays a Metacritic score and user reviews alongside its own rating. The Metacritic score is a weighted average that assigns more importance to some critics and publications than others depending on their reputation. Scores range from 0 to 100, with higher scores being better; titles scoring above 81 can earn a “Must-See” designation.
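Metacritic does not publish its exact critic weights, so the snippet below is only an illustrative sketch of the weighted-average idea, with invented scores and weights.

```python
# Illustrative sketch only: Metacritic does not publish its critic weights,
# so the scores and weights below are invented example values.

def weighted_critic_score(reviews):
    """Weighted average of (score, weight) pairs on a 0-100 scale."""
    total_weight = sum(weight for _, weight in reviews)
    return sum(score * weight for score, weight in reviews) / total_weight

# (critic score 0-100, weight reflecting the publication's assumed reputation)
reviews = [(90, 1.5), (75, 1.0), (60, 0.8), (85, 1.2)]
print(round(weighted_critic_score(reviews), 1))  # 80.0 -> just short of the 81 "Must-See" bar
```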
IMDB applies various filters to the raw vote data to reduce the impact of users who are trying to manipulate scores rather than provide a legitimate review. IMDB describes the resulting ratings as unbiased and “accurate” (by its own assessment). The rating can be taken with some level of confidence because of the robustness of the aggregation and because it draws on thousands of user opinions, unlike rating systems with much smaller sample sizes.
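IMDB does not disclose exactly how its filtering and weighting work, but a formula it has historically described for its Top 250 chart has the shape of a Bayesian-style weighted average that shrinks a title's mean vote toward the site-wide mean until enough votes accumulate. The sketch below uses that shape with made-up example values.

```python
def weighted_rating(R, v, C, m):
    """
    Shrink a title's raw mean rating R (from v votes) toward the
    site-wide mean C; m is the vote count at which the title's own
    votes and the prior carry equal weight.
    """
    return (v / (v + m)) * R + (m / (v + m)) * C

# Example values (illustrative only, not real IMDB data):
site_mean = 6.9        # C: mean rating across all titles
prior_votes = 25_000   # m: votes needed before a title largely "speaks for itself"

# A title with few votes is pulled strongly toward the site mean...
print(round(weighted_rating(R=9.2, v=500, C=site_mean, m=prior_votes), 2))        # ~6.95
# ...while one with many votes keeps close to its raw average.
print(round(weighted_rating(R=9.2, v=1_200_000, C=site_mean, m=prior_votes), 2))  # ~9.15
```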
As the graphic below implies, these collaborative filtering techniques work by making recommendations from one user to another based on similarities deduced from their data. This can mean similarity between the items users are interested in (item-based recommendation) or similarity between users' rating histories (user-based recommendation).
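As a rough sketch of the distinction, the toy example below (invented ratings, not IMDB data) computes cosine similarity between two users (user-based) and between two items (item-based) from the same small rating matrix.

```python
import numpy as np

# Toy user x item rating matrix (0 = not rated). Purely illustrative data,
# not an implementation of IMDB's actual recommender.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# User-based: compare rows (users). Users 0 and 1 rate items similarly.
print(round(cosine_sim(ratings[0], ratings[1]), 2))        # ~0.95

# Item-based: compare columns (items). Items 0 and 1 attract the same audience.
print(round(cosine_sim(ratings[:, 0], ratings[:, 1]), 2))  # ~0.95

# A recommendation then predicts a missing rating from the neighbours'
# known ratings, weighted by these similarities.
```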
Discuss attacks on recommender systems and mitigation ideas to prevent system abuse.
Attacks on recommender systems can prove challenging to deal with. One example from IMDB involves the film “Code Name: K.O.Z,” which is about a 2013 Turkish government corruption scandal. The film drew heavily polarized political reactions, and users attempted to severely downrate it prior to its release with an influx of low ratings. One way to deal with such attacks is to identify fake profiles; looking at locations, review histories, and profile creation dates can help with this. Weighting reviewers based on their activity is another potential mitigation. A further option is to include user and movie bias terms in the recommendation model, which allows ratings to be adjusted for user biases; these should theoretically be low for attackers because they must create new accounts to downrate a movie. A combination of these tools allows modellers to classify suspicious profiles and reviews, and to statistically adjust ratings to account for heavy biases, as sketched below.
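The toy sketch below illustrates the bias-adjustment idea under loose assumptions: a baseline model r_ui ≈ mu + b_u + b_i fitted by simple averaging (real systems would typically use regularised least squares), plus a crude heuristic that flags single-rating accounts with extreme bias. All user names, item names, and ratings are invented.

```python
import numpy as np

# (user_id, item_id, rating) tuples -- invented data for illustration.
data = [
    ("alice", "koz", 1), ("alice", "other", 7),
    ("bob",   "koz", 1), ("bob",   "other", 8),
    ("carol", "koz", 6), ("carol", "other", 7),
    ("new_account_1", "koz", 1),   # brand-new profile, single low vote
]

mu = np.mean([r for _, _, r in data])   # global mean rating
users = {u for u, _, _ in data}
items = {i for _, i, _ in data}

# Item bias: how far an item's ratings sit from the global mean.
b_i = {i: np.mean([r - mu for _, it, r in data if it == i]) for i in items}
# User bias: how harsh or generous a user is, after removing item bias.
b_u = {u: np.mean([r - mu - b_i[i] for us, i, r in data if us == u]) for u in users}

# Profiles with almost no history and a large bias are candidates for
# down-weighting or manual review; here only new_account_1 is flagged.
for u in users:
    history = sum(1 for us, _, _ in data if us == u)
    if history <= 1 and abs(b_u[u]) > 1.0:
        print(f"suspicious profile: {u} (bias {b_u[u]:+.2f}, {history} rating)")
```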