Ranking Mechanisms in Twitter-like Forums
Anish Das Sarma∗ Yahoo Research Santa Clara, CA, USA anishdas@yahoo-
Atish Das Sarma Georgia Institute of Technology Atlanta, GA, USA email@example.com
Sreenivas Gollapudi Microsoft Research Mountain View, CA, USA firstname.lastname@example.org
Rina Panigrahy Microsoft Research Mountain View, CA, USA email@example.com
Abstract

We study the problem of designing a mechanism to rank items in forums using user reviews such as thumb and star ratings. We compare mechanisms in which forum users rate individual posts with mechanisms in which the user is asked to perform a pairwise comparison and state which of two items is better. The main metric used to evaluate a mechanism is ranking accuracy versus the cost of reviews, where cost is measured as the average number of reviews used per post. We show that for many reasonable probability models, no thumb (or star) based ranking mechanism can produce approximately accurate rankings with a bounded number of reviews per item. On the other hand, we provide a review mechanism based on pairwise comparisons that achieves approximate rankings with bounded cost. We have implemented a system, shoutvelocity, which is a Twitter-like forum in which items (i.e., tweets in Twitter) are rated using comparisons. For each new item, the user who posts it is required to compare two previous entries. This ensures that over a sequence of n posts we obtain at least n comparisons, requiring one review per item on average. Our mechanism uses this sequence of comparisons to obtain a ranking estimate. It ensures that every item is reviewed at least once and that winning entries are reviewed more often, to obtain better estimates of the top items.
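The selection policy the abstract describes — every item reviewed at least once, winners reviewed more often — can be illustrated with a toy sketch. This is our own minimal implementation under assumed details (least-reviewed item paired against the current top scorer, ranking by smoothed win fraction); it is not the shoutvelocity code, and all names are ours.

```python
import random

class ComparisonRanker:
    """Toy pairwise-comparison ranker: each new post triggers one
    comparison of two earlier posts, so n posts yield about n
    comparisons -- one review per item on average."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)  # jitter only breaks score ties
        self.items, self.wins, self.plays = [], {}, {}

    def _score(self, i):
        # Smoothed win fraction (Laplace rule of succession).
        return (self.wins[i] + 1) / (self.plays[i] + 2)

    def _pick_pair(self):
        # Coverage: the first slot goes to the least-reviewed item,
        # so every item is reviewed at least once early on.
        a = min(self.items, key=lambda i: (self.plays[i], i))
        # Refinement: the second slot goes to the current top scorer,
        # so winning entries are reviewed more often.
        rest = [i for i in self.items if i != a]
        b = max(rest, key=lambda i: self._score(i) + 1e-6 * self.rng.random())
        return a, b

    def review(self, judge):
        """Run one pairwise review; judge(a, b) returns the preferred item."""
        if len(self.items) < 2:
            return
        a, b = self._pick_pair()
        winner = judge(a, b)
        for i in (a, b):
            self.plays[i] += 1
        self.wins[winner] += 1

    def post(self, item_id, judge):
        """A poster submits item_id and, in return, reviews one pair."""
        self.review(judge)
        self.items.append(item_id)
        self.wins[item_id], self.plays[item_id] = 0, 0

    def ranking(self):
        return sorted(self.items, key=self._score, reverse=True)
```

For example, posting ten items whose (hidden) qualities drive a deterministic judge, then running a few extra review rounds, pushes the true best item to the top of `ranking()` while every item still gets reviewed.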
Categories and Subject Descriptors
H.3.3 [Information Search and Retrieval]: Ranking
Keywords

Ranking mechanisms, comparisons, thumb-based ranking
Introduction

Ranking has become an important issue not just in web search but also in forums, blogs, and social networks such as Twitter. While there is a large body of research on ranking for web search, there is little systematic study of ranking in the aforementioned forums. This paper presents a study of mechanisms for ranking in Twitter-like forums. We use item as a generic term for posts (in forums), tweets (on Twitter), and messages (in social networks).
The popular methods for ranking in forums include “star ratings”, “thumbs up/down ratings”, and “reputation points”. These methods suffer from several drawbacks. (1) They feed the “rich-get-richer” phenomenon: items that get rated a few times often end up displayed on top, thereby receiving more impressions and ratings, while, because of the large volume on most sites, other potentially good items often never receive any ratings. (2) Giving an item a score (thumbs up or down) independently of other items results in unnormalized scores; we shall show that such independent scoring usually needs a lot of feedback before converging to accurate rankings. (3) In the absence of any incentives, it is impractical to expect all users to participate in the feedback process; at the same time, the absence of feedback results in user dissatisfaction, as items largely go unrated. Therefore, appropriate incentive/reward systems coupled with user feedback are key to the efficiency and effectiveness of a rating mechanism.
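Drawback (2) can be made concrete with a standard Hoeffding-bound calculation. This is our own back-of-envelope illustration, not a result quoted from the paper: to order two items whose true thumbs-up probabilities differ by a small gap, each item's independent score must be estimated to within half that gap.

```python
import math

def thumb_ratings_needed(gap, delta):
    """Hoeffding bound: number of independent thumb ratings per item so
    that, with probability at least 1 - delta, an item's empirical
    thumbs-up rate is within gap/2 of its true rate -- enough to order
    two items whose true rates differ by `gap`.
    From P(|mean - p| >= t) <= 2 exp(-2 n t^2), solved for n with t = gap/2.
    """
    t = gap / 2
    return math.ceil(math.log(2 / delta) / (2 * t * t))
```

For instance, separating two items of quality 0.51 versus 0.49 (gap 0.02) at 95% confidence requires over 18,000 thumb ratings per item under this bound, whereas a single comparison question asks about the pair directly.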
General Terms

Algorithms, Design, Experimentation, Human Factors
∗Work done while a student at Stanford University
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. WSDM’10, February 4–6, 2010, New York City, New York, USA. Copyright 2010 ACM 978-1-60558-889-6/10/02 ...$10.00.
In this paper, we advocate a comparison-based ranking scheme in which feedback from users is sought in the form of comparisons. Users are shown a pair of items and express which of the two they prefer. We theoretically show that such a comparison-based ranking scheme converges to accurate rankings faster than the independent rating schemes described above. Moreover, we show that with comparison-based ranking, each item needs on average only a bounded (constant) amount of feedback for our ranking to get very close to the accurate ranking. We have built a system, shoutvelocity1 (http://shoutvelocity.com), that fully implements the comparison-based ranking scheme on
1Our system is called shoutvelocity because users can “shout” anything they want. When a user “shouts”, she is required to compare two previous shouts, giving n comparisons over