
top of a Web-based forum. Both theoretical and empirical results show that such a system achieves good rankings with very little feedback from users, i.e., for each submitted item, on average we need a constant amount of feedback to ensure a good ranking. We note that, as a consequence, a comparison-based ranking mechanism mitigates drawbacks (1) and (2) mentioned above.

To handle drawback (3), what is really required is a self-correcting method that automatically estimates the score of each item. In our system, we align the incentive mechanism with user satisfaction, i.e., users who provide feedback are more likely to see their items ranked faster than other users. Thus, we are able to make optimal use of the “feedback bandwidth” of the users who are themselves inclined to use the feedback mechanisms, without burdening other viewers.

The core techniques presented in this paper, and adopted by shoutvelocity, are useful in a number of applications. Obviously, as described above, our techniques are ideally suited for ranking posts in public forums such as digg, twitter, and jokes sites, and for ranking messages on social networks, such as status messages on Facebook. Comparison-based ranking could also be used for ranking blog posts, providing scores and rankings for items on online shopping sites such as Amazon and eBay, as well as providing generic or personalized movie recommendations on sites like IMDb. Our algorithm is also appropriate for ranking multimedia such as photos on Flickr², and videos on YouTube (with the caveat that user feedback in the case of videos may take a long time and be burdensome). Finally, a far-fetched idea is to use comparisons for academic peer reviewing of papers, instead of independent feedback on each paper.

Contributions and Outline

The main contributions of this paper are:

1. We review popular approaches for ranking items through a generic taxonomy (Section 2), and present the considerations that must be taken into account while choosing a particular ranking mechanism (Section 3).

2. In Section 5, we propose a comparison-based ranking mechanism. We analytically study and compare thumbs-based and comparison-based ranking mechanisms, and theoretically show that comparison-based ranking is generally superior. Preliminaries for our ranking mechanism appear in Section 4, and rigorous proofs for our results from Section 5 appear in Section 7.

3. In Section 6, we present shoutvelocity, a full-fledged forum that implements the comparison-based ranking method. shoutvelocity has been deployed on the Web for over a year, and we provide general statistics and screenshots from our system.

4. In Section 8, we present a detailed experimental evaluation of thumbs-based and comparison-based ranking mechanisms based on synthetically generated data with various distributions of item scores, as well as real data from the shoutvelocity system. We show that comparison-based ranking converges to the right ranking with much less feedback, and that all shouts converge to their merited score/rank very quickly.


²Such ranking has been implemented in practice, as we describe in Section 9.

Figure 1: Generic Architecture for Item Ranking System.


Related work is presented in Section 9, and we conclude in Section 10.

Ranking Items: A Generic Architecture

In this section we provide a generic architecture of a system for ranking items and survey possible alternatives for its review module. Suppose we have n items. Our goal is to obtain the best possible ranking of the n items with as little effort as possible. Intuitively, the amount of effort is captured by the total work done by the reviewers who provide rating feedback.

We assume that any system for ranking items maintains a running score estimate for each item. These scores are refined based on user feedback, and the final ranking is given by ordering items in descending order of scores.
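To make this concrete, the following is a minimal sketch of such a score store (all names here are hypothetical illustrations of the generic architecture, not the authors' implementation). Thumbs-based and comparison-based mechanisms would differ only in how the feedback delta is computed:

```python
from dataclasses import dataclass, field

@dataclass
class ItemStore:
    """Maintains a running score estimate for each item."""
    scores: dict[str, float] = field(default_factory=dict)

    def submit(self, item_id: str) -> None:
        # Newly submitted items start with a neutral score estimate.
        self.scores[item_id] = 0.0

    def apply_feedback(self, item_id: str, delta: float) -> None:
        # Generic update hook: the review mechanism decides the delta
        # (e.g., +1/-1 for thumbs, or a win/loss adjustment for a
        # pairwise comparison).
        self.scores[item_id] += delta

    def ranking(self) -> list[str]:
        # Final ranking: items in descending order of score estimate.
        return sorted(self.scores, key=self.scores.get, reverse=True)
```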

Figure 1 presents a generic framework for systems that score items. Items are added to (and maintained in) a database when users submit them. At any time, the Review Module picks a set of one or more items and seeks feedback about them from the user. Based on the feedback provided, the items' scores are revised and the database is updated. The choice of which items are shown for reviewing may be either implicit or explicit; we say it is explicit if there is a separate 'rate' link that, when clicked, shows the reader one or more specific items to rate. An implicit system is one where the reader rates items as she browses through the entries. This is essentially an explicit system in which the reader is asked to rate a specific item picked with a probability distribution that depends on the order in which the item entries are displayed; clearly, lower items are examined with lower probability.
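The implicit mode can thus be modeled as a position-biased draw over the displayed ranking. The sketch below uses a 1/(position+1) weighting purely as an assumed example; the exact distribution depends on the reader's browsing behavior and is not specified here:

```python
import random

def pick_item_for_implicit_review(ranked_items: list[str]) -> str:
    # Items lower in the displayed order are examined (and hence
    # rated) with lower probability. The 1/(i+1) weights are an
    # illustrative assumption, not the system's actual distribution.
    weights = [1.0 / (i + 1) for i in range(len(ranked_items))]
    return random.choices(ranked_items, weights=weights, k=1)[0]
```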

Obviously, the main challenges here are in determining how the review module works, and how scores are updated based on user feedback.

Broadly, there are two approaches to reviewing items:

1. Independent Scoring: Each item is independently
