Figure 6: Screenshot of the review screen from shoutvelocity.com.

If a mechanism does not rate an item till it loses at least once, then this item cannot be distinguished from better items. Therefore, it is impossible to estimate the ranks $r$ within $r(1 \pm \epsilon)$. Let $\bar{g}_c(x)$ denote $1 - g_c(x)$, and let $\bar{f}(x) = 1 - f(x)$. For normally distributed qualities $\bar{g}_c(x) = \mathrm{erfc}(x)$. Let us estimate the score of the item with rank $r$; at that point the cdf must have value $1 - \frac{r}{n}$, so the quality $v_r$ is concentrated around $\bar{g}_c^{-1}(r/n)$. For large $n$ we may simply write $v_r = \bar{g}_c^{-1}(r/n)$. Let $s = r/n$; then we can think of $v_s = \bar{g}_c^{-1}(s)$ as a function of $s$. Let $x = v_s$. The number of thumb ratings required for an item with score $x$ till it loses at least once is $\frac{1}{\bar{f}(x)}$, as in each rating it loses with probability $\bar{f}(x)$ (the waiting time until the first loss is geometric with mean $1/\bar{f}(x)$). Now $\bar{g}_c(x) = s$. For large $x$, $\bar{g}_c(x) = \mathrm{erfc}(x)$ can be approximated as $g_c'(x)/x$, i.e., $g_c'(x) \approx x\,\bar{g}_c(x) = xs$.

Now, by the same tail approximation applied to $f$, $\bar{f}(x) \approx f'(x)/x$, and since $f'(x) \propto e^{-\epsilon^2 x^2} = (e^{-x^2})^{\epsilon^2} \approx (xs)^{\epsilon^2}$, we get

$$\frac{1}{\bar{f}(x)} \approx \frac{x}{f'(x)} = \frac{x}{(xs)^{\epsilon^2}} = x^{1-\epsilon^2}\left(\frac{1}{s}\right)^{\epsilon^2} \geq \frac{1}{(\log n)^{\epsilon^2/2}}\left(\frac{1}{s}\right)^{\epsilon^2},$$

where the last inequality uses $x = \bar{g}_c^{-1}(s) = O(\sqrt{\log n})$ for $s \geq 1/n$ (and $\epsilon \geq 1$). So the total number of thumbs required is at least $\sum_{r \in 1..n} \frac{1}{(\log n)^{\epsilon^2/2}} \left(\frac{n}{r}\right)^{\epsilon^2}$. So the average number of thumbs per item is:

$$\frac{1}{(\log n)^{\epsilon^2/2}} \cdot \frac{1}{n} \sum_{r \in 1..n} \left(\frac{r}{n}\right)^{-\epsilon^2} \approx \frac{1}{(\log n)^{\epsilon^2/2}} \int_{s=1/n}^{1} s^{-\epsilon^2} \, ds = \frac{1}{(\log n)^{\epsilon^2/2}} \, O(n^{\epsilon^2 - 1}).$$

For $\epsilon > 1$, clearly this quantity diverges to $\infty$.
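As a numerical sanity check on this bound (an illustration of ours, not part of the proof), the sketch below evaluates the average $\frac{1}{n}\sum_r (n/r)^{\epsilon^2}$ directly, with the $(\log n)^{\epsilon^2/2}$ factor omitted, and shows the phase transition at $\epsilon = 1$: bounded for $\epsilon < 1$, polynomial growth for $\epsilon > 1$.

```python
def avg_thumbs(n: int, eps: float) -> float:
    """Average of (n/r)^(eps^2) over ranks r = 1..n: the per-item
    thumb count from the lower bound (log-factor omitted)."""
    e2 = eps * eps
    return sum((n / r) ** e2 for r in range(1, n + 1)) / n

for eps in (0.5, 1.5):
    for n in (10**3, 10**4, 10**5):
        print(f"eps={eps}: n={n:>6} -> {avg_thumbs(n, eps):,.1f}")
# eps=0.5: stays near 4/3 as n grows (the integral of s^{-1/4} converges).
# eps=1.5: grows roughly like n^(eps^2 - 1) = n^1.25.
```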

More generally we get

$$\mu_f = \frac{1}{n} \sum_{r \in 1..n} \frac{1}{\bar{f}(\bar{g}_c^{-1}(r/n))} \approx \int_0^1 \frac{ds}{\bar{f}(\bar{g}_c^{-1}(s))} = \int_0^1 \frac{ds}{\bar{f}(g_c^{-1}(1-s))} = \int_0^1 \frac{ds}{1 - f(g_c^{-1}(s))}.$$
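For a concrete feel for $\mu_f$, here is a minimal numerical sketch (ours, under the assumption that $g_c$ is the standard normal cdf and $f(x) = \Phi(\epsilon x)$ is a normal cdf of sharpness $\epsilon$); it evaluates the last integral by a midpoint sum:

```python
from statistics import NormalDist

g_c = NormalDist()  # quality cdf: standard normal (assumed)

def mu_f(eps: float, m: int = 50_000) -> float:
    """Midpoint sum for mu_f = int_0^1 ds / (1 - f(g_c^{-1}(s))),
    with f(x) = Phi(eps * x) as an assumed thumb function."""
    f = NormalDist(sigma=1.0 / eps).cdf   # Phi(eps * x)
    total = 0.0
    for i in range(m):
        s = (i + 0.5) / m                 # midpoint of cell i
        total += 1.0 / (1.0 - f(g_c.inv_cdf(s)))
    return total / m

print(mu_f(0.5))   # modest constant: few thumbs per item suffice
print(mu_f(1.5))   # huge, and keeps growing as m increases: the integral diverges
```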

Proof of Theorem 5.2: Let $p = f(x)$ denote the probability of winning in one comparison; the variance of the indicator variable for one comparison is $p(1-p)$. By the central limit theorem (or by similar concentration inequalities such as Hoeffding bounds), for a large number of comparisons $k$, the fraction of wins is expected to be around $p$ with variance $\frac{p(1-p)}{k}$. The probability that the estimate for $p$ has additive error more than $\epsilon'$ is $e^{-\Theta\left(\frac{k\epsilon'^2}{p(1-p)}\right)}$. For confidence $\delta$ this gives:

$$k = \Theta\left(\frac{f(x)(1 - f(x))}{\epsilon'^2}\right) \log(1/\delta).$$
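As a quick worked example (ours, not the paper's): to estimate a win probability near $p = 1/2$ within additive error $\epsilon' = 0.1$ at confidence $\delta = 0.05$, this bound gives on the order of $k \approx \frac{0.25}{0.01} \ln 20 \approx 75$ comparisons.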

However, we need to estimate $x$ within $x \pm \epsilon x$, which means we need to estimate $p = f(x)$ within error $|f(x \pm \epsilon x) - f(x)| \approx \epsilon x |f'(x)|$. Replacing $\epsilon'$ by $\epsilon x f'(x)$ gives us the desired expression:

$$\frac{f(x)(1 - f(x))}{\epsilon^2 x^2 f'(x)^2} \log(1/\delta).$$

Observe that for $f(x)$ equal to the erf or the logistic function, the quantity $\frac{f(x)(1-f(x))}{f'(x)^2}$ is smallest when $x$ is close to zero and grows with $|x|$. This means that fewer comparisons are required when the item is compared to an item of similar quality.
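The following Monte Carlo sketch (our own illustration; the choice $f = \Phi$, the clamping of $\hat{p}$, and all constants are assumptions) checks the bound: it draws $k$ comparisons with $k$ set by the expression above, inverts $f$, and reports the empirical relative error in $x$:

```python
import math
import random
from statistics import NormalDist

f, f_inv, f_pdf = NormalDist().cdf, NormalDist().inv_cdf, NormalDist().pdf

def k_needed(x: float, eps: float, delta: float) -> int:
    """k = f(x)(1-f(x)) / (eps^2 x^2 f'(x)^2) * log(1/delta)."""
    p = f(x)
    return max(1, round(p * (1 - p) / (eps * x * f_pdf(x)) ** 2 * math.log(1 / delta)))

def estimate_x(x_true: float, k: int, rng: random.Random) -> float:
    """Estimate p from k Bernoulli comparisons, then invert f."""
    p_hat = sum(rng.random() < f(x_true) for _ in range(k)) / k
    p_hat = min(max(p_hat, 1 / (2 * k)), 1 - 1 / (2 * k))  # keep f_inv finite
    return f_inv(p_hat)

rng = random.Random(1)
x, eps, delta = 1.0, 0.1, 0.05
k = k_needed(x, eps, delta)                      # roughly 700 comparisons here
errors = sorted(abs(estimate_x(x, k, rng) - x) / x for _ in range(200))
print(k, errors[189])   # ~95th-percentile relative error; on the order of eps
```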

Proof of Theorem 5.3: We show that successively comparing the given item with items at geometrically decreasing percentile ranks $1/2, \rho/2, \rho^2/2, \ldots, 1/n$ achieves the result. We compare the item to these items successively till we find two consecutive ranks between which the item lies. Choosing $\rho = 1/(1+\epsilon)$ ensures that the algorithm approximates the rank within a factor of $1+\epsilon$. Next we bound the total number of comparisons required. An item with percentile rank $s = \frac{r}{n}$ gets compared with items at percentile ranks $s(1+\epsilon), s(1+\epsilon)^2, s(1+\epsilon)^3, \ldots, 1/2$. Thus it gets compared with $\log(1/s)/\log(1+\epsilon)$ items.

To get a good estimate, the number of comparisons required is minimized when the item is compared to the item with the closest quality, which is at percentile rank $s(1+\epsilon)$.

To estimate the rank within error $\epsilon r$, we need to estimate its quality $v_r$ within error

$$\Delta = |v_{r \pm \epsilon r} - v_r| = |v_{s \pm \epsilon s} - v_s| \approx v'(s)\,\epsilon s.$$

Let $x = v_s$. Then $v'(s) = (\bar{g}_c^{-1})'(s) = 1/g_c'(\bar{g}_c^{-1}(s)) = 1/g_c'(x)$. Thus the allowed error in the quality estimate is $\Delta = \epsilon s / g_c'(x)$. Since $\Delta$ is also the difference in the qualities of the two items being compared, the number of comparisons required is

$$\frac{f(\Delta)(1 - f(\Delta))}{\Delta^2 f'(\Delta)^2}.$$

Now $\Delta = \epsilon s / g_c'(x) \approx \epsilon s/(x\,\bar{g}_c(x)) = \epsilon/x = \Theta(\epsilon/\sqrt{\log(1/s)})$, using $g_c'(x) \approx x\,\bar{g}_c(x) = xs$ and $x = \bar{g}_c^{-1}(s) = \Theta(\sqrt{\log(1/s)})$. Since $\Delta$ is small, $f(\Delta)$, $1 - f(\Delta)$ and $f'(\Delta)$ are constant, so the number of comparisons is $O(1/\Delta^2) = O(\log(1/s))$ (treating $\epsilon$ as a constant). This is multiplied by at most $\log(1/s)/\log(1+\epsilon)$ items. Now the total number of comparisons is at most

$$O\left(\sum_{r \in 1..n} \log^2(n/r)/\log(1+\epsilon)\right) = O(n/\log(1+\epsilon)).$$

Thus, the average number of comparisons required is $O(1/\log(1+\epsilon))$, which is a constant for fixed $\epsilon$.
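A minimal sketch of this ladder procedure (our own; the logistic comparison model in `beats`, the per-rung sample size, and the return convention are assumptions, not the paper's implementation):

```python
import math
import random
from statistics import NormalDist

norm = NormalDist()

def beats(q_item: float, q_ref: float, k: int, rng: random.Random) -> bool:
    """Item wins one comparison with probability f(q_item - q_ref);
    a logistic f is assumed here. Majority vote over k comparisons."""
    p = 1.0 / (1.0 + math.exp(q_ref - q_item))
    return sum(rng.random() < p for _ in range(k)) * 2 > k

def bracket_rank(q_item: float, n: int, eps: float, rng: random.Random) -> float:
    """Walk the percentile ladder 1/2, 1/(2(1+eps)), ... toward the top;
    stop at the first reference item the given item fails to beat."""
    s = 0.5
    while s > 1.0 / n:
        q_ref = norm.inv_cdf(1.0 - s)          # quality at percentile rank s
        k = max(8, round(math.log(1.0 / s)))   # O(log(1/s)) comparisons per rung
        if not beats(q_item, q_ref, k, rng):
            return s * (1.0 + eps)             # rank lies in (s, s(1+eps)]
        s /= 1.0 + eps
    return 1.0 / n

rng = random.Random(0)
item_quality = norm.inv_cdf(1.0 - 0.05)        # item with true percentile rank 0.05
print(bracket_rank(item_quality, n=1000, eps=0.1, rng=rng))  # ~0.05, up to 1+eps
```

An item at percentile rank $s$ climbs about $\log(1/s)/\log(1+\epsilon)$ rungs before stopping, matching the count used in the proof.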

8. EXPERIMENTAL RESULTS

8.1 Simulations over Synthetic Data

We perform experiments to compare the ranking schemes of the thumbs algorithm and the comparison based shoutve- locity algorithm. Experiments are done by considering n players (set to 1000) and choosing their true scores based on g(x) being the normal distribution with variance 1. We