be done in a constant number of rounds. The time complexity for computing a MIS by a distributed algorithm depends on the maximum degree Δ of the graph. For small Δ, the fastest known algorithms are based on coloring algorithms. A coloring with K colors can be turned into a MIS in K rounds of communication.^{3} Thus, if we can color a graph with K colors in t rounds, we can compute a MIS in t + K rounds. In [9], an extremely elegant algorithm which colors a graph with 3^{Δ} colors in O(log^{∗} n) rounds is described, resulting in a MIS algorithm with time complexity O(log^{∗} n + 3^{Δ}). The algorithm was improved in [11], where an algorithm for computing a MIS in time O(log Δ · (Δ^{2} + log^{∗} n)) is given. In [20], it is shown that it is even possible to color a graph with O(Δ^{2}) colors in O(log^{∗} n) rounds, giving a time complexity of O(Δ^{2} + log^{∗} n) for computing a MIS. The proof of [20] is based on the existence of a set system with certain properties. It is shown that such a set system exists and that it can be computed efficiently by a randomized algorithm.
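The coloring-to-MIS reduction proceeds color class by color class: in round c, every node of color c joins the MIS unless one of its neighbors has already joined. A minimal Python sketch of a synchronous simulation (the adjacency-dict graph representation is an assumption for illustration, not the paper's model):

```python
def mis_from_coloring(adj, color):
    """Turn a proper K-coloring into a maximal independent set.

    adj:   dict mapping each node to the set of its neighbors
    color: dict mapping each node to a color in {1, ..., K}
    Simulates K synchronous rounds; round c lets color class c join.
    """
    K = max(color.values())
    in_mis = set()
    for c in range(1, K + 1):
        # All nodes of color c decide simultaneously: since the
        # coloring is proper, color class c is an independent set.
        for v in adj:
            if color[v] == c and not (adj[v] & in_mis):
                in_mis.add(v)
    return in_mis


# Usage: a 5-cycle, properly colored with 3 colors.
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
color = {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}
mis = mis_from_coloring(adj, color)
```

Each of the K rounds needs one message exchange, which is exactly the t + K bound stated above.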

If Δ is constant, all three algorithms compute a MIS in O(log^{∗} n) rounds. Lemma 5.2 bounds the maximum degree of G.

Lemma 5.2. In each iteration of Algorithm 1, the maximum degree of G is at most 2^{2ρ}.

Proof. Let d be the minimum distance between any two nodes of G. Because the algorithm computes an independent set in each iteration, we have d > r/2. Therefore, every ball of radius r/4 contains at most one node. All neighbors of a node v ∈ V are in the ball B_{r}(v) of radius r around v. By the definition of the doubling dimension ρ, B_{r}(v) can be covered by 2^{2ρ} balls of radius r/4. Therefore, the number of nodes in B_{r}(v) is at most 2^{2ρ}.
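The packing argument of this proof can be checked numerically in a simple metric. On the real line (doubling dimension ρ = 1), the lemma predicts at most 2^{2ρ} = 4 points inside a ball of radius r when pairwise distances exceed r/2; the 1-D setting below is an illustrative assumption, not the paper's general metric:

```python
from itertools import combinations

def max_packing_1d(r, step=0.05):
    """Largest number of points in the ball [-r, r] whose pairwise
    distances all exceed r/2. On a line, greedy left-to-right packing
    with minimal admissible gaps maximizes the count."""
    eps = step * r
    pts, x = [], -r
    while x <= r:
        pts.append(x)
        x += r / 2 + eps  # jump just past the forbidden distance r/2
    # Sanity check: the constructed set really is a valid packing.
    assert all(abs(a - b) > r / 2 for a, b in combinations(pts, 2))
    return len(pts)

rho = 1  # doubling dimension of the line
```

Calling `max_packing_1d(1.0)` yields exactly the 2^{2ρ} bound of the lemma in this metric.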

When using the algorithm of [20] for computing the MIS, Lemma 5.2 implies the following corollary.

Corollary 5.3. The time complexity of a single iteration of the while loop of Algorithm 1 is O(log^{∗} n + 2^{4ρ}), that is, for constant doubling dimension, the time complexity is O(log^{∗} n).
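The log^{∗} n term is the iterated logarithm: the number of times log₂ must be applied before the value drops to at most 1, which is at most 5 for any conceivable input size. A small helper (computing floor(log₂) via `bit_length` is an implementation choice for exact integer arithmetic):

```python
def log_star(n):
    """Iterated logarithm of a positive integer n: how many times
    floor(log2) must be applied before the result is at most 1."""
    steps = 0
    while n > 1:
        n = n.bit_length() - 1  # exact floor(log2 n) for ints
        steps += 1
    return steps

# log* stays tiny even for astronomically large n:
# log_star(65536) == 4 and log_star(2**65536) == 5.
```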

Before coming to the description of a faster implementation of the while loop of Algorithm 1, we have a look at the complexity of lines 9 and 10. By Lemma 5.1, we know that each node has a cluster leader in its neighborhood. Line 9 can thus be computed in a single communication round. The time complexity of line 10 is more interesting. Similar to the construction of a MIS, a distributed coloring algorithm which colors a graph with K colors in time t can be turned into a coloring algorithm which colors a graph with Δ + 1 colors in time t + K (Δ is the maximum degree). Algorithm 2 shows how this can be achieved.

By Lemma 5.1, the maximum degree of the cluster graph is at most 2^{4ρ} − 1. If we use the algorithm of [20] for computing the initial coloring, this results in the following corollary.

Corollary 5.4. The time complexity of line 10 of Algorithm 1 is O(log^{∗} n + 2^{8ρ}), that is, for constant doubling dimension, the time complexity is O(log^{∗} n).

^{3}Converting a coloring to a MIS works essentially in the same way as the reduction of the number of colors which will be discussed in Algorithm 2. In fact, all nodes of color 1 form a MIS after applying Algorithm 2.

Algorithm 2 Color Reduction (node v_{i})

Input: coloring with colors {1, . . . , K}
Output: coloring with colors {1, . . . , Δ + 1}
1: for c := 1 to K do
2:   send color(v_{i}) to all neighbors;
3:   if color(v_{i}) = c then
4:     color(v_{i}) := minimal possible color
5:   fi
6: od
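A synchronous simulation of this color-reduction loop, sketched in Python (adjacency-dict representation assumed). In round c, the nodes of color c form an independent set under a proper coloring, so they can all recolor simultaneously to the minimal color unused in their neighborhood; the coloring stays proper throughout and ends with at most Δ + 1 colors:

```python
def color_reduction(adj, color):
    """Reduce a proper K-coloring to a proper (Delta+1)-coloring,
    mimicking Algorithm 2: one round per old color class."""
    color = dict(color)
    K = max(color.values())
    for c in range(1, K + 1):
        # Nodes currently colored c are pairwise non-adjacent, so
        # recoloring them in one round cannot create a conflict.
        movers = [v for v in adj if color[v] == c]
        for v in movers:
            taken = {color[u] for u in adj[v]}
            # Minimal free color; at most deg(v)+1 <= Delta+1.
            color[v] = min(x for x in range(1, len(adj[v]) + 2)
                           if x not in taken)
    return color
```

Note that the minimal free color of a node of color c is never larger than c (its neighbors cannot all occupy 1, …, c while avoiding c), so each node is recolored exactly once.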

# 5.2 Fast Implementation of the Basic Algorithm

In this section, we take a second look at the complexity of Algorithm 1, leading to the main result of this paper. We start with a few general considerations concerning the synchronous message passing model. If nodes communicate for k rounds, they can only gather information which is at most k hops away. In principle, every distributed k-round algorithm can therefore be formulated as follows.

1. Collect the complete k-neighborhood in the graph in k communication rounds.

2. Compute the output by locally simulating the relevant part of the distributed algorithm (no communication needed).

Collecting the complete k-neighborhood can be achieved if all nodes send their complete states to all their neighbors in every round. After round i, all nodes know their i-neighborhood. Learning the i-neighborhoods of all neighbors in round i + 1 suffices to know the (i + 1)-neighborhood. The above formulation of a distributed algorithm of course has the drawback that messages can get extremely large. We will show that the message size can be kept moderate in our example.
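The collect-then-simulate pattern described above amounts to a round-by-round flooding of states; after k rounds every node knows its full k-neighborhood. A minimal sketch under the same adjacency-dict assumption as before:

```python
def collect_neighborhoods(adj, k):
    """Each node repeatedly sends everything it knows to all its
    neighbors; after round i, node v knows its i-neighborhood."""
    known = {v: {v} for v in adj}  # round 0: each node knows itself
    for _ in range(k):
        # One synchronous round: merge own knowledge with everything
        # the neighbors knew at the start of the round.
        known = {v: known[v].union(*(known[u] for u in adj[v]))
                 for v in adj}
    return known


# Usage: a path 0-1-2-3-4; after 2 rounds, node 2 knows everyone.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
known = collect_neighborhoods(adj, 2)
```

This also makes the drawback visible: the sets `known[v]` (and hence the messages) grow with every round.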

Let us again consider a single iteration of the while loop of Algorithm 1. All communication which is needed to compute an iteration of the while loop is on G. Hence, all messages are sent on edges which have length at most r. If we communicate for k rounds and if all messages of those k rounds are on edges of length at most r, then all collected information comes from distance at most k · r. In order to be able to compute everything locally, the nodes have to collect the complete neighborhood up to distance kr (w.r.t. the metric). That is, the nodes have to collect all information which is accessible by paths of length at most kr. Note that it is not necessary, and it might not be possible, to collect the whole ball of radius kr. Because of the triangle inequality, it is possible to collect this information in 2kr rounds. Applying this to Algorithm 1, we get Lemma 5.5.

Lemma 5.5. Algorithm 1 can be computed in O(log^{∗ }n + 2^{8ρ}) rounds, that is, for constant doubling dimension, the time complexity can be reduced to O(log^{∗ }n).

Proof By Corollary 5.3, the number of rounds of an iteration of the while loop of Algorithm 1 is O(log^{∗ }n + 2^{4ρ}). Nodes therefore need to collect information from distance at most O(r(log^{∗ }n + 2^{4ρ})). To obtain the distance from which we need information in order to be able to locally compute the results of all iterations of the while loop, we have to sum up the distances for all iterations. We do not