
be done in a constant number of rounds. The time complexity for computing a MIS by a distributed algorithm depends on the maximum degree Δ of the graph. For small Δ, the fastest known algorithms are based on coloring algorithms. A coloring with K colors can be turned into a MIS in K rounds of communication.³ Thus, if we can color a graph with K colors in t rounds, we can compute a MIS in t + K rounds. In [9], an extremely elegant algorithm which colors a graph with 3 colors in O(log* n) rounds is described, resulting in a MIS algorithm with time complexity O(log* n + 3). The algorithm was improved in [11], where an algorithm for computing a MIS in time O(log Δ · (Δ + log* n)) is given. In [20], it is shown that it is even possible to color a graph with O(Δ^2) colors in O(log* n) rounds, giving a time complexity of O(Δ^2 + log* n) for computing a MIS. The proof of [20] is based on the existence proof of a set system with certain properties. It is shown that such a set system exists and that it can be computed efficiently by a randomized algorithm.
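To make the reduction from coloring to MIS concrete, the following small Python sketch (our own illustration; the function name coloring_to_mis and the adjacency-dictionary representation are not from the paper) simulates the K rounds: in round c, every node of color c joins the independent set unless one of its neighbors has already joined. Because a color class of a proper coloring is an independent set, nodes joining in the same round can never be adjacent.

def coloring_to_mis(neighbors, color, K):
    # neighbors: dict node -> set of adjacent nodes
    # color:     dict node -> color in {1, ..., K} (proper coloring)
    # Simulates K synchronous rounds; returns a maximal independent set.
    in_mis = set()
    for c in range(1, K + 1):                      # one communication round per color
        for v in neighbors:
            if color[v] == c and not (neighbors[v] & in_mis):
                in_mis.add(v)
    return in_mis

# Example: path a - b - c with colors 1, 2, 1 yields the MIS {a, c}.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(coloring_to_mis(adj, {"a": 1, "b": 2, "c": 1}, K=2))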










MIS in O(log n) rounds. Lemma 5.2 bounds the maximum degree of G.

Lemma 5.2. In each iteration of Algorithm 1, the maximum degree of G is at most 2^{2ρ}.

Proof. Because the algorithm computes an independent set in each iteration, the minimum distance between any two nodes of G is greater than r/2. Therefore, every ball of radius r/4 contains at most one node. All neighbors of a node v ∈ V are in the ball B_r(v) of radius r around v. By the definition of the doubling dimension ρ, B_r(v) can be covered by 2^{2ρ} balls of radius r/4. Therefore, the number of nodes in B_r(v), and hence the degree of v, is at most 2^{2ρ}.
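As a small sanity check of the packing argument (our own illustration, not part of the paper), consider the real line, whose doubling dimension is ρ = 1: greedily placing points with pairwise distance greater than r/2 inside a ball of radius r fits at most 4 = 2^{2ρ} of them.

def max_packing_on_line(r, eps=1e-9):
    # Greedily place points in the interval [-r, r] (a ball of radius r)
    # with consecutive points more than r/2 apart, and count them.
    count, x = 0, -r
    while x <= r:
        count += 1
        x += r / 2 + eps
    return count

print(max_packing_on_line(1.0))   # 4, matching the bound 2^{2ρ} with ρ = 1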

When using the algorithm of [20] for computing the MIS, Lemma 5.2 implies the following corollary.

Corollary 5.3. The time complexity of a single iteration of the while loop of Algorithm 1 is O(log* n + 2^{4ρ}), that is, for constant doubling dimension, the time complexity is O(log* n).

Before coming to the description of a faster implementation of the while loop of Algorithm 1, we have a look at the complexity of lines 9 and 10. By Lemma 5.1, we know that each node has a cluster leader in its neighborhood. Line 9 can thus be computed in a single communication round. The time complexity of line 10 is more interesting. Similar to the construction of a MIS, a distributed coloring algorithm which colors a graph with K colors in time t can be turned into a coloring algorithm which colors a graph with Δ + 1 colors in time t + K (Δ is the maximum degree of G).


Algorithm 2 shows how this can be achieved.

By Lemma 5.1, the maximum degree of the cluster graph is at most 2^{4ρ}. If we use the algorithm of [20] for computing the initial coloring, this results in the following corollary.

Corollary 5.4. The time complexity of line 10 of Algorithm 1 is O(log* n + 2^{8ρ}), that is, for constant doubling dimension, the time complexity is O(log* n).

³Converting a coloring to a MIS works essentially in the same way as the reduction of the number of colors which will be discussed in Algorithm 2. In fact, all nodes of color 1 form a MIS after applying Algorithm 2.

Algorithm 2 Color Reduction (node v_i)

Input: coloring with colors {1, . . . , K}
Output: coloring with colors {1, . . . , Δ + 1}
1: for c := 1 to K do
2:   send color(v_i) to all neighbors;
3:   if color(v_i) = c then
4:     color(v_i) := minimal possible color
5:   fi
6: od
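The following Python sketch (our own sequential simulation; the name color_reduction and the dictionary-based graph representation are assumptions, not the paper's notation) mirrors Algorithm 2: the color classes 1, ..., K are processed one per round, and a node whose class is scheduled recolors itself with the smallest color not used by any neighbor. Since a node has at most Δ neighbors, some color in {1, ..., Δ + 1} is always free, and since nodes of the same color class are never adjacent, the nodes recoloring in the same round cannot conflict.

def color_reduction(neighbors, color, K):
    # neighbors: dict node -> set of adjacent nodes
    # color:     dict node -> color in {1, ..., K} (proper coloring)
    # Returns a proper coloring with colors in {1, ..., Delta + 1}.
    color = dict(color)
    for c in range(1, K + 1):                      # round c of Algorithm 2
        for v in [u for u in neighbors if color[u] == c]:
            used = {color[u] for u in neighbors[v]}
            # smallest color in {1, ..., deg(v) + 1} not used by any neighbor
            color[v] = min(x for x in range(1, len(neighbors[v]) + 2) if x not in used)
    return color

# Example: a triangle (Delta = 2) colored with {2, 4, 6} is reduced to {1, 2, 3}.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
print(color_reduction(adj, {"a": 2, "b": 4, "c": 6}, K=6))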


Fast Implementation of the Basic Algorithm

In this section, we will have a second look at the complexity of Algorithm 1, leading to the main result of this paper. We need to start with a few general considerations concerning the synchronous message passing model. If nodes communicate for k rounds, they can only gather information which is at most k hops away. In principle, every distributed k-round algorithm can therefore be formulated as follows.

1. Collect the complete k-neighborhood in the graph in k communication rounds.

2. Compute the output by locally simulating the relevant part of the distributed algorithm (no communication needed).

Collecting the complete k-neighborhood can be achieved if all nodes send their complete states to all their neighbors in every round. After round i, all nodes know their i-neighborhood. Learning the i-neighborhoods of all neighbors in round i + 1 suffices to know the (i + 1)-neighborhood. The above formulation of a distributed algorithm of course has the drawback that messages can get extremely large. We will show that the message size can be kept moderate in our example.
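A minimal sketch of the collection phase (our own code; the name collect_k_neighborhood and the choice to track only node identifiers are simplifications) looks as follows: in each round every node forwards everything it currently knows to all neighbors, so that after round i it knows its i-neighborhood. In an actual simulation the forwarded state would also contain the known edges and inputs, which is precisely why the messages can become large.

def collect_k_neighborhood(neighbors, k):
    # neighbors: dict node -> set of adjacent nodes
    # Returns, for every node, the set of nodes within k hops (its k-neighborhood).
    known = {v: {v} for v in neighbors}            # before round 1: the 0-neighborhood
    for _ in range(k):                             # one communication round
        received = {v: set() for v in neighbors}
        for v in neighbors:                        # v sends its known set to all neighbors
            for u in neighbors[v]:
                received[u] |= known[v]
        for v in neighbors:
            known[v] |= received[v]                # merging yields the next neighborhood
    return known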

Let us again consider a single iteration of the while loop of Algorithm 1. All communication which is needed to compute an iteration of the while loop is on G. Hence, all messages are sent on edges which have length at most r. If we communicate for k rounds and if all messages of those k rounds are on edges of length at most r, then all collected information comes from distance at most k · r. In order to be able to compute everything locally, the nodes have to collect the complete neighborhood up to distance kr (w.r.t. the metric). That is, the nodes have to collect all information which is accessible by paths of length at most kr. Note that it is not necessary, and it might not be possible, to collect the whole ball of radius kr. Because of the triangle inequality, it is possible to collect this information in 2kr rounds. Applying this to Algorithm 1, we get Lemma 5.5.
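The information that has to be gathered is exactly what is reachable along paths of total metric length at most kr, which can be sketched as a bounded Dijkstra-style exploration (again our own illustration; dist is an assumed function returning the metric distance between two adjacent nodes):

import heapq

def reachable_within_path_length(neighbors, dist, source, bound):
    # neighbors: dict node -> set of adjacent nodes (edges of G, length at most r)
    # dist:      function (u, v) -> metric distance between u and v
    # Returns the nodes reachable from `source` by paths of total length <= bound (= kr).
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > best[v]:
            continue                               # stale heap entry
        for u in neighbors[v]:
            nd = d + dist(v, u)
            if nd <= bound and nd < best.get(u, float("inf")):
                best[u] = nd
                heapq.heappush(heap, (nd, u))
    return set(best)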

Lemma 5.5. Algorithm 1 can be computed in O(log* n + 2^{8ρ}) rounds, that is, for constant doubling dimension, the time complexity can be reduced to O(log* n).

Proof. By Corollary 5.3, the number of rounds of an iteration of the while loop of Algorithm 1 is O(log* n + 2^{4ρ}). Nodes therefore need to collect information from distance at most O(r · (log* n + 2^{4ρ})). To obtain the distance from which we need information in order to be able to locally compute the results of all iterations of the while loop, we have to sum up the distances for all iterations. We do not
