
Figure 5: Query Length vs. Expanded Nodes. Depth: 2 [y-axis: expanded nodes; x-axis: query length; curves: A*, Hier, Adapt]

Figure 6: Query Length vs. Travel time. Depth: 2 [y-axis: travel time (minutes); x-axis: query length; curves: A*, Hier, Adapt]

Figure 7: Query Length vs. CPU time. Depth: 2 [y-axis: CPU time (seconds); x-axis: query length; curves: A*, Hier, Adapt]

Figure 8: Upgraded paths vs. Expanded Nodes. Depth: 2 [y-axis: expanded nodes; x-axis: upgraded areas; curves: A*, Hier, Adapt]

Figure 9: Upgraded paths vs. Travel time. Depth: 2 [y-axis: travel time (minutes); x-axis: upgraded areas; curves: A*, Hier, Adapt]

Figure 10: Upgraded paths vs. CPU time. Depth: 2 [y-axis: CPU time (seconds); x-axis: upgraded areas; curves: A*, Hier, Adapt]

only a fraction of the cost. Hier suffers significantly in terms of path travel time; the reason is that it completely ignores roads internal to areas that are faster than the border roads. This experiment shows the power of our algorithm not only in terms of efficiency but also accuracy, and highlights the relevance of considering small fast roads in addition to large roads. In Figure 7, which presents the average CPU time per query for the three algorithms, we observe the same pattern as in the expanded-nodes figure.
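To make the travel-time penalty concrete, the following is a minimal illustrative sketch (not the paper's implementation): a plain travel-time Dijkstra run once over the full edge set and once restricted to border edges only, on a hypothetical toy network where the fastest road lies inside an area. The network, helper names, and weights are our own assumptions.

import heapq

def dijkstra(adj, source, target):
    """Return the minimum travel time from source to target, or inf."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical toy network: edges as (u, v, minutes, is_border).
edges = [
    ("s", "a", 5.0, True),  ("a", "t", 5.0, True),   # border route: 10 min
    ("s", "b", 2.0, False), ("b", "t", 2.0, False),  # fast interior route: 4 min
]

def build(edge_list):
    adj = {}
    for u, v, w, _ in edge_list:
        adj.setdefault(u, []).append((v, w))
    return adj

full = build(edges)                              # what A* and Adapt can use
border_only = build([e for e in edges if e[3]])  # what Hier restricts itself to

print(dijkstra(full, "s", "t"))         # 4.0  -> true fastest path
print(dijkstra(border_only, "s", "t"))  # 10.0 -> slower border-only path

Restricting the search to border edges yields the 10-minute path even though a 4-minute interior route exists, which is exactly the kind of gap Figure 6 shows for Hier.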

8.3 Upgraded paths

In this set of experiments we vary the percentage of lowest-level areas that contain a path that is faster than the border paths and thus needs to be upgraded. (Given that the speed of the edges in the areas along paths that need upgrading is changed, A* may need to expand slightly different edges.) For this experiment we used the San Francisco road network. The average path length is 50% of the area diameter, and all the conditions are the same as the ones used for the query-length experiments, except for the percentage of upgraded paths, which we vary.

We can see in Figures 8 and 10 that neither A* nor Hier is significantly affected in terms of the number of expanded nodes or the CPU time. This is because A* always considers the entire network, and Hier disregards upgraded edges completely. The performance of Adapt suffers as more upgraded edges need to be considered in the search process, but the degradation is quite gradual: we go from an average of around 18,000 expanded nodes when no edges are upgraded to just around 20,000 when 60% of the areas contain an upgraded path.

Where we really see the importance of upgraded edges is in Figure 9: when no edges are upgraded, Hier and Adapt perform equally, and as we increase the number of upgraded edges, Adapt starts closing the gap with A*. This is expected, as there are ever more options to find a good path, while the quality of the paths found by Hier continues to decrease. This experiment is important because it shows that we can use a fairly aggressive edge-upgrading strategy to improve path quality without incurring any significant performance penalty. We could, for example, consider interior edges as long as they are 80% as fast as border edges, to improve path quality.
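The 80% rule just mentioned could be sketched as follows; this is only an illustration of the threshold criterion, assuming a per-area list of edges with speed and border flags, and the Edge class and function names are not the paper's actual data structures.

from dataclasses import dataclass

@dataclass
class Edge:
    speed: float      # e.g. km/h
    is_border: bool
    upgraded: bool = False

def upgrade_interior_edges(area_edges, threshold=0.8):
    """Mark interior edges whose speed is at least `threshold` times the
    fastest border edge of the area, so the search may consider them."""
    border_speeds = [e.speed for e in area_edges if e.is_border]
    if not border_speeds:
        return 0
    cutoff = threshold * max(border_speeds)
    count = 0
    for e in area_edges:
        if not e.is_border and e.speed >= cutoff:
            e.upgraded = True
            count += 1
    return count

# Example: one border road at 100 km/h, interior roads at 85 and 40 km/h.
area = [Edge(100, True), Edge(85, False), Edge(40, False)]
print(upgrade_interior_edges(area))  # 1 -> only the 85 km/h interior road is upgraded

A lower threshold upgrades more interior edges and improves path quality, at the cost of more edges to consider during the search, which matches the gradual growth in expanded nodes reported above.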

8.4 Area Pre-computation

In this experiment we examine the performance gain for different levels of pre-computation. We use the San Francisco road network, partitioned using roads of classes 1 and 2. The average path length used is 50% of the area diameter, and the percentage of areas with an upgraded path is 20%. We compare two methods: Adapt, which is our algorithm, and Adapt nopre, which is the same algorithm but without using pre-computed areas. For this experiment we select a percentage of the lowest-level areas and pre-compute every fastest path in each of them, varying the number of pre-computed areas at the lowest level from 0% to 100%. We can see that the performance improvement is very significant as more pre-computed areas become available; the algorithm is essentially reaching for the destination by taking very long jumps. In this experiment we did not pre-compute any higher-level area; if we had, the performance gains would have been even more noticeable.
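A minimal sketch of this per-area pre-computation, under the assumption that each lowest-level area is available as a plain adjacency map restricted to its own nodes; the helper names and toy data are illustrative, not the paper's code. All-pairs fastest travel times inside the area are cached so the online search can cross the area in a single long jump.

import heapq

def dijkstra_all(adj, source):
    """Fastest travel time from `source` to every reachable node of the area."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def precompute_area(area_adj):
    """Cache the fastest travel time between every pair of nodes in the area."""
    return {u: dijkstra_all(area_adj, u) for u in area_adj}

# Toy area with three nodes; at query time, table["a"]["c"] replaces the
# expansion of every intermediate node inside the area.
area_adj = {"a": [("b", 3.0)], "b": [("c", 4.0)], "c": []}
table = precompute_area(area_adj)
print(table["a"].get("c"))  # 7.0

Pre-computing the same tables for higher-level areas would allow even longer jumps, which is consistent with the observation that further gains would be possible beyond the lowest level.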

