Journal of Graph Algorithms and Applications
http://jgaa.info/
vol. 7, no. 3, pp. 287-303 (2003)
Finding Shortest Paths With Computational
Geometry
Po-Shen Loh
California Institute of Technology
http://www.caltech.edu/
po@caltech.edu
Abstract

We present a heuristic search algorithm for the $\mathbb{R}^d$ Manhattan shortest path problem that achieves front-to-front bidirectionality in subquadratic time. In the study of bidirectional search algorithms, front-to-front heuristic computations were thought to be prohibitively expensive (at least quadratic time complexity); our algorithm runs in $O(n \log^d n)$ time and $O(n \log^{d-1} n)$ space, where $n$ is the number of visited vertices. We achieve this result by embedding the problem in $\mathbb{R}^{d+1}$ and identifying heuristic calculations as instances of a dynamic closest-point problem, to which we then apply methods from computational geometry.
Communicated by Joseph S.B. Mitchell: submitted October 2002; revised June 2003.
Research supported by Axline and Larson fellowships from the California Institute of
Technology.
1 Introduction
Let $E$ be a finite set of axis-parallel line segments in $\mathbb{R}^d$ intersecting only at endpoints, let $V$ be the set of all endpoints in $E$, and let $G$ be the graph defined by $V$ and $E$. The Manhattan shortest path problem is to find the shortest path in $G$ between two distinct terminals $T_1, T_2 \in V$. We will assume that the graph has already been processed and stored in memory; our algorithm is meant for applications in which multiple runs will be conducted on the same graph.
1.1 Notation
In this paper, we will use the following conventions.
The symbol $\overline{XY}$ denotes the Manhattan distance between two points $X$ and $Y$ in space.

The symbol $XY$ denotes the edge connecting two adjacent vertices $X, Y \in G$.

The symbol $\widehat{XY}$ denotes the length of a shortest path in $G$ between two vertices $X, Y \in V$.

The symbol $S(XY)$ denotes the set of all shortest paths in $G$ between two vertices $X, Y \in V$.

The function $l(P)$ denotes the length of a path $P$ in $G$. A path is a set of vertices and edges that trace a route between two vertices in a graph.

The direct sum $S(XY) \oplus S(YZ)$ forms all pairwise concatenations between paths in $S(XY)$ and $S(YZ)$. That is, $S(XY) \oplus S(YZ) = \{P \cup Q : P \in S(XY),\ Q \in S(YZ)\}$.
1.2 Previous Work
Dijkstra's algorithm solves this problem in $O(n \log n + dn)$ time, where $n$ is the number of vertices visited by an algorithm¹, which is a less expensive function of $n$ than that of our algorithm. However, the purpose of our algorithm's search heuristic is to reduce $n$ itself, thereby recapturing the additional log-power complexity factor.

Our algorithm is an extension of A* heuristic search [4, 5], which is a priority-first search in $G$ from $T_1$ to $T_2$. A* maintains two dynamic point sets, a wavefront $W \subseteq V$ and a set $\Omega \subseteq V$ of visited points, which begin as $W = \{T_1\}$ and $\Omega = \emptyset$.

The priority of a vertex $V$ is a conservative estimate of the length of a shortest path connecting the terminals via $V$; specifically:

$$\text{priority}(V) = \text{“}\widehat{T_1V}\text{”} + \overline{VT_2},$$
¹Note: we do not define $n$ to be the number of edges on the shortest path.
where

$$\text{“}\widehat{T_1T_1}\text{”} = 0, \qquad \text{“}\widehat{T_1V}\text{”} = \min_{U \in A} \{\widehat{T_1U} + \overline{UV}\}, \qquad (1)$$

and $A$ is the set of all visited vertices that are adjacent to $V$. At each iteration, A* visits a point $P \in W$ of minimal priority: it transfers $P$ from $W$ to $\Omega$ and inserts $P$'s unvisited neighbors into $W$. This technique is called wave-propagation because the wavefront $W$ grows like a shock wave emitted from $T_1$.
The proofs of our algorithm's validity in Section 3 will show that "$\widehat{T_1V}$" approximates $\widehat{T_1V}$ and is in fact equal to $\widehat{T_1V}$ by the time $V$ is visited, hence the quotation marks. Once $T_2$ is visited, the length of the shortest path is then priority$(T_2)$. To recover the shortest path, predecessor information is stored: the predecessor of a point $V$ is defined to be the point $V^* \in A$ that yields the minimum in equation (1). If there are multiple minimal points, then one of them is arbitrarily designated as the predecessor. Since $A$ is a dynamic set, the predecessor of a point may vary as A* progresses, but it turns out that once a point $P$ has been visited, its predecessor is always adjacent to it along some path in $S(T_1P)$. Thus, after $T_2$ is visited, a shortest path from $T_1$ to $T_2$ can be obtained by tracing back through the predecessors from $T_2$ to $T_1$.
The purpose of a search heuristic is to help the wavefront grow in the direction of $T_2$. When no shortest paths from $T_1$ to $T_2$ approach $T_2$ directly, however, unidirectional search heuristics become misleading. This is not a rare occurrence; in computer chip design, one may wish to route wires between terminals that are accessible only through certain indirect channels. In order to develop a more intelligent heuristic, one must therefore explore the search space from both terminals. This motivates bidirectional search.
We concentrate here on bidirectional searches that are based on wave propagation, which expand one wavefront from each terminal. Such searches come in two types: front-to-end and front-to-front. In front-to-end searches, points are assigned heuristics without regard to information about points on the other wavefront. In contrast, the wavefronts cooperate in front-to-front searches; heuristic calculations for points on a given wavefront incorporate information from points on the opposing wavefront. This is illustrated in Figure 1.
The paper [6] by Kaindl and Kainz surveys many approaches to bidirec-
tional search, and advocates front-to-end heuristics because of the apparently
prohibitive computational complexity required for front-to-front calculations.
In particular, the fastest such heuristic ran in time proportional to wavefront
size, which would yield a worst-case running time that was at least quadratic.
In this paper, we present a Manhattan shortest path algorithm that attains
front-to-front bidirectionality for only a nominal log-power complexity cost over
front-to-end search.
[Figure 1: Two varieties of bidirectional heuristics. Front-to-front searches consider points $Q_k$ on the other wavefront when computing $P$'s heuristic; front-to-end searches only consider $T_2$ when computing a heuristic for $P$.]
1.3 Front-to-Front Bidirectionality
The natural front-to-front generalization of A* uses a two-parameter search heuristic that we shall call the estimated total length, or ETL for short. For any two points $P, Q \in G$, the ETL function $\lambda$ is defined as follows:

$$\lambda(P, Q) = \text{“}\widehat{T_1P}\text{”} + \overline{PQ} + \text{“}\widehat{QT_2}\text{”},$$

where "$\widehat{T_1P}$" and "$\widehat{QT_2}$" are defined along the lines of equation (1). This search algorithm, which we shall call FF (Front-to-Front), maintains two wavefronts, $W_1$ and $W_2$, and two visited sets, $\Omega_1$ and $\Omega_2$. Initially, $W_1 = \{T_1\}$, $W_2 = \{T_2\}$, and $\Omega_1 = \Omega_2 = \emptyset$, and at each iteration, a pair of vertices $P_1 \in W_1$ and $P_2 \in W_2$ with minimal $\lambda(P_1, P_2)$ is visited just as in A*. If several pairs tie with minimal ETL, then FF visits a pair with minimal $\overline{P_1P_2}$ among those tied. This helps to steer the wavefronts toward each other.
FF's termination condition is quite delicate, as it is possible for the wavefronts to intersect before a shortest path is found. Since our algorithm has a similar termination condition, we postpone the details until we present our algorithm in Figure 2.
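As a quick illustration, the ETL of a candidate pair reduces to two table lookups and one distance computation. The sketch below assumes dictionaries g1 and g2 holding the current estimates "$\widehat{T_1P}$" and "$\widehat{QT_2}$" (these names are ours, not the paper's).

    def manhattan(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def etl(P, Q, g1, g2):
        # lambda(P, Q) = "T1P" + PQ + "QT2"
        return g1[P] + manhattan(P, Q) + g2[Q]

FF must minimize this over pairs drawn from the two wavefronts, which is what makes the exact minimization at least quadratic in the wavefront size.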
2 Proposed Algorithm
Since FF finds the minimal $\lambda(P_1, P_2)$ at each iteration, it computes an accurate but costly heuristic. Our algorithm, which we name FFF (Fast Front-to-Front), accelerates FF by relaxing the minimality constraint on $\lambda(P_1, P_2)$. To motivate our approximation, we embed the wavefronts into $\mathbb{R}^{d+1}$ and apply computational geometry.
2.1 Dimensional Augmentation
The first step is to identify our ETL minimization as a dynamic bichromatic closest-pair problem. To accomplish this, we enter $\mathbb{R}^{d+1}$ space, embedding our points $P \in W_1 \cup W_2$ as follows:

$$P(x_1, x_2, \ldots, x_d) \mapsto \begin{cases} P'(x_1, x_2, \ldots, x_d, +\sigma_1), & \text{if } P \in W_1, \\ P'(x_1, x_2, \ldots, x_d, 0), & \text{if } P \text{ is } T_1 \text{ or } T_2, \\ P'(x_1, x_2, \ldots, x_d, -\sigma_2), & \text{if } P \in W_2, \end{cases}$$

where $\sigma_k = \min_{V \in A} \{\widehat{T_kV} + \overline{VP}\}$, and $A$ is the set of vertices in $\Omega_k$ that are adjacent to $P$.

In this paper, we will use the convention that all primed points are in $\mathbb{R}^{d+1}$, all unprimed points are in $\mathbb{R}^d$, and if two points or sets in the same context have the same letter, a prime indicates passage between $\mathbb{R}^d$ and $\mathbb{R}^{d+1}$. If some point $P' \in \mathbb{R}^{d+1}$ is not in either embedded wavefront, then its unprimed form $P$ will still represent the projection of $P'$ onto its first $d$ coordinates. Finally, we will refer to the last coordinate of a point $P' \in \mathbb{R}^{d+1}$ by the shorthand $\sigma(P')$.
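The embedding itself is a one-line coordinate extension. The sketch below is illustrative only; sigma is assumed to have been computed from the visited neighbors as in the definition above.

    def embed(P, side, sigma):
        # side 1: wavefront W1 (last coordinate +sigma_1);
        # side 2: wavefront W2 (last coordinate -sigma_2);
        # side 0: a terminal (last coordinate 0).
        if side == 1:
            return P + (+sigma,)
        if side == 2:
            return P + (-sigma,)
        return P + (0,)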
2.2 Identification and Approximation
Since we are working in a Manhattan metric, $\lambda(P_1, P_2)$ is equal to the distance between $P'_1$ and $P'_2$ in $\mathbb{R}^{d+1}$. Therefore, finding a minimal pair is equivalent to finding a closest pair between $W'_1$ and $W'_2$ in $\mathbb{R}^{d+1}$. At each iteration of FF, the point sets only change slightly, so each minimization is an instance of a dynamic bichromatic closest-pair problem.

Using the method of Eppstein in [3], one can solve the $\mathbb{R}^{d+1}$ Manhattan dynamic bichromatic closest-pair problem with $O(\log^{d+2} n)$ amortized time per insertion and $O(\log^{d+3} n)$ amortized time per deletion. In each of our iterations, we will need to perform at least one insertion and one deletion, so that method would give us an algorithm that ran in $O(n \log^{d+3} n)$ amortized time.

We are only looking for a search heuristic, though, so we can content ourselves with almost-closest pairs without compromising the validity of our algorithm. This in fact speeds up our algorithm by a factor of $\log^3 n$ and changes the complexity bounds from amortized to worst-case; in low dimensions ($d \leq 3$), it provides a significant performance boost. To accomplish this, we take a cue from the method of successive approximations: starting from a well-chosen seed point $P'_0 \in \mathbb{R}^{d+1}$, not necessarily in $W'_1$ or $W'_2$, we find a closest point $X'_1 \in W'_1$, and then find an $X'_2 \in W'_2$ that is closest to $X'_1$. We then use $X_1$ and $X_2$ instead of the $P_1$ and $P_2$ of FF. This pair may not have minimal $\lambda$, but it turns out to be close enough (cf. Theorems 4 and 5). Therefore, we only need dynamic closest-point search.
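The following sketch shows the two chained queries on embedded wavefronts, with a brute-force closest-point search standing in for the DCPS of Section 2.3; in the real algorithm, each query costs polylogarithmic rather than linear time.

    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    def almost_closest_pair(W1p, W2p, seed):
        # One round of successive approximation: seed -> X1' -> X2'.
        X1 = min(W1p, key=lambda p: l1(p, seed))  # closest point of W1' to the seed
        X2 = min(W2p, key=lambda p: l1(p, X1))    # closest point of W2' to X1'
        return X1, X2                             # not necessarily the minimal pair

Because $\lambda(P_1, P_2) = \overline{P'_1P'_2}$, the value l1(X1, X2) computed here is exactly the ETL of the chosen pair.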
2.3 Dynamic Closest-Point Search
This problem can be reduced to that of dynamic range search for minimum (discussed in [7, 8, 10]) by adapting the technique [1] of Gabow, Bentley, and Tarjan in [9]. Let $S'$ be a set of points in $\mathbb{R}^{d+1}$, let $m$ and $M$ be the respective minimum and maximum $(d+1)$-st coordinates in $S'$, and let $P'(p_1, p_2, \ldots, p_{d+1}) \in \mathbb{R}^{d+1}$ be a query point with $p_{d+1} \leq m$ or $p_{d+1} \geq M$. We wish to find the point in $S'$ that is closest to $P'$. The constraint on the last coordinate of $P'$ is introduced because FFF does not require the general case and this speeds up the search by a factor of $\log n$. In this section, we will only discuss the case when $p_{d+1} \leq m$; the other case can be treated similarly.

Let $I$ be the set of all ordered $(d+1)$-tuples of the form $(\pm 1, \ldots, \pm 1)$, and for any $\epsilon = (\epsilon_1, \ldots, \epsilon_{d+1}) \in I$, let $\Omega_\epsilon(P')$ refer to the following octant in $\mathbb{R}^{d+1}$:

$$X'(x_1, \ldots, x_{d+1}) \in \Omega_\epsilon(P') \iff \forall i \in \{1, 2, \ldots, d+1\}, \begin{cases} x_i < p_i, & \text{if } \epsilon_i = -1, \\ x_i \geq p_i, & \text{if } \epsilon_i = +1. \end{cases}$$

Also, define the family of functions $f_\epsilon : \mathbb{R}^{d+1} \to \mathbb{R}$ for all $\epsilon \in I$:

$$f_\epsilon : X'(x_1, \ldots, x_{d+1}) \mapsto \sum_{i=1}^{d+1} \epsilon_i x_i,$$

where the parameter $\epsilon$ is defined in the same way as used in $\Omega_\epsilon(P')$. Then when $X' \in S' \cap \Omega_\epsilon(P')$, its distance from $P'$ is equal to

$$\sum_{i=1}^{d+1} |x_i - p_i| = \sum_{i=1}^{d+1} \epsilon_i (x_i - p_i) = f_\epsilon(X') - f_\epsilon(P'), \qquad (2)$$

but since $p_{d+1} \leq m$, we must have $S' \cap \Omega_\epsilon(P') = \emptyset$ for all $\epsilon = (\epsilon_1, \ldots, \epsilon_{d+1})$ that have $\epsilon_{d+1} = -1$. All of our computations on $X' \in S' \cap \Omega_\epsilon(P')$ in equation (2) must then turn out as

$$\sum_{i=1}^{d+1} |x_i - p_i| = \left( \sum_{i=1}^{d} \epsilon_i x_i \right) + x_{d+1} - \left( \sum_{i=1}^{d} \epsilon_i p_i \right) - p_{d+1},$$

so if we define the family of functions $g_\epsilon : \mathbb{R}^{d+1} \to \mathbb{R}$ for all $\epsilon \in I$:

$$g_\epsilon : X'(x_1, \ldots, x_{d+1}) \mapsto \left( \sum_{i=1}^{d} \epsilon_i x_i \right) + x_{d+1},$$

then the distance from $P'$ to any $X' \in S' \cap \Omega_\epsilon(P')$ is $g_\epsilon(X') - g_\epsilon(P')$.

Now since the only $\Omega_\epsilon(P')$ that contain points in $S'$ are those with $\epsilon_{d+1} = +1$, we can repartition $\mathbb{R}^{d+1}$ into the $2^d$ distinct $\Theta_\epsilon(P')$, defined as follows:

$$X'(x_1, \ldots, x_{d+1}) \in \Theta_\epsilon(P') \iff \forall i \in \{1, 2, \ldots, d\}, \begin{cases} x_i < p_i, & \text{if } \epsilon_i = -1, \\ x_i \geq p_i, & \text{if } \epsilon_i = +1, \end{cases}$$

and the point in $S'$ closest to $P'$ can be found by looking for the minimum $g_\epsilon(X') - g_\epsilon(P')$ in each of the $2^d$ distinct $\Theta_\epsilon(P')$. Within each region, $g_\epsilon(P')$ is constant, and we can compute and store all $2^d$ distinct $g_\epsilon(X')$ whenever we insert a point $X'$ into $S'$; thus, since the partition of $\mathbb{R}^{d+1}$ into the $\Theta_\epsilon(P')$ ignores all $(d+1)$-st coordinates, we will have reduced our $\mathbb{R}^{d+1}$ dynamic closest-point problem to an $\mathbb{R}^d$ dynamic range search for minimum. According to [2, 3], range trees can solve this in $O(n \log^{d-1} n)$ space with $O(\log^d n)$ query/update time.
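To see the decomposition in action, the brute-force sketch below answers a closest-point query by minimizing $g_\epsilon$ over each region $\Theta_\epsilon(P')$. It assumes the case $p_{d+1} \leq m$ treated above, and a range tree would replace the inner scan in the real structure.

    from itertools import product

    def g(eps, X):
        # g_eps ignores the sign pattern on the last coordinate:
        # sum(eps_i * x_i for i <= d) + x_{d+1}.
        return sum(e * x for e, x in zip(eps, X[:-1])) + X[-1]

    def closest_point(S, P):
        # Assumes every X in S has X[-1] >= P[-1] (the case p_{d+1} <= m).
        d = len(P) - 1
        best = None
        for eps in product((-1, +1), repeat=d):
            region = [X for X in S
                      if all((x >= p) if e == +1 else (x < p)
                             for e, x, p in zip(eps, X[:-1], P[:-1]))]
            if region:
                X = min(region, key=lambda Y: g(eps, Y))
                dist = g(eps, X) - g(eps, P)   # equals the L1 distance to P
                if best is None or dist < best[0]:
                    best = (dist, X)
        return best   # (distance, closest point), or None if S is empty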
2.4 Implementation
FFF stores $W'_1$ and $W'_2$ in a pair of Dynamic Closest-Point Structures, or DCPS's, each of which holds a set of points in $\mathbb{R}^{d+1}$ and supports the following operations:

1. insert($P'$). Inserts $P'$ into the DCPS if the structure does not yet contain any point $Q'$ with $Q = P$. If it already has such a point $Q'$, then $Q'$ is replaced by $P'$ if $|\sigma(P')| < |\sigma(Q')|$; otherwise, $P'$ is ignored.

2. delete($P'$). Deletes $P'$ from the DCPS.

3. query($P'$). Let $m$ and $M$ be the minimum and maximum $(d+1)$-st coordinates in the DCPS, respectively. Then, for a point $P' \in \mathbb{R}^{d+1}$ with $\sigma(P') \leq m$ or $\sigma(P') \geq M$, this function performs a closest-point query; it returns a point $X'$ in the DCPS that is rectilinearly closest to $P'$. If several points $\{X'_i\}$ yield that minimal distance, then one with minimal $\overline{PX_i}$ is returned.

The special features of our DCPS insertion and query can be added through simple modifications of implementation details, and do not affect its overall complexity.
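The one-embedding-per-projection rule of insert can be kept in an auxiliary dictionary alongside the geometric structure; a minimal sketch (our names, not the paper's):

    def dcps_insert(store, Pp):
        # store maps a projection (the first d coordinates) to its embedded point.
        P = Pp[:-1]
        old = store.get(P)
        if old is None or abs(Pp[-1]) < abs(old[-1]):
            store[P] = Pp   # keep the embedding with the smaller |sigma|
        # otherwise Pp is ignored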
We use a priority queue to select seed points for the successive approximations of Section 2.2. Our priority queue stores elements of the form $P'(\lambda, \delta)$, where $P' \in \mathbb{R}^{d+1}$ is the data and the ordered pair $(\lambda, \delta)$ is its associated priority. The $\lambda$ estimates the length of a shortest path through $P$, and the $\delta$ estimates how close $P$ is to $W_2$. We compare priorities by interpreting the $(\lambda, \delta)$ as sorting keys; that is, $(\lambda_1, \delta_1) > (\lambda_2, \delta_2)$ if and only if $\lambda_1 > \lambda_2$, or $\lambda_1 = \lambda_2$ and $\delta_1 > \delta_2$. When we pop an element off the queue, we retrieve the one with minimal priority.
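Since the priorities are compared as sorting keys, a binary heap over $(\lambda, \delta)$ tuples realizes this queue directly; for example, in Python:

    import heapq

    queue = []
    heapq.heappush(queue, ((7.0, 2.0), 'P1'))  # (lambda, delta), then the data P'
    heapq.heappush(queue, ((7.0, 1.0), 'P2'))  # ties on lambda fall back to delta
    heapq.heappush(queue, ((6.0, 9.0), 'P3'))  # a smaller lambda wins outright
    (lam, delta), data = heapq.heappop(queue)  # pops P3, then P2, then P1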
2.5 Pseudocode
The algorithm is presented in Figure 2.
3 Justification
Throughout this section, $k$ will be an index from the set $\{1, 2\}$. Also, the asterisk will be a shorthand for the predecessor of a point; for instance, $P^*$ will denote the point in $\mathbb{R}^d$ that prompted the insertion (line 43) of $P'$ into its $W'_k$.
 1: algorithm FFF {
 2:   insert $T'_1$ and $T'_2$ into $W'_1$ and $W'_2$, respectively;
 3:   $min := \infty$;
 4:   push $T'_1(\overline{T'_1T'_2}, \overline{T_1T_2})$ onto priority queue;
 5:   while (priority queue, $W'_1$, and $W'_2$ are all nonempty) {
 6:     pop $P'(\lambda, \delta)$ off priority queue;
 7:     if ($\lambda \geq min$) { end algorithm; }
 8:     if ($P' \notin W'_1$) { return to line 5; }
 9:     $P'_0$ := $P'$ with last coordinate set to zero;
10:     query: $X'_1$ := point in $W'_1$ that is closest to $P'_0$;
11:     if ($P \neq X_1$) { PriorityQueueInsert($P'$); }
12:     if (WavefrontsCrossed($X_1$, $\Omega_2$)) { return to line 5; }
13:     Visit($X'_1$, 1);
14:     query: $X'_2$ := point in $W'_2$ that is closest to $X'_1$;
15:     if (WavefrontsCrossed($X_2$, $\Omega_1$)) { return to line 5; }
16:     Visit($X'_2$, 2);
17:   }
18:   Assertion 1: $min \neq \infty \iff S(T_1T_2) \neq \emptyset \iff S(T_1S) \oplus S(ST_2) \subseteq S(T_1T_2)$;
19: }
20:
21: procedure PriorityQueueInsert($P'$) {
22:   if ($P \in \Omega_2$) {
23:     Assertion 2: $\widehat{PT_2}$ is known;
24:     $Y'$ := $P'$ with last coordinate set to $-\widehat{PT_2}$;
25:   }
26:   else { query: $Y'$ := point in $W'_2$ that is closest to $P'$; }
27:   push $P'(\overline{P'Y'}, \overline{PY})$ into priority queue;
28: }
29:
30: function WavefrontsCrossed($X$, $\Omega$) {
31:   if ($X \in \Omega$) {
32:     Assertion 3: $\widehat{T_1X} + \widehat{XT_2}$ is known;
33:     if ($\widehat{T_1X} + \widehat{XT_2} < min$) { $S := X$; $min := \widehat{T_1X} + \widehat{XT_2}$; }
34:     return true;
35:   }
36:   else { return false; }
37: }
38:
39: procedure Visit($X'$, $k$) {
40:   Assertion 4: $\widehat{T_kX} = |\sigma(X')|$;
41:   move $X$ from $W_k$ to $\Omega_k$;
42:   for (all $\{V \notin \Omega_k : XV \in E\}$) {
43:     insert $V'$ into $W'_k$, where we use $\widehat{T_kX} + \overline{XV}$ for $|\sigma(V')|$;
44:     if ($k = 1$) { PriorityQueueInsert($V'$); }
45:   }
46: }

Figure 2: FFF pseudocode
[Figure 3: Lemma 1. A path from $T_k$ to $X$ leaves the visited points at $Y \in \Omega_k$ and crosses the wavefront at the adjacent point $Z \in W_k$.]
Lemma 1 At the beginning of some iteration after the first, let $X$ be a point not in $\Omega_k$ and let $P$ be a path in $G$ connecting $T_k$ and $X$. Then there exist two points $Y, Z \in P$ such that $YZ \in P$, $Y \in \Omega_k$, and $Z \in W_k$.

Proof: The first iteration moves $T_k$ into $\Omega_k$, so $P \cap \Omega_k \neq \emptyset$ in every iteration thereafter. Let $Y$ be the point in that intersection whose distance from $X$ along edges in $P$ is minimal, and let $Z \in P$ be the point adjacent to $Y$ that is closer to $X$ along $P$ than $Y$ was (fig. 3). From the minimality of $Y$, we know that $Z \notin \Omega_k$, so when $Y$ was inserted into $\Omega_k$ (line 41), FFF must have inserted $Z$ into $W_k$ (lines 42-43). Again by the minimality of $Y$, $Z$ cannot currently be in $\Omega_k$, so this $Y$ and this $Z$ satisfy the statement of the lemma. □
Theorem 1 In every iteration after the first, when FFF visits (lines 13, 16) a point $X \in W_k$,

$$|\sigma(X')| = \widehat{T_kX^*} + \overline{X^*X} = \min_{M \in W_k} \{\widehat{T_kM^*} + \overline{M^*M} + \overline{MX}\} = \widehat{T_kX}, \qquad (3)$$

and a path in $S(T_kX)$ can be obtained by tracing back through predecessors from $X$.
Proof: We proceed by induction.

Base Case. FFF begins with $\Omega_k = \emptyset$, $W_k = \{T_k\}$, and $|\sigma(T'_k)| = 0$, so in the second iteration, $X^* = T_k = M^*$, $\widehat{T_kX} = \overline{T_kX}$, and $\widehat{T_kM^*} = 0$. Now $|\sigma(X')| = \widehat{T_kT_k} + \overline{T_kX} = \overline{T_kX}$, and by the triangle inequality, $\widehat{T_kM^*} + \overline{M^*M} + \overline{MX} \geq \overline{T_kX} = \widehat{T_kX}$; our base case is true.
Induction Step. By line 43 and the induction hypothesis, $|\sigma(X')| = \widehat{T_kX^*} + \overline{X^*X}$, so the left equality is true. To prove the center equality, we use indirect proof; suppose, for the sake of contradiction, that

$$\widehat{T_kX^*} + \overline{X^*X} \neq \min_{M \in W_k} \{\widehat{T_kM^*} + \overline{M^*M} + \overline{MX}\}.$$

Then there must currently exist some point $M' \in W'_k$ for which $\widehat{T_kM^*} + \overline{M^*M} + \overline{MX} < |\sigma(X')|$, and $\widehat{T_kM^*} + \overline{M^*M} = |\sigma(M')|$ by line 43. Let $Q'$ be the query point (the $P'_0$ or the $X'_1$) that led to $X'$ in line 10 or 14. Since $\sigma(Q')$ and $\sigma(M')$ are not of the same sign, $\overline{Q'M'} = |\sigma(Q')| + \overline{QM} + |\sigma(M')|$. Combining all of our information:

$$\overline{Q'M'} = |\sigma(Q')| + \overline{QM} + |\sigma(M')| < |\sigma(Q')| + \overline{QM} - \overline{MX} + |\sigma(X')| \leq |\sigma(Q')| + \overline{QX} + |\sigma(X')| = \overline{Q'X'}.$$

Thus $X'$ is not the point in $W'_k$ that is closest to $Q'$; contradiction.
Next, we prove the rightmost equality in equation (3). Let $P$ be a path in $S(T_kX)$. Since $X \in W_k$ and $W_k \cap \Omega_k = \emptyset$, Lemma 1 implies that there exists some point $K$ in $P \cap W_k$ with adjacent point $K_0 \in P \cap \Omega_k$. Since both $K_0$ and $K^*$ are in $\Omega_k$ and are adjacent to $K$, they each must have attempted to insert an embedding of $K$ into $W'_k$ by lines 42-43. Our DCPS insertion operation has the special property that it retains only the best embedding, so $\widehat{T_kK^*} + \overline{K^*K} \leq \widehat{T_kK_0} + \overline{K_0K}$. Yet $K_0$ is adjacent to $K$ along a path in $S(T_kK)$ (the initial portion of $P$, which is itself a shortest path), so $\widehat{T_kK_0} + \overline{K_0K} = \widehat{T_kK}$. Furthermore, $K^*K \in E$, so $\widehat{K^*K} = \overline{K^*K}$, and thus $\widehat{T_kK^*} + \overline{K^*K} \geq \widehat{T_kK}$. Hence $\widehat{T_kK^*} + \overline{K^*K} = \widehat{T_kK}$, so

$$\min_{M \in W_k} \{\widehat{T_kM^*} + \overline{M^*M} + \overline{MX}\} \leq \widehat{T_kK^*} + \overline{K^*K} + \overline{KX} = \widehat{T_kK} + \overline{KX} \leq \widehat{T_kK} + \widehat{KX} = l(P) = \widehat{T_kX},$$

and

$$\widehat{T_kX} \leq \widehat{T_kX^*} + \overline{X^*X} = \min_{M \in W_k} \{\widehat{T_kM^*} + \overline{M^*M} + \overline{MX}\}$$

by the center equality. Therefore, the rightmost equality is true.
Only the last claim of our theorem remains to be proven. By equation (3), $\widehat{T_kX^*} + \overline{X^*X} = \widehat{T_kX}$, so since $X^*X \in E$, we know that $S(T_kX^*) \oplus S(X^*X) \subseteq S(T_kX)$. Now $X^* \in \Omega_k$, so it must have been visited at some earlier time. From the induction hypothesis, a path in $S(T_kX^*)$ can be obtained by following predecessors back from $X^*$; if we append the edge $X^*X$ to this path, then we will have a path in $S(T_kX)$, so we are done. □

Corollary 1 At the start of every iteration, for every $O \in \Omega_k$, a path in $S(T_kO)$ can be obtained by tracing back through predecessors from $O$.

Proof: This follows immediately from Theorem 1. □
Corollary 2 Assertions 2, 3, and 4 are true.

Proof: These follow from Theorem 1 and Corollary 1. □
Corollary 3 Let $W'_k$ be one of the embedded wavefronts at the beginning of some iteration, and let $P'$ be a point that is not on the same side of the $\sigma = 0$ hyperplane. That is, if $k = 1$, $\sigma(P')$ should be nonpositive, and if $k = 2$, $\sigma(P')$ should be nonnegative. Furthermore, suppose that $P \notin \Omega_k$ and let $Q'$ be the point in $W'_k$ that is closest to $P'$. Then $\overline{Q^*Q} + \widehat{Q^*T_k} = |\sigma(Q')| = \widehat{QT_k}$ and $\widehat{PT_k} \geq \overline{PQ} + \widehat{QT_k}$.

Proof: The first claim follows from the induction step of Theorem 1, since that proof only used the fact that $X'$ resulted from a query by a point that was not on the same side of the $\sigma = 0$ hyperplane. To prove the second claim, observe that if we replace $X$ with $P$ in the proof of the rightmost equality in Theorem 1, we find a point $K' \in W'_k$ for which $\widehat{T_kK^*} + \overline{K^*K} + \overline{KP} \leq \widehat{T_kP}$. Yet $|\sigma(K')| = \widehat{T_kK^*} + \overline{K^*K}$ by line 43, and $|\sigma(Q')| = \widehat{QT_k}$ by the first claim, so since $Q' \in W'_k$ is closer to $P'$ than $K' \in W'_k$,

$$\overline{Q'P'} \leq \overline{K'P'},$$
$$|\sigma(Q')| + \overline{QP} + |\sigma(P')| \leq |\sigma(K')| + \overline{KP} + |\sigma(P')|,$$
$$|\sigma(Q')| + \overline{QP} \leq |\sigma(K')| + \overline{KP},$$
$$\widehat{QT_k} + \overline{QP} \leq \widehat{T_kK^*} + \overline{K^*K} + \overline{KP} \leq \widehat{T_kP},$$

as desired. □
Lemma 2 Let $X'$ be a point in $W'_1$ at the start of some iteration after the first. If $\widehat{T_1X^*} + \overline{X^*X} + \widehat{XT_2} < min$, then the priority queue contains an element $X'(\lambda, \delta)$ with $\lambda \leq \widehat{T_1X^*} + \overline{X^*X} + \widehat{XT_2}$.

Proof: We first show that when a point $X$ is visited and a neighbor $V'$ is inserted into $W'_1$, the priority queue element $V'(\lambda, \delta)$ that FFF inserts (line 44) has $\lambda \leq \widehat{T_1X} + \overline{XV} + \widehat{VT_2}$.

In the call to PriorityQueueInsert, if line 24 is used, then by Corollary 2, $\lambda = |\sigma(V')| + \widehat{VT_2} = \widehat{T_1X} + \overline{XV} + \widehat{VT_2}$. If line 26 is used, we can apply Corollary 3 to see that $\widehat{VT_2} \geq \overline{VY} + \widehat{YT_2} = \overline{VY} + |\sigma(Y')|$, so

$$\widehat{T_1X} + \overline{XV} + \widehat{VT_2} \geq \widehat{T_1X} + \overline{XV} + \overline{VY} + |\sigma(Y')| = |\sigma(V')| + \overline{VY} + |\sigma(Y')| = \overline{V'Y'} = \lambda.$$

Thus the lemma's condition holds at the moment of insertion, and we are left to deal with the case where an element $P'(\lambda, \delta)$ is popped off the priority queue but $P'$ is not removed from $W'_1$.

Note that we may pop off an element that is not in $W'_1$. This happens when a point $V$ is inserted into $W_1$ more than once, with different $V'$ embeddings. In such cases, our DCPS only retains the one with lesser $|\sigma(V')|$, although both remain in the priority queue. We wish to ignore all other $V'$ since they represent longer paths from $T_k$ to $V$; such is the purpose of line 8.

If our iteration passes line 8 but $P$ is not visited, then we may need to reinsert $P'$ into the priority queue to satisfy the lemma. If $P \neq X_1$, this job is successfully accomplished by line 11, since the above analysis of PriorityQueueInsert applies here. The only case left to consider is when the iteration terminates via line 12 with $P = X_1 \in \Omega_2$. Now, $\widehat{T_1X_1^*} + \overline{X_1^*X_1} + \widehat{X_1T_2} = \widehat{T_1X_1} + \widehat{X_1T_2}$ by Corollary 3, since $X'_1$ resulted from a DCPS query by $P'_0$. Yet line 33 makes this at least $min$, and the lemma makes no claim when $\widehat{T_1X_1^*} + \overline{X_1^*X_1} + \widehat{X_1T_2} \geq min$, so we are done. □
Lemma 3 After each iteration except the last, $\Omega_1$ has grown or the number of elements in the priority queue with $\lambda < min$ has decreased by at least one.

Proof: If an iteration makes it to line 13, it inserts a point into $\Omega_1$, and if the iteration does not satisfy line 11, then the size of the priority queue decreases by one. Therefore, the only nontrivial case is when the iteration satisfies line 11 but fails to reach line 13; this is when $P \neq X_1 \in \Omega_2$. The iteration will pass lines 22 and 33, setting $\lambda = \widehat{T_1X_1} + \widehat{X_1T_2} \geq min$, but by line 7, $\lambda$ was originally less than $min$, so the lemma is true. □
Lemma 4 FFF terminates; it eventually reaches assertion 1.

Proof: We have a finite graph, so Lemma 3 implies that each iteration that does not decrease the number of elements with $\lambda < min$ must grow $\Omega_1$ and insert finitely many elements into the priority queue. The set $\Omega_1$ cannot grow indefinitely, however, because $|V| < \infty$; hence only a finite number of insertions occur. Once the priority queue runs out of elements with $\lambda < min$, FFF will terminate via line 7. Therefore, we cannot loop forever. □
Theorem 2 FFF finds a shortest path when one exists; assertion 1 is true.

Proof: By Lemma 4, FFF terminates, so it suffices to show that assertion 1 is true. We begin with the first equivalence. The reverse implication is trivial, so we move on to prove the forward direction. Suppose for the sake of contradiction that $S(T_1T_2) \neq \emptyset$ but FFF terminates with $min = \infty$. Let $P$ be a path in $S(T_1T_2)$; lines 12 and 15 ensure that $\Omega_1 \cap \Omega_2 = \emptyset$, so since $T_1 \in \Omega_1$ and $T_2 \in \Omega_2$, Lemma 1 applies to $P$. Thus both wavefronts must be nonempty. By Lemma 2, the priority queue contains an element with finite $\lambda$, so FFF could not have terminated via line 5. Yet $min = \infty$, so FFF could not have terminated via line 7, and we have a contradiction.
Next we prove the second equivalence. Note that $S$ is undefined until a path is found, so if $S(T_1S) \oplus S(ST_2) \subseteq S(T_1T_2)$, then a shortest path exists. It remains to prove the forward implication. If $S(T_1T_2) \neq \emptyset$, then from the first equivalence, $min$ must eventually be set to some finite value, at which point $S$ is defined. We must show that $S(T_1S) \oplus S(ST_2) \subseteq S(T_1T_2)$. Once we reach this stage, we will indeed know a shortest path in $S(T_1T_2)$, because Corollaries 1 and 3 apply to $S$.

We proceed by contradiction; suppose that FFF does not find a minimal path, terminating with $min > \widehat{T_1T_2}$. Consider the situation at the beginning of the last iteration. Let $P$ be a path in $S(T_1T_2)$; by Lemma 1, we can find $X_1 \in W_1 \cap P$ and $Y_1 \in \Omega_1 \cap P$ with $X_1Y_1 \in P$. This is illustrated in Figure 4.
[Figure 4: Theorem 2. The dashed curves represent two different shortest paths between $T_1$ and $T_2$, and $P$ is the curve with longer dashes; the portion of the shortest path that is within the wavefront is derived from predecessor information. Wavefront $W_2$ is not shown.]
Since $Y_1 \in \Omega_1$ and $Y_1X_1 \in P \in S(T_1T_2)$, the portion of $P$ between $T_1$ and $X_1$ is a shortest path, and hence the embedding $X'_1$ with $|\sigma(X'_1)| = \widehat{T_1X_1}$ is in $W'_1$. Then $\widehat{T_1X_1^*} + \overline{X_1^*X_1} + \widehat{X_1T_2} = \widehat{T_1X_1} + \widehat{X_1T_2} = \widehat{T_1T_2}$, and by Lemma 2, there must be an element in the priority queue with $\lambda \leq \widehat{T_1T_2} < min$. We have a priority queue that looks for minimal $\lambda$, so the loop could not have terminated via line 7. Furthermore, the wavefronts are nonempty by the reasoning in the proof of the first equivalence; thus the loop could not have terminated via line 5. Hence we have a contradiction, and FFF is indeed valid. □
4 Performance
Lemma 5 The main loop runs at most $(2d+1)n + 1$ times, where $d$ is the dimension of the search space and $n$ is the number of visited vertices.

Proof: By definition of $G$, the degree of each vertex is at most $2d$, because each vertex only has $2d$ axis-parallel directions of approach and no two edges overlap in more than one point. We can apply this fact to the argument used in the proof of Lemma 4 to classify iterations into three types: either an iteration adds a point to $\Omega_1$ and inserts up to $2d$ priority queue elements with $\lambda < min$, or it reduces the number of such priority queue elements by at least one, or it is the last iteration.
Since $n = |\Omega_1| + |\Omega_2|$, $\Omega_1$ can grow no more than $n$ times, so we can have no more than $n$ iterations of the first type. The number of priority queue insertions with $\lambda < min$ must then be bounded by $2dn$, so we can have no more than $2dn$ iterations of the second type. We can only have one iteration of the third type, and thus the number of iterations is bounded by $(2d+1)n + 1$. □
Theorem 3 FFF's worst-case space and time complexities are $O(n \log^{d-1} n)$ and $O(n \log^d n)$, respectively, where $n$ is the number of vertices visited.

Proof: Using the method in Section 2.3, our DCPS can achieve $O(n \log^{d-1} n)$ space and $O(\log^d n)$ time for updates and queries, where $n$ is the number of points in the structure. In our algorithm, the DCPS's determine the space complexity, so FFF takes $O(n \log^{d-1} n)$ storage. As for speed, the dominant operations in the main loop are the DCPS update and query, both of which take $O(\log^d n)$. Lemma 5 bounds the number of iterations by $(2d+1)n + 1$, so the overall time complexity is $O(n \log^d n)$. □
Theorem 4 Suppose that in some graph, $S(T_1T_2) = \emptyset$ and the A* search described in Section 1.2 visits $N$ vertices. Then FFF will visit at most $2N$ vertices.

Proof: A* terminates after it visits all $N$ vertices that are connected to $T_1$ through paths in $G$. Similarly, FFF will have exhausted $W_1$ by the time it has visited all of these vertices, so it will have terminated via line 5. In each iteration, $X_1$ is visited before $X_2$, so the number of vertices visited by FFF must be no more than $2N$. □
Theorem 5 Suppose that in some graph, $S(T_1T_2) \neq \emptyset$. Let $N_0$ be the set of all vertices $X \in G$ for which $\widehat{T_1X} + \widehat{XT_2} < \widehat{T_1T_2}$, and let $N_1$ be the set of all vertices $X \in G$ for which $\widehat{T_1X} + \widehat{XT_2} \leq \widehat{T_1T_2}$. Then A* will visit between $|N_0|$ and $|N_1|$ vertices, while FFF will visit no more than $2|N_1|$ vertices.
Proof: The set $N_0$ consists of vertices with A* priority less than $\widehat{T_1T_2}$, so since the priority of $T_2$ is $\widehat{T_1T_2}$, the priority-first search must visit all vertices in $N_0$. There is no estimate, however, of how many vertices with priority $\widehat{T_1T_2}$ are visited by A*; the method could visit as few as one or as many as all, depending on the search space.
We now prove the second claim. By the reasoning in the proof of the previous theorem, it suffices to show that $\Omega_1 \subseteq N_1$. Suppose that $X_1 \in \Omega_1$; by lines 10 and 9, $X'_1$ resulted from a query by some point $P'_0$, which was derived from some $P'$. When $P'$ was inserted into $W'_1$, it had a priority

$$\lambda = \overline{P'Y'} = |\sigma(P')| + \overline{PY} + |\sigma(Y')| \geq \widehat{T_1P} + \overline{PY} + |\sigma(Y')|$$

for some $Y'$. If $Y'$ came from line 24, then $\widehat{T_1P} + \overline{PY} + |\sigma(Y')| = \widehat{T_1P} + 0 + \widehat{PT_2} \geq \widehat{T_1P} + \overline{PT_2}$. If $Y'$ came from line 26, Corollary 3 implies:

$$\widehat{T_1P} + \overline{PY} + |\sigma(Y')| = \widehat{T_1P} + \overline{PY} + \widehat{YT_2} \geq \widehat{T_1P} + \overline{PY} + \overline{YT_2} \geq \widehat{T_1P} + \overline{PT_2}.$$
Therefore, in both cases we get $\lambda \geq \widehat{T_1P} + \overline{PT_2}$.

Corollary 3 applied to line 10 tells us that $\widehat{T_1X_1} + \overline{X_1P} \leq \widehat{T_1P}$, so

$$\widehat{T_1X_1} + \widehat{X_1T_2} \leq \widehat{T_1X_1} + \overline{X_1P} + \overline{PT_2} \leq \widehat{T_1P} + \overline{PT_2} \leq \lambda.$$

By the argument in the last paragraph of the proof of Theorem 2, until $min$ is set to $\widehat{T_1T_2}$, there always exists a priority queue element with $\lambda \leq \widehat{T_1T_2}$. At the beginning of every iteration thereafter (except for the last), line 7 ensures that such a priority queue element exists. Since we popped off $P'$ and will visit $X_1$, $\lambda \leq \widehat{T_1T_2}$, so $\widehat{T_1X_1} + \widehat{X_1T_2} \leq \lambda \leq \widehat{T_1T_2}$. Hence $X_1 \in N_1$, as desired, and we are done. □
5 Future Work
The theoretical bounds of the previous section are all upper limits; however, the purpose of front-to-front bidirectionality was to provide a more accurate heuristic that would reduce $n$, the number of visited nodes. Since the complexity is $O(n \log^d n)$, a significant reduction in $n$ would justify the additional log-power complexity factor.

This amounts to classifying the spaces on which front-to-front searches outperform other search algorithms. Unfortunately, that is beyond the scope of this paper; instead, we just provide a simple thought-experiment that illustrates the existence of such spaces.

Figures 5 and 6 are ideal spaces for front-to-front algorithms because the terminals are only accessible through indirect channels that would confuse other heuristics. In fact, they yield relative reductions in $n$ that follow $O(\sqrt[d]{n})$. In light of these simple examples, we can identify one class of spaces for which our front-to-front search is favorable: if source and destination terminals are located in intricate, separated networks that are linked by a large, simple network, then the $d$-th root reduction in $n$ can be attained.
In this paper, we have established the feasibility of subquadratic front-to-
front bidirectional heuristic search. We close by posing a few questions that
are opened by this result. Our algorithm is specific to lattice graphs; do there
exist analogous tricks that produce subquadratic front-to-front searches in other
situations? What types of graphs favor front-to-front heuristics? Do these types
of graphs arise in “real-world” situations?
Acknowledgements
Special thanks to Alain Martin and Mika Nyström for introducing this problem to the author, and to Charles Leiserson for providing pointers toward related literature. Thanks also to Po-Ru Loh for providing many valuable suggestions that significantly improved the clarity of this paper.