The Case for Delay-based Congestion Control
Cheng Jin, David X. Wei, Steven H. Low
Engineering & Applied Science, Caltech
http://netlab.caltech.edu
September 12, 2003
Abstract
We argue that, in the absence of explicit feedback, delay-based algorithms become the preferred approach for end-to-end congestion control as networks scale up in capacity. Their advantage is small at low speed but decisive at high speed. The distinction between packet-level and flow-level problems of the current TCP exposes the difficulty of loss-based algorithms at large congestion windows.
1 Introduction
Congestion control is a distributed algorithm to share net-
work resources among competing users. It is important in
situations where the availability of resources and the set
of competing users vary over time unpredictably, yet effi-
cient sharing is desired. These constraints, unpredictable
supply and demand and efficient operation, necessarily
lead to feedback control as the only approach, where traf-
fic sources dynamically adapt their rates to congestion in
their paths. On the Internet, this is performed by the
Transmission Control Protocol (TCP) in source and des-
tination computers involved in data transfers.
The congestion control algorithm in the current TCP,
which we refer to as Reno in this paper, was developed
in 1988 [10] and has gone through several enhancements
since, e.g., [20, 11, 8]. It has performed remarkably well
and is generally believed to have prevented severe con-
gestion as the Internet scaled up by six orders of mag-
nitude in size, speed, load, and connectivity. It is also
well-known, however, that as bandwidth-delay product
continues to grow, TCP Reno will eventually become a
performance bottleneck itself. The following four difficul-
ties contribute to the poor performance of TCP Reno in
networks with large bandwidth-delay products:
1. At the packet level, linear increase by one packet per
Round-Trip Time (RTT) is too slow, and multiplica-
tive decrease per loss event is too drastic.
2. At the flow level, maintaining large average congestion windows requires extremely small equilibrium loss probability.
3. At the packet level, oscillation is unavoidable because
of the binary nature of the congestion signal (packet
loss).
4. At the flow level, the dynamics is unstable, leading
to severe oscillations that can only be reduced by the
accurate estimation of packet loss probability and a
stable design of the flow dynamics.
In this paper, we argue that the solution to these prob-
lems requires delay-based algorithms that are scalable to
large capacity.
Delay-based congestion control has been proposed, e.g., in [12, 25, 2]. Its advantage over the loss-based approach is small at low speed, but decisive at high speed, as we will see below. As pointed out in [19], delay can be a poor or untimely predictor of packet loss, and therefore using a delay-based algorithm to augment the basic AIMD (Additive Increase Multiplicative Decrease) algorithm of TCP Reno is the wrong approach to resolve problems at large windows. Instead, a new approach that fully exploits delay as a congestion measure, augmented with loss information, is needed [13].
2 Problems at large windows
A congestion control algorithm can be designed at two levels. The flow-level (macroscopic) design aims to achieve high utilization, low queueing delay and loss, fairness, and stability. The packet-level design implements these flow-level goals within the constraints imposed by end-to-end control. Historically for TCP Reno, the packet-level implementation was designed first. The resulting flow-level properties, such as fairness, stability, and the relationship between equilibrium window and loss probability, were then understood as an afterthought. In contrast, the packet-level designs of HSTCP [7], STCP [16], and FAST TCP [13] are explicitly guided by flow-level goals.
We elaborate in this section on the four difficulties of
TCP Reno listed in Section 1. It is important to dis-
tinguish between packet-level and flow-level difficulties,
because they must be addressed by different means.
2.1 Packet and flow level modeling
The congestion avoidance algorithm of TCP Reno and its variants has the form of AIMD [10]. The pseudocode for window adjustment is:

    Ack:  w ← w + 1/w
    Loss: w ← w − w/2
0-7803-8239-0/03/$17.00 ©2003 IEEE.
This is a packet-level model, but it induces certain flow-
level properties such as throughput, fairness, and stabil-
ity.
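The per-ack and per-loss rules above can be made concrete with a few lines of code; this is an illustrative sketch of the window arithmetic only (the starting window of 8 packets is an arbitrary choice), not of a real TCP implementation.

```python
# Sketch of TCP Reno's AIMD update in congestion avoidance.

def on_ack(w):
    # Each ACK adds 1/w, so one window's worth of ACKs (one RTT)
    # increases the window by roughly one packet.
    return w + 1.0 / w

def on_loss(w):
    # Multiplicative decrease: the window is halved on loss.
    return w / 2.0

w = 8.0
for _ in range(8):          # one RTT: a full window of ACKs
    w = on_ack(w)
print(w)                    # just under 9: about +1 packet per RTT

print(on_loss(80000.0))     # 40000.0: the halving is drastic at scale
```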
These properties can be understood with a flow-level model of the AIMD algorithm, e.g., [15, 9, 18]. In each RTT, the window w_i(t) of source i increases by 1 packet,¹ and decreases by

    x_i(t) p_i(t) · (1/2) · (4/3) w_i(t)   packets

where

    x_i(t) := w_i(t) / T_i(t)   pkts/sec        (1)

T_i(t) is the round-trip time, and p_i(t) is the (delayed) end-to-end loss probability, in period t.² Here, (4/3) w_i(t) is the peak window size that gives the "average" window of w_i(t). Hence, a flow-level model of AIMD is:

    ẇ_i(t) = 1/T_i(t) − (2/3) x_i(t) p_i(t) w_i(t)        (2)
Setting ẇ_i(t) = 0 in (2) yields the well-known 1/√p formula for TCP Reno discovered in [21, 17], which relates loss probability to window size in equilibrium:

    p*_i = 3 / (2 (w*_i)²)        (3)
In summary, (2) and (3) describe the flow-level dynamics
and the equilibrium, respectively, for TCP Reno.
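As a sanity check on (2) and (3), the flow-level model can be integrated numerically; the RTT, loss probability, and step size below are illustrative values chosen for this sketch, not taken from the paper.

```python
import math

# Euler integration of the Reno flow model (2):
#   dw/dt = 1/T - (2/3) * x * p * w,   with x = w / T.
T = 0.1        # round-trip time in seconds (assumed)
p = 1e-4       # fixed end-to-end loss probability (assumed)
dt = 0.05      # integration step in seconds
w = 10.0

for _ in range(200_000):   # 10,000 simulated seconds
    x = w / T
    w += (1.0 / T - (2.0 / 3.0) * x * p * w) * dt

w_star = math.sqrt(1.5 / p)   # equilibrium window predicted by (3)
print(round(w, 2), round(w_star, 2))   # both approximately 122.47
```

The window settles at the equilibrium where the increase and decrease terms of (2) balance, which is exactly the point described by (3).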
Remarks
1. From (3),

       p*_i w*_i = 3 / (2 w*_i)

   i.e., on average, there are 3/(2 w*_i) packet losses per round-trip time. In particular, this number decreases in inverse proportion to the equilibrium window size.
2. Defining

       κ_i(w_i, T_i) = 1/T_i    and    u_i(w_i, T_i) = 1.5/w_i²

   and noting that w_i = x_i T_i, we can express (2) as:

       ẇ_i(t) = κ_i(t) ( 1 − p_i(t)/u_i(t) )        (4)
   where we have used the shorthand κ_i(t) = κ_i(w_i(t), T_i(t)) and u_i(t) = u_i(w_i(t), T_i(t)). It will turn out that different variants of TCP all have the same dynamic structure (4) at the flow level. They differ in their choices of the gain function κ_i and marginal utility function u_i, and whether the congestion measure p_i is loss probability or queueing delay.
¹ It should be (1 − p_i(t)), where p_i(t) is the end-to-end loss probability. This is roughly 1 when p_i(t) is small.
² This model assumes that the window is halved on each packet loss. It can be modified to model the case where the window is halved at most once in each RTT. This does not qualitatively change the following discussion.
We next illustrate the equilibrium and dynamics prob-
lems of TCP Reno, at both the packet and flow levels, as
bandwidth-delay product increases.
2.2 Equilibrium problem
The equilibrium problem at the flow level is expressed in
(3): the end-to-end loss probability must be exceedingly
small to sustain a large window size, making the equilib-
rium difficult to maintain in practice, as bandwidth-delay
product increases.
Even though equilibrium is a flow-level notion, this
problem manifests itself at the packet level, where a
source increments its window too slowly and decrements
it too drastically. When the peak window is 80,000 packets (corresponding to an "average" window of 60,000 packets), which is necessary to sustain 7.2 Gbps using 1,500-byte packets with an RTT of 100 ms, it takes 40,000 RTTs, or almost 70 minutes, to recover from a single packet loss. This is illustrated in Figure 1a, where the size of window increment per RTT and decrement per loss, 1 and 0.5 w_i, respectively, are plotted as functions of w_i.
To address the difficulties of TCP Reno at large window
sizes, HSTCP and STCP increase more aggressively and
decrease more gently, as discussed in Section 3 (see [3, 24,
14] for other approaches).
2.3 Dynamic problem
The cause of the oscillatory behavior in TCP Reno lies
in its design at both the packet and flow levels. At the
packet level, the choice of binary congestion signal nec-
essarily leads to oscillation, and the parameter setting in
Reno worsens the situation as bandwidth-delay product
increases. At the flow level, the system dynamics (2) is
unstable at large bandwidth-delay product [9, 18]. These
must be addressed by different means, as we now elabo-
rate.
Figure 2(a) illustrates the operating points chosen by various existing TCP congestion control algorithms, using the single-link single-flow case. It shows queueing delay as a function of window size. Queueing delay starts to build up after point C, where window equals the bandwidth-propagation-delay product, until point R, where the queue overflows. Since Reno oscillates around point R, the peak window size goes beyond point R, and the amount of overshoot depends on the feedback delay. The minimum window in steady state is half of the peak window. This is the basis for the rule of thumb that the bottleneck buffer should be at least one bandwidth-delay product: the minimum window will then be above point C, and the buffer will not empty in steady operation, yielding full utilization.
Full utilization, even if achievable, comes at the cost of
severe oscillations and potentially large queueing delay.
The DUAL scheme in [25] proposes to oscillate around point D, the midpoint between C and R. DUAL increases the congestion window linearly by one packet per RTT, as long as queueing delay is less than half of the maximum value, and decreases multiplicatively by a factor of 1/8, when queueing delay exceeds half of the maximum value. The scheme CARD (Congestion Avoidance using Round-trip Delay) of [12] proposes to oscillate around point C through AIMD with the same parameters (1, 1/8) as DUAL, based on the ratio of round-trip delay and delay gradient, to maximize power. In all these schemes, the congestion signal is binary, and hence the congestion window must oscillate.

Figure 1: Packet-level implementation: (a) Window increment per RTT and decrement per loss, as functions of the current window size, for TCP Reno, HSTCP, and STCP. The increment functions for TCP Reno and HSTCP are barely distinguishable at this scale. (b) Window update as a function of distance from equilibrium for FAST.
Congestion window can be stabilized only if multi-bit
feedback is used. This is the approach taken by the
equation-based algorithm in [6], where congestion win-
dow is adjusted based on the estimated loss probability
in an attempt to stabilize around a target value given
by (3). Its operating point is T in Figure 2(b), near the
overflowing point. This approach eliminates the oscilla-
tion due to packet-level AIMD, but two difficulties remain
on the flow level.
First, equation-based control requires the explicit es-
timation of end-to-end loss probability. This is difficult
when the loss probability is small. Second, even if loss
probability can be perfectly estimated, Reno’s flow dy-
namics (2) leads to a feedback system that becomes un-
stable as feedback delay increases, and more strikingly,
as network capacity increases [9, 18]. The instability at
the flow level can lead to severe oscillations that can be
reduced only by stabilizing the flow-level dynamics. We
will return to both points in Section 3.3.
3 Loss-based approach
In this section, we first describe HSTCP [7] and STCP
[16] at both the packet and flow levels, and then discuss
how they address the problems discussed in Section 2.
3.1 HSTCP
The design of HSTCP proceeded almost in the opposite direction to that of TCP Reno [7]: the system equilibrium at the flow level is designed first, and then the parameters of the packet-level implementation are determined to implement the flow-level equilibrium.
The first design choice decides the relation between window w*_i and end-to-end loss probability p*_i in equilibrium for each source i:

    p*_i = 0.0789 / (w*_i)^{1.1976}        (5)
The second design choice determines how to achieve the equilibrium defined by (5) through packet-level implementation. The (congestion avoidance) algorithm is AIMD, as in TCP Reno, but with parameters a(w_i) and b(w_i) that vary with source i's current window w_i. The pseudocode for window adjustment is:

    Ack:  w ← w + a(w)/w
    Loss: w ← w − b(w)·w
The design of the a(w_i) and b(w_i) functions is as follows. From an argument about the single-flow behavior, this algorithm yields an equilibrium where the following holds [5, 7]:

    [a(w*_i)/b(w*_i)] · (1 − b(w*_i)/2) = p*_i (w*_i)² = 0.0789 (w*_i)^{0.8024}        (6)
where the last equality follows from (5). This motivates the design that, when loss probability p_i and window w_i are not in equilibrium, one chooses a(w_i) and b(w_i) to force the relation (6) "instantaneously":

    [a(w_i)/b(w_i)] · (1 − b(w_i)/2) = 0.0789 w_i^{0.8024}        (7)
Figure 2: Operating points of TCP algorithms ((a) binary signal: oscillatory; (b) multi-bit signal: stabilizable). R: Reno [10], HSTCP [7], STCP [16]; D: DUAL [25]; C: CARD [12]; T: TFRC [6]; F: Vegas [2], FAST [13].
The relation (7) defines a family of a(w_i) and b(w_i) functions. Picking an a(w_i) or b(w_i) function uniquely determines the other function. The next design choice made in [7] is to pick a b(w_i), hence also fixing a(w_i).
The choice of b(w_i) in [7] is, for w_i between 38 and 83,333 packets,

    b(w_i) = −k₁ log_e w_i + k₂        (8)

where k₁ = 0.0520 and k₂ = 0.6892. This fixes a(w_i) to be, from (7),

    a(w_i) = 0.1578 w_i^{0.8024} · b(w_i) / (2 − b(w_i))

where b(w_i) is given by (8). For w_i ≤ 38 packets, a(w_i) = 1 and b(w_i) = 0.5, and HSTCP reduces to TCP Reno. For w_i between 38 and 83,333 packets, b(w_i) varies between 0.1 and 0.5.
The flow-level model of HSTCP can be derived using an argument similar to the one that leads to (2) for TCP Reno:

    ẇ_i(t) = a(w_i(t))/T_i(t) − [2 b(w_i(t)) / (2 − b(w_i(t)))] x_i(t) p_i(t) w_i(t)

           = [2 b(w_i(t)) / (T_i(t) (2 − b(w_i(t))))] · ( [a(w_i(t))/b(w_i(t))] (1 − b(w_i(t))/2) − p_i(t) w_i(t)² )

Using (7) to replace the first term in the parentheses, we have

    ẇ_i(t) = [2 b(w_i(t)) / (T_i(t) (2 − b(w_i(t))))] · ( 0.0789 w_i(t)^{0.8024} − p_i(t) w_i(t)² )        (9)
In summary, the model of HSTCP is given by (5), (9), and (8).
Remarks
1. The equilibrium relation (5) implies that, on average, there are

       p*_i w*_i = 0.0789 / (w*_i)^{0.1976}

   packet losses in a window; in particular, the average number of packet losses per round-trip time decreases with the equilibrium window, but more slowly than for TCP Reno.
2. Algorithm (9) can also be expressed in the form of (4) with the gain and marginal utility functions:

       κ_i(w_i, T_i) = 0.1578 b(w_i) w_i^{0.8024} / ((2 − b(w_i)) T_i)

       u_i(w_i, T_i) = 0.0789 / w_i^{1.1976}
3.2 Scalable TCP
The (congestion avoidance) algorithm of Scalable TCP is MIMD [16]:

    Ack:  w ← w + a
    Loss: w ← w − b·w

for some constants 0 < a, b < 1. Note that in each round-trip time without packet loss, the window increases multiplicatively by a factor of (1 + a). The recommended values in [16] are a = 0.01 and b = 0.125.
As for HSTCP, the flow model of Scalable TCP is

    ẇ_i(t) = a w_i(t)/T_i(t) − [2b/(2 − b)] x_i(t) p_i(t) w_i(t)

where x_i(t) := w_i(t)/T_i(t). In equilibrium, we have

    p*_i w*_i = (a/b)(1 − b/2) =: ρ        (10)

This implies that, on average, there are ρ loss events per round-trip time, independent of the equilibrium window size.
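With the recommended constants, (10) gives a concrete per-RTT loss-event rate, and the per-ack rule implies roughly 1% multiplicative window growth per loss-free RTT; a quick sketch (the window of 1,000 packets is an arbitrary example):

```python
# Scalable TCP constants recommended in [16].
a, b = 0.01, 0.125

# Equilibrium loss events per RTT, per (10).
rho = (a / b) * (1 - b / 2)
print(rho)                 # ~0.075, independent of the window size

# Growth in one loss-free RTT: w ACKs, each adding a packets.
w = 1000.0
w_next = w + a * w
print(w_next / w)          # growth factor ~1.01 per RTT
```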
We can rewrite the flow model above in the form of (4) with the gain and marginal utility functions:

    κ_i(w_i, T_i) = a w_i / T_i

    u_i(w_i, T_i) = ρ / w_i
3.3 Disadvantages
The increment and decrement functions of HSTCP and
STCP are plotted in Figure 1(a). Both upper bound those
of Reno: they increment more aggressively and decrement
less drastically. At the flow level, this means that, in
equilibrium, both HSTCP and STCP can tolerate larger
loss probabilities than TCP Reno (compare (5) and (10)
with (3)). This alleviates the first two problems listed
in Section 1. It does not, however, solve the dynamic
problems at the packet and the flow levels.
At the packet level, as mentioned above, a natural way to avoid oscillation is to use a multi-bit, as opposed to binary, congestion signal.³ Without explicit feedback, this means adjusting the window based either on loss probability, as in [6], or on queueing delay, as in [13].⁴ Queueing
delay can be more accurately estimated than loss proba-
bility both because packet losses in networks with large
bandwidth-delay product are rare events (probability on
the order of 10⁻⁸
or smaller), and because loss samples pro-
vide coarser information than queueing delay samples. In-
deed, each measurement of packet loss (whether a packet
is lost) provides one bit of information for the filtering
of noise, whereas each measurement of queueing delay
provides multi-bit information. This allows an equation-
based implementation to stabilize a network in a steady
state with a target fairness and high utilization.
At the flow level, the dynamics of the feedback system
must be stable in the presence of delay, as the network
capacity increases. Here, again, queueing delay has an ad-
vantage over loss probability as a congestion measure: the
dynamics of queueing delay seems to have the right scal-
ing with respect to network capacity. This helps maintain
stability as network speed grows [22, 4, 23].
4 Delay-based approach
As shown above, the congestion windows in Reno, HSTCP and STCP all evolve according to:

    ẇ_i(t) = κ_i(t) · ( 1 − p_i(t)/u_i(t) )        (11)

where κ_i(t) := κ_i(w_i(t), T_i(t)) and u_i(t) := u_i(w_i(t), T_i(t)). Moreover, the dynamics of FAST TCP also takes the same form; see [13] for details. They differ only in the choice of the gain function κ_i(w_i, T_i), the marginal utility function u_i(w_i, T_i), and the end-to-end congestion measure p_i, as shown in the following table:
            κ_i(w_i, T_i)                                  u_i(w_i, T_i)      p_i
    Reno    1/T_i                                          1.5/w_i²           loss probability
    HSTCP   0.16 b(w_i) w_i^{0.80} / ((2 − b(w_i)) T_i)    0.08/w_i^{1.20}    loss probability
    STCP    a w_i / T_i                                    ρ/w_i              loss probability
    FAST    γ α_i                                          α_i/x_i            queueing delay
³ Another option is to enlarge the equilibrium point to a set, as TCP Vegas does by using α < β. This however makes fairness hard to control; see [1].
⁴ TCP Vegas is also delay-based, but the size of its window adjustment does not depend on queueing delay. This is not important at low speed but critical at high speed.
This common model (11) can be interpreted as follows: the goal at the flow level is to equalize the marginal utility u_i(t) with the end-to-end measure of congestion, p_i(t).
This interpretation immediately suggests an equation-based packet-level implementation where both the direction and size of the window adjustment ẇ_i(t) are based on the difference between the ratio p_i(t)/u_i(t) and the target of 1. Unlike the approach taken by Reno, HSTCP, and STCP, this approach requires the explicit estimation of the end-to-end congestion measure p_i(t). The design decision then reduces to the following two questions:
• What congestion measure p_i(t) to use? As argued in Section 3.3, in the absence of explicit feedback, queueing delay seems the only viable choice of congestion measure as network capacity increases.

• How to choose κ_i(w_i, T_i) and u_i(w_i, T_i)? At the flow level, the goal is to design a class of function pairs, u_i(w_i, T_i) and κ_i(w_i, T_i), so that the feedback system described by (11), together with link dynamics and their interconnection, has an equilibrium that is fair and efficient, and so that the equilibrium is stable in the presence of feedback delay. An example of such a design is described in [13].
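The shared structure (11) can be stated as a single update rule, with each protocol contributing only its (κ_i, u_i) pair; the Reno and STCP pairs below are taken from the table above, with illustrative parameter values.

```python
# Common flow-level dynamics (11): dw/dt = kappa * (1 - p / u).

def wdot(w, T, p, kappa, u):
    return kappa(w, T) * (1.0 - p / u(w, T))

# Reno: kappa = 1/T, u = 1.5/w^2 (from the table).
reno = (lambda w, T: 1.0 / T, lambda w, T: 1.5 / w ** 2)

# STCP: kappa = a*w/T, u = rho/w, with a and rho as in Section 3.2.
a, rho = 0.01, 0.075
stcp = (lambda w, T: a * w / T, lambda w, T: rho / w)

T, w = 0.1, 100.0
# When the congestion measure equals the marginal utility (p = u),
# the window stops moving: the equilibrium condition of (11).
print(wdot(w, T, 1.5 / w ** 2, *reno))   # 0.0
print(wdot(w, T, rho / w, *stcp))        # 0.0
# Below equilibrium (p < u) the window grows; above, it shrinks.
print(wdot(w, T, 0.5 * 1.5 / w ** 2, *reno) > 0)   # True
```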
This approach, with proper choice of flow and packet level designs, can address the four difficulties of Reno at large windows. First, by explicitly estimating how far the current state p_i(t)/u_i(t) is from the equilibrium value of 1, the scheme can drive the system rapidly, yet in a fair and stable manner, toward the equilibrium. The window adjustment is small when the current state is close to equilibrium and large otherwise, independent of where the equilibrium is, as illustrated in Figure 1(b). This is in stark contrast to the approach taken by Reno, HSTCP, and STCP, where window adjustment depends on just the current window size and is independent of where the current state is with respect to the target (compare Figures 1(a) and (b)). Like the equation-based scheme in [6], this approach avoids the problem of slow increase and drastic decrease in Reno, as the network scales up.
Second, by choosing a multi-bit congestion measure,
this approach eliminates the packet-level oscillation due
to binary feedback, avoiding Reno’s third problem.
Third, using queueing delay as the congestion measure p_i(t) allows the network to stabilize in the region below the overflowing point, around point F in Figure 2(b), when the queue size is sufficiently large. Stabilization at this operating point eliminates large queueing delay and unnecessary packet loss. More importantly, it makes room for buffering "mice" traffic. To avoid the second problem in Reno, where the required equilibrium congestion measure (loss probability for Reno, and queueing delay here) is too small to practically estimate, the algorithm must adapt its parameter with capacity to maintain small but sufficient queueing delay.
Finally, to avoid the fourth problem of Reno, the win-
dow control algorithm must be stable, in addition to being
fair and efficient, at the flow level. The use of queueing
delay as a congestion measure facilitates the design as
queueing delay naturally scales with capacity [22, 4, 23].
References
[1] C. Boutremans and J. Y. Le Boudec. A note on the fair-
ness of TCP Vegas. In
Proceedings of International Zurich
Seminar on Broadband Communications
, pages 163–170,
February 2000.
[2] Lawrence S. Brakmo and Larry L. Peterson. TCP Ve-
gas: end-to-end congestion avoidance on a global Inter-
net.
IEEE Journal on Selected Areas in Communications
,
13(8):1465–80, October 1995.
http://cs.princeton.
edu/nsg/papers/jsac-vegas.ps
.
[3] C. Casetti, M. Gerla, S. Mascolo, M. Sansadidi, and
R. Wang. TCP Westwood: end-to-end congestion control
for wired/wireless networks.
Wireless Networks Journal
,
8:467–479, 2002.
[4] Hyojeong Choe and Steven H. Low. Stabilized Vegas.
In
Proc. of IEEE Infocom
, April 2003.
http://netlab.
caltech.edu
.
[5] S. Floyd, M. Handley, J. Padhye, and J. Widmer. A com-
parison of equation-based and AIMD congestion control.
Preprint,
http://www.aciri.org/tfrc/
, May 2000.
[6] S. Floyd, M. Handley, J. Padhye, and J. Widmer.
Equation-based congestion control for unicast applica-
tions. In
Proc. ACM SIGCOMM’00
, September 2000.
[7] Sally Floyd. HighSpeed TCP for large congestion win-
dows. Internet draft draft-floyd-tcp-highspeed-02.txt,
work in progress,
http://www.icir.org/floyd/hstcp.
html
, February 2003.
[8] Janey Hoe. Improving the startup behavior of a congestion control scheme for TCP. In
ACM Sigcomm’96
, Au-
gust 1996.
http://www.acm.org/sigcomm/sigcomm96/
program.html
.
[9] C.V. Hollot, V. Misra, D. Towsley, and W.B. Gong. Anal-
ysis and design of controllers for AQM routers supporting
TCP flows.
IEEE Transactions on Automatic Control
,
47(6):945–959, 2002.
[10] V. Jacobson. Congestion avoidance and control.
Proceed-
ings of SIGCOMM’88, ACM
, August 1988. An updated
version is available via
ftp://ftp.ee.lbl.gov/papers/
congavoid.ps.Z
.
[11] V. Jacobson, R. Braden, and D. Borman. TCP extensions
for high performance. RFC 1323,
ftp://ftp.isi.edu/
in-notes/rfc1323.txt
, May 1992.
[12] Raj Jain. A delay-based approach for congestion avoid-
ance in interconnected heterogeneous computer networks.
ACM Computer Communication Review
, 19(5):56–71,
Oct. 1989.
[13] Cheng Jin, David X. Wei, and Steven H. Low. TCP
FAST: motivation, architecture, algorithms, perfor-
mance. submitted for publication,
netlab.caltech.
edu/
, July 2003.
[14] D. Katabi, M. Handley, and C. Rohrs. Congestion con-
trol for high-bandwidth delay product networks. In
Proc.
ACM Sigcomm
, August 2002.
[15] Frank P. Kelly. Mathematical modelling of the Inter-
net. In B. Engquist and W. Schmid, editors,
Mathematics
Unlimited - 2001 and Beyond
, pages 685–702. Springer-
Verlag, Berlin, 2001.
[16] Tom Kelly.
Scalable TCP: Improving performance
in highspeed wide area networks.
Submitted for
publication,
http://www-lce.eng.cam.ac.uk/~ctk21/
scalable/
, December 2002.
[17] T. V. Lakshman and Upamanyu Madhow.
The
performance of TCP/IP for networks with high
bandwidth–delay products and random loss.
IEEE/ACM
Transactions on Networking
,
5(3):336–350,
June
1997.
http://www.ece.ucsb.edu/Faculty/Madhow/
Publications/ton97.ps
.
[18] S. H. Low, F. Paganini, J. Wang, and J. C. Doyle.
Linear stability of TCP/RED and a scalable control.
Computer Networks Journal, to appear, 2003. http://netlab.caltech.edu.
[19] Jim Martin, Arne Nilsson, and Injong Rhee. Delay-based
congestion avoidance for TCP.
IEEE/ACM Trans. on
Networking
, 11(3):356–369, June 2003.
[20] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP
Selective Acknowledgment Options. RFC 2018,
ftp://
ftp.isi.edu/in-notes/rfc2018.txt
, October 1996.
[21] Matthew Mathis, Jeffrey Semke, Jamshid Mahdavi, and
Teunis Ott. The macroscopic behavior of the TCP con-
gestion avoidance algorithm.
ACM Computer Communi-
cation Review
, 27(3), July 1997.
http://www.psc.edu/
networking/papers/model_ccr97.ps
.
[22] Fernando Paganini, John C. Doyle, and Steven H. Low.
Scalable laws for stable network congestion control. In
Proceedings of Conference on Decision and Control
, De-
cember 2001.
http://www.ee.ucla.edu/~paganini
.
[23] Fernando Paganini, Zhikui Wang, Steven H. Low, and
John C. Doyle. A new TCP/AQM for stability and per-
formance in fast networks. In
Proc. of IEEE Infocom
,
April 2003.
http://www.ee.ucla.edu/~paganini
.
[24] R. Shorten, D. Leith, J. Foy, and R. Kilduff. Analysis and
design of congestion control in synchronised communica-
tion networks. In
Proc. of 12th Yale Workshop on Adap-
tive and Learning Systems
, May 2003.
www.hamilton.
ie/doug_leith.htm
.
[25] Z. Wang and J. Crowcroft. Eliminating periodic packet
losses in the 4.3-Tahoe BSD TCP congestion control al-
gorithm.
ACM Computer Communications Review
, April
1992.