Chapter 3
Applications,
Opportunities, and
Challenges
In this chapter, we consider some of the opportunities and challenges for control
in different application areas. The Panel decided to organize the treatment of
applications around five main areas to identify the overarching themes that would
guide its recommendations. These are:
Aerospace and transportation
Information and networks
Robotics and intelligent machines
Biology and medicine
Materials and processing
In addition, several other areas arose over the course of the Panel’s deliberations,
including environmental science and engineering, economics and finance, and
molecular and quantum systems. Taken together, these represent an enormous col-
lection of applications and demonstrate the breadth of applicability of ideas from
control.
The opportunities and challenges in each of these application areas form the
basis for the major recommendations in this report. In each area, we have sought
the advice and insights not only of control researchers, but also experts in the
application domains who might not consider themselves to be control researchers.
In this way, we hoped to identify the true challenges in each area, rather than
simply identifying interesting control problems that may not have a substantial
opportunity for impact. We hope that the findings will be of interest not only to
control researchers, but also to scientists and engineers seeking to understand how
control tools might be applied to their discipline.
There were several overarching themes that arose across all of the areas con-
sidered by the Panel. The use of systematic and rigorous tools is considered critical
to future success and is an important trademark of the field. At the same time, the
next generation of problems will require a paradigm shift in control research and
education. The increased information available across all application areas requires
more integration with ideas from computer science and communications, as well as
improved tools for modeling, analysis, and synthesis for complex decision systems
that contain a mixture of symbolic and continuous dynamics. The need to continue
research in the theoretical foundations that will underlie future advances was also
common across all of the applications.
In each section that follows we give a brief description of the background and
history of control in that domain, followed by a selected set of topics which are used
to explore the future potential for control and the technical challenges that must be
addressed. As in the rest of the report, we do not attempt to be comprehensive in
our choice of topics, but rather highlight some of the areas where we see the greatest
potential for impact. Throughout these sections, we have limited the references to
those that provide historical context, future directions, or broad overviews in the
topic area, rather than specific technical contributions (which are too numerous to
properly document).
3.1 Aerospace and Transportation
Men already know how to construct wings or airplanes, which when driven through
the air at sufficient speed, will not only sustain the weight of the wings themselves,
but also that of the engine, and of the engineer as well. Men also know how to
build engines and screws of sufficient lightness and power to drive these planes at
sustaining speed ... Inability to balance and steer still confronts students of the flying
problem. ... When this one feature has been worked out, the age of flying will have
arrived, for all other difficulties are of minor importance.
Wilbur Wright, lecturing to the Western Society of Engineers in 1901 [30].
Aerospace and transportation encompasses a collection of critically important
application areas where control is a key enabling technology. These application areas
represent a very large part of the modern world’s overall technological capability.
They are also a major part of its economic strength, and they contribute greatly to
the well being of its people. The historical role of control in these application areas,
the current challenges in these areas, and the projected future needs all strongly
support the recommendations of this report.
The Historical Role
In aerospace, specifically, control has been a key technological capability tracing
back to the very beginning of the 20th Century. Indeed, the Wright brothers are
correctly famous not simply for demonstrating powered flight—they actually
demonstrated controlled powered flight. Their early Wright Flyer incorporated moving
control surfaces (vertical fins and canards) and warpable wings that allowed the
pilot to regulate the aircraft’s flight. In fact, the aircraft itself was not stable, so
continuous pilot corrections were mandatory. This early example of controlled flight
is followed by a fascinating success story of continuous improvements in flight con-
trol technology, culminating in the very high performance, highly reliable automatic
flight control systems we see on modern commercial and military aircraft today (see
Fighter Aircraft and Missiles Vignette, page 17).
Similar success stories for control technology occurred in many other aerospace
application areas. Early World War II bombsights and fire control servo systems
have evolved into today’s highly accurate radar guided guns and precision guided
weapons. Early failure-prone space missions have evolved into routine launch oper-
ations, manned landings on the moon, permanently manned space stations, robotic
vehicles roving Mars, orbiting vehicles at the outer planets, and a host of commer-
cial and military satellites serving various surveillance, communication, navigation
and earth observation needs.
Similarly, control technology has played a key role in the continuing improve-
ment and evolution of transportation—in our cars, highways, trains, ships and air
transportation systems. Control’s contribution to the dramatic increases of safety,
reliability and fuel economy of the automobile is particularly noteworthy. Cars
have advanced from manually tuned mechanical/pneumatic technology to computer
controlled operation of all major functions including fuel injection, emission con-
trol, cruise control, braking, cabin comfort, etc. Indeed, modern automobiles carry
dozens of individual processors to ensure that these functions are performed
accurately and reliably over long periods of time and in very tough environments. A
historical perspective of these advances in automotive applications is provided in
the following vignette.
Vignette: Emissions Requirements and Electronic Controls for Automo-
tive Systems (Mark Barron and William Powers, Ford Motor Company)
One of the major success stories for electronic controls is the development of sophis-
ticated engine controls for reducing emissions and improving efficiency. Mark Barron
and Bill Powers described some of these advances in an article written in 1996 for the
inaugural issue of the IEEE/ASME Transactions on Mechatronics [6].
In their article, Barron and Powers describe the environment that led up to the intro-
duction of electronic controls in automobile engines:
Except for manufacturing technology, the automobile was relatively benign
with respect to technology until the late 1960s. Then two crises hit the
automotive industry. The first was the environmental crisis. The environ-
mental problems led to regulations which required a reduction in automotive
emissions by roughly an order of magnitude. The second crisis was the oil
embargo in the early 1970s which created fuel shortages, and which led to
legislation in the U.S. requiring a doubling of fuel economy. ...
Requirements for improved fuel efficiency and lower emissions demanded
that new approaches for controlling the engine be investigated. While today
we take for granted the capabilities which have been made possible by
the microprocessor, one must remember that the microprocessor wasn’t
invented until the early 1970s. When the first prototype of a computerized
engine control system was developed in 1970, it utilized a minicomputer
that filled the trunk of a car. But then the microprocessor was invented in
1971, and by 1975 engine control had been reduced to the size of a battery
and by 1977 to the size of a cigar box.
These advances in hardware allowed sophisticated control laws that could deal with the
complexities of maintaining low emissions and high fuel economy:
The introduction in the late 1970s of the platinum catalytic converter was
instrumental in reducing emissions to meet regulations. The catalytic con-
verter is an impressive passive device which operates very effectively under
certain conditions. One of the duties of the engine control system is to
maintain those conditions by patterning the exhaust gases such that there
are neither too many hydrocarbons nor too much oxygen entering the cata-
lyst. If the ratio of air to fuel entering the engine is kept within a very tight
range (i.e., a few percent) the catalyst can be over 90% efficient in remov-
ing hydrocarbons, carbon monoxide, and oxides of nitrogen. However, the
catalyst isn’t effective until it has reached a stable operating temperature
greater than 600°F (315°C), and a rule of thumb is that 80% of emissions
which are generated under federal test procedures occur during the first two
minutes of operation while the catalyst is warming to its peak efficiency op-
erating temperature. On the other hand if the catalyst is operated for an
extended period of time much above 1000°F (540°C) it will be destroyed.
Excess fuel can be used to cool the catalyst, but the penalty is that fuel
economy gets penalized. So the mechatronic system must not only control
air-fuel ratios so as to maintain the catalyst at its optimum operating point,
it must control the engine exhaust so that there is rapid lightoff of the cat-
alyst without overheating, while simultaneously maintaining maximum fuel
efficiency.
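The tight air-fuel regulation described in the passage above can be sketched as a small feedback loop. Everything below is illustrative: the first-order mixture model, the time constant, and the PI gains are hypothetical stand-ins, not an actual engine controller.

```python
# Illustrative sketch of air-fuel ratio (AFR) regulation around the
# stoichiometric point. The plant model, time constant, and PI gains
# are hypothetical; a real engine controller is far more elaborate.

STOICH_AFR = 14.7  # air-fuel ratio at which the catalyst is most efficient

def simulate(steps=300, dt=0.01, tau=0.1):
    afr = 13.0          # start with a rich mixture (too much fuel)
    trim = 0.0          # fractional fuel correction commanded by the loop
    integral = 0.0
    kp, ki = 0.05, 0.8  # illustrative PI gains
    for _ in range(steps):
        error = STOICH_AFR - afr              # positive while mixture is rich
        integral += error * dt
        trim = -(kp * error + ki * integral)  # rich mixture -> cut fuel
        # first-order mixture response to the trimmed fuel command
        afr += dt * (13.0 / (1.0 + trim) - afr) / tau
    return afr
```

The integral term drives the steady-state error to zero, which is what lets such a loop hold the ratio within the few-percent window the catalyst needs.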
The success of control in meeting these challenges is evident in the reduction of emissions
that has been achieved over the last 30 years [37]:
US, European and Japanese emission standards continue to require signif-
icant reductions in vehicle emissions. Looking closely at US passenger car
emission standards, the 2005 level of hydrocarbon (HC) emissions is less
than 2% of the 1970 allowance. By 2005, carbon monoxide (CO) will be
only 10% of the 1970 level, while the permitted level for oxides of nitrogen
will be down to 7% of the 1970 level.
Furthermore, the experience gained in engine control provided a path for using electronic
controls in many other applications [6]:
Once the industry developed confidence in on-board computer control, other
applications rapidly followed. Antilock brake systems, computer controlled
suspension, steering systems and air bag passive restraint systems are ex-
amples. The customer can see or feel these systems, or at least discern
that they are on the vehicle, whereas the engine control system is not an
application which is easily discernible by the customer. Computers are now
being embedded in every major function of the vehicle, and we are seeing
combinations of two or more of these control systems to provide new func-
tions. An example is the blending of the engine and antilock brake system
to provide a traction control system, which controls performance of the
vehicle during acceleration whereas antilock brakes control performance of
the vehicle during deceleration.
An important consequence of the use of control in automobiles was its suc-
cess in demonstrating that control provided safe and reliable operation. The cruise
control option introduced in the late 1950s was one of the first servo systems receiv-
ing very broad public exposure. Our society’s inherent trust in control technology
traces back to the success of such early control systems.
Certainly, each of these successes owes its debt to improvements in many
technologies, e.g. propulsion, materials, electronics, computers, sensors, navigation
instruments, etc. However, they also depend in no small part on the continuous
(a)
(b)
Figure 3.1.
(a) The F-18 aircraft, one of the first production military
fighters to use “fly-by-wire” technology, and (b) the X-45 (UCAV) unmanned aerial
vehicle. Photographs courtesy of NASA Dryden Flight Research Center.
improvements that have occurred over the century in the theory, analysis methods
and design tools of control. As an example, “old timers” in the flight control engi-
neering community still tell the story that early control systems (circa World War
II) were designed by manually tuning feedback gains in flight—in essence, trial-
and-error design performed on the actual aircraft. Dynamic modeling methods for
aircraft were in their infancy at that time, and formal frequency-domain design
theories to stabilize and shape single-input single-output feedback loops were still
only subjects of academic study. Their incorporation into engineering practice rev-
olutionized the field, enabling successful feedback systems designed for ever more
complex applications, consistently, with minimal trial-and-error, and with reason-
able total engineering effort.
Of course, the formal modeling, analysis and control system design meth-
ods described above have advanced dramatically since mid-century. As a result
of significant R&D activities over the last fifty years, the state of the art today
allows controllers to be designed for much more than single-input single-output sys-
tems. The theory and tools handle many inputs, many outputs, complex uncertain
dynamic behavior, difficult disturbance environments, and ambitious performance
goals. In modern aircraft and transportation vehicles, dozens of feedback loops are
not uncommon, and in process control the number of loops reaches well into the
hundreds. Our ability to design and operate such systems consistently, reliably,
and cost effectively rests in large part on the investments and accomplishments of
control over the latter half of the century.
Current Challenges and Future Needs
Still, the control needs of some engineered systems today and those of many in the
future outstrip the power of current tools and theories. This is so because current
tools and theories apply most directly to problems whose dynamic behaviors are
smooth and continuous, governed by underlying laws of physics and represented
mathematically by (usually large) systems of differential equations. Most of the
generality and the rigorously provable features of existing methods can be traced
to this nature of the underlying dynamics.
Many new control design problems no longer satisfy these underlying char-
acteristics, at least in part. Design problems have grown from so-called “inner
loops” in a control hierarchy (e.g. regulating a specified flight parameter) to various
“outer loop” functions which provide logical regulation of operating modes, vehicle
configurations, payload configurations, health status, etc. [3]. For aircraft, these
functions are collectively called “vehicle management.” They have historically been
performed by pilots or other human operators and have thus fallen on the other
side of the man-machine boundary between humans and automation. Today, that
boundary is moving!
There are compelling reasons for the boundary to move. They include eco-
nomics (two, one or no crew members in the cockpit versus three), safety (no opera-
tors exposed to dangerous or hostile environments), and performance (no operator-
imposed maneuver limits). A current example of these factors in action is the
growing trend in all branches of the military services to field unmanned vehicles.
Certain benign uses of such vehicles are already commonplace (e.g. reconnaissance
and surveillance), while other more lethal ones are in serious development (e.g.
combat UAVs for suppression of enemy air defenses) [29]. Control design efforts
for such applications must necessarily tackle the entire problem, including the tra-
ditional inner loops, the vehicle management functions, and even the higher-level
“mission management” functions coordinating groups of vehicles intent on satisfying
specified mission objectives.
Today’s engineering methods for designing the upper layers of this hierarchy
are far from formal and systematic. In essence, they consist of collecting long lists
of logical if-then-else rules from experts, programming these rules, and simulating
their execution in operating environments. Because the logical rules provide no
inherent smoothness (any state transition is possible), only simulation can be used
for evaluation, and only exhaustive simulation can guarantee good design proper-
ties. Clearly, this is an unacceptable circumstance—one where the strong system-
theoretic background and the tradition of rigor held by the control community can
make substantial contributions.
One can speculate about the forms that improved theories and tools for non-
smooth (hybrid) dynamical systems might take. For example, it may be possible to
impose formal restrictions on permitted logical operations, to play a regularizing role
comparable to laws of physics. If rigorously obeyed, these restrictions could make
resulting systems amenable to formal analyses and proofs of desired properties.
This approach is similar to computer language design, and provides support for
one of the recommendations of this report, namely that the control and computer
science disciplines need to deepen their interactions. It is also likely that the
traditional standards of formal rigor must expand to firmly embrace computation,
algorithmic solutions, and heuristics.
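As a toy illustration of this idea, consider a mode supervisor whose transitions are confined to an explicit whitelist; safety questions then reduce to exhaustive checks over a finite graph rather than open-ended simulation. The mode names and transition relation here are invented for illustration only.

```python
# Toy mode supervisor: transitions are restricted to an explicit relation,
# so properties can be verified exhaustively rather than by simulation.
# Modes and transitions are invented for illustration.

TRANSITIONS = {
    "taxi":    {"takeoff"},
    "takeoff": {"climb", "abort"},
    "climb":   {"cruise"},
    "cruise":  {"descend"},
    "descend": {"land"},
    "abort":   {"land"},
    "land":    {"taxi"},
}

def reachable(start):
    """Enumerate every mode reachable from `start` (finite, so it terminates)."""
    seen, frontier = {start}, [start]
    while frontier:
        for nxt in TRANSITIONS[frontier.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Because arbitrary state transitions are *not* possible, guarantees become
# checkable set properties, e.g. cruise can never jump directly to land:
assert "land" not in TRANSITIONS["cruise"]
```

This mirrors the computer-language analogy in the text: the restriction itself is what makes the proof tractable.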
However, one must never lose sight of the key distinguishing features of the
control discipline, including the need for hard real time execution of control laws and
Figure 3.2.
Battle space management scenario illustrating distributed com-
mand and control between heterogeneous air and ground assets. Figure courtesy of
DARPA.
the need for ultra-reliable operation of all hardware and software control compo-
nents. Many controlled systems today (auto-land systems of commercial transports,
launch boosters, F-16 and B-2 aircraft, certain power plants, certain chemical pro-
cess plants, etc.) fail catastrophically in the event of control hardware failures, and
many future systems, including the unmanned vehicles mentioned above, share this
property. But the future of aerospace and transportation holds still more complex
challenges. We noted above that changes in the underlying dynamics of control
design problems from continuous to hybrid are well under way. An even more dra-
matic trend on the horizon is a change in dynamics to large collections of distributed
entities with local computation, global communication connections, very little reg-
ularity imposed by laws of physics, and no possibility to impose centralized control
actions. Examples of this trend include the national airspace management problem,
automated highway and traffic management, and command and control for future
battlefields (Figure 3.2).
The national airspace problem is particularly significant today, with eventual
gridlock and congestion threatening the integrity of the existing air transportation
system. Even with today’s traffic, ground holds and airborne delays in flights due
to congestion in the skies have become so common that airlines automatically pad
their flight times with built-in delays. The structure of the air traffic control (ATC)
system is partially blamed for these delays: the control is distributed from airspace
region to airspace region, yet within a region the control is almost wholly centralized,
with sensory information from aircraft sent to a human air traffic controller who
uses ground-based navigation and surveillance equipment to manually route aircraft
along sets of well-traveled routes. In today’s system, bad weather, aircraft failure,
and runway or airport closure have repercussions throughout the whole country.
Efforts are now being made to improve the current system by developing cockpit
“sensors” such as augmented GPS navigation systems and datalinks for aircraft
to aircraft communication. Along with these new technologies, new hierarchical
control methodologies are being proposed, which automate some of the functionality
of ATC. This opens up a set of new challenges: the design of information-sharing
mechanisms and new, distributed, verified embedded control schemes for separation
assurance between aircraft, and the design of dynamic air traffic network topologies
which aid in the safe routing of aircraft from origin to destination and which adapt
to different traffic flows, are two areas which provide a tremendous opportunity to
researchers in the control community.
Finally, it is important to observe that the future also holds many applications
that fall under the traditional control design paradigm, yet are worthy of research
support because of their great impact. Conventional “inner loops” in automobiles,
but for non-conventional power plants, are examples. Hybrid cars combining elec-
trical drives with low-power internal combustion engines and fuel cell powered cars
combining electrical drives with fuel cell generation both depend heavily on
well-designed control systems to operate efficiently and reliably. Similarly, increased
automation of traditional transportation systems such as ships and railroad cars,
with added instrumentation and cargo-tracking systems will rely on advanced con-
trol and schedule optimization to achieve maximum economic impact. Another
conventional area is general aviation, where control systems to make small aircraft
easy and safe to fly and increased automation to manage them are essential needs.
Other Trends in Aerospace and Transportation
In addition to the specific areas highlighted above, there are many other trends
in aerospace and transportation that will benefit from and inform new results in
control. We briefly describe a few of these here.
Automotive Systems
With 60 million vehicles produced each year, automotive
systems are a major application area for control. Emission control regulations
passed in the 1970s created a need for more sophisticated engine control systems that
could provide clean and efficient operation in a variety of operating environments
and over the lifetime of the car. The development of the microprocessor at that same
time allowed the implementation of sophisticated algorithms that have reduced the
emissions in automobiles by as much as a factor of 50 from their 1970 levels.
Future automobile designs will rely even more heavily on electronic con-
trols [37]. Figure 3.3 shows some of the components that are being considered
for next generation vehicles. Many of these components will build on the use of
control techniques, including radar-based speed and spacing control systems, chassis
control technologies for stability enhancement and improved suspension characteris-
[Figure: labeled components include brake actuators and wheel speed sensors;
steering actuator with position and effort sensors; driver controls and displays;
transmission gear selector; video camera; active belt pretensioners; suspension
control for damping and height; supplemental inflatable restraints; data bus;
radar; engine spark control; throttle actuator; inertial sensors for
rotational/angular, lateral, and longitudinal acceleration; GPS receiver and
antenna; individual wheel brake actuators; digital radio communications antenna;
map database; and the control computer and interface.]
Figure 3.3.
Major future components for basic automotive vehicle functions [37].
tics, active control of suspension and braking, and active restraint systems for safety.
In addition, more sophisticated use of networking and communications devices will
allow enhanced energy management between components and vehicle diagnostics
with owner/dealer notification.
These new features will require highly integrated control systems that combine
multiple components to provide overall stability and performance. Systems such as
chassis control will require combining steering, braking, powertrain and suspension
subsystems, along with adding new sensors. One can also imagine increased in-
teraction between vehicles and the roadway infrastructure, as automated highways
and self-controlled vehicles move from the research lab into applications. These lat-
ter applications are particularly challenging since they begin to link heterogeneous
vehicles through communications systems that will experience varying bandwidths
and latency (time delays) depending on the local environment. Providing safe, re-
liable, and comfortable operation for such systems is a major challenge for control
and one that will have application in a variety of consumer, industrial, and military
applications.
Aircraft Propulsion Systems
Much more effective use of information in propul-
sion systems is possible as the price/performance ratio of computation and sensing
continues to drop. Intelligent turbine engines will ultimately lower lifetime operat-
ing and maintenance costs, similar to current and upcoming automotive systems.
They will provide advanced health, performance, and life management by embed-
ding models of their operation and optimizing based on condition and mission. They
will be more flexible and more tolerant of component faults, and will integrate into
the owner’s asset management system, lowering maintenance and fleet management
costs by making engine condition information available to the owner on demand
and ensuring predictable asset availability.
Detection of damage (diagnostics) and prediction of the implications (prog-
nostics) are the heart of an intelligent engine. Detailed modeling of the thermofluid,
structural, and mechanical systems, as well as the operational environment, is
needed for such assessments. To allow on-product use accounting for system in-
teractions, physics-based models will be constructed using advanced techniques in
reduced-order modeling. This approach significantly extends recent engine compo-
nent modeling.
Embedded models can also be used for online optimization and control in
real time. The benefit is the ability to customize engine performance to changes
in operating conditions and the engine’s environment through updates in the cost
function, onboard model, and constraint set. Many of the challenges of designing
controllers that are robust to a large set of uncertainties can thus be embedded in
the online optimization, and robustness through a compromise design is replaced
by always-optimal performance.
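A minimal sketch of this embedded-model idea follows, with a scalar stand-in for the engine and a brute-force search standing in for the onboard optimizer. The model, cost weights, horizon, and input grid are all assumptions made for illustration.

```python
# Receding-horizon sketch of "embedded model + online optimization".
# The scalar model x+ = 0.9x + 0.5u, the cost weights, and the input
# grid are illustrative stand-ins, not an actual engine controller.

def predict(x, u, a=0.9, b=0.5, n=10):
    """Roll the embedded model forward n steps under a constant input u."""
    traj = []
    for _ in range(n):
        x = a * x + b * u
        traj.append(x)
    return traj

def best_input(x, target, candidates, effort_weight=0.01):
    """Pick the input minimizing predicted tracking error plus control effort."""
    def cost(u):
        return sum((xk - target) ** 2 for xk in predict(x, u)) + effort_weight * u * u
    return min(candidates, key=cost)

def run(target=1.0, steps=30):
    x = 0.0
    grid = [i / 10 for i in range(-20, 21)]  # candidate inputs in [-2.0, 2.0]
    for _ in range(steps):
        u = best_input(x, target, grid)      # optimize over the embedded model
        x = 0.9 * x + 0.5 * u                # apply to the plant (matches model here)
        # in practice the cost function, model, and constraints update online here
    return x
```

Re-solving the optimization at every step is what allows the cost function and constraints to change with operating conditions, rather than baking robustness into one fixed compromise design.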
Flow Control
Flow control involves the use of reactive devices for modifying fluid
flow for the purposes of enhanced operability. Sample applications for flow control
include increased lift and reduced drag on aircraft wings, engine nacelles, compressor
fan blades, and helicopter rotor blades; higher performance diffusers in gas turbines,
industrial heaters and chillers, and engine inlets; wake management for reduction of
resonant stress and blade vortex interaction; and enhanced mixing for combustion
and noise applications. A number of devices have been explored in the past several
years for actuation of flow fields. These range from novel air injection mechanisms
for control of rotating stall and separation, to synthetic jets developed for mixing
enhancement and vectoring, to MEMS devices for modulating boundary layers and
flow around stagnation points. In addition, new sensing technology, such as micro
anemometers, is also becoming available.
These changes in sensing and actuation technology are enabling new applica-
tions of control to unstable shear layers and separated flow, thermoacoustic instabil-
ities, and compression system instabilities such as rotating stall and surge (see [10]
for a recent survey). An emerging area of interest is hypersonic flight systems,
where flow control techniques could provide a larger toolbox for design of vehicles,
including drag reduction, novel methods for producing control forces, and better
understanding of the complex physical phenomena at these speeds.
Space Systems¹
The exploitation of space systems for civil, commercial, defense,
scientific, or intelligence purposes gives rise to a unique set of challenges in the
area of control. For example, most space missions cannot be adequately tested on
the ground prior to flight, which has a direct impact on many dynamics and con-
trol problems. A three-pronged approach is required to address these challenging
space system problems: (1) detailed modeling, including improved means of char-
acterizing, at a very small scale, the fundamental physics of the systems; (2) flight
demonstrations to characterize the behavior of representative systems; and (3) de-
sign of navigation and control approaches that maintain performance (disturbance
rejection and tracking) even with uncertainties, failures, and changing dynamics.
There are two significant areas that can revolutionize the achievable perfor-
mance from future space missions: flexible structure analysis and control, and space
vehicle formation flying. These both impact the allowable size of the effective aper-
ture, which influences the “imaging” performance, whether it is optical imaging or
the collection of signals from a wide range of wavelengths. There are fundamental
limitations that prevent further developments with monolithic mirrors (with the
possible exception of inflatable and foldable membranes, which introduce their own
extreme challenges) and the various segmented approaches—deployed arrays, teth-
ered or freeflyer formations—provide the only solution. However, these approaches
introduce challenging problems in characterizing the realistic dynamics and devel-
oping sensing and control schemes to maintain the necessary optical tolerances.
A significant amount of work has been performed in the area of flexible struc-
ture dynamics and control under the auspices of the Strategic Defense Initiative
Organization (SDIO) in the 1970s and 80s. However, at the performance levels
required for future missions (nanometers), much research remains to develop mod-
els at the micro-dynamics level and control techniques that can adapt to system
changes at these small scales.
Similar problems exist with formation control for proposed imaging interferom-
etry missions. These will require the development of control algorithms, actuators,
and computation and communications networks. Sensors will also have to be de-
veloped to measure deflections on the scale of nanometers over distances of
hundreds of meters to kilometers. Likewise, actuation systems of various types must
be developed that can control on the scale of nanometers to microns with very low
noise levels and fine resolution. The biases and residuals generally accepted due to
particular approximations in navigation and control algorithms will no longer be
acceptable. Furthermore, the simulation techniques used for verification must, in
some cases, maintain precision through tens of orders of magnitude separation in
key states and parameters, over both long and short time-scales, and with stochas-
tic noise inputs. In summary, in order to enable the next generations of advanced
space systems, the field must address the micro- and nanoscale problems in analysis,
sensing, control, and simulation, for individual elements and integrated systems.
¹The Panel would like to thank Jonathan How and Jesse Leitner for their contributions to this section.
3.2 Information and Networks
A typical congested gateway looks like a fire hose connected to a soda straw through a
small funnel (the output queue). If, on average, packets arrive faster than they can
leave, the funnel will fill up and eventually overflow. RED [Random Early Detection]
is [a] simple regulator that monitors the level in the funnel and uses it to match the
input rate to the output (by dropping excess traffic). As long as its control law is
monotone non-decreasing and covers the full range of 0 to 100% drop rate, RED
works for any link, any bandwidth, any type of traffic.
Van Jacobson, North American Network Operators’ Group meeting, 1998 [20].
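Jacobson's "funnel regulator" maps directly onto a simple control law. The sketch below shows the classic RED drop-probability curve, a monotone non-decreasing, piecewise-linear function of the averaged queue length; the threshold and probability values are illustrative, not taken from the quoted talk.

```python
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Classic RED control law: drop probability as a monotone
    non-decreasing function of the (averaged) queue length."""
    if avg_queue < min_th:
        return 0.0                      # queue short: drop nothing
    if avg_queue >= max_th:
        return 1.0                      # queue past max threshold: drop all
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

A real RED implementation also maintains the queue average with an exponentially weighted moving average and spaces drops out over time; only the monotone control law itself is shown here.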
The rapid growth of communications networks provides several major oppor-
tunities and challenges for control. Although there is overlap, we can divide these
roughly into two main areas: control of networks and control over networks.
Control of Networks
Control of networks is a large area, spanning many topics, a few of which are
briefly described here. The basic problems in control of networks include controlling
congestion across network links, routing the flow of packets through the network,
caching and updating data at multiple locations, and managing power levels for
wireless networks.
Several features of these control problems make them very challenging. The
dominant feature is the extremely large scale of the system; the Internet is probably
the largest feedback control system man has ever built. Another is the decentralized
nature of the control problem: local decisions must be made quickly, and based only
on local information. Stability is complicated by the presence of varying time lags,
as information about the network state can only be observed or relayed to controllers
after a time delay, and the effect of a local control action can be felt throughout the
network after substantial delay. Uncertainty and variation are pervasive: the network
topology, transmission channel characteristics, traffic demand, and available
resources may all change constantly and unpredictably. Another complicating
issue is the diverse traffic characteristics, in terms of arrival statistics at both the
packet and flow time scales, and different requirements for quality of service, in
terms of delay, bandwidth, and loss probability, that the network must support.
Resources that must be managed in this environment include the computing, storage,
and transmission capacities at end hosts and routers. Performance of such sys-
tems is judged in many ways: throughput, delay, loss rates, fairness, reliability, as
well as the speed and quality with which the network adapts to changing traffic
patterns, changing resource availability, and changing network congestion.
To illustrate these characteristics, we briefly describe the control mechanisms
that can be invoked in serving a file request from a client: network caching, con-
gestion control, routing and power control. Figure 3.4 shows a typical map for the
networking infrastructure that is used to process such a request.
The problem of optimal network caching is to copy documents (or services)
that are likely to be accessed often from many different locations onto multiple
servers. When a document is requested, it is returned by the nearest server.
Figure 3.4. UUNET network backbone for North America. Figure courtesy
of WorldCom.
Here, proximity may be measured by geographical distance, hop count, network
congestion, server load or a combination. The goal is to reduce delay, relieve server
load, balance network traffic, and improve service reliability. If changes are made
to the source document, those changes (at a minimum) must be transmitted to the
servers, which consumes network bandwidth.
The control problem is to devise a decentralized scheme for deciding how often
to update, where to cache copies of documents, and to which server to direct a client
request, based on estimation and prediction of access patterns, network congestion,
and server load. Clearly, current decisions affect the future state, such as future
traffic on links, future buffer levels, delay and congestion, and server load. Thus a
web of caches is a decentralized feedback system that is spatially distributed and
interconnected, where control decisions are made asynchronously based on local and
delayed information.
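As a toy illustration of the request-direction half of this problem, one could score each replica by a weighted combination of proximity and load and direct the request to the cheapest one. The tuple format and the weights below are invented for the example:

```python
def pick_server(servers, w_hops=1.0, w_load=10.0):
    """Direct a request to the replica minimizing a combined cost.

    servers: list of (name, hop_count, current_load) tuples, where
    current_load is a normalized utilization in [0, 1].
    """
    return min(servers, key=lambda s: w_hops * s[1] + w_load * s[2])[0]
```

With a heavy load weight, a farther but idle server wins over a nearby busy one; with the weights reversed, proximity dominates. A real scheme would, as the text notes, also have to estimate and predict these quantities from delayed, local observations.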
When a large file is requested, the server that is selected to return the file
breaks it into a stream of packets and transports them to the client in a rate-
adaptive manner. This process is governed by the Transmission Control Protocol
(TCP). The client acknowledges successful reception of each packet, and the stream
of acknowledgments carries congestion information to the server. Congestion control
is a distributed algorithm to share network resources among competing servers. It
consists of two components: a source algorithm that dynamically adjusts the server
rate in response to congestion in its path, and a router algorithm that updates
a congestion measure and sends it back to sources that go through that router.
Examples of congestion measures are loss probability and queuing delay. They are
implicitly updated at the routers and implicitly fed back to sources through delayed
end-to-end observations of packet loss or delay. The equilibrium and dynamics of
the network depend on the pair of source and router algorithms.
A good way to understand the system behavior is to regard the source rates as
primal variables and router congestion measures as dual variables, and the process of
congestion control as an asynchronous distributed primal-dual algorithm carried out
by sources and routers over the Internet in real time to maximize aggregate source
utility subject to resource capacity constraints. Different protocols all solve the
same prototypical problem, but they use different utility functions and implement
different iterative rules to optimize them. Given any source algorithm, it is possible
to derive explicitly the utility function it is implicitly optimizing.
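This primal-dual view can be sketched in a few lines for a single link shared by several sources with Kelly-style utilities U_s(x) = w_s log x. The weights, capacity, and step size below are illustrative. Sources update their rates (primal variables); the router updates its price (dual variable):

```python
def primal_dual(w, capacity, gamma=0.01, iters=5000):
    """Primal-dual congestion control on one link of given capacity.

    Each source s maximizes w[s]*log(x) - p*x, giving x = w[s]/p;
    the router raises its price p when total demand exceeds capacity.
    """
    p = 1.0                                   # link price (dual variable)
    for _ in range(iters):
        x = [ws / p for ws in w]              # source rates (primal variables)
        p = max(1e-6, p + gamma * (sum(x) - capacity))
    return [ws / p for ws in w], p
```

At equilibrium the price settles at p* = (sum of weights)/capacity and the rates split the capacity in proportion to the weights; for w = [1, 2] on a link of capacity 6, the rates converge to [2, 4]. In the Internet this computation is of course carried out implicitly and asynchronously, as described above.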
While TCP controls the rate of a packet flow, the path through the network
is controlled by the Internet Protocol (IP). In its simplest form, each router must
decide which output link a given packet will be sent to on its way to its final
destination. Uncertainties include varying link congestion, delays, and rates, and
even varying network topology (e.g., a link goes down, or new nodes or links become
available), as well as future traffic levels. A routing algorithm is an asynchronous
distributed algorithm executed at routers that adapts to node and link failures,
balances network traffic and reduces congestion. It can be decomposed into several
time scales, with very fast decisions made in hardware using lookup tables, which
in turn are updated on a slower time scale. At the other extreme in time scale from
the routing problem, we have optimal network planning, in which new links and
nodes are proposed to meet predicted future traffic demand.
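The slower of these time scales, updating the tables, can be sketched as a distributed Bellman-Ford (distance-vector) iteration: each router repeatedly relaxes its distance estimate to the destination using only its neighbors' estimates and local link costs. The topology and costs below are invented for the example:

```python
def distance_vector(links, nodes, dest, rounds=10):
    """Distributed Bellman-Ford: each node iteratively relaxes its
    distance estimate to dest using only neighbor estimates.

    links: dict mapping a directed edge (u, v) to its cost.
    """
    inf = float("inf")
    dist = {n: (0.0 if n == dest else inf) for n in nodes}
    for _ in range(rounds):                 # in practice, asynchronous
        for (u, v), cost in links.items():
            if dist[v] + cost < dist[u]:    # cheaper route via neighbor v
                dist[u] = dist[v] + cost
    return dist
```

In a real network the relaxations happen asynchronously at each router as advertisements arrive, and link costs themselves change with congestion and failures, which is exactly what makes the problem a control problem.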
The routing problem is further exacerbated in wireless networks. Nodes with
wireless modems may be mobile, and the address of a node may indicate neither
where it is located nor how to reach it. Thus the network must either search
for a node on demand or keep track of the changing locations of nodes.
Further, since link capacities in wireless networks may be scarce, routing may have
to be determined in conjunction with some form of load balancing. This gives rise
to the need for distributed asynchronous algorithms which are adaptive to node
locations, link failures, mobility, and changes in traffic flow requirements.
Finally, if the client requesting the file accesses it through an ad hoc wireless
network, then there also arises the problem of power control: at what transmis-
sion power level should each packet broadcast be made? Power control is required
because ad hoc networks do not come with ready-made links; the topology of the
network is formed by individual nodes choosing the power levels of their broadcasts.
This poses a conceptual problem in the current protocol hierarchy of the Internet
since it simultaneously affects the physical layer due to its effect on signal quality,
the network layer since power levels determine which links are available for traffic
to be routed, and the transport layer since power levels of broadcasts affect conges-
tion. Power control is also a good challenge for multi-objective control since there
are many cost criteria involved, such as increasing the traffic carrying capacity of
the network, reducing the battery power used in relaying traffic, and reducing the
contention for the common shared medium by the nodes in geographical vicinity.
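A classical distributed scheme for a related power-control problem is the Foschini-Miljanic iteration: each transmitter scales its power by the ratio of its target signal-to-interference ratio (SINR) to its measured SINR, using purely local measurements. A minimal sketch, with an invented gain matrix, noise level, and targets:

```python
def power_control(gain, noise, target, iters=200):
    """Foschini-Miljanic power control for n interfering links.

    gain[i][j] is the channel gain from transmitter j to receiver i;
    each link scales its power by target_SINR / measured_SINR.
    """
    n = len(gain)
    p = [1.0] * n                            # initial transmit powers
    for _ in range(iters):
        sinr = [gain[i][i] * p[i]
                / (noise + sum(gain[i][j] * p[j] for j in range(n) if j != i))
                for i in range(n)]
        p = [target[i] / sinr[i] * p[i] for i in range(n)]
    return p
```

When the targets are jointly feasible, this iteration converges to the componentwise-minimal power vector meeting every target, which is why it is a standard benchmark for distributed resource allocation of this kind.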
Control of networks extends beyond data and communication networks. Opti-
mal routing and flow control of commercial aircraft (with emphasis on guaranteeing
safe inter-vehicle distances) will help maximize utilization of airports. The (network
and software) infrastructure for supply chain systems is being built right now, and
simple automated supply chain management systems are beginning to be deployed.
In the near future, sophisticated optimization and control methods can be used to
direct the flow of goods and money between suppliers, assemblers and processors,
and customers.
Control over Networks
While the advances in information technology to date have led to a global Inter-
net that allows users to exchange information, it is clear that the next phase will
involve much more interaction with the physical environment. Networks of sen-
sory or actuator nodes with computational capabilities, connected wirelessly or by
wires, can form an orchestra which controls our physical environment. Examples
include automobiles, smart homes, large manufacturing systems, intelligent high-
ways and networked city services, and enterprise-wide supply and logistics chains.
Thus, this next phase of the information technology revolution is the convergence of
communication, computing and control. The following vignette describes a major
architectural challenge in achieving this convergence.
Vignette: The importance of abstractions and architecture for the con-
vergence of communications, computing, and control (P. R. Kumar,
Univ. of Illinois, Urbana-Champaign)
Communication networks are very diverse, running over copper, radio, or optical links,
various computers, routers, etc. However, they have an underlying architecture which
allows one to just plug-and-play, and not concern oneself with what lies underneath.
In fact, one reason for the anarchic proliferation of the Internet is precisely this
architecture—a hierarchy of layers together with peer-to-peer protocols connecting the
layers at different nodes. On one hand, nodes can be connected to the Internet without
regard to the physical nature of the communication link, whether it be infrared or cop-
per, and this is one reason for the tremendous growth in the number of nodes on the
Internet. On the other hand, the architecture allows plug-and-play at all levels, and thus
each layer can be designed separately, allowing a protocol at one level to be modified
over time without simultaneously necessitating a redesign of the whole system. This
has permitted the Internet protocols to evolve and change over time.
This raises the issue: What is the right architecture for the convergence of communica-
tion, control, and computing? Is there an architecture which is application and context
independent, one which allows proliferation, just as the Open Systems Interconnection
(OSI) architecture did for communication networks? What are the right abstraction
layers? How does one integrate information, control, and computation? If the over-
all design allows us to separate algorithms from architecture, then this convergence of
control with communication and computation will rapidly proliferate.
As existing networks continue to build out, and network technology becomes
cheaper and more reliable than fixed point-to-point connections, even in small lo-
calized systems, more and more control systems will operate over networks. We
can foresee sensor, actuator, diagnostic, and command and coordination signals all
traveling over data networks. The estimation and control functions can be dis-
tributed across multiple processors, also linked by data networks. (For example,
smart sensors can perform substantial local signal processing before forwarding rel-
evant information over a network.)
Current control systems are almost universally based on synchronous, clocked
systems, so they require communications networks that guarantee delivery of sen-
sor, actuator, and other signals with a known, fixed delay. While current control
systems are robust to variations that are included in the design process (such as a
variation in some aerodynamic coefficient, motor constant, or moment of inertia),
they are not at all tolerant of (unmodeled) communication delays, or dropped or lost
sensor or actuator packets. Current control system technology is based on a sim-
ple communication architecture: all signals travel over synchronous dedicated links,
with known (or worst-case bounded) delays, and no packet loss. Small dedicated
communication networks can be configured to meet these demanding specifications
for control systems, but a very interesting question is:
Can one develop a theory and practice for control systems that operate
in a distributed, asynchronous, packet-based environment?
It is very interesting to compare current control system technology with cur-
rent packet-based data networks. Data networks are extremely robust to gross,
unpredicted changes in topology (such as loss of a node or a link); packets are sim-
ply re-sent or re-routed to their destination. Data networks are self-configuring: we
can add new nodes and links, and soon enough packets are flowing through them.
One of the amazing attributes of data networks is that, with good architecture and
protocol design, they can be far more reliable than their components. This is in
sharp contrast with modern control systems, which are only as reliable as their
weakest link. Robustness to component failure must be designed in, by hand (and
is, for safety critical systems).
Looking forward, we can imagine a marriage of current control systems and
networks. The goal is an architecture, and design and analysis methods, for dis-
tributed control systems that operate in a packet-based network. If this is done
correctly, we might be able to combine the good qualities of a robust control system,
i.e., high performance and robustness to parameter variation and model mismatch,
with the good qualities of a network: self-configuring, robust to gross topology
changes and component failures, and reliability exceeding that of its components.
One can imagine systems where sensors asynchronously burst packets onto the
network, control processors process the data, and commands are sent out to actuators. Packets
can be delayed by varying amounts of time, or even lost. Communication links
can go down, or become congested. Sensors and actuators themselves become un-
available or available. New sensors, actuators, and processors can be added to the
system, which automatically reconfigures itself to make use of the new resources. As
long as there are enough sensors and actuators available, and enough of the packets
are getting through, the whole system works (although we imagine not as well as
with a dedicated, synchronous control system). This is of course very different from
any current high-performance control system.
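A toy simulation conveys the flavor. Consider a scalar unstable plant controlled over a channel that drops actuator packets with some probability, where the actuator simply holds its last received command on a loss. All numbers here are invented for illustration:

```python
import random

def run_lossy_loop(a=1.1, b=1.0, k=0.6, p_drop=0.3,
                   x0=10.0, steps=200, seed=1):
    """Scalar plant x+ = a*x + b*u whose actuator packets are lost
    with probability p_drop; on a loss, the last command is held."""
    rng = random.Random(seed)
    x, u_held = x0, 0.0
    for _ in range(steps):
        u_new = -k * x                 # controller computes a command
        if rng.random() >= p_drop:     # did the packet get through?
            u_held = u_new             # actuator updates its command
        x = a * x + b * u_held         # otherwise the old command is applied
    return x
```

With these values the loop typically still converges despite the losses, just more slowly and less smoothly than the loss-free loop, which is exactly the trade-off described above. Characterizing when such loops remain stable, as a function of loss rate and delay, is one form of the open question posed here.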
It is clear that for some applications, current control methods, based on syn-
chronous clocked systems and networks that guarantee arrival and bounded delays
for all communications, are the best choice. There is no reason not to configure
the controller for a jet engine as it is now, i.e., a synchronous system with guar-
anteed links between sensors, processors, and actuators. But for consumer appli-
cations not requiring the absolute highest performance, the added robustness and
self-reconfiguring abilities of a packet-based control system could make up for the
lost performance. In any case, what will emerge will probably be something in
between the two extremes of a totally synchronous system and a totally asynchronous
packet-based system.
Clearly, several fundamental control concepts will not make the transition to
an asynchronous, packet-based environment. The most obvious casualty will be
the transfer function, and all the other concepts associated with linear time in-
variant (LTI) systems (impulse and step response, frequency response, spectrum,
bandwidth, etc.). This is not a small loss, as these concepts have been a foundation
of control engineering since about 1930. With the loss goes a lot of intuition and understanding.
For example, Bode plots were introduced in the 1930s to understand and design
feedback amplifiers, were updated to handle discrete-time control systems in the
1960s, and were applied to robust MIMO control systems in the 1980s (via singular
value plots). Even the optimal control methods in the 1960s, which appeared at
first to be quite removed from frequency domain concepts, were shown to be nicely
interpreted via transfer functions.
So what methods will make the transition? Many of the methods related
to optimal control and optimal dynamic resource allocation will likely transpose
gracefully to an asynchronous, packet-based environment. A related concept that is
likely to survive is also one of the oldest: Lyapunov functions (which were introduced
in 1892). The following vignette describes some of the possible changes to control
that may be required.
Vignette: Lyapunov Functions in Networked Environments (Stephen
Boyd, Stanford)
Here is an example of how an “old” concept from control will update gracefully. The
idea is that of the Bellman value function, which gives the optimal value of some control
problem, posed as an optimization problem, as a function of the starting state. It was
studied by Pontryagin, Bellman, and other pioneers of optimal control in the 1950s, and
has recently had a resurgence (in generalized form) under the name of control Lyapunov
function. It is a key concept in dynamic programming.
The basic idea of a control Lyapunov function (or the Bellman value function) is this:
If you know the function, then the best thing to do is to choose current actions that
minimize the value function in the current step, without any regard for future effects.
(In other words, we ignore the dynamics of the system.) By doing this we are actually
carrying out an optimal control for the problem. In other words, the value function is
the cost function whose greedy minimization actually yields the optimal control for the
original problem, taking the system dynamics into account. In the work of the 1950s
and 60s, the value function is just a mathematical stepping stone toward the solution
of optimal control problems.
But the idea of a value function transposes to an asynchronous system very nicely. If
the value function, or some approximation, were broadcast to the actuators, then each
actuator could take independent and separate action, i.e., each would do whatever it
could to decrease the value function. If the actuator were unavailable, then it would do
nothing. In general the actions of multiple actuators have to be carefully coordinated;
simple examples show that turning on two feedback systems, each with its own sen-
sor and actuator, simultaneously, can lead to disastrous loss of performance, or even
instability. But if there is a value or control Lyapunov function that each is separately
minimizing, everything is fine; the actions are automatically coordinated (via the value
function).
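A trivial sketch of this coordination-for-free property: take V(x) as a sum of squares, with each actuator able to nudge only its own component toward zero. Any subset of available actuators decreases V, so no explicit coordination is needed. The dynamics-free setting here is deliberately simplified:

```python
def V(x):
    """Shared value / Lyapunov function: V(x) = sum of squares."""
    return sum(xi * xi for xi in x)

def actuate(x, available, gain=0.5):
    """Each available actuator i independently moves its own component
    toward zero, which can only decrease V, whatever the others do."""
    return [xi - gain * xi if i in available else xi
            for i, xi in enumerate(x)]
```

Starting from x = [4, -2], V = 20; with only actuator 0 available one step brings V to 8, and with both available it brings V to 5. Either way V decreases, which is the sense in which the value function coordinates the actuators automatically.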
Another idea that will gracefully extend to asynchronous packet-based control
is model predictive control. The basic idea is to carry out far more computation at
run time, by solving optimization problems in the real-time feedback control law.
Model predictive control has played a major role in process control, and also in
supply-chain management, but not (yet) in other areas, mainly owing to the very
large computational burden it places on the controller implementation. The idea is
very simple: at each time step we formulate the optimal control problem, up to some
time horizon in the future, and solve for the whole optimal trajectory (say, using
quadratic programming). We then use the current optimal input as the actuator
signal. The sensor signals can be used to update the model, and the same process
is then carried out again. A major extension required to apply model predictive control
in networked environments would be the distributed solution of the underlying
optimization problem.
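For the simplest case (scalar linear dynamics, quadratic cost, no constraints) the finite-horizon problem can be solved by a backward Riccati recursion rather than a generic quadratic program, which keeps the sketch self-contained. All parameter values below are illustrative:

```python
def mpc_gain(a, b, q, r, horizon):
    """First-step optimal feedback gain for min sum q*x^2 + r*u^2
    over the horizon, subject to x+ = a*x + b*u."""
    P = q                                   # terminal cost weight
    K = 0.0
    for _ in range(horizon):
        K = a * b * P / (r + b * b * P)     # optimal gain at this step
        P = q + a * a * P - a * b * P * K   # Riccati backward recursion
    return K

def run_mpc(a, b, q, r, horizon, x0, steps):
    """Receding horizon: re-plan at every step, apply only the
    first input of each plan."""
    x = x0
    for _ in range(steps):
        x = a * x - b * mpc_gain(a, b, q, r, horizon) * x
    return x
```

Re-solving the whole plan each step only to keep its first input looks wasteful, but it is what makes the scheme a feedback law, and it is where the large run-time computational burden mentioned above comes from; with constraints, each step becomes a quadratic program.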
Other Trends in Information and Networks
While we have concentrated in this section on the role of control in communications
and networking, there are many problems in the broader field of information science
and technology for which control ideas will be important. We highlight a few here;
more information can also be found in a recent National Research Council report
on embedded systems [32].
Vigilant, high confidence software systems
Modern information systems are re-
quired to operate in environments where the users place high confidence on the
availability and correctness of the software programs. This is increasingly difficult
due to the networked and often adversarial environment in which these programs
operate. One approach that is being explored by the computer science community
is to provide confidence through vigilance. Vigilance refers to continuous, pervasive,
multi-faceted monitoring and correction of system behavior, i.e., control.
The key idea in vigilant software is to use fast and accurate sensing to monitor
the execution of a system or algorithm, compare the performance of the algorithm
to an embedded model of the computation, and then modify the operation of the
algorithm (through adjustable parameters) to maintain the desired performance.
Figure 3.5. An example of a vigilant high confidence software system:
distributed sorting using feedback.
This “sense-compute-act” loop is the basic paradigm of feedback control and pro-
vides a mechanism for online management of uncertainty. Its power lies in the fact
that rather than considering every possible situation at design time, the system re-
acts to specific situations as they occur. An essential element of the strategy is the
use of either an embedded model, through which an appropriate control action can
be determined, or a predefined control strategy that is analyzed offline to ensure
stability, performance, and robustness.
As an indication of how vigilance might be used to achieve high confidence,
consider an example of feedback control for distributed sorting, as shown in Fig-
ure 3.5. We envision a situation in which we have a collection of partial sort algo-
rithms that are interconnected in a feedback structure. Suppose that each sorter
has multiple inputs, from which it chooses the best sorted list, and a single output,
to which it sends an updated list that is more ordered. By connecting these modules
together in a feedback loop, it is possible to get a completely sorted list at the end
of a finite number of time steps.
While unconventional from a traditional computer science perspective, this
approach gives robustness to failure of individual sorters, as well as self-reconfiguring
operation. Robustness comes because if an individual module unsorts its data, this
data will not be selected from the input streams by the other modules. Further,
if the modules have different loads (perhaps due to other processing being done
on a given processor), the module with the most time available will automatically
take on the load in performing the distributed sorting. Other properties such as
disturbance rejection, performance, and stability could also be studied by using
tools from control.
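A minimal sketch of such a loop, with the selection metric and module behaviors invented for illustration: each module receives the streams, selects the input with the fewest adjacent inversions, and applies one bubble pass. A faulty module that reverses its input is simply out-voted by the selection step.

```python
def inversions(lst):
    """Crude sortedness measure: number of adjacent out-of-order pairs."""
    return sum(1 for a, b in zip(lst, lst[1:]) if a > b)

def bubble_pass(lst):
    """A partial sorter: a single left-to-right bubble pass."""
    out = list(lst)
    for i in range(len(out) - 1):
        if out[i] > out[i + 1]:
            out[i], out[i + 1] = out[i + 1], out[i]
    return out

def feedback_sort(data, modules, max_steps=100):
    """Feedback loop: every module works on the best-sorted stream."""
    streams = [list(data) for _ in modules]
    for _ in range(max_steps):
        best = min(streams, key=inversions)   # select best-sorted input
        if inversions(best) == 0:
            return best
        streams = [m(best) for m in modules]  # each module updates a stream
    return min(streams, key=inversions)
```

For example, feedback_sort([3, 1, 2, 5, 4], [bubble_pass, bubble_pass, lambda l: l[::-1]]) returns the fully sorted list even though the third module actively unsorts its input: the selection step never forwards its output. Load balancing and disturbance rejection could be explored by varying how often each module gets to run.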
Verification and validation of protocols and software
Complex software
systems are being developed at a rapid rate, and our ability to design such
systems so that they give provably correct performance is increasingly strained.
Current methods for verification and validation of software systems require large
amounts of testing and many errors are not discovered until late stages of develop-
ment or even product release. Formal methods for verification of software are used
for systems of moderate complexity, but do not scale well to large software systems.
Control theory has developed a variety of techniques for giving provably cor-
rect behavior by using upper and lower bounds to effectively break computational
complexity bounds. Recent results in convex optimization of semialgebraic prob-
lems (those that can be expressed by polynomial equalities and inequalities) are
providing new insights into verification of a diverse set of continuous and combi-
natorial optimization problems [36]. In particular, these new techniques allow a
systematic search for “simple proofs” of mixed continuous and discrete problems
and offer ideas for combining formal methods in computer science with stability and
robustness results in control.
Real-time supply chain management
As increasing numbers of enterprise systems
are connected to each other across networks, there is an enhanced ability to perform
enterprise level, dynamic reconfiguration of high availability assets for achieving
efficient, reliable, predictable operations. As an example of the type of application
that one can imagine, consider the operation of a network of HVAC systems for a
regional collection of buildings, under the control of a single operating company. In
order to minimize overall energy costs for its operation, the company makes a long-
term arrangement with an energy broker to supply a specified amount of electrical
power that will be used to heat and cool the buildings. In order to get the best
price for the energy it purchases, the company agrees to purchase a fixed amount of
energy across its regional collection of buildings and to pay a premium for energy
usage above this amount. This gives the energy broker a fixed income as well as a
fixed (maximum) demand, for which it is willing to sell electricity at a lower price
(due to less uncertainty in future revenue as well as system loading).
Due to the uncertainty in the usage of the building, the weather in different
areas across the region, and the reliability of the HVAC subsystems in the build-
ings, a key element in implementing such an arrangement is a distributed, real-time
command and control system capable of performing distributed optimization of
interconnected assets. The power broker and the company must be able to commu-
nicate information about asset condition and mission between the control systems
for their electrical generation and HVAC systems and the subsystems must react
to sensed changes in the environment (occupancy, weather, equipment status) to
optimize the fleet-level performance of the network.
Realization of enterprise-wide optimization of this sort will require substantial
progress in a number of technical areas: distributed, embedded modeling tools that
allow low resolution modeling of the external system combined with high resolution
modeling of the local system, resident at each node in the enterprise; distributed
optimization algorithms that make use of the embedded modeling architecture to
produce near optimal operations; fault tolerant, networked control systems that
allow control loops to operate across unreliable network connections; and low cost,
fault tolerant, reconfigurable hardware and software architectures.
A very closely related problem is that of C4ISR (command, control, com-
munications, computers, intelligence, surveillance, and reconnaissance) in military
systems. Here also, networked systems are revolutionizing the capabilities for con-
tinuous planning and asset allocation, but new research is needed in providing
robust solutions that give the required performance in the presence of uncertainty
and adversaries. The underlying issues and techniques are almost identical to enter-
prise level resource allocation, but the environment in which they must perform is
much more challenging for military applications. Control concepts will be essential
tools for providing robust performance in such dynamic, uncertain, and adversarial
environments.
3.3 Robotics and Intelligent Machines
It is my thesis that the physical functioning of the living individual and the oper-
ation of some of the newer communication machines are precisely parallel in their
analogous attempts to control entropy through feedback. Both of them have sensory
receptors as one stage in their cycle of operation: that is, in both of them there exists
a special apparatus for collecting information from the outer world at low energy lev-
els, and for making it available in the operation of the individual or of the machine.
In both cases these external messages are not taken neat, but through the internal
transforming powers of the apparatus, whether it be alive or dead. The information
is then turned into a new form available for the further stages of performance. In
both the animal and the machine this performance is made to be effective on the
outer world. In both of them, their performed action on the outer world, and not
merely their intended action, is reported back to the central regulatory apparatus.
Norbert Wiener, from The Human Use of Human Beings: Cybernetics and Society, 1950 [42].
Robotics and intelligent machines refer to a collection of applications involv-
ing the development of machines with human-like behavior. While early robots
were primarily used for manufacturing, modern robots include wheeled and legged
machines capable of participating in robotic competitions and exploring planets,
unmanned aerial vehicles for surveillance and combat, and medical devices that
provide new capabilities to doctors. Future applications will involve both increased
autonomy and increased interaction with humans and with society. Control is a
central element in all of these applications and will be even more important as the
next generation of intelligent machines is developed.
Background and History
The goal of cybernetic engineering, already articulated in the 1940s and even be-
fore, has been to implement systems capable of exhibiting highly flexible or “in-
telligent” responses to changing circumstances. In 1948, the MIT mathematician
Norbert Wiener gave a widely read, albeit completely non-mathematical, account
of cybernetics [41]. A more mathematical treatment of the elements of engineering
cybernetics was presented by H. S. Tsien in 1954, driven by problems related to
control of missiles [40]. Together, these works and others of that time form much
of the intellectual basis for modern work in robotics and control.
The early applications leading up to today’s robotic systems began after World
War II with the development of remotely controlled mechanical manipulators, which
used master-slave mechanisms. Industrial robots followed shortly thereafter, start-
ing with early innovations in computer numerically controlled (CNC) machine tools.
Unimation, one of the early robotics companies, installed its first robot in a General
Motors plant in 1961. Sensory systems were added to allow robots to respond to
changes in their environment and by the 1960s many new robots were capable of
grasping, walking, seeing (through binary vision), and even responding to simple
voice commands.
The 1970s and 80s saw the advent of computer controlled robots and the
field of robotics became a fertile ground for research in computer science and mechanical engineering.

Figure 3.6. (a) The Mars Sojourner and (b) Sony AIBO robots. Photographs courtesy of Jet Propulsion Laboratory and Sony.

Manufacturing robots became commonplace (led by Japanese companies), and a variety of tasks, ranging from mundane to high precision, were undertaken with machines. Artificial intelligence (AI) techniques were also developed
to allow higher level reasoning, including attempts at interaction with humans. At
about this same time, new research was undertaken in mobile robots for use on the
factory floor and remote environments.
Two accomplishments that demonstrate the successes of the field are the Mars
Sojourner robot and the Sony AIBO robot, shown in Figure 3.6. Sojourner success-
fully maneuvered on the surface of Mars for 83 days starting in July 1997 and sent
back live pictures of its environment. The Sony AIBO robot debuted in June of
1999 and was the first “entertainment” robot that was mass marketed by a major
international corporation. It was particularly noteworthy because of its use of AI
technologies that allowed it to act in response to external stimulation and its own
judgment.
It is interesting to note some of the history of the control community in
robotics. The IEEE Robotics and Automation Society was jointly founded in the
early 1980s by the Control Systems Society and the Computer Society, indicating
the mutual interest in robotics by these two communities. Unfortunately, while
many control researchers were active in robotics, the control community did not
play a leading role in robotics research throughout much of the 1980s and 90s.
This was a missed opportunity since robotics represents an important collection
of applications that combines ideas from computer science, artificial intelligence,
and control. New applications in (unmanned) flight control, underwater vehicles,
and satellite systems are generating renewed interest in robotics and many control
researchers are becoming active in this area.
Despite the enormous progress in robotics over the last half century, the field
is very much in its infancy. Today’s robots still exhibit extremely simple behaviors
compared with humans and their ability to locomote, interpret complex sensory
inputs, perform higher level reasoning, and cooperate together in teams is limited.
Indeed, much of Wiener’s vision for robotics and intelligent machines remains unre-
alized. While advances are needed in many fields to achieve this vision—including
advances in sensing, actuation, and energy storage—the opportunity to combine
the advances of the AI community in planning, adaptation, and learning with the
techniques in the control community for modeling, analysis, and design of feedback
systems presents a renewed path for progress. This application area is strongly
linked with the Panel’s recommendations on the integration of computing, commu-
nication and control, development of tools for higher level reasoning and decision
making, and maintaining a strong theory base and interaction with mathematics.
Challenges and Future Needs
The basic electromechanical engineering and computing capabilities required to
build practical robotic systems have evolved over the last half-century to the point
where today there exist rapidly expanding possibilities for making progress toward
the long held goals of intelligence and autonomy. The implementation of principled
and moderately sophisticated algorithms is already possible on available computing
hardware and more capability will be here soon. The successful demonstration of
vision guided automobiles operating at high speed, the use of robotic devices in
manufacturing, and the commercialization of mobile robotic devices attest to the
practicality of this field.
Robotics is a broad field; the perspectives afforded by computer science, con-
trol, electrical engineering, mechanical engineering, psychology, and neuroscience
all yield important insights. Even so, there are pervasive common threads, such as
the understanding and control of spatial relations and their time evolution. The
emergence of the field of robotics has provided the occasion to analyze, and to at-
tempt to replicate, the patterns of movement required to accomplish useful tasks.
On the whole, this has been a sobering experience. Just as the ever closer exam-
ination of the physical world occasionally reveals inadequacies in our vocabulary
and mathematics, roboticists have found that it is quite awkward to give precise,
succinct descriptions of effective movements using the syntax and semantics in common use. Because the motion generated by a robot is usually its raison d'être, it
is logical to regard motion control as being a central problem. Its study has raised
several new questions for the control engineer relating to the major themes of feed-
back, stability, optimization, and estimation. For example, at what level of detail in
modeling (i.e. kinematic or dynamic, linear or nonlinear, deterministic or stochas-
tic, etc.) does optimization enter in a meaningful way? Questions of coordination,
sensitivity reduction, stability, etc. all arise.
In addition to these themes, there is the need for development of appropriate
software for controlling the motion of these machines. At present there is almost no
transportability of robotic motion control languages. The idea of vendor indepen-
dent languages that apply with no change to a wide range of computing platforms
and peripherals has not yet been made to work in the field of robotics. The clear
success of such notions when applied to operating systems, languages, networks,
disk drives, and printers makes it clear that this is a major stumbling block. What
is missing is a consensus about how one should structure and standardize a “motion
description language.” Such a language should, in addition to other things, allow
one to implement compliance control in a general and natural way.
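To make the idea concrete, the following is a minimal sketch (in Python) of what a primitive in a vendor-independent motion description language might look like. The class names, fields, and the stiffness-based compliance parameter are invented for illustration; no such standard exists, which is precisely the gap identified above.

```python
from dataclasses import dataclass, asdict

@dataclass
class MoveTo:
    """One primitive of a hypothetical motion description language.

    The fields are vendor-neutral: a target position, a speed limit, and
    a stiffness used for compliance control. A vendor-specific backend
    translates this description into its own controller commands.
    """
    x: float
    y: float
    z: float
    max_speed: float = 0.1      # m/s
    stiffness: float = 500.0    # N/m; low values yield compliant motion

    def to_wire(self) -> dict:
        # A neutral serialization that any backend can consume.
        return {"op": "move_to", **asdict(self)}

class LoggingBackend:
    """Stand-in for a vendor driver: interprets primitives, records commands."""
    def __init__(self):
        self.log = []

    def execute(self, primitive: MoveTo) -> dict:
        cmd = primitive.to_wire()
        self.log.append(cmd)
        return cmd

backend = LoggingBackend()
cmd = backend.execute(MoveTo(0.3, 0.0, 0.2, stiffness=50.0))  # a compliant reach
```

A second vendor's backend would implement `execute` against its own controller and peripherals, leaving application code unchanged; that portability is what the success of operating systems and device drivers suggests should be possible.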
Another major area of study is adaptation and learning. As robots become
more commonplace, they will need to become more sophisticated in the way they
interact with their environment and reason about the actions of themselves and
others. The robots of science fiction are able to learn from past experience, interact
with humans in a manner that is dependent on the situation, and reason about high
level concepts to which they have not been previously exposed. In order to achieve
the vision of intelligent machines that are common in our society, major advances in
machine learning and cognitive systems will be required. Robotics provides an ideal
testbed for such advances: applications in remote surveillance, search and rescue,
entertainment, and personal assistance are all fertile areas for driving forward the
state of the art.
In addition to better understanding the actions of individual robots, there
is also considerable interest and opportunity in cooperative control of teams of
robots. The U.S. military is considering the use of multiple vehicles operating in a
coordinated fashion for surveillance, logistical support, and combat, to offload the
burden of dirty, dangerous, and dull missions from humans. Over the past decade,
several new competitions have been developed in which teams of robots compete
against each other to explore these concepts. Perhaps the best known of these is
RoboCup, which is described briefly in the following vignette.
Vignette: RoboCup—A testbed for autonomous collaborative behavior
in adversarial environments (Raffaello D’Andrea, Cornell University)
RoboCup is an international collection of robotics and artificial intelligence (AI) compe-
titions. The competitions are fully autonomous (no human intervention) head-to-head
games, whose rules are loosely modeled after the human game of soccer; each team
must attempt to score more goals than the opponent, subject to well defined rules
and regulations (such as size restrictions, collision avoidance, etc.). The three main competitions are known as the Simulation League, the F2000 League, and the F180 League.
The F180 League is played by 6 inch cube robots on a 2 by 3 meter table (see Figure 3.7), and can be augmented by a global vision system; the addition of global vision shifts
the emphasis away from object localization and computer vision, to collaborative team
strategies and aggressive robot maneuvers. In what follows, we will describe Cornell’s
experience in the F180 League at the 1999 competition in Stockholm, Sweden and the
2000 competition in Melbourne, Australia.
Cornell was the winner of the F180 League in both 1999, the first year it entered the
competition, and 2000. The team’s success can be directly attributed to the adoption
of a systems engineering approach to the problem, and by emphasizing system dynamics
and control. The systems engineering approach was instrumental in the complete devel-
opment of a competitive team in only 9 months (for the 1999 competition). Twenty-five
students, a mix of first year graduate students and seniors representing computer science, electrical engineering, and mechanical engineering, were able to construct two fully operational teams by effective project management, by being able to capture the system requirements at an early stage, and by being able to cross disciplinary boundaries and communicate among themselves.

Figure 3.7. F180 league RoboCup soccer. Photograph courtesy of Raffaello D'Andrea.

A hierarchical decomposition was the means by
which the problem complexity was rendered tractable; in particular, the system was
decomposed into estimation and prediction, real time trajectory generation and control,
and high level strategy.
Estimation and prediction entailed relatively simple concepts from filtering, tools known
to most graduate students in the area of control. In particular, smoothing filters for
the vision data and feedforward estimators to cope with system latency were used to
provide an accurate and robust assessment of the game state. Trajectory generation
and control consisted of a set of primitives that generated feasible robot trajectories;
various relaxation techniques were used to generate trajectories that (1) could quickly
be computed in real time (typically less than 1000 floating point operations), and (2)
took full advantage of the inherent dynamics of the vehicles. In particular, feasible
but aggressive trajectories could quickly be generated by solving various relaxations of
optimal control problems. These primitives were then used by the high level strategy,
essentially a large state-machine.
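As an illustration of the kind of filtering described (not Cornell's actual implementation; all gains and the latency value are invented), the sketch below combines exponential smoothing of noisy vision measurements with a constant-velocity feedforward prediction to cancel a fixed system latency:

```python
class LatencyCompensator:
    """Smooths noisy vision measurements and extrapolates forward in time
    to compensate for a fixed processing/communication latency."""

    def __init__(self, alpha=0.5, dt=1/60, latency=0.1):
        self.alpha = alpha        # smoothing gain in (0, 1]
        self.dt = dt              # vision frame period, s
        self.latency = latency    # total system latency to cancel, s
        self.pos = None
        self.vel = 0.0

    def update(self, measurement):
        if self.pos is None:      # first frame: initialize the filter
            self.pos = measurement
            return measurement
        prev = self.pos
        # Exponential smoothing of the raw measurement.
        self.pos = self.alpha * measurement + (1 - self.alpha) * self.pos
        # Finite-difference velocity estimate from smoothed positions.
        self.vel = (self.pos - prev) / self.dt
        # Feedforward: predict where the object will be when the command
        # actually takes effect, one latency interval in the future.
        return self.pos + self.vel * self.latency

comp = LatencyCompensator()
# Object moving at 1 m/s along one axis, sampled at 60 Hz (noise-free here):
est = [comp.update(k / 60) for k in range(20)]
```

With a target moving at 1 m/s, the steady-state prediction leads the latest measurement by roughly `vel * latency`, which is what allows a robot to intercept a ball rather than chase where it used to be.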
The high-level strategy was by far the most ad hoc and heuristic component of the Cornell RoboCup team. The various functions that determined whether passes and
interceptions were possible were rigorous, in the sense that they called upon the provably
effective trajectory and control primitives, but the high level strategies that determined
whether a transition from defense to offense should be made, for example, or what play
should be executed, relied heavily on human judgment and observation. As of March
2001, most of the efforts at Cornell have shifted to understanding how the design and
verification of high level strategies that respect and fully utilize the system dynamics
can take place.
Certain robotic applications, such as those that call for the use of vision sys-
tems to guide robots, now require the use of computing, communication and control
in an integrated way. The computing that is to be done must be opportunistic, i.e.
it must be tailored to fit the needs of the specific situation being encountered. The
data compression that is needed to transmit television signals to a computer must
be done with a view toward how the results will be used by the control system. It
is both technologically difficult and potentially dangerous to build complex systems
that are controlled in a completely centralized way. For this reason we need to de-
cide how to distribute the control function over the communication system. Recent
work on the theory of communication protocols has made available better methods
for designing efficient distributed algorithms. This work can likely be adapted in
such a way as to serve the needs of robotic applications.
Finally, we note the need to develop robots that can operate in highly unstruc-
tured environments. This will require considerable advances in visual processing and
understanding, complex reasoning and learning, and dynamic motion planning and
control. Indeed, a framework for reasoning and planning in these unstructured en-
vironments will likely require new mathematical concepts that combine dynamics,
logic, and geometry in ways that are not currently available. One of the major ap-
plications of such activities is in the area of remote exploration (of the earth, other
planets, and the solar system), where human proxies will be used for continuous
exploration to expand our understanding of the universe.
Other Trends in Robotics and Intelligent Machines
In addition to the challenges and opportunities described above, there are many
other trends that are important for advances in robotics and intelligent machines
and that will drive new research in control.
Mixed Initiative Systems and Human Interfaces
It seems clear that more exten-
sive use of computer control, be it for factories, automobiles or homes, will be most
effective if it comes with a natural human interface. Having this goal in mind, one
should look for interfaces which are not only suitable for the given application but
which are sufficiently general so that, with minor modification, they can serve in
related applications as well. Progress in this area will not only require new insights into processing of visual data (described above), but also a better understanding of the interactions of humans with machines and computer controlled systems.
One program underway in the United States is exploring the use of "variable autonomy" systems, in which machines controlled by humans are given varying levels of command authority as the task evolves. Such systems involve humans that are integrated with a computer-controlled system in such a way that the humans may be simultaneously receiving instructions from and giving instructions to a collection of machines. One application of this concept is a semi-automated air traffic control system, in which command and control computers, human air traffic controllers, flight navigation systems, and pilots have varying levels of responsibility for controlling the airspace. Such a system has the possibility of combining the strengths of machines in rapid data processing with the strengths of humans in complex reasoning, but will require substantial advances in understanding of man-machine systems.
Control Using High Data-Rate Sensors
Without large expenditure, we are able
to gather and store more pictures and sounds, temperatures and particle counts,
than we know how to use. We continue to witness occasional catastrophic fail-
ures of our man-machine systems, such as those used for transportation, because
we do not correctly interpret or appropriately act on the information available to
us. It is apparent that in many situations collecting the information is the easy
part. Feedback control embodies the idea that performance can be improved by
coupling measurement directly to action. Physiology provides many examples at-
testing to the effectiveness of this technique. However, as engineers and scientists
turn their attention to the highly automated systems currently being built by the
more advanced manufacturing and service industries, they often find that the direct
application of feedback control is frustrated by a web of interactions which make
the smallest conceptual unit too complex for the usual type of analysis. In partic-
ular, vision guided systems are difficult to design and often fail to be robust with
respect to lighting conditions and changes in the environment. In order to proceed,
it seems, design and performance evaluation must make more explicit use of ideas
such as adaptation, self-configuration, and self-optimization.
Indications are that the solution to the problems raised above will involve
active feedback control of the perceptual processes, an approach which is common-
place in biology. One area that has received considerable attention is the area of
active vision in which the vision sensor is controlled on the basis of the data it generates. Other work involves tuning the vision processing algorithms on the basis of the
data collected. The significant progress now being made toward the resolution of
some of the basic problems results, in large part, from the discovery and aggressive
use of highly nonlinear signal processing techniques. Examples include the varia-
tional theories that have been brought to bear on the image segmentation problem,
the theories of learning based on computational complexity, and information theo-
retic based approaches to perceptual problems. Attempts to incorporate perceptual
modules into larger systems, however, often raise problems about communication
and distributed computation which are not yet solved.
Related to this is the problem of understanding and interpreting visual data.
The technology for recognizing voice commands is now sophisticated enough to see
use in many commercial systems. However, the processing and interpretation of
image data is in its infancy, with very few systems capable of decision making and
action based on visual data. One specific example is understanding of human mo-
tion, which has many applications in robotics. While it is possible for robots to react
to simple gestures, we do not yet have a method for describing and reasoning about
more complex motions, such as a person walking down the street, stooping to pick
up a penny, and being bumped by someone who did not see them stop. This sort of
interpretation requires representation of complex spatial and symbolic relationships
that are beyond currently available tools in areas such as system identification, state
estimation, and signal to symbol translation.
Medical Robotics
Computer and robotic technology is having a revolutionary im-
pact on the practice of medical surgery. By extending surgeons’ ability to plan and
carry out surgical interventions more accurately and in a minimally invasive manner,
computer-aided and robotic surgical systems can reduce surgical and hospital costs,
improve clinical outcomes, and improve the efficiency of health care delivery. The
ability to consistently carry out surgical procedures and to comprehensively log key
patient and procedure outcome data should also lead to long term improvements in
surgical practice.
Robotic technology is useful in a variety of surgical contexts. For example, the
“Robodoc” surgical assistant uses the precision positioning and drilling capabilities
of robots to improve the fit of implants during total hip replacement [4]. The
improved fit leads to significantly fewer complications and longer lasting implants.
Similarly, 3-dimensional imaging data can drive the precision movement of robot
arms during stereotactic brain surgery, thereby reducing the risk of collateral brain
damage. The DaVinci system from Intuitive Surgical uses teleoperation and force-
reflecting feedback methods to enable minimally invasive coronary procedures that
would otherwise require massively invasive chest incisions [31]. Figure 3.8 shows
the ZEUS system developed by Computer Motion, Inc., a modified version of which
was used in 2001 to allow a surgeon in New York to operate on a 68 year old woman
in Strasbourg, France [26]. These are only a few of the currently approved robotic
surgical systems, with many, many more systems in clinical trials and laboratory
development.
While medical robotics is becoming a reality, there are still many open research
and development questions. Clearly, medical robotics will benefit from the same
future advances in computing, communication, sensing, and actuation technology
that will broadly impact all future control systems. However, the issue of system and
software reliability is fundamental to the future of medical robotics. Formal methods
for system verification of these highly nonlinear, hybrid, and uncertain systems, as
well as strategies for extreme fault tolerance are clearly needed to ensure rapid
and widespread adoption of these technologies. Additionally, for the foreseeable
future, robotic medical devices will be assistants to human surgeons. Consequently,
their human/machine interfaces must be able to deal with the complex contexts of
crowded operating rooms in an absolutely reliable way, even during unpredictable
surgical events.
Figure 3.8. The ZEUS (tm) Robotic Surgical System, developed by Computer Motion Inc., is capable of performing minimally invasive microsurgery procedures from a remote location. Photograph courtesy of Computer Motion Inc.
3.4 Biology and Medicine
Feedback is a central feature of life. The process of feedback governs how we grow,
respond to stress and challenge, and regulate factors such as body temperature, blood
pressure, and cholesterol level. The mechanisms operate at every level, from the
interaction of proteins in cells to the interaction of organisms in complex ecologies.
Mahlon B. Hoagland and B. Dodson, from The Way Life Works, 1995 [17].
At a variety of levels of organization—from molecular to cellular to organismal—
biology is becoming more accessible to approaches that are commonly used in
engineering: mathematical modeling, systems theory, computation, and abstract
approaches to synthesis. Conversely, the accelerating pace of discovery in biologi-
cal science is suggesting new design principles that may have important practical
applications in man-made systems. This synergy at the interface of biology and
engineering offers unprecedented opportunities to meet challenges in both areas.
The principles of control are central to many of the key questions in biological
engineering and will play an enabling role in the future of this field.
A major theme identified by the Panel was the science of reverse (and eventu-
ally forward) engineering of biological control networks. There are a wide variety of
biological phenomena that provide a rich source of examples for control, including
gene regulation and signal transduction; hormonal, immunological, and cardiovas-
cular feedback mechanisms; muscular control and locomotion; active sensing, vision,
and proprioception; attention and consciousness; and population dynamics and epi-
demics. Each of these (and many more) provide opportunities to figure out what
works, how it works, and what can be done to affect it.
The Panel also identified potential roles for control in medicine and biomedical
research. These included intelligent operating rooms and hospitals, from raw data to
decisions; image guided surgery and therapy; hardware and soft tissue integration;
fluid flow control for medicine and biological assays; and the development of physical
and neural prostheses. Many of these areas have substantial overlap with robotics
and some have been discussed already in Section 3.3.
We focus in this section on three interrelated aspects of biological systems:
molecular biology, integrative biology, and medical imaging. These areas are rep-
resentative of a larger class of biological systems and demonstrate how principles
from control can be used to understand nature and to build engineered systems.
Molecular Biology

(The Panel would like to thank Eduardo Sontag for his contributions to this section, based on his Reid Prize plenary lecture at the 2001 SIAM Annual Meeting.)

The life sciences are in the midst of a major revolution that will have fundamental implications for biological knowledge and medicine. Genomics has as its objective the complete decoding of DNA sequences, providing a “parts list” for the proteins present in every cell of the organism being studied. Proteomics is the study of the three-dimensional structure of these complex proteins. The shape of a protein determines its function: proteins interact with each other through “lego-like” fitting
of parts in “lock and key” fashion, and their conformation also enhances or represses DNA expression through selective binding.

Figure 3.9. The wiring diagram of the growth signaling circuitry of the mammalian cell [16].
One may view cell life as a huge “wireless” network of interactions among
proteins, DNA, and smaller molecules involved in signaling and energy transfer. As
a large system, the external inputs to a cell include physical signals (UV radiation,
temperature) as well as chemical signals (drugs, hormones, nutrients). Its outputs
include chemicals that affect other cells. Each cell can be thought of, in turn, as
composed of a large number of subsystems involved in cell growth, maintenance,
division, and death. A typical diagram describing this complex set of interactions
is shown in Figure 3.9.
The study of cell networks leads to the formulation of a large number of ques-
tions. For example, what is special about the information-processing capabilities,
or input/output behaviors, of such biological networks? Can one characterize these
behaviors in terms familiar to control theory (e.g., transfer functions or Volterra se-
ries)? What “modules” appear repeatedly in cellular signaling cascades, and what
are their system-theoretic properties? Inverse or “reverse engineering” issues in-
clude the estimation of system parameters (such as reaction constants) as well as
the estimation of state variables (concentration of protein, RNA, and other chemical
substances) from input/output experiments. Generically, these questions may be
viewed respectively as the identification and observer (or filtering) problems which
are at the center of much of control theory.
One can also attempt to better understand the stability properties of the
various cascades and feedback loops that appear in cellular signaling networks. Dy-
namical properties such as stability and existence of oscillations in such networks are
of interest, and techniques from control theory such as the calculation of robustness
margins will play a central role in the future. At a more speculative (but increasingly
realistic) level, one wishes to study the possibility of using control strategies (both
open and closed loop) for therapeutic purposes, such as drug dosage scheduling.
The need for mathematical models in cellular biology has long been recognized,
and indeed many of the questions mentioned above have been studied for the last 20
or 30 years. What makes the present time special is the availability of huge amounts
of data—generated by the genomics and proteomics projects, and research efforts in
characterization of signaling networks—as well as the possibility for experimental
design afforded by genetic engineering tools (gene knock-outs and insertions, PCR)
and measurement technology (green fluorescent protein and other reporters, and
gene arrays). Control-oriented modeling and analysis of feedback interconnections
is an integral component of building effective models of biological systems.
Feedback and uncertainty. From a theoretical perspective, feedback serves to min-
imize uncertainty and increase accuracy in the presence of noise. The cellular en-
vironment is extremely noisy in many ways, while at the same time variations in
levels of certain chemicals (such as transcriptional regulators) may be lethal to the
cell. Feedback loops are omnipresent in the cell and help regulate the appropriate
variations. It is estimated, for example, that in E. coli about 40% of transcription
factors self-regulate. One may ask whether the role of these feedback loops is in-
deed that of reducing variability, as expected from principles of feedback theory.
Recent work tested this hypothesis in the context of tetracycline repressor protein
(TetR) [7]. An experiment was designed in which feedback loops in TetR produc-
tion were modified by genetic engineering techniques, and the increase in variability
of gene expression was correlated with lower feedback “gains,” verifying the role
of feedback in reducing the effects of uncertainty. Modern experimental techniques
will afford the opportunity for testing experimentally (and quantitatively) other
theoretical predictions, and this may be expected to be an active area of study at
the intersection of control theory and molecular biology.
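The hypothesis can be illustrated with a toy simulation in the spirit of the TetR experiment (this is not the model used in [7]; all rate constants are invented). An unregulated gene and a negatively autoregulated one are tuned to the same mean expression level, and the variances of their fluctuations are compared:

```python
import random

def simulate(vmax, feedback_gain, steps=20000, seed=1):
    """Crude discrete-time model of protein copy number with additive noise.

    Production is vmax / (1 + feedback_gain * x): constant when the gain is
    zero (no regulation), repressed by the current level otherwise (negative
    autoregulation). Degradation is first order. Parameters are invented so
    that both cases have the same mean level of about 100 copies.
    """
    rng = random.Random(seed)
    x = 50.0
    levels = []
    for _ in range(steps):
        production = vmax / (1.0 + feedback_gain * x)
        degradation = 0.1 * x
        noise = rng.gauss(0.0, 1.0)
        x = max(x + production - degradation + noise, 0.0)
        levels.append(x)
    tail = levels[steps // 2:]                    # discard the transient
    mean = sum(tail) / len(tail)
    var = sum((v - mean) ** 2 for v in tail) / len(tail)
    return mean, var

mean_open, var_open = simulate(vmax=10.0, feedback_gain=0.0)
mean_fb, var_fb = simulate(vmax=60.0, feedback_gain=0.05)
# At matched mean levels, the autoregulated gene fluctuates less.
```

Linearizing both cases shows why: negative feedback increases the effective restoring rate around the operating point, so the same noise input produces a smaller stationary variance, consistent with the experimental correlation between feedback gain and variability.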
Necessity of embedded structures in regulation loops. Another illustration of the interface between feedback theory and modern molecular biology is provided by recent work on chemotaxis in bacterial motion. E. coli moves, propelled by flagella, in response to gradients of chemical attractants or repellents, performing two basic types of motions: tumbles (erratic turns, with little net displacement) and runs. In this process, E. coli carries out a stochastic gradient search strategy: when sensing increased concentrations it stops tumbling (and keeps running), but when it detects low gradients it resumes tumbling motions (one might say that the bacterium goes into “search mode”).
The chemotactic signaling system, which detects chemicals and directs mo-
tor actions, behaves roughly as follows: after a transient nonzero signal (“stop
tumbling, run toward food”), issued in response to a change in concentration, the
system adapts and its signal to the motor system converges to zero (“OK, tum-
ble”). This adaptation happens for any constant nutrient level, even over large
ranges of scale and system parameters, and may be interpreted as robust (struc-
turally stable) rejection of constant disturbances. The internal model principle of
control theory implies (under appropriate technical conditions) that there must be
an embedded integral controller whenever robust constant disturbance rejection is
achieved. Recent models and experiments succeeded in finding, indeed, this embed-
ded structure [5, 43].
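A minimal simulation illustrates the integral feedback argument (constants invented; real chemotaxis models, as in [5, 43], are far richer): the signal y adapts back to zero after a step input of any size, because the integrator state grows until it exactly cancels the constant input.

```python
def adapt(step_size, k_i=2.0, dt=0.01, t_final=20.0):
    """Integral feedback model of perfect adaptation.

    y is the signal sent to the motor system, z integrates y, and z is
    subtracted from the constant stimulus. At steady state y must be zero,
    regardless of the step size -- robust constant-disturbance rejection.
    """
    z = 0.0
    y_hist = []
    for _ in range(int(t_final / dt)):
        y = step_size - z        # output after internal subtraction
        z += k_i * y * dt        # integral action accumulates until y = 0
        y_hist.append(y)
    return y_hist

small = adapt(1.0)
large = adapt(10.0)
# The transient scales with the step, but both responses adapt to zero.
```

The point of the internal model principle is the converse direction: since the adaptation is robust over large parameter ranges, an integrator of this kind must be embedded somewhere in the signaling network, and that is the structure the cited models and experiments uncovered.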
This work is only one of the many possible uses of control theoretic knowledge
in reverse engineering of cellular behavior. Some of the deepest parts of the theory
concern the necessary existence of embedded control structures, and in this man-
ner one may expect the theory to suggest appropriate mechanisms and validation
experiments for them.
Genetic circuits. Biomolecular systems provide a natural example of hybrid systems, which combine discrete and logical operations (a gene is either turned on or off for transcription) and continuous quantities (such as concentrations of chemicals) in a given cell or in a cell population. Complete hybrid models of basic circuits have been formulated, such as the lysogeny/lysis decision circuit in bacteriophage λ [28].

Current research along these lines concerns itself with the identification of other naturally occurring circuits, as well as with the engineering goal of designing circuits to be incorporated into delivery vehicles (bacteria, for example) for therapeutic purposes. This last goal is, in principle, mathematically in the scope of realization theory, that branch of systems theory which deals with the synthesis of dynamical systems which implement a specified behavior.
Integrative Biology
Control also has a role to play in understanding larger scale organisms, such as
insects and animals. The components of these integrative biological systems are
becoming much better understood and, like molecular systems, it is becoming evident
that systems principles are required to build the next level of understanding. This
understanding of natural systems will enable new approaches to engineered systems,
as we begin to build systems with the efficiency, robustness, and versatility of the
natural world. We focus here on the problem of locomotion, for which there has
been substantial recent work (see [13] for a review).
Integrative studies of locomotion have revealed several general principles that
underlie a wide variety of organisms. These include energy storage and exchange
mechanisms in legged locomotion and swimming, nonpropulsive lateral forces that
benefit stability and maneuverability, and locomotor control systems that combine
mechanical reflexes with multimodal sensory feedback and feedforward control. Lo-
comotion, especially at the extremes of what is found in nature, provides a rich set
of examples that have helped elucidate a variety of structure-function relationships
in biological systems.
Control systems and feedback play a central role in locomotion. A suite of
neurosensory devices is used within the musculoskeletal system and is active
throughout each cycle of locomotion. In addition, the viscoelastic dynamics of the
musculoskeletal system play a critical role in providing rapid feedback paths that
enable stable operation. Rapid feedback from both mechanical and neural pathways
is integrated with information from eyes, ears, noses, and other sensing organs
to control the overall motion of an animal and provide robust operation in a wide
variety of environments.

Figure 3.10. Overview of flight behavior in a fruit fly, Drosophila. (a)
Cartoon of the adult fruit fly showing the three major sensor structures used in
flight: the eyes, antennae, and halteres (which detect angular rotations). (b)
Example flight trajectories over a 1 meter circular arena, with and without
internal targets. (c) A schematic control model of the flight system. Figure and
description courtesy of Michael Dickinson.

(The Panel would like to thank Michael Dickinson for his contributions to this
section.)
The process that gives rise to locomotion is a complex one, as illustrated
in Figure 3.10 for the flight behavior of a fruit fly. Each element of the flight
control system has enormous complexity in itself, with the interconnection (grossly
simplified in the figure) allowing for a very rich set of behaviors. The sensors,
actuators, and control systems for insects such as the fly are highly evolved, so
that the dynamics of the system play strongly into the overall capabilities of the
organism.
From the perspective of control theory, the performance, robustness, and fault
tolerance of the fly’s flight control system represent a gold standard by which all