OUTformation: Distributed Data-Gathering with Feedback under
Unknown Environment and Communication Delay Constraints
SooJean Han$^{1*}$, Michelle Effros$^{1}$, Richard M. Murray$^{1}$
Abstract
— Towards the informed design of large-scale dis-
tributed data-gathering architectures under real-world assump-
tions such as nonzero communication delays and unknown envi-
ronment dynamics, this paper considers the effects of allowing
feedback communication from the central processor to exter-
nal sensors. Using simple but representative state-estimation
examples, we investigate fundamental tradeoffs between the
mean-squared error (MSE) of the central processor’s estimate
of the environment state, and the total power expenditure
per sensor under more conventional architectures without feedback (INformation) versus those with broadcast feedback (OUTformation). The primary advantage of enabling feedback
is that each sensor’s understanding of the central processor’s
estimate improves, which enables each sensor to determine
when and what parts of its current observations to transmit. We
use theory to demonstrate conditions in which OUTformation
maintains the same MSE as INformation with less power
expended on average, and conditions in which OUTformation
obtains less MSE than INformation at additional power cost.
These performance tradeoffs are also considered under settings
where environments undergo less variation, and sensors imple-
ment random backoff times to prevent transmission collisions.
Our results are supported via numerical studies, which show
that the properties derived in theory still hold even when some
of the simplifying assumptions are removed.
I. INTRODUCTION
In previous literature, a variety of architectures for dis-
tributed data-gathering in large-scale cyberphysical network
applications has been proposed with the aim of reducing
unnecessary transmissions of redundant information and
excess computation; see [1], [2] for surveys. However,
many methods consider only structural variations in the
communication among lower-level nodes such as external
sensors (e.g., hierarchical clustering [3], [4]), and have rarely
challenged the imposition that data flows only in a single
direction from sensors on the network’s edge to the higher-
level processor at the network's center. Some architectures allowing feedback communication in the reverse direction, from the central processor to the external sensors, have been developed, but under a variety of simplifying assumptions. In
some feedback schemes for redundant transmission reduction
(e.g., [5]), communication delays are omitted; this disregards issues pertaining to outdated data, which are important for distributed state estimation in real-world large-scale networks.
In others (e.g. [6]), communication delays are included, but
$^1$ SooJean Han, Michelle Effros, and Richard M. Murray are with the Division of Engineering and Applied Science at the California Institute of Technology, Pasadena, CA.
$^*$ Corresponding author. Email: soojean@caltech.edu
This paper is based on work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1745301.
the approach for redundant transmission reduction assumes
full knowledge of the environment dynamics.
While in practice the best choice of architecture for distributed data-gathering varies by setting, imposing simplifying but impractical assumptions makes it difficult to properly motivate feedback and to make fair comparisons between architectures that employ feedback and those that do not. Feedback can provide at least the following
two advantages in large-scale distributed systems. First, by
nature of the distributed architecture, data gathering and
data processing are sequential actions; having a larger delay
between the sensor transmission times and central processor
receipt times can potentially lead to large estimation errors.
Second, for large-scale systems, a large number of sensors
may be needed to survey the environment. But independent
sensors surveying a shared environment are likely to collect
data which exhibits redundancies, and transmitting redundant
information to the central processor is inefficient.
With these motivations in mind, the contributions of our
paper are as follows. We employ a modular framework
for distributed state estimation under two important real-
world constraints: nonzero communication delays and lim-
ited knowledge of the environment dynamics. Because the
type of feedback we consider broadcasts data “out” from the
central processor to the sensors, we refer to this architecture as the OUTformation architecture for data-gathering, as opposed to the INformation architecture, where all information flows from the sensors "in" to the central processor. We
use a modular framework representation of the INformation
and OUTformation architectures to take a first step towards
characterizing their relative performances with respect to two
specific metrics: 1) the mean-squared error (MSE) of the
central processor’s estimate of the environment state, and 2)
the total power expenditure of each sensor. Such a characterization is important because it allows users to determine which type of architecture is more suitable given hardware specifications and limited knowledge of the environment, and it provides essential insights for choosing design parameters that optimize both MSE and power expenditure.
We theoretically analyze simple but insightful case studies
demonstrating fundamental tradeoffs between the two per-
formance metrics, and consider how the tradeoffs vary under
different environment statistics and parameter design choices.
In particular, we show that the main advantage of enabling
feedback is improving each sensor’s estimate of the central
processor’s estimate, which allows each sensor to make
more informed decisions about when and what to transmit,
potentially reducing both 1) its uplink power expenditure,
and 2) the MSE by minimizing the delays after which
the central processor receives its transmissions. We employ
numerical simulations to break some of the simplifying
assumptions made in our theoretical analysis, and show that
the properties derived in theory still hold, which suggests
that the implications of OUTformation architectures extend
beyond the framework treated rigorously by the theory.
II. INFORMATION AND OUTFORMATION
A. Problem Setup
We consider environments which evolve as a random walk given by
$$x((c+1)\Delta t) = x(c\Delta t) + \sum_{i=1}^{n}\delta_i(c)\,d_i(c)\,e_i \qquad (1)$$
Here, $c\in\mathbb{N}$, $\Delta t\in\mathbb{R}^+$ is the environment's transition period, and $e_i\in\mathbb{R}^n$ denotes the $i$th standard basis vector. For each $i\in\{1,\cdots,n\}$ and $c\in\mathbb{N}$, random variable $\delta_i(c)$ takes value $1$ with probability $p/2$, $-1$ with probability $p/2$, and $0$ with probability $1-p$ for some $p\in(0,1)$, and $d(c)\in\mathbb{R}^n$ is such that $d_i(c)\sim\mathcal{U}[\underline{d},\bar{d}]$ is a uniformly-distributed random stepsize between constant values $0<\underline{d}<\bar{d}$.
Definition 1 (Unknown Environment). The titular "unknown environment" refers to the fact that the parameters $\Delta t$, $p$, $\underline{d}$, $\bar{d}$ are unknown to the central processor and all the sensors.
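To make the environment model concrete, the following minimal Python sketch simulates one realization of the random walk (1); the parameter values, function name, and horizon are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def step_environment(x, p, d_lo, d_hi, rng):
    """One transition of the random-walk environment in Eq. (1).

    Each component i independently moves by +d_i or -d_i with probability
    p/2 each, or stays put with probability 1 - p, where
    d_i ~ Uniform[d_lo, d_hi] with 0 < d_lo < d_hi.
    """
    n = x.shape[0]
    delta = rng.choice([-1, 0, 1], size=n, p=[p / 2, 1 - p, p / 2])
    d = rng.uniform(d_lo, d_hi, size=n)
    return x + delta * d

# Illustrative parameters (unknown to the processor/sensors in the paper's setup).
rng = np.random.default_rng(0)
x = np.zeros(15)               # n = 15 state components
for c in range(200):           # 200 environment intervals of length Delta_t
    x = step_environment(x, p=0.1, d_lo=1.0, d_hi=3.0, rng=rng)
```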
We focus on architectures where a single central processor communicates with $M\in\mathbb{N}$ independent external sensors. Each sensor $j\in\{1,\cdots,M\}$ employs a linear measurement equation perturbed by additive white Gaussian noise $w_j(t)\sim\mathcal{N}(0,\sigma^2 I_n)$, where $I_n\in\mathbb{R}^{n\times n}$ is the identity matrix:
$$y_j(a\tau_j) = C_j x(a\tau_j) + w_j(a\tau_j) \qquad (2)$$
Here, $C_j\in\{0,1\}^{m_j\times n}$, with $m_j\in\mathbb{N}$ and $m_j<n$, is such that each row contains exactly one $1$. Each sensor $j$ makes one observation of the environment with sampling period $\tau_j$ timesteps, and $a\in\mathbb{N}$ is any number.
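As an illustration of the measurement model (2), the snippet below builds a selection matrix $C_j$ and draws one noisy observation; the observed component indices and noise level are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def make_selection_matrix(indices, n):
    """C_j in {0,1}^{m_j x n}: row k selects full-state component indices[k]."""
    C = np.zeros((len(indices), n))
    C[np.arange(len(indices)), indices] = 1.0
    return C

rng = np.random.default_rng(1)
n, sigma = 15, 0.5
x = rng.normal(size=n)                      # current environment state
C_j = make_selection_matrix([2, 7, 10], n)  # sensor j observes components 2, 7, 10
y_j = C_j @ x + sigma * rng.normal(size=C_j.shape[0])  # Eq. (2)
```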
Over some experiment duration $[0,T_{\mathrm{sim}})$, $T_{\mathrm{sim}}\in\mathbb{R}^+$, the central processor's objective is to use the transmissions of each sensor to construct an estimate $\hat{x}(t)$ of the environment state $x(t)$. Each sensor $j$'s objective is to determine when and which components $y_{jk}(t)$, $k\in\{1,\cdots,m_j\}$, of $y_j(t)$ it should transmit; we show that this is contingent upon how accurately the sensor can track the central processor's estimate $\hat{x}(t)$.
Assumption 1 (Framework Assumptions). Each sensor $j$ operates on limited power sources (e.g., batteries). Communications in both uplink and downlink channels are noiseless and able to precisely encode/decode real numbers. To prevent potential collisions of transmissions, each sensor $j$ schedules random backoff times $B_{jk}(t)\overset{d}{\sim}B$ for some random variable $B\in[0,\infty)$ distributed according to the cdf $F_B(s)\triangleq\mathbb{P}(B\leq s)$, for each component $k\in\{1,\cdots,m_j\}$ and time $t$. The $B_{jk}(t)$ are distributed independently of each other and of the noise process $w_j(t)$. The central processor performs fusion by averaging over the most recent sensor observations it received to construct an estimate $\hat{x}_i(t)$ for each component $i\in\{1,\cdots,n\}$.

Fig. 1: A modular representation of an OUTformation architecture for distributed state estimation under unknown environments and communication delay constraints, using a single central processor and a network of $M$ external sensors. For each sensor, the verification rule (Definition 5) is highlighted in green, and the transmission rule (Definition 6) is in blue. The broadcast rule (Definition 4) is in red; INformation architectures are depicted by removing the red broadcast arrows.
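A minimal sketch of the fusion step described in Assumption 1, assuming the processor keeps, per component, the latest value received from each sensor and averages over those; the class and data-structure choices are illustrative, not the paper's implementation.

```python
from collections import defaultdict

class CentralProcessor:
    """Fuses received observations by averaging the most recent value per sensor."""

    def __init__(self, n):
        self.n = n
        # latest[i][j] = most recent observation of component i received from sensor j
        self.latest = defaultdict(dict)
        self.x_hat = [0.0] * n

    def receive(self, sensor_id, component, value):
        """Update the estimate of one component upon receiving a transmission."""
        self.latest[component][sensor_id] = value
        vals = self.latest[component].values()
        self.x_hat[component] = sum(vals) / len(vals)
```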
Remark 1.
We emphasize that our goal is not to model
any particular distributed data-gathering architecture in fine
detail, but to provide insights into the distinctions of architec-
tures with and without feedback from the central processor
to the sensors. Thus, while more complex functionalities
(e.g., optimized random backoff strategies [7], consensus-
based fusion [8], change-detection [9]) may be used than
those described in Assumption 1, we focus specifically on
functionalities which need to be implemented differently
based on whether an architecture has enabled feedback or
not.
B. Modular Framework
Under Assumption 1, we distinguish between traditional INformation and OUTformation architectures in the following way. The presence or absence of broadcast feedback (which pushes data "outward" from the central processor to the sensors) changes how accurately each sensor can track the central processor's estimate $\hat{x}(t)$.
Definition 2 (INformation and OUTformation). Under an INformation architecture, each sensor's estimate of $\hat{x}_i(t)$, $i\in\{1,\cdots,n\}$, is its own previous transmission to the central processor. Under an OUTformation architecture, each sensor's estimate of $\hat{x}_i(t)$ is the more recent of either its own previous transmission or the latest broadcast received.
A modular representation of the distributed data-gathering
OUTformation architecture, which we use throughout the
paper, is illustrated in Figure 1.
Notation 1. Parameters are assigned the superscript $(I)$ for INformation architectures, and $(O)$ for OUTformation architectures. Let $\chi$ be a placeholder variable such that $\chi=I$ for an INformation architecture, and $\chi=O$ for OUTformation. We henceforth let $\hat{x}^{(\chi)}\in\mathbb{R}^n$ be the central processor's estimate under architecture $\chi\in\{I,O\}$.
Notation 2. For each sensor $j\in\{1,\cdots,M\}$ and component $k\in\{1,\cdots,m_j\}$, we let $i_k\in\{1,\cdots,n\}$ correspond to the "full state vector component" from which observation $y_{jk}(t)$ is made, i.e., $x_{i_k}(t)\triangleq C_{j,(k,\cdot)}x(t)$, where $C_{j,(k,\cdot)}$ denotes the $k$th row of $C_j$ from (2).
Definition 3 (Shared vs. Unshared Components). For each sensor $j\in\{1,\cdots,M\}$, let $k\in\{1,\cdots,m_j\}$ be a component and $i_k\in\{1,\cdots,n\}$ be the corresponding full-vector component (see Notation 2). Then $k$ is said to be a shared component if $x_{i_k}(t)$ is sensed by other sensors, and an unshared component if $x_{i_k}(t)$ is sensed by only $j$. Define the set $\mathcal{U}_j\subseteq\{1,\cdots,m_j\}$ to be the set of sensor $j$'s unshared components, and $\mathcal{S}_j\triangleq\mathcal{U}_j^c$ to be the set of sensor $j$'s shared components.
Definition 4 (Broadcast Rule). OUTformation architectures employ the following broadcast rule (red in Figure 1): a component $i\in\{1,\cdots,n\}$ is scheduled to be broadcast if $\hat{x}^{(O)}_i(t)\neq\hat{x}^{(O)}_i(s)$, where $s<t$ is the time of the previous broadcast. All scheduled values $\hat{x}^{(O)}_i(t)$ are collected to form the dimension-reduced broadcast vector $\hat{x}^{(b)}(t)\in\cup_{i=1}^{n}\mathbb{R}^i$ and sent to the sensors at once.
Assumption 2 (Components Being Broadcast). Broadcasts are made only for components which are shared in the sense of Definition 3.
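The broadcast rule of Definition 4, restricted to shared components as in Assumption 2, can be sketched as follows; the dictionary-based packet format and argument names are illustrative assumptions.

```python
def broadcast_rule(x_hat, last_broadcast, shared_components):
    """Return the dimension-reduced broadcast packet of Definition 4.

    Only shared components (Assumption 2) whose estimate changed since the
    previous broadcast are included; the packet is sent to all sensors at once.
    """
    packet = {}
    for i in shared_components:
        if x_hat[i] != last_broadcast.get(i):
            packet[i] = x_hat[i]
            last_broadcast[i] = x_hat[i]
    return packet
```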
Definition 5 (Verification Rule). For $\chi\in\{I,O\}$ as in Notation 1 and each sensor $j\in\{1,\cdots,M\}$, we define $T_j^{(\chi)}$, a multiple of the sampling period $\tau_j$ from (2), to be the verification period under architecture $\chi$. At time $t\triangleq aT_j^{(\chi)}$, $a\in\mathbb{N}$, the verification rule schedules the transmission of $y_{jk}(t)$, $k\in\{1,\cdots,m_j\}$, for time $t+B_{jk}(t)$ if $\beta^{(\chi)}_{jk}(t)=1$, where $B_{jk}(t)$ is the random backoff time from Assumption 1. Under an INformation architecture, we use:
$$\beta^{(I)}_{jk}(t) = \mathbb{1}\{|y_{jk}(s_{jk}) - y_{jk}(t)| \geq \epsilon\} \qquad (3)$$
while under an OUTformation architecture, we use:
$$\beta^{(O)}_{jk}(t) = \begin{cases} \mathbb{1}\{|\hat{x}^{(b)}_{i_k}(t) - y_{jk}(t)| \geq \epsilon\} & \text{if } s^{*} \geq s_{jk} \\ \beta^{(I)}_{jk}(t) & \text{else} \end{cases} \qquad (4)$$
Here, $s_{jk}<t$ is the time of sensor $j$'s previous transmission of component $k$, and $s^{*}<t$ is the time the broadcast value $\hat{x}^{(b)}_{i_k}(t)$ was created. Threshold parameter $\epsilon>0$ is chosen by design. The special case of event-triggered verification occurs when the verification rule is checked with every observation sensor $j$ generates, i.e., $T_j^{(I)}=T_j^{(O)}=\tau_j$.
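A sketch of the verification indicators (3)-(4), assuming the threshold symbol is $\epsilon$ as reconstructed above; the bookkeeping arguments (last transmitted value, latest broadcast value and its creation time) are illustrative.

```python
def beta_IN(y_now, y_last_sent, eps):
    """Verification rule (3): trigger if the observation moved by at least eps."""
    return abs(y_last_sent - y_now) >= eps

def beta_OUT(y_now, y_last_sent, bcast_value, bcast_time, last_sent_time, eps):
    """Verification rule (4): compare against the broadcast value if it is
    newer than the sensor's own previous transmission."""
    if bcast_value is not None and bcast_time >= last_sent_time:
        return abs(bcast_value - y_now) >= eps
    return beta_IN(y_now, y_last_sent, eps)
```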
Definition 6 (Transmission Rules). Let $a\in\mathbb{N}$, and let $\chi\in\{I,O\}$ be the placeholder variable defined in Notation 1. For each sensor $j$, let $T_j^{(\chi)}$ be the verification period from Definition 5. At time $t+B_{jk}(t)$, where $t\triangleq aT_j^{(\chi)}$ and $B_{jk}(t)$ is the random backoff time from Assumption 1, the transmission rule transmits $y_{jk}(t)$ for every component $k\in\{1,\cdots,m_j\}$ if $\beta^{(\chi)}_{jk}(t)=1$ and $\gamma^{(\chi)}_{jk}(t,B_{jk}(t))=1$.
Essentially, the transmission rule is a re-checking of the verification rule in case the sensor receives more up-to-date data about the central processor's estimate. Because each sensor only has its own local observations under an INformation architecture, we have:
$$\gamma^{(I)}_{jk}(t,B_{jk}(t)) = \beta^{(I)}_{jk}(t) \qquad (5)$$
while under an OUTformation architecture, we use:
$$\gamma^{(O)}_{jk}(t,B_{jk}(t)) = \mathbb{1}\{\beta^{(O)}_{jk}(t)=1\}\wedge\begin{cases} \mathbb{1}\{|\hat{x}^{(b)}_{i_k}(t+B_{jk}(t)) - y_{jk}(t)| \geq \epsilon\} & \text{if } s^{**} \geq s_{jk} \\ \mathbb{1}\{|y_{jk}(s_{jk}) - y_{jk}(t)| \geq \epsilon\} & \text{else} \end{cases} \qquad (6)$$
Here, $s_{jk}<t$ is the time of sensor $j$'s previous transmission of component $k$, and $s^{**}<t+B_{jk}(t)$ is the time the broadcast value $\hat{x}^{(b)}_{i_k}(t+B_{jk}(t))$ was created. Threshold parameter $\epsilon>0$ is chosen by design.
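Continuing the sketch above, the re-check at transmission time in (5)-(6) can be written as follows; again $\epsilon$ and the bookkeeping arguments are illustrative, with the broadcast arguments referring to the freshest broadcast available at the end of the backoff interval.

```python
def gamma_IN(beta_in_value):
    """Transmission rule (5): under INformation no new information arrives
    during the backoff, so the decision equals the verification decision."""
    return beta_in_value

def gamma_OUT(beta_out_value, y_now, y_last_sent,
              bcast_value, bcast_time, last_sent_time, eps):
    """Transmission rule (6): re-check against the broadcast if it is newer
    than the sensor's own previous transmission, else against that transmission."""
    if not beta_out_value:
        return False
    if bcast_value is not None and bcast_time >= last_sent_time:
        return abs(bcast_value - y_now) >= eps
    return abs(y_last_sent - y_now) >= eps
```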
Notation 3. Under Assumption 1, the three types of architectures we compare are specified below:
1) Pure INformation architecture $\mathrm{IN}_0$: each sensor $j$ implements verification rule (3) with verification period $T_j^{(I)}$, and transmission rule (5) with zero threshold $\epsilon=0$.
2) Absolute-Difference INformation architecture $\mathrm{IN}(\epsilon)$: each sensor $j$ implements verification rule (3) with verification period $T_j^{(I)}$, and transmission rule (5) with fixed threshold $\epsilon>0$.
3) OUTformation architecture $\mathrm{OUT}(\epsilon)$: each sensor $j$ implements verification rule (4) with verification period $T_j^{(O)}$, transmission rule (6) with fixed threshold $\epsilon>0$, and the central processor uses the broadcast rule from Definition 4.
Definition 7 (Rates per Component). For sensor $j$ under architecture $\chi\in\{I,O\}$, let $U^{(\chi)}_{jk}(s:t), D^{(\chi)}_{jk}(s:t)\in\mathbb{N}$ be the cumulative number of transmissions and number of broadcasts received, respectively, for component $k\in\{1,\cdots,m_j\}$ over the interval of time $[s,t)$. The communication delay incurred per component $k$ is denoted $\Delta t^{(u)}\in\mathbb{R}^+$ for uplink transmission and $\Delta t^{(d)}\in\mathbb{R}^+$ for downlink broadcast. For each sensor, the rate of power expended per component $k$ is $P_U\in\mathbb{R}^+$ for transmission and $P_D\in\mathbb{R}^+$ for receipt.
Definition 8 (Performance Metrics). The mean-squared error (MSE) of the environment state vector over experiment duration $[0,T_{\mathrm{sim}})$ under architecture $\chi\in\{I,O\}$ is given by
$$\frac{1}{T_{\mathrm{sim}}}\sum_{t=0}^{T_{\mathrm{sim}}}\sum_{i=1}^{n}\left|x_i(t)-\hat{x}^{(\chi)}_i(t)\right|^2 \qquad (7)$$
The total amount of power expenditure per sensor $j$ under architecture $\chi$ is given by:
$$R^{(\chi)}_j(0:T_{\mathrm{sim}}) = P_U\,U^{(\chi)}_j(0:T_{\mathrm{sim}}) + P_D\,D^{(\chi)}_j(0:T_{\mathrm{sim}}) \qquad (8)$$
where the power expenditure rates $P_U, P_D$ are from Definition 7, and $U^{(\chi)}_j(s:t)\triangleq\sum_{k=1}^{m_j}U^{(\chi)}_{jk}(s:t)$, and likewise $D^{(\chi)}_j(s:t)\triangleq\sum_{k=1}^{m_j}D^{(\chi)}_{jk}(s:t)$.
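The two performance metrics of Definition 8 can be computed directly from a logged simulation run, as in the sketch below; the array shapes and argument names are illustrative.

```python
import numpy as np

def mse(x_traj, x_hat_traj):
    """Eq. (7): time-averaged squared estimation error over all components.

    x_traj, x_hat_traj: arrays of shape (T_sim, n).
    """
    return np.mean(np.sum((x_traj - x_hat_traj) ** 2, axis=1))

def total_power(num_uplinks_j, num_downlinks_j, P_U, P_D):
    """Eq. (8): total power expenditure of sensor j over the experiment."""
    return P_U * num_uplinks_j + P_D * num_downlinks_j
```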
III. THEORETICAL ANALYSIS
With the setup described above, we now characterize the tradeoff space between the MSE and sensor power expenditure metrics (see Definition 8) for the three architectures in Notation 3. In this section, we introduce theory to analyze $\mathrm{IN}(\epsilon)$ and $\mathrm{OUT}(\epsilon)$ under the following simplified version of the original problem setup in Section II-A. Later, in Section IV, we numerically study all three architectures for the original setup of Section II-A.
Setting 1 (Two Sensors over One-Component Change Environment). We consider the true environment dynamics (1) when the vector of indicators $[\delta_1(c),\cdots,\delta_n(c)]^T$ can only take values from the set $\{0_n, e_1,\cdots,e_n,-e_1,\cdots,-e_n\}$ for all $c\in\mathbb{N}$, where $0_n$ denotes the zero vector in $\mathbb{R}^n$. Both types of architectures operate with $M=2$ sensors, where each sensor $j\in\{1,2\}$ has $m_j\in\mathbb{N}$ components. Here, $m'<\min(m_1,m_2)$ components are shared in the sense of Definition 3 ($\mathcal{S}\triangleq\mathcal{S}_1=\mathcal{S}_2$, $|\mathcal{S}|=m'$) and the remaining $m_j-m'$ are unshared ($|\mathcal{U}_j|=m_j-m'$). The sensors have the same sampling period $\tau\triangleq\tau_1=\tau_2$ (see (2)) such that $\Delta t/\tau\in\mathbb{N}$, and use event-triggered verification $T_j^{(I)}=T_j^{(O)}=\tau$ (see Definition 5). Because there are only two sensors in the network, sensor $1$ transmits immediately ($B_{1k}(t)=0$) while sensor $2$ performs random backoff according to the cdf $F_B$ prescribed in Assumption 1.
Notation 4. For each sensor $j\in\{1,2\}$ and each component $k\in\{1,\cdots,m_j\}$, let $p^{(\chi)}_{jk}(t,B_{jk}(t))\triangleq\mathbb{P}(\gamma^{(\chi)}_{jk}(t,B_{jk}(t))=1)$ be the probability that component $k$ is transmitted uplink by sensor $j$ at time $t$ under architecture $\chi$, using (5) for $\chi=I$ and (6) for $\chi=O$. Furthermore, for simplicity of expression, every interval $[c\Delta t,(c+1)\Delta t)$, where $c\in\mathbb{N}$ and $\Delta t$ is from (1), is referred to as environment interval $c$.
Lemma 1 (Power from Unshared Components). For sensor $j\in\{1,2\}$ under Setting 1, the expected total power expended by unshared components over environment interval $c\in\mathbb{N}$ is equivalent under $\mathrm{IN}(\epsilon)$ or $\mathrm{OUT}(\epsilon)$, and given by
$$\mathbb{E}\big[R^{(\chi)}_{j,\mathcal{U}_j}(c\Delta t:(c+1)\Delta t)\big] = P_U\sum_{k\in\mathcal{U}_j}\Bigg(\sum_{h=0}^{\Delta t/\tau}\big(1-F_B(\Delta t-h\tau-\Delta t^{(u)})\big)\,p^{(I)}_{jk}\big(t_{c-1}(h),B_{jk}(t_{c-1}(h))\big) + \sum_{h=1}^{\Delta t/\tau-1}F_B(\Delta t-h\tau-\Delta t^{(u)})\,p^{(I)}_{jk}\big(t_c(h),B_{jk}(t_c(h))\big)\Bigg) \qquad (9)$$
where $t_c(h)\triangleq c\Delta t+h\tau$ for any $c,h\in\mathbb{N}$ is a placeholder, and $R^{(\chi)}_{j,\mathcal{U}_j}(c\Delta t:(c+1)\Delta t)$ is the part of the power (8) contributed by unshared components.
Proof. Under Definition 4, no broadcasts are made for unshared components and (6) is equivalent to (5) when $k\in\mathcal{U}_j$. Hence, $\mathrm{IN}(\epsilon)$ and $\mathrm{OUT}(\epsilon)$ have the same uplink communications for unshared components, and their contributed total power expenditures are equivalent. For placeholder $a\in\{c-1,c\}$, let $X^{(\chi)}_{jk}(a,h)$, $h\in\{0,\cdots,\Delta t/\tau-1\}$, be the indicator denoting whether or not $y_{jk}(a\Delta t+h\tau)$ is received during environment interval $c$ under architecture $\chi\in\{I,O\}$. By Assumption 1, $y_{jk}((c-1)\Delta t+h\tau)$ is received during the interval with probability $1-F_B(\Delta t-h\tau-\Delta t^{(u)})$, and $y_{jk}(c\Delta t+h\tau)$ with probability $F_B(\Delta t-h\tau-\Delta t^{(u)})$. The result follows by the definition of $p^{(\chi)}_{jk}(t,B_{jk}(t))$ in Notation 4 and $\mathbb{E}[U^{(\chi)}_{jk}(c\Delta t:(c+1)\Delta t)] = \sum_{h=0}^{\Delta t/\tau}\mathbb{E}[X_{jk}(c-1,h)] + \sum_{h=1}^{\Delta t/\tau-1}\mathbb{E}[X_{jk}(c,h)]$.
Because both $\mathrm{IN}(\epsilon)$ and $\mathrm{OUT}(\epsilon)$ have the same uplink communications for unshared components, the central processor has the same amount of data about the environment at each time $t$. Thus, we also have the following result.
Lemma 2 (MSE from Unshared Components). Using the setup of Lemma 1, suppose $\hat{x}^{(\chi)}(t)$ is constructed via the fusion rule from Assumption 1 under $\mathrm{IN}(\epsilon)$ when $\chi=I$ and $\mathrm{OUT}(\epsilon)$ when $\chi=O$. Then
$$\sum_{i\in\mathcal{N}}\left|x_i(t)-\hat{x}^{(I)}_i(t)\right|^2 = \sum_{i\in\mathcal{N}}\left|x_i(t)-\hat{x}^{(O)}_i(t)\right|^2 \qquad (10)$$
where $\mathcal{N}\triangleq\{i_k\in\{1,\cdots,n\}\,|\,k\in\mathcal{U}_1\cup\mathcal{U}_2\}$, with $i_k$ denoting the full state vector component corresponding to component $k$ (see Notation 2).
The more interesting comparison arises for shared components $k\in\mathcal{S}$. To obtain concrete mathematical expressions, we present the next three results over a specific chosen interval of communications $[T_0,T_f)\subset[0,T_{\mathrm{sim}})$ such that 1) $\mathrm{IN}(\epsilon)$ and $\mathrm{OUT}(\epsilon)$ begin from the same performance metric values at time $T_0$, and 2) the sample path of communications for shared component $k$ behaves exactly like the noiseless case. By deriving conditions for performance improvement of $\mathrm{OUT}(\epsilon)$ over $\mathrm{IN}(\epsilon)$ during $[T_0,T_f)$, we note the implication that this improvement cascades over the longer interval $[T_0,T_{\mathrm{sim}})$.
Setting 2 (Interval of Communications I). Under Setting 1, choose the specific interval $[T_0,T_f)$ to be environment interval $c$ (i.e., $T_0\triangleq c\Delta t$, $T_f\triangleq(c+1)\Delta t$) such that the following occurs. The environment evolves such that $[\delta_1(c),\cdots,\delta_n(c)]=e_{i_k}$ for full state vector component $i_k\in\{1,\cdots,n\}$ defined in Notation 2, for a component $k\in\mathcal{S}$ which is shared in the sense of Definition 3. The sample path of observations $Y_k\triangleq\{y_{jk}(t),\ t\in[0,(c+1)\Delta t),\ j\in\{1,2\}\}$ for $k$ is such that for $\chi\in\{I,O\}$, $\beta^{(\chi)}_{jk}(c\Delta t)=1$ and $\beta^{(\chi)}_{jk}(c\Delta t+h\tau)=0$ for all $h\leq H-1$; here, $H\in\mathbb{N}$ is defined in Setting 1, and $\beta^{(\chi)}_{jk}$ is the verification rule from Definition 5.
Theorem 1 (Power from Shared Components). Suppose Setting 2 holds for shared component $k\in\mathcal{S}$ over environment interval $c\in\mathbb{N}$ defined in Notation 4. Then the expected difference between the power expended under $\mathrm{IN}(\epsilon)$ and $\mathrm{OUT}(\epsilon)$ is given by
$$\mathbb{E}\big[R^{(I)}(c\Delta t:(c+1)\Delta t) - R^{(O)}(c\Delta t:(c+1)\Delta t)\big] = (P_U-P_D)\big(1-F_B(\Delta t^{(u)}+\Delta t^{(d)})\big)\,\mathbb{P}(|W_1-W_2|\geq\epsilon) \qquad (11)$$
where $W_1,W_2\sim\mathcal{N}(0,\sigma^2)$ are independent random variables with $\sigma$ defined in (2), $\Delta t^{(u)},\Delta t^{(d)},P_U,P_D$ are the communication delays and power expenditure rates in Definition 7, $F_B$ is the random backoff time distribution from Setting 1, and $R^{(\chi)}(c\Delta t:(c+1)\Delta t)$ is the power expended over both sensors.
Proof. By the random backoff strategy of Setting 1 and communications of Setting 2, sensor $1$ transmits $y_{1k}(c\Delta t)$ at time $c\Delta t$ while sensor $2$ observes $y_{2k}(c\Delta t)$ and schedules a transmission at time $c\Delta t+B_{2k}(c\Delta t)$. Sensor $2$'s scheduled transmission is cancelled only if the central processor broadcasts $y_{1k}(c\Delta t)$ before time $c\Delta t+B_{2k}(c\Delta t)$, which occurs with probability $1-F_B(\Delta t^{(u)}+\Delta t^{(d)})$ under Setting 1, and if $|y_{1k}(c\Delta t)-y_{2k}(c\Delta t)|\geq\epsilon$, which occurs with probability $\mathbb{P}(|W_1-W_2|\geq\epsilon)$ by (2). The result follows from the fact that $P_U-P_D$ is the difference between transmitting and broadcasting an extra component (see Definition 7).
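Expression (11) is straightforward to evaluate numerically; the sketch below uses the fact that $W_1-W_2\sim\mathcal{N}(0,2\sigma^2)$ and adopts an exponential backoff distribution as a purely illustrative choice for $F_B$.

```python
from math import erf, exp, sqrt

def prob_abs_diff_exceeds(eps, sigma):
    """P(|W1 - W2| >= eps) for independent W1, W2 ~ N(0, sigma^2);
    the difference W1 - W2 is N(0, 2 sigma^2)."""
    std = sqrt(2.0) * sigma
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(eps / (std * sqrt(2.0)))))

def expected_power_gap(P_U, P_D, dt_u, dt_d, eps, sigma, backoff_rate):
    """Right-hand side of (11), with F_B taken to be Exponential(backoff_rate)."""
    F_B = 1.0 - exp(-backoff_rate * (dt_u + dt_d))
    return (P_U - P_D) * (1.0 - F_B) * prob_abs_diff_exceeds(eps, sigma)
```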
Theorem 2 (MSE from Shared Components). Suppose Setting 2 holds for shared component $k\in\mathcal{S}$ over environment interval $c\in\mathbb{N}$. Let $\hat{x}^{(\chi)}(t)$ be constructed via the fusion rule from Assumption 1, and suppose that $\hat{x}_{i_k}(c\Delta t)\triangleq\hat{x}^{(I)}_{i_k}(c\Delta t)=\hat{x}^{(O)}_{i_k}(c\Delta t)$. Then the probability that the MSE contributed by component $i_k$, defined in Notation 2, under $\mathrm{OUT}(\epsilon)$ is less than that under $\mathrm{IN}(\epsilon)$ is given by:
$$\big(1-F_B(\Delta t^{(u)}+\Delta t^{(d)})\big)\cdot\mathbb{P}\Big(\tfrac{1}{2}W_1W_2+\tfrac{1}{4}W_2^2-\tfrac{3}{4}W_1^2>0,\ |W_1-W_2|<\epsilon\Big) \qquad (12)$$
where $\Delta t^{(u)},\Delta t^{(d)}$ are the communication delays in Definition 7, $F_B$ is the random backoff time distribution from Setting 1, and $W_1,W_2\sim\mathcal{N}(0,\sigma^2)$ are independent random variables with $\sigma$ from (2).
Proof. For simplicity, we exclude the factor of $1/T_{\mathrm{sim}}$ in (7) throughout our proof. Under Setting 2, the MSE of component $i_k$ during environment interval $c$ under $\mathrm{IN}(\epsilon)$ is:
$$(\hat{x}_{i_k}(c\Delta t)-x_{i_k}(c\Delta t))^2\,\Delta t^{(u)} + (y_{1k}(c\Delta t)-x_{i_k}(c\Delta t))^2\,(c\Delta t+B_{2k}(c\Delta t)) + \Big(\tfrac{1}{2}\big(y_{1k}(c\Delta t)+y_{2k}(c\Delta t)\big)-x_{i_k}(c\Delta t)\Big)^2(\Delta t-B_{2k}(c\Delta t)-\Delta t^{(u)}) \qquad (13)$$
Under $\mathrm{OUT}(\epsilon)$, the MSE is the same as (13) if sensor $2$ transmits $y_{2k}(t)$; this occurs when
$$\{B_{2k}(c\Delta t)<\Delta t^{(u)}+\Delta t^{(d)}\}\cup\{B_{2k}(c\Delta t)\geq\Delta t^{(u)}+\Delta t^{(d)},\ |y_{1k}(c\Delta t)-y_{2k}(c\Delta t)|\geq\epsilon\} \qquad (14)$$
Fig. 2: A simple empirical illustration of the results of Theorems 1 and 2 when $\sigma=0$ over environment (1): time-evolution of the communications between a central processor and $M=2$ sensors under Absolute-Difference INformation architecture $\mathrm{IN}(\epsilon)$ [Top] and OUTformation architecture $\mathrm{OUT}(\epsilon)$ [Bottom] and event-triggered verification. Sensor $1$ transmissions are in blue, sensor $2$ in green, and central processor broadcasts in red. Under $\mathrm{OUT}(\epsilon)$, if the central processor broadcasts an update from sensor $1$, sensor $2$ may cancel its scheduled backoff transmission (e.g., $t=40,60,\cdots,120,160,\cdots$). The slopes of each line vary because communication delays are proportional to the number of components transmitted/broadcast (see Definition 7).
When sensor $2$ does not transmit, the MSE is:
$$(\hat{x}_{i_k}(c\Delta t)-x_{i_k}(c\Delta t))^2\,\Delta t^{(u)} + (y_{1k}(c\Delta t)-x_{i_k}(c\Delta t))^2\,((c+1)\Delta t-\Delta t^{(u)}) \qquad (15)$$
From (2), $(1/2)(y_{1k}(t)+y_{2k}(t))-x_{i_k}(t)=(1/2)(W_1+W_2)$ for any $t$ and two independent $W_1,W_2\sim\mathcal{N}(0,\sigma^2)$; likewise, $y_{1k}(t)-x_{i_k}(t)=W_1$. Hence, the difference between (13) and (15) yields $(\Delta t-B_{2k}(c\Delta t)-\Delta t^{(u)})\big((1/4)(W_1+W_2)^2-W_1^2\big)$, which is positive under Setting 1 if $(1/4)(W_1+W_2)^2-W_1^2>0$. Combining this with (14) yields our result.
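The joint probability appearing in (12) has no simple closed form, but it is easy to estimate by Monte Carlo; the sketch below does so for illustrative values of $\sigma$ and $\epsilon$, using the equivalent factored form of the event, $(1/4)(W_1+W_2)^2-W_1^2>0$, from the proof.

```python
import numpy as np

def prob_out_beats_in(sigma, eps, num_samples=1_000_000, seed=0):
    """Monte Carlo estimate of P((1/4)(W1+W2)^2 - W1^2 > 0, |W1 - W2| < eps)
    for independent W1, W2 ~ N(0, sigma^2), the probability term in (12)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, sigma, num_samples)
    W2 = rng.normal(0.0, sigma, num_samples)
    event = (0.25 * (W1 + W2) ** 2 - W1 ** 2 > 0) & (np.abs(W1 - W2) < eps)
    return event.mean()

# Multiply by (1 - F_B(dt_u + dt_d)) to obtain the full expression (12).
print(prob_out_beats_in(sigma=0.5, eps=1.0))
```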
Remark 2 (Implications of Theorems 1 and 2). In Figure 2, we demonstrate the cascading effect of the performance improvements derived in Theorems 1 and 2 by considering a longer interval of time than Setting 2. For significance, the original setting of multiple simultaneous component changes (see Section II-A) with no measurement noise ($\sigma=0$) is plotted. For a general number $M\geq 2$ of sensors with small $\sigma$ in (2), multiple sensors may decrease in both uplink and downlink power at the expense of additional downlink power for one sensor, and Theorem 1 suggests an overall decrease in total power expenditure using $\mathrm{OUT}(\epsilon)$ over $\mathrm{IN}(\epsilon)$. Consequently, Theorem 2 and Figure 2 suggest that the difference in MSE performance between $\mathrm{OUT}(\epsilon)$ and $\mathrm{IN}(\epsilon)$ is always zero when $\sigma=0$ in (2); when $\sigma\neq 0$, $\mathrm{OUT}(\epsilon)$ yields less MSE than $\mathrm{IN}(\epsilon)$ with probability (12).
An interesting MSE advantage of $\mathrm{OUT}(\epsilon)$ over $\mathrm{IN}(\epsilon)$ arises when we use larger verification periods $T_j^{(I)}>\tau_j$, defined in Definition 5. Here, instead of Setting 2, we use the following interval of communications.
Setting 3 (Interval of Communications II). We consider the original dynamics (1) with nonzero random backoff time for both sensors (see Assumption 1). For sampling time $\tau_j$ defined in (2) and verification periods defined in Definition 5, let $T_j^{(I)}=T_j^{(O)}=T_j>\tau_j$ for sensor $j\in\{1,2\}$. Choose interval $[T_0,T_f)$, $T_0\triangleq(a_1-1)T_1$ and $T_f\triangleq a_1T_1$, such that $(a_1-1)T_1<a_2T_2<a_1T_1$ for some $a_1,a_2\in\mathbb{N}$. The environment (1) evolves such that 1) one shared component $k'\in\mathcal{S}$ changes value once by magnitude $d'$ during $((a_1-1)T_1,a_2T_2]$ and remains constant during $(a_2T_2,a_1T_1]$, 2) one unshared component $k_1\in\mathcal{U}_1$ of sensor $1$ remains constant during $((a_1-1)T_1,a_2T_2]$ and changes value once by magnitude $d_1$ during $(a_2T_2,a_1T_1]$, and 3) no other full-state components $i\in\{1,\cdots,n\}\setminus\{i_{k_1},i_{k'}\}$ (see Notation 2) change value over $[T_0,T_f)$. Here, $d_1,d'\sim\mathcal{U}[\underline{d},\bar{d}]$ via the stepsize distribution from (1). The sample path of observations $Y\triangleq\{y_j(t),\ t\in[0,(a_1-1)T_1)\cup[T_0,T_f),\ j\in\{1,2\}\}$ is such that $\beta^{(\chi)}_{1k_1}(a_1T_1)=1$ and $\beta^{(\chi)}_{jk'}(a_jT_j)=1$ for $j\in\{1,2\}$, where $\beta^{(\chi)}_{jk}$ is the verification rule from Definition 5.
Theorem 3 (MSE with Longer Verification Period). Consider the interval defined in Setting 3. Let $\hat{x}^{(\chi)}(t)$ be constructed via the fusion rule from Assumption 1 such that $\hat{x}_{i_k}(T_0)\triangleq\hat{x}^{(I)}_{i_k}(T_0)=\hat{x}^{(O)}_{i_k}(T_0)$, where $T_0$ and $k\in\{k_1,k'\}$ are defined in Setting 3. Then the MSE contributed by component $i_{k_1}$ under $\mathrm{OUT}(\epsilon)$ is less than that under $\mathrm{IN}(\epsilon)$ during interval $[T_0,T_f)$ with probability:
$$\mathbb{P}\big(|d_1|>0,\ |W_2'-W_1'|<\epsilon\big)\cdot\big(1-G_{B_1,B_2}(a_1T_1-a_2T_2+\Delta t^{(u)}+\Delta t^{(d)})\big) \qquad (16)$$
Here, $G_{B_1,B_2}(s)\triangleq\mathbb{P}(B_1-B_2\leq s)$ for two independent $B_1,B_2$ distributed according to $F_B$ from Assumption 1, $W_1,W_1',W_2,W_2'\sim\mathcal{N}(0,\sigma^2)$ are independent random variables with $\sigma>0$ from (2), and the communication delays $\Delta t^{(u)},\Delta t^{(d)}$ are in Definition 7.
Proof. As in the proof of Theorem 2, we exclude the factor of $1/T_{\mathrm{sim}}$ from (7). Under $\mathrm{OUT}(\epsilon)$ in Setting 3, sensor $1$ receives a broadcast update $y_{2k'}(a_2T_2)$ about component $k'$ if
$$\{B_{2k'}(a_2T_2)-B_{1k'}(a_1T_1)\geq a_1T_1-a_2T_2+\Delta t^{(u)}+\Delta t^{(d)}\}\cap\{|y_{2k'}(a_2T_2)-y_{1k'}(a_1T_1)|<\epsilon\} \qquad (17)$$
If (17) holds, sensor $1$ only transmits for component $k_1$. Under $\mathrm{IN}(\epsilon)$, sensor $1$ transmits both components $k'$ and $k_1$. The central processor receives $y_{1k_1}(a_1T_1)$ with an additional uplink delay of $\Delta t^{(u)}$ compared to $\mathrm{OUT}(\epsilon)$. Thus, the difference in MSE of component $i_{k_1}$ between $\mathrm{OUT}(\epsilon)$ and $\mathrm{IN}(\epsilon)$ during interval $[T_0,T_f)$ is:
$$\Delta t^{(u)}\Big((y_{1k_1}(a_1T_1)-x_{i_{k_1}}(c\Delta t))^2 - (\hat{x}_{i_{k_1}}(a_1T_1)-x_{i_{k_1}}(c\Delta t))^2\Big) \qquad (18)$$
Fig. 3: [Top] A version of Figure 2 under original dynamics (1) with longer verification periods $T_1^{(\chi)}=23$, $T_2^{(\chi)}=41$, $\chi\in\{I,O\}$. All three architectures $\mathrm{IN}_0$, $\mathrm{IN}(\epsilon)$, and $\mathrm{OUT}(\epsilon)$ of Notation 3 are plotted. Note three times where the uplink communications of $\mathrm{OUT}(\epsilon)$ are less than $\mathrm{IN}(\epsilon)$: 1) at $t=92,138$, where sensor $1$ cancels its scheduled transmission, and 2) at time $t=161$, where the slope of sensor $1$'s uplink transmission is steeper. This suggests that updates from sensor observations are received more quickly on average under $\mathrm{OUT}(\epsilon)$ than $\mathrm{IN}(\epsilon)$. [Bottom] The evolution of components $7,10,12$ of $x(t)$ and $\hat{x}^{(\chi)}(t)$ over time. Since the central processor receives transmissions more quickly, $\hat{x}^{(O)}(t)$ tracks $x(t)$ more accurately than $\hat{x}^{(I)}(t)$.
where $c\in\mathbb{N}$ is such that $c\Delta t\leq a_1T_1$ is the time of the last true change in component $k_1$ which was not transmitted to the central processor, with $\Delta t$ defined in (1). By Setting 3 and (2), we have $y_{1k_1}(a_1T_1)-x_{i_{k_1}}(c\Delta t)=W_1$, $\hat{x}_{i_{k_1}}(a_1T_1)-x_{i_{k_1}}(c\Delta t)=d_1+W_1$, and $y_{2k'}(a_2T_2)-y_{1k'}(a_1T_1)=W_2'-W_1'$ for some independent $W_1,W_1',W_2'\sim\mathcal{N}(0,\sigma^2)$. Combined with (17), the result follows.
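As with (12), the probability (16) is easy to estimate numerically; the sketch below assumes an exponential backoff distribution for $F_B$ purely for illustration, and uses the fact that $\mathbb{P}(|d_1|>0)=1$ under the paper's stepsize assumption $0<\underline{d}$, so that only the noise and backoff factors need to be estimated.

```python
import numpy as np

def prob_theorem3(sigma, eps, gap, dt_u, dt_d, backoff_rate,
                  num_samples=1_000_000, seed=0):
    """Monte Carlo estimate of (16) with F_B ~ Exponential(backoff_rate).

    gap = a_1*T_1 - a_2*T_2 from Setting 3; the d_1 factor equals 1 because
    the stepsize distribution U[d_lo, d_hi] satisfies d_lo > 0.
    """
    rng = np.random.default_rng(seed)
    # P(|W2' - W1'| < eps) with W1', W2' ~ N(0, sigma^2) i.i.d.
    W1p = rng.normal(0.0, sigma, num_samples)
    W2p = rng.normal(0.0, sigma, num_samples)
    p_noise = np.mean(np.abs(W2p - W1p) < eps)
    # 1 - G_{B1,B2}(gap + dt_u + dt_d) = P(B1 - B2 > gap + dt_u + dt_d)
    B1 = rng.exponential(1.0 / backoff_rate, num_samples)
    B2 = rng.exponential(1.0 / backoff_rate, num_samples)
    p_backoff = np.mean(B1 - B2 > gap + dt_u + dt_d)
    return p_noise * p_backoff

print(prob_theorem3(sigma=0.5, eps=1.0, gap=5.0, dt_u=1.0, dt_d=1.0,
                    backoff_rate=0.2))
```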
Remark 3 (Implications of Theorem 3). In Figure 3, we demonstrate the cascading effect of the MSE advantage derived in Theorem 3 by considering a longer interval of time