Harnessing Fluctuations in Thermodynamic Computing via Time-Reversal Symmetries

One-Sentence Summary: Distinct distributions of thermodynamic work identify signatures of successful and failed information processing in a microscale flux qubit.
Gregory Wimsatt,¹ Olli-Pentti Saira,² Alexander B. Boyd,¹ Matthew H. Matheny,² Siyuan Han,³ Michael L. Roukes,² and James P. Crutchfield¹,²

¹ Complexity Sciences Center and Physics Department, University of California at Davis, One Shields Avenue, Davis, CA 95616
² Condensed Matter Physics and Kavli Nanoscience Institute, California Institute of Technology, Pasadena, CA 91125
³ Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045

Author contacts: gwwimsatt@ucdavis.edu, osaira@caltech.edu, abboyd@ucdavis.edu, matheny@caltech.edu, han@ku.edu, roukes@caltech.edu; corresponding author: chaos@ucdavis.edu
(Dated: July 1, 2019)
We experimentally demonstrate that highly structured distributions of work emerge during even the simple task of erasing a single bit. These are signatures of a refined suite of time-reversal symmetries in distinct functional classes of microscopic trajectories. As a consequence, we introduce a broad family of conditional fluctuation theorems that the component work distributions must satisfy. Since they identify entropy production, the component work distributions encode both the frequency of various mechanisms of success and failure during computing and improved estimates of the total irreversibly-dissipated heat. This new diagnostic tool provides strong evidence that thermodynamic computing at the nanoscale can be constructively harnessed. We experimentally verify this functional decomposition and the new class of fluctuation theorems by measuring transitions between flux states in a superconducting circuit.

Keywords: Jarzynski integral fluctuation theorem, Crooks detailed fluctuation theorem, Landauer's Principle, thermodynamics of computation, information thermodynamics
Physics dictates that all computing is subject to spontaneous error. These days, this truism repeatedly reveals itself: despite the once-predictable miniaturization of nanoscale electronics, computing performance increases have dramatically slowed in the last decade or so. In large measure, this is due to the concomitant rapid decrease in the number of information-bearing physical degrees of freedom, rendering information storage and processing increasingly susceptible to corruption by thermal fluctuations. All computing is thermodynamic. Controlling the production of fluctuations and removing heat pose key technological challenges to further progress. One comparable setting that gives some optimism, though, is the overtly functional behavior exhibited by biological cells—presumably functional information processing by
small numbers of molecules subject to substantial thermal
fluctuations. Computing technologies are very far away
from this kind of robust information processing.
Only recently have tools appeared that precisely describe what trade-offs exist between thermodynamic resources and useful information processing—these are highly reminiscent of the centuries-old puzzle of how Maxwell's "very observant and neat-fingered" demon uses its "intelligence" to convert disorganized heat energy to useful work [1]. In our modern era, his demon has led to the realization that information itself is physical [2–4]—or, most constructively, that information is a thermodynamic resource [5]. This has opened up the new paradigm of thermodynamic computing, in which fluctuations play a positive role in efficient information processing on the nanoscale. We now conceptualize this via information engines: physical systems that are driven by, manipulate, store, and dissipate energy, but simultaneously generate, store, lose, communicate, and transform information. In short, information engines combine traditional engines, comprised of heat, work, and other familiar reservoirs, with what we now call information reservoirs [6, 7].
Reliable thermodynamic computing requires detecting
and controlling fluctuations in informational and energetic resources and in engine functioning. For this, one appeals to fluctuation theorems that capture exact time-reversal symmetries and predict entropy production leading to irreversible dissipation [8–14]. We are now on the doorstep of the very far-from-equilibrium thermodynamics needed to understand the physics of computing. And, in turn, this has started to reveal the physical principles of how nature processes information in the service of biological functioning and survival.
Proof-of-concept experimental tests have been carried out in several substrates: probing biomolecule free energies [15–17], work expended during elementary computing (bit erasure) [18–23], and Maxwellian demons [24]. That said, the suite of contemporary principles (Supplementary Materials (SM) I) far outstrips experimental validation to date.
To close the gap, we show how to diagnose thermodynamic computing on the nanoscale by explaining the signature structures in work distributions generated during information processing. These structures track the mesoscale evolution of a system's informational states and reveal classes of functional and nonfunctional microscopic trajectories. We show that the informational-state evolutions are identified by appropriate conditioning and that they obey a suite of trajectory-class fluctuation theorems, which give accurate bounds on work, entropy production, and dissipation. The result is a new tool that employs mesoscopic measurements to diagnose nanoscale thermodynamic computing. For simplicity and to make direct contact with previous efforts, we demonstrate the tools on Landauer erasure of a bit of information in a superconducting flux qubit.
As a reference, we first explore the thermodynamics of bit erasure in a simple model: a particle with position and momentum in a double-well potential $V(x,t)$ and in contact with a heat reservoir at temperature $T$. (Refer to Fig. 1.) An external controller adds or removes energy from a work reservoir to change the form of the potential $V(\cdot,t)$ via a predetermined erasure protocol $\{(\beta(t), \delta(t)) : 0 \le t \le \tau\}$. $\beta(t)$ and $\delta(t)$ change one at a time, piecewise-linearly, through four protocol substages: (1) drop barrier, (2) tilt, (3) raise barrier, and (4) untilt. (See SM VI.) The system starts at time $t = 0$ in the equilibrium distribution for a double-well $V(x, 0)$ at temperature $T$. Being equiprobable, the informational states associated with each of the two wells thus contain 1 bit of information [25].
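To make the setup concrete, here is a minimal simulation sketch. The quartic double-well form, the parameter names, and the numerical values below are illustrative assumptions only; the actual potential and protocol parameters are specified in SM VI and Table S1.

```python
# Illustrative double-well: beta sets the barrier height, delta tilts the wells.
# (Hypothetical form; the paper's V(x, t) is given in SM VI.)
def V(x, beta, delta):
    return beta * (x**2 - 1.0)**2 + delta * x

# Piecewise-linear erasure protocol over four equal-duration substages:
# (1) drop barrier, (2) tilt, (3) raise barrier, (4) untilt.
def protocol(t, tau=1.0, beta_hi=4.0, delta_max=1.0):
    s, f = divmod(4.0 * t / tau, 1.0)   # substage index s and fraction f within it
    if s == 0:
        return beta_hi * (1.0 - f), 0.0             # drop barrier
    if s == 1:
        return 0.0, -delta_max * f                  # tilt (bias toward the R well)
    if s == 2:
        return beta_hi * f, -delta_max              # raise barrier
    if s == 3:
        return beta_hi, -delta_max * (1.0 - f)      # untilt
    return beta_hi, 0.0                             # t = tau: default potential restored
```

A Langevin integrator coupled to a bath at temperature $T$ would then evolve the particle under V(x, *protocol(t)) to generate the microstate trajectories discussed below.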
The default potential, $V(\cdot, 0) = V(\cdot, \tau)$, has two symmetric wells separated by a barrier. Following common practice, we call the two wells, from negative to positive position, the Left (L) and Right (R) informational states, respectively.
The erasure protocol is designed so that the particle ends in the R state with high probability, regardless of its initial state. Conducting our simulation $3.5 \times 10^6$ times, 96.2% of the particles were successfully erased into the R state. Thus, as measured by the Shannon entropy, the initial 1 bit of information was reduced to 0.231 bits. Note that we chose the protocol to give partially inaccurate erasure in order to illustrate our main results on diagnosing success and failure.
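The quoted figure is just the binary Shannon entropy of the final informational-state distribution. With the rounded success probability $p = 0.962$,
\[
H(p) = -p \log_2 p - (1-p)\log_2(1-p), \qquad
H(0.962) \approx 0.962\,(0.0559) + 0.038\,(4.718) \approx 0.23~\text{bits},
\]
consistent with the 0.231 bits obtained from the unrounded success probability.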
At all other times $t$, $V(\cdot, t)$ has either one or two local minima, naturally defining metastable regions in which a particle is confined and gradually evolves toward local equilibrium. We therefore define the informational states at times $0 \le t \le \tau$ to be the metastable regions, labeling them R and, if two exist, L, ordered from most positive to most negative in position.
Since the protocol is composed of four simple substages, we coarse-grain the system's response by its activity during each substage at the level of its informational state. Specifically, for each substage, we assign one of three substage trajectory classes: the system (i) was always in the R state, (ii) was always in the L state, or (iii) spent time in each. Sometimes there is only one informational state, and so the latter two classes are not achievable for all substages.
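A minimal sketch of this coarse-graining, assuming a substage's informational-state sequence has already been extracted as a list of 'R'/'L' labels (the class names and function name are illustrative):

```python
def substage_class(states):
    """Assign one of the three substage trajectory classes to a sequence of
    informational-state labels ('R' or 'L') observed during one substage."""
    labels = set(states)
    if labels == {"R"}:
        return "always-R"   # class (i): never left the R state
    if labels == {"L"}:
        return "always-L"   # class (ii): never left the L state
    return "mixed"          # class (iii): spent time in each state
```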
We then focus on a single mesoscopic observable—the thermodynamic work expended during erasure. An individual realization generates a trajectory of system microstates, with $W(t, t')$ being the work done on the system between times $0 \le t < t' \le \tau$; see SM VI. Let $W_s = W(t_{s-1}, t_s)$ denote the work generated during substage $s$ and $C_s$ the substage trajectory class. Figure 1 (Outer plot sequence) shows the corresponding substage work distributions $\Pr(W_s, C_s)$ obtained from our simulations. (See SM VII.)
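In a discretized simulation, the work done on the system over each control step is the potential-energy change caused by updating the control parameters while the microstate is held fixed. A minimal sketch of accumulating the substage works $W_s$, reusing the illustrative V and protocol functions from the earlier sketch:

```python
import numpy as np

def substage_works(x, t, tau=1.0):
    """Accumulate W_s = W(t_{s-1}, t_s) for substages s = 1..4.
    x[k] is the particle position sampled at time t[k]."""
    W = np.zeros(4)
    for k in range(len(t) - 1):
        s = min(int(4 * t[k] / tau), 3)             # substage of this control step
        b0, d0 = protocol(t[k], tau)
        b1, d1 = protocol(t[k + 1], tau)
        W[s] += V(x[k], b1, d1) - V(x[k], b0, d0)   # work = change in V at fixed position
    return W
```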
The drop-barrier and tilt substage work distributions are rather simple, being narrow and unimodal. The raise-barrier distributions have some asymmetry, but are also similarly simple. However, the untilt work distributions (farthest right in Fig. 1) exhibit unusual features that are significant for understanding the intricacies of erasure. Trajectories that spend all of the untilt substage in either the R state or the L state form peaks at the most positive (red) and negative (orange) work values, respectively. This is because the R-state well is always increasing in potential energy while the L-state well is always decreasing during untilt. In contrast, the other trajectories contribute a log-linear ramp of work values (blue) dependent on the time spent in each state. The ramp's positive slope signifies that more time is typically spent in the R state.
[Fig. 1 graphic: potentials $V(x, t_s)$ across the Drop Barrier, Tilt, Raise Barrier, Untilt, and Reset stages, with annotated barrier heights of roughly $4\,k_B T$, $7\,k_B T$, and $6\,k_B T$; position distributions $\Pr(x)$ evolving from "Initial: Single bit" to "Erased: ~Zero bits"; and substage work distributions $\Pr(W_s, C_s)$ on logarithmic scales.]
FIG. 1. Inner plot sequence: Erasure protocol (Table S1) evolution of the position distribution $\Pr(x)$. Potential $V(x, t_s)$ at substage boundary times $t_s$, $s = 0, 1, 2, 3, 4$. Starting at $t = t_0$, the potential evolves clockwise, ending at $t = t_4$ in the same configuration as it starts: $V(x, t_0) = V(x, t_4)$. However, the final position distribution $\Pr(x)$ predominantly indicates the R state. The original one bit of information in the distribution at time $t = t_0$ has been erased. Outer plot sequence: Substage work distributions $\Pr(W_s, C_s)$ during substages $s$: (1) Barrier Drop, (2) Tilt, (3) Barrier Raise, (4) Untilt. During each substage $s$, distributions are given for up to three substage trajectory classes $C_s$: red are of trajectories always in the R state, orange are of trajectories always in the L state, and blue are of the rest, spending some time in each state.
Looking at the total work $W_{\mathrm{total}} = W(0, \tau)$ generated for each trajectory over the course of the entire erasure protocol, we observe the strikingly complex and structured distribution $\Pr(W_{\mathrm{total}})$ shown in Fig. 2 (Rear). There are two clear peaks at the most positive and negative work values, separated by a ramp. This highly structured work distribution, generated by bit erasure, contrasts sharply with the unimodal work distributions common in previous studies; see, for example, Fig. 2 (inset) for the work distribution generated by a thermodynamically-driven simple harmonic oscillator translated in space, or Fig. 2 in Ref. [12].
We can understand the mechanisms behind this structure by decomposing Fig. 2 (Rear)'s total work distribution under the untilt substage trajectory classes $C_4$. We label trajectories that spend all of the untilt substage in the R state Success since, via the previous substages, they reach the intended R state by the untilt substage and remain there until the protocol's end. Similarly, trajectories that spend all of the untilt substage in the L state are labeled Fail. The remaining trajectories are labeled Transitional, since they transition between the two informational states during untilt, potentially succeeding or failing to end in the R state. Figure 2 (Three front plots) shows the work distribution for each of these three trajectory classes. Together they recover the total work distribution over all trajectories shown in Fig. 2 (Rear); now, though, the thermodynamic contributions to the total from the functionally distinct component trajectories are made apparent.
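The recombination is just the law of total probability over the untilt-substage classes:
\[
\Pr(W_{\mathrm{total}}) = \sum_{C_4 \,\in\, \{\text{Success},\,\text{Fail},\,\text{Transitional}\}} \Pr(W_{\mathrm{total}}, C_4) .
\]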
Exploring the mesoscale dynamics of erasure revealed signatures of a "thermodynamics" for each trajectory that is closely associated with successful or failed information processing. We now introduce the underlying fluctuation theory from which the trajectory thermodynamics follow. Key to this is comparing system behaviors in both forward and reverse time [8–14]. (See SM III and IV.)
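For orientation, the two endpoint relations that the theorems below interpolate between are Crooks' detailed fluctuation theorem and Jarzynski's equality. Here $\beta = 1/k_B T$ denotes inverse temperature (not the barrier control parameter above) and $\Delta F$ is the protocol's equilibrium free-energy change:
\[
\frac{P_F(W)}{P_R(-W)} = e^{\beta (W - \Delta F)}
\qquad \text{and} \qquad
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F} .
\]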
This suite of trajectory-class fluctuation theorems (TCFTs) applies to arbitrary classes of system microstate trajectories obtainable during a thermodynamic transformation. Importantly, they interpolate between Jarzynski's equality [10] and Crooks' detailed fluctuation theorem [12] as the trajectory class varies. This lower bounds the average work $\langle W \rangle_C$ over any measurable subset $C$ of the