Stability and Control of Biomolecular Circuits through Structure
Fangzhou Xiao¹, Mustafa Khammash² and John C. Doyle³

¹Fangzhou Xiao is with the Department of Biology and Biological Engineering, California Institute of Technology, 1200 E California Blvd., Pasadena, CA 91125, USA. xfz@caltech.edu
²Mustafa Khammash is with the Department of Biosystems Science and Engineering, ETH Zurich, Mattenstrasse 26, 4058 Basel, Switzerland. mustafa.khammash@bsse.ethz.ch
³John C. Doyle is with the Department of Control and Dynamical Systems, California Institute of Technology, 1200 E California Blvd., Pasadena, CA 91125, USA. doyle@caltech.edu
Abstract — Due to omnipresent uncertainties and environmental disturbances, natural and engineered biological organisms face the challenging control problem of achieving robust performance using unreliable parts. The key to overcoming this challenge rests in identifying structures of biomolecular circuits that are largely invariant despite uncertainties, and building feedback control through such structures. In this work, we develop the tool of log derivatives to capture structures in how the production and degradation rates of molecules depend on concentrations of reactants. We show that log derivatives could establish stability of fixed points based on structure, despite large variations in rates and functional forms of models. Furthermore, we demonstrate how control objectives, such as robust perfect adaptation (i.e. step disturbance rejection), could be implemented through the structures captured. Due to the method's simplicity, structural properties for analysis and design of biomolecular circuits can often be determined by a glance at the equations.
I. INTRODUCTION
Both natural and engineered cells face the challenge of
achieving robust performance using unreliable parts [1]–[3].
In particular, the regulatory biomolecular circuits used in a
cell to achieve desired behavior face large parameter uncer-
tainties due to environmental disturbances, and unknown or
unintended interactions with background cell circuits.
Although feedback control has been successfully applied in electrical and mechanical engineering to achieve robust performance [4], it faces a new challenge in biological engineering: the parts themselves are highly unreliable. It is therefore essential to identify key structures of the uncertain behaviors in biomolecular circuits, so that feedback control can be built on top of them.
Previous studies have identified several important structures of biomolecular circuits. While reaction rates tend to vary due to environmental disturbances, the stoichiometry of reactions is invariant, since it results from the atomic composition of molecules. Indeed, stoichiometry can be robustly identified from experimental data and is often considered structural information of a chemical reaction network [5], [6]. In addition, although rate constants and reactant concentrations vary due to disturbances, they can often be reliably determined or controlled up to orders of magnitude
[7]. Lastly, the many reactions that happen in a cell often
happen at different time scales, making descriptions of a
circuit’s behavior amenable to time-scale separation [8]. The
most robust separation of time scales is the one between
binding reactions and catalysis reactions, as exemplified by
the Michaelis-Menten approximation, which has served as
the foundation of dynamic modeling of biochemical reactions
for over 100 years [9]–[11]. A simple physical argument is that binding reactions are fast because they only involve low-energy interactions such as hydrogen bonds, while catalysis changes high-energy covalent bonds and is therefore slower.
The structures mentioned above need to be synergistically integrated into a cohesive mathematical framework in order to analyze or design robustly performing biomolecular circuits using unreliable parts. In particular, such a framework needs to connect structure with the dynamical properties of the system. This challenge, yet to be overcome, is the central cause of a major gap between the mathematical languages theorists use and the mental pictures and diagrams that experimentalists use to guide their circuit designs and implementations [12]–[14].
This work provides initial results that could serve as a
first attempt at filling this gap. In particular, we aim at
building mathematical concepts that are tailored for these
quintessentially biological structures.
In the following, we define a general class of systems,
named birth-death systems, that emphasize the production
and degradation of biomolecules, in Section II-A. In Section
II-B, we use log derivatives to capture the structure in
production and degradation fluxes’ dependence on reactant
concentrations. In Section III, we show how log derivatives
relate to a strong notion of stability of fixed points. Lastly,
in Section IV, we show how feedback control could be built
on top of structures to achieve control goals such as robust
perfect adaptation, biologists’ term for step disturbance re-
jection.
A companion work, with a focus on example studies catering to a more biological audience, is [15].
II. STRUCTURE IN BIOMOLECULAR SYSTEMS
We begin by introducing the definition of birth-death systems. We do so through chemical reaction networks (CRNs) [16] to have explicit biological interpretations, although birth-death systems do not rely on the formalism of CRNs.

A. Chemical Reaction Networks and Birth-Death Systems
A CRN is a collection of reactions of the form

$$ \alpha_{j1} X_1 + \cdots + \alpha_{jn} X_n \xrightarrow{\;k_j\;} \beta_{j1} X_1 + \cdots + \beta_{jn} X_n $$

where $X_i$, $i = 1,\dots,n$ denote chemical species, $j = 1,\dots,m$ index reactions, and $\alpha_{ji}, \beta_{ji} \in \mathbb{Z}_{\ge 0}$ denote the number of $X_i$ molecules needed as reactant or produced as product
in reaction $j$, and $k_j \in \mathbb{R}_{>0}$ is the reaction rate constant of reaction $j$. We denote $\alpha_j = [\alpha_{j1} \;\cdots\; \alpha_{jn}]$ as the reactant stoichiometry vector for reaction $j$, and similarly define $\beta_j$ for the product vector. We define $\gamma_j = \beta_j - \alpha_j$ as the stoichiometric vector of reaction $j$, and $\Gamma = [\gamma_1 \;\cdots\; \gamma_m] \in \mathbb{Z}^{n \times m}$ is the stoichiometric matrix.
The deterministic rate equation of the CRN is

$$ \dot{x} = \Gamma \Lambda_k \psi(x), \qquad (1) $$

where $x_i \in \mathbb{R}_{\ge 0}$ is the concentration of species $X_i$, $\Lambda_k := \mathrm{diag}\{k\}$ is a diagonal matrix with the reaction rate constants $k_j$ as entries, and $\psi(x) : \mathbb{R}^n_{\ge 0} \to \mathbb{R}^m_{\ge 0}$ denotes how the rates of reactions depend on concentrations.
A commonly used specification for $\psi_j(x)$ is the law of mass-action, which is applicable to a wide range of scenarios [17]. It says $\psi_j(x) = x^{\alpha_j}$, where we denote $x^{\alpha_j} = \prod_{i=1}^n x_i^{\alpha_{ji}}$.
Since concentrations of biomolecules change by production and degradation reactions, we could re-write the dynamics as follows:

$$ \dot{x}_i = f_i(x) =: f_i^+(x) - f_i^-(x) \qquad (2) $$

$$ := \sum_{j:\, \gamma_{ji} > 0} k_j \psi_j(x) \;-\; \sum_{j:\, \gamma_{ji} < 0} k_j \psi_j(x), \qquad (3) $$

where we have collected terms from reactions producing $x_i$ into $f_i^+(x)$ and terms from reactions degrading $x_i$ into $f_i^-(x)$.
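As a concrete illustration of Eqs (1)-(3), the following Python sketch (not from the paper; the toy network and rate constants are invented for illustration) assembles mass-action fluxes for a small CRN and splits them into production and degradation rates, weighting each flux by its stoichiometric coefficient so that $f^+(x) - f^-(x) = \Gamma \Lambda_k \psi(x)$; for reactions with unit coefficients this reduces exactly to the sums in Eq (3).

```python
import numpy as np

# Toy CRN (made up):  0 -> X1,  X1 + X2 -> X2,  0 -> X2,  X2 -> 0.
# Columns index the four reactions.
A_react = np.array([[0, 1, 0, 0],      # reactant orders alpha_{j,1} of X1
                    [0, 1, 0, 1]])     # reactant orders alpha_{j,2} of X2
Gamma   = np.array([[1, -1, 0,  0],    # stoichiometric matrix
                    [0,  0, 1, -1]])
k = np.array([1.0, 2.0, 3.0, 0.5])     # rate constants (arbitrary)

def psi(x):
    """Mass-action propensities psi_j(x) = prod_i x_i^{alpha_{ji}}."""
    return np.prod(x[:, None] ** A_react, axis=0)

def birth_death_split(x):
    """Collect producing (gamma_{ji} > 0) and degrading (gamma_{ji} < 0) fluxes,
    as in Eqs (2)-(3), weighted by |gamma_{ji}|."""
    v = k * psi(x)                                  # fluxes k_j psi_j(x)
    f_plus  = np.where(Gamma > 0,  Gamma, 0) @ v
    f_minus = np.where(Gamma < 0, -Gamma, 0) @ v
    return f_plus, f_minus

x = np.array([0.5, 2.0])
fp, fm = birth_death_split(x)
print(fp, fm, np.allclose(fp - fm, Gamma @ (k * psi(x))))   # consistent with Eq (1)
```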
The physical interpretation of the variables $x_i$ as concentrations dictates that they remain non-negative, so the non-negative orthant must be forward invariant. A necessary and sufficient condition for this is that $f_i(x) \ge 0$ whenever $x_i = 0$. It is also natural to assume that each species has at least one production reaction and at least one degradation reaction. This yields the following definition of birth-death systems.
Definition 1: A birth-death system is a dynamical system of the form (2) where the production and degradation rates $f_i^\pm : \mathbb{R}^n_{\ge 0} \to \mathbb{R}_{\ge 0}$ are analytic and globally non-negative, and $f_i(x) \ge 0$ whenever $x_i = 0$.
The definition of a birth-death system emphasizes the structure that each concentration variable is regulated by two processes, production and degradation. Understanding the dynamics of a birth-death system then comes down to characterizing how the production and degradation rates $f_i^\pm(x)$ depend on the concentrations $x$. In the following section, we use a simple example to illustrate that the dependence of $f_i^\pm(x)$ on $x$ is highly structured, and that this structure could be formalized through log derivatives.
B. Log derivatives formalize regimes of regulation under
time-scale separation
Production and degradation of molecules happen through enzymatic catalysis [10]. In the following, we consider the simplest regulation of enzymatic catalysis to illustrate the structure in $f_i^\pm(x)$'s dependence on $x$:

$$ E + S \underset{k_-}{\overset{k_+}{\rightleftharpoons}} C \xrightarrow{\;k_f\;} E + P. \qquad (4) $$

Here $E$ is the enzyme, $S$ is the substrate, $C$ is the complex formed from $E$ and $S$ binding together, and $P$ is the product molecule formed.

Fig. 1: The log-derivative polytope of the complex $C$ with respect to $t_E$ and $t_S$ defined by the steady state equations in Eq (5). A point in this space represents the sensitivity of the steady-state $C$ concentration to changes in the total concentration of $E$ or $S$. The green triangle marks the possible sensitivity values the system can admit. The edges of the triangle represent different assumptions about the saturation of the species. The edge marked by the red line is the range of log derivatives covered by the Michaelis-Menten formula. Red dots mark the vertices. The expressions next to the vertices correspond to the three regimes.
To proceed, we use the time-scale separation that binding reactions tend to be much faster than catalysis reactions. This entails the following equations:

$$ t_E = E + C, \quad t_S = C + S, \quad C = \frac{ES}{K}, \qquad (5) $$

where $t_E$ is the total concentration of enzyme $E$, $t_S$ is the total concentration of substrate $S$ in free or bound forms, and $K$ is the dissociation constant $K_d$ or one of its variants such as $K_M$, based on the details of which part of the $C$ dynamics is fast [18].
To connect with the notation of birth-death systems, we denote $x_P$ as the concentration of $P$, $x_E = t_E$ as the total concentration of $E$, and $x_S = t_S$ as the total concentration of $S$. Then $x_P$'s dynamics is

$$ \dot{x}_P = f_P^+ - f_P^- = k_f\, C(x_E, x_S) - 0, \qquad (6) $$

where $C(x_E, x_S)$ is the steady state expression determined by Eq (5).
In order to understand how $f_P^+$, the production rate of $x_P$, depends on $x_E$ and $x_S$, we need to solve for $C$ in terms of $t_E$ and $t_S$ in Eq (5). A classical way to approach this is the Michaelis-Menten approximation [18], which assumes the total concentration of the substrate is much higher than that of the enzyme, i.e. $t_S \gg t_E$. This implies $t_S \approx S$, therefore Eq (5) solves to

$$ C(t_E, t_S) \approx \frac{t_E t_S}{t_S + K}. \qquad (7) $$
This expression can be intuitively understood as containing two regimes. One has $t_S \gg K$, so that $C \approx t_E$. This is constant in $t_S$, therefore "substrate-saturated". The other has $t_S \ll K$, so that $C \approx t_E t_S / K$. This is linear in $t_S$, therefore "substrate-sensitive". We note that these two regimes have distinct exponents in $t_E$ and $t_S$: $(1, 0)$ for the saturated regime, and $(1, 1)$ for the sensitive regime (see Figure 1). Therefore, although $f_P^+$, the production rate of $x_P$, depends on concentrations and rates that tend to be uncertain in Eq (7), the fact that there are two regimes, each with distinct exponents in the concentrations, is structural. Indeed, the
exponents fundamentally come from the stoichiometry of the binding reaction in Eq (4). In addition, the condition under which one regime is valid, such as $t_S \gg t_E, K$, only depends on the orders of magnitude of the concentrations and rates, and therefore can be reliably determined or controlled.
Now, we would like to describe these regimes and their exponents in a formal way. For this purpose, we introduce log derivatives as differential analogues of exponents. For example, from Eq (7) we calculate

$$ \begin{bmatrix} \frac{\partial \log C}{\partial \log t_E} & \frac{\partial \log C}{\partial \log t_S} \end{bmatrix} = \begin{bmatrix} 1 & \frac{K}{t_S + K} \end{bmatrix}. \qquad (8) $$

When $t_S \gg K$, we obtain log derivative $(1, 0)$; while when $t_S \ll K$, we obtain $(1, 1)$. So log derivatives exactly capture the exponents of the regimes, in a continuous way.
With the tool of log derivatives in mind, we could actually go back and obtain more general results than the Michaelis-Menten approximation. Indeed, due to the assumption that $t_S \gg t_E$, we missed the third, "enzyme-saturated" regime: when $t_E \gg t_S, K$, we have $C \approx t_S$. Capturing this regime is important if the $t_S \gg t_E$ assumption does not hold all the time, such as when $S$ and $E$ are molecules of similar abundance in protein binding, or when the cellular circuit has highly dynamic behavior during nutrient shifts or shock responses [19], [20].
To capture all three regimes, we need to do away with assumptions like Michaelis-Menten. Although the steady state equations in Eq (5) can be directly solved in this simple case, this procedure results in a cumbersome expression that does not generalize to more complicated cases. More importantly, the explicit solution obtained hides the structures in the exponents of the regimes mentioned above. In contrast, the differential description through log derivatives can capture all three regimes while describing the exponents. Indeed, applying the implicit function theorem to Eq (5) to solve for the expression of $E, S, C$ in terms of $t_E, t_S, K$ yields the following result:

$$ \begin{bmatrix} \frac{\partial \log C}{\partial \log t_E} & \frac{\partial \log C}{\partial \log t_S} \end{bmatrix} = \begin{bmatrix} \frac{K + E}{K + E + S} & \frac{K + S}{K + E + S} \end{bmatrix}. \qquad (9) $$
This shows that the log derivatives of $C$ with respect to $t_E$ and $t_S$ take values inside a triangle (see Figure 1), and the exact location in the triangle depends on the particular values of $t_E, t_S$ and $K$. Completely in accordance with our intuition, the vertices of this triangle correspond to the three regimes described earlier. In particular, we see that the Michaelis-Menten approximation is just one edge of this triangle, a strict subset of the behaviors captured by the log derivatives.
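The triangle of Eq (9) is easy to check numerically. The sketch below (illustrative only; the parameter values are arbitrary) solves the binding equilibrium of Eq (5) for $C$, estimates the log derivatives by finite differences in log-coordinates, and compares them with the closed form of Eq (9).

```python
import numpy as np

def complex_conc(tE, tS, K):
    """Solve Eq (5) for C: (tE - C)(tS - C) = K*C, taking the root with 0 <= C <= min(tE, tS)."""
    b = tE + tS + K
    return (b - np.sqrt(b ** 2 - 4.0 * tE * tS)) / 2.0

def log_derivative(f, x, eps=1e-6):
    """Central-difference estimate of d log f / d log x_l at a positive point x."""
    x = np.asarray(x, dtype=float)
    H = []
    for e in np.eye(len(x)):
        up, dn = x * np.exp(eps * e), x * np.exp(-eps * e)
        H.append((np.log(f(*up)) - np.log(f(*dn))) / (2.0 * eps))
    return np.array(H)

tE, tS, K = 2.0, 3.0, 1.0                       # arbitrary positive values
C = complex_conc(tE, tS, K)
E, S = tE - C, tS - C                           # free enzyme and substrate
H_numeric = log_derivative(lambda a, b: complex_conc(a, b, K), [tE, tS])
H_eq9 = np.array([K + E, K + S]) / (K + E + S)  # Eq (9)
print(H_numeric, H_eq9)                         # agree, and lie inside the Fig. 1 triangle
```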
The fact that the log derivatives form a triangle, i.e. the set
of convex combinations of the three vertices, suggests that
the full behavior of the enzyme regulation could be seen
as combinations of the three regimes corresponding to the
three vertices. Indeed, when the corresponding asymptotic
conditions are satisfied, the behavior of the enzyme regula-
tion is essentially the same as the simple monomials at the
vertices. Extending this to all production and degradation
fluxes, we see that a general birth-death system could be seen
as having several regimes, each corresponding to a simple
system with constant exponents. Depending on the location
of the state, the system could be approximated by one or
another simple system corresponding to its closest regime.
Hence the following definition.
Definition 2: A simple birth-death system is a birth-death system with $f_i^\pm(x) = k_i^\pm x^{\alpha_i^\pm}$, where $\alpha_i^\pm \in \mathbb{R}^n$ is a constant vector, and $k_i^\pm > 0$ is a positive constant.
Simple birth-death systems have the advantage that their
log derivatives can be directly read off from the exponent
vector in the rate functions. In contrast, obtaining the set of
log derivatives that emerges directly from binding networks
is nontrivial in general. Next, we show that log derivatives
do form easily-identified polytopes in most models of bio-
logical circuits, where Hill functions from Michaelis-Menten
approximations are used.
C. Basic facts about log derivatives
Here are some basic calculations to facilitate intuition
about log derivatives.
If $f_i^+$ is a monomial, i.e. $f_i^+(x) = k_j^+ x^{\alpha_j^+}$, then $H^+_{i\ell} = \alpha^+_{j\ell}$ for $\ell = 1,\dots,n$. In other words, the log derivative vector $H^+_i$ for the production of $X_i$ is the exponent vector $\alpha^+_j$, independent of the rate constant $k^+_j$ or the concentration $x$. This case corresponds to simple birth-death systems. Physically, this case could happen when $X_i$ has only one production reaction. Then $\alpha^+_j$ is the reactant stoichiometry vector for that production reaction.
If $f_i^+ \in \mathbb{R}_{>0}[x]$ is a multivariate polynomial in $x$ with positive coefficients, i.e. $f_i^+(x) = \sum_{j \in J_i^+} k_j x^{\alpha_j}$ for the index set $J_i^+ = \{ j : \gamma_{ji} > 0 \}$ of all reactions producing $X_i$, then

$$ H^+_{i\ell}(x) = \sum_{j \in J_i^+} \lambda_j(x)\, \alpha_{j\ell}, \qquad \lambda_j(x) = \frac{k_j x^{\alpha_j}}{\sum_{j' \in J_i^+} k_{j'} x^{\alpha_{j'}}}. \qquad (10) $$
Since $\lambda_j > 0$ and they sum to one, the log derivative vector for the production rate of $X_i$ is a convex combination of the reactant vectors of all $X_i$-producing reactions. In other words, $H^+_i(x) \in P(f_i^+) := \mathrm{conv}\{ \alpha_j : j \in J_i^+ \}$, where $P(f_i^+)$ is the Newton polytope of the polynomial $f_i^+$. Although the location in the polytope depends on $k_j$ and $x$, the polytope itself depends on the reactant vectors $\alpha_j$ alone. We note that Newton polytopes are fruitful tools in the analysis and optimization of polynomial equations, dynamical systems, and CRNs [21]–[23].
If $f_i^+$ is a rational function with the numerator being one term of the denominator polynomial, i.e.

$$ f_i^+(x) = \frac{k_{j^*} x^{\alpha_{j^*}}}{\sum_{j \in J_i^+} k_j x^{\alpha_j}} \quad \text{for some } j^* \in J_i^+, \qquad (11) $$

which typically arises from time-scale separations and Michaelis-Menten type approximations, then

$$ H^+_{i\ell}(x) = \sum_{j \neq j^*} \lambda_j(x)\, (\alpha_{j^*\ell} - \alpha_{j\ell}), \qquad (12) $$

where $\lambda_j$ is the same as before. In this case, the log derivative vector for the production of $X_i$ is the numerator's reactant vector minus a convex combination of all the reactions' reactant vectors: $H^+_i(x) \in \mathrm{conv}\{ \alpha_{j^*} - \alpha_j : j \in J_i^+ \} = \alpha_{j^*} - P(f_i^+)$.
The above calculations enable writing down log derivatives
immediately after a glance at the equation in many cases,
making log derivatives easy to use.
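The calculation rules above translate directly into a few lines of code. The sketch below (an illustration, not part of the paper) implements Eq (10) for a positive polynomial and then uses it to recover the log derivative of the Michaelis-Menten rate of Eq (7) as the numerator's exponent minus the denominator's log derivative, reproducing Eq (8).

```python
import numpy as np

def logderiv_sum(ks, alphas, x):
    """Eq (10): the log derivative of f(x) = sum_j k_j x^{alpha_j} is the
    lambda_j(x)-weighted convex combination of the exponent vectors alpha_j,
    i.e. a point in the Newton polytope P(f)."""
    ks, alphas, x = (np.asarray(v, dtype=float) for v in (ks, alphas, x))
    terms = ks * np.prod(x ** alphas, axis=1)     # monomials k_j x^{alpha_j}
    lam = terms / terms.sum()                     # convex weights lambda_j(x)
    return lam @ alphas

# Michaelis-Menten rate of Eq (7): the numerator tE*tS has exponent (1, 1); the
# denominator tS + K has terms with exponents (0, 1) and (0, 0).
tE, tS, K = 2.0, 3.0, 1.0
H = np.array([1.0, 1.0]) - logderiv_sum([1.0, K], [[0, 1], [0, 0]], [tE, tS])
print(H)    # approx. (1, K/(tS + K)) = (1, 0.25), i.e. Eq (8)
```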
III. STRUCTURE AND FIXED POINT STABILITY
We have shown that the structures of biomolecular circuits
could be elegantly captured via log derivatives. In the follow-
ing, we discuss how log derivatives connect with the stability
of fixed points in birth-death systems. In particular, we show
that log derivatives could certify structural stability of a
fixed point: stability that is independent of concentrations
and rates.
A. Linearization and logarithmic derivatives
We first assume that the birth-death system has a positive fixed point $x \in \mathbb{R}^n_{>0}$ such that $f(x) = 0$, with positive production and degradation fluxes: $f^\pm(x) \in \mathbb{R}^n_{>0}$.
Since the stability of a fixed point is determined by the eigenvalues of the linearized dynamics at that fixed point, we express the linearization of a birth-death system in terms of log derivatives. We introduce the log derivative map

$$ H(f^\pm, x) := \frac{\partial \log f^\pm}{\partial \log x}(x), \qquad (13) $$

where $\log$ is applied component-wise. The log derivative map takes a positive function $f^\pm$ and a point $x$ in its domain to the function's log derivative at this point. For simplicity, we denote $H^+ := H(f^+, x)$, $H^- := H(f^-, x)$, and $H := H^+ - H^-$.
We calculate that derivatives could be expressed in terms of log derivatives as

$$ \frac{\partial f_i^\pm(x)}{\partial x_j} = \frac{\partial \log f_i^\pm(x)}{\partial \log x_j} \frac{f_i^\pm(x)}{x_j} = H^\pm_{ij} \frac{f_i^\pm(x)}{x_i} \frac{x_i}{x_j}. \qquad (14) $$

In matrix form, this is

$$ \frac{\partial f}{\partial x}(x) = \Lambda_x \left( \Lambda^{-1}_{\tau^+} H^+ - \Lambda^{-1}_{\tau^-} H^- \right) \Lambda^{-1}_x, \qquad (15) $$

where $\tau_i^\pm(x) := \frac{x_i}{f_i^\pm(x)}$ are the time-scales of $X_i$'s production and degradation fluxes [24].
At a fixed point $x$, we have $f^+(x) = f^-(x)$, so we could define $\tau := \tau^+(x) = \tau^-(x)$ as the vector of steady-state time-scales at the fixed point $x$. Therefore, we have

$$ A := \frac{\partial f}{\partial x}(x) = \Lambda_x \Lambda^{-1}_\tau H \Lambda^{-1}_x, \qquad (16) $$

relating the linearized dynamics $A$ to the log derivative matrix $H$.
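As a numerical sanity check of Eq (16) (an illustration with made-up parameters; the two-species system is the sniffer circuit that reappears as Eq (18) below), the following sketch compares the Jacobian computed directly with $\Lambda_x \Lambda_\tau^{-1} H \Lambda_x^{-1}$ assembled from the log derivative matrix and the steady-state time-scales.

```python
import numpy as np

# Two-species birth-death system: f1+ = w, f1- = a*x1*x2, f2+ = b*w, f2- = g*x2.
w, a, b, g = 1.5, 2.0, 0.8, 0.6                   # arbitrary positive parameters
x = np.array([g / (a * b), b * w / g])            # its positive fixed point
tau = np.array([x[0] / w, x[1] / (b * w)])        # steady-state time-scales x_i / f_i^+(x)
H = np.array([[-1.0, -1.0],                       # log derivative matrix H = H+ - H-
              [ 0.0, -1.0]])

A_from_H = np.diag(x) @ np.diag(1.0 / tau) @ H @ np.diag(1.0 / x)   # Eq (16)
A_direct = np.array([[-a * x[1], -a * x[0]],      # Jacobian of the rates at the fixed point
                     [0.0, -g]])
print(np.allclose(A_from_H, A_direct))            # True
```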
Let us define $M := \Lambda^{-1}_\tau H$. Since $A$ and $M$ are similar to each other, they have the same eigenvalues. So we immediately see that the fixed point $x$ is stable if and only if $M$ is Hurwitz. Therefore, the stability of the fixed point $x$ depends only on $M$, which is nicely split into two parts: the time-scales in $\Lambda^{-1}_\tau$, and the log derivatives in $H$. Since the time-scales come from uncertain rates, while log derivatives capture reliable structures of the system, this prompts the following definition:
Definition 3: A fixed point $x$ of a birth-death system is structurally stable if it is stable for all positive analytic rate functions $f^\pm$ that leave the log derivative matrix $H$ at $x$ invariant.
As variations in the rates can only change the time-scales $\tau$ in the matrix $M$, the definition of structural stability corresponds exactly to the log derivative matrix $H$ being (multiplicative) $D$-stable in matrix analysis. In other words, left multiplication of $H$ by arbitrary positive diagonal matrices results in a Hurwitz matrix (see the extensive review [25]). $D$-stability has been studied extensively since the very beginning of control theory, yet a clean necessary and sufficient characterization has not been found. This is in part due to a topological pathology of this property: the set of $D$-stable matrices is neither closed nor open. A good sufficient condition for $D$-stability that characterizes a topologically nice (open) set of matrices is diagonal stability:
Definition 4: A matrix $H$ is diagonally stable if there exists a positive diagonal matrix $P$ such that

$$ PH + H^\top P < 0, \qquad (17) $$

where $< 0$ for matrices denotes negative definiteness.
Since Eq (17) is a linear matrix inequality, scalable nu-
merical solution algorithms are available off-the-shelf.
The following theorem summarizes the above discussion.
Theorem 5: If $H$ is diagonally stable, then the fixed point $x$ is structurally stable. Furthermore, $H$ is diagonally stable if and only if $A$ is diagonally stable.
Proof: We calculate that

$$ PH + H^\top P = P \Lambda_\tau \Lambda^{-1}_\tau H + (\Lambda^{-1}_\tau H)^\top \Lambda_\tau P = \tilde{P} M + M^\top \tilde{P}, $$

where $\tilde{P} := \Lambda_\tau P$. Therefore, $H$ being diagonally stable is equivalent to $M$ being diagonally stable, which is equivalent to $A$ being diagonally stable, as they are similar through positive diagonal matrix multiplications. This calculation also shows that if $H$ is diagonally stable, then for any positive diagonal matrix $\Lambda^{-1}_\tau$, the matrix $M$ satisfies $\tilde{P} M + M^\top \tilde{P} < 0$ for a positive definite $\tilde{P}$, so $M$ is Hurwitz by basic Lyapunov theory.
There are a few special cases of the above theorem that are worth mentioning due to their simplicity.
Corollary 6: Any of the following conditions implies $x$ is structurally stable.
1) $H$ is triangular with negative diagonal entries.
2) $H$ is symmetric and negative definite.
3) The symmetrization of $H$, $\mathrm{Re}\, H := \frac{1}{2}(H + H^\top)$, is negative definite.
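These conditions are easy to automate. Below is a minimal numpy sketch (not from the paper) that applies the sufficient certificates of Corollary 6 and, separately, searches for positive time-scales that destabilize $\Lambda_\tau^{-1} H$; finding one disproves structural stability, while finding none proves nothing. An off-the-shelf LMI solver for Eq (17) would give a sharper answer.

```python
import numpy as np

def certify_structural_stability(H):
    """Sufficient checks from Corollary 6; returns a reason string or None if inconclusive."""
    if (np.allclose(H, np.triu(H)) or np.allclose(H, np.tril(H))) and np.all(np.diag(H) < 0):
        return "triangular with negative diagonal"
    if np.max(np.linalg.eigvalsh(0.5 * (H + H.T))) < 0:
        return "symmetrization Re H is negative definite"
    return None

def falsify_D_stability(H, n_samples=20000, seed=0):
    """Random search for tau > 0 making diag(1/tau) @ H non-Hurwitz."""
    rng = np.random.default_rng(seed)
    for _ in range(n_samples):
        tau = 10.0 ** rng.uniform(-3, 3, size=H.shape[0])
        if np.max(np.linalg.eigvals(np.diag(1.0 / tau) @ H).real) >= 0:
            return tau                                # destabilizing time-scales found
    return None

H = np.array([[-1.0, -1.0], [0.0, -1.0]])             # the IFF matrix of Eq (19) below
print(certify_structural_stability(H))                # "triangular with negative diagonal"
print(falsify_D_stability(H))                         # None
```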
Structural methods can easily be extended to certify a fixed point's structural instability as well. This is because when $H$ has no purely imaginary eigenvalues, any symmetric matrix $P$ that satisfies Eq (17) has inertia (i.e. the signs of the eigenvalues' real parts) opposite to the inertia of $H$, as stated in the Ostrowski-Schneider theorem in matrix analysis [25], [26]. Therefore, if there exists a diagonal matrix $P$ with at least one negative entry satisfying Eq (17), then $H$ is not Hurwitz, and $\Lambda^{-1}_\tau H$ is not Hurwitz for any positive vector $\tau$. We summarize this in the following theorem.
Theorem 7: Given a birth-death system as in (2) with log derivative matrix $H$ at a positive fixed point $x$, if $H$ has no purely imaginary eigenvalues and there exists a diagonal
matrix $P$ with at least one negative entry such that Eq (17) holds, then $x$ is structurally unstable, i.e. its linearized dynamics is unstable for all positive analytic functions $f^\pm$ that keep $H$ invariant at $x$.
B. Examples
Below we demonstrate the power of the log derivative approach by considering two examples commonly found in the biocontrol literature. We show that stability properties can often be obtained at a glance. Homeostasis and disturbance
rejection properties of such circuits are left to the next
section.
1) Incoherent feedforward circuit: Incoherent feedforward (IFF) circuits are both widely found in natural circuits [27], [28] and commonly used in synthetic circuit designs [29]–[31] to achieve homeostasis. We consider a simple IFF circuit below, called the sniffer model [28]:

$$ \dot{x}_1 = w - \alpha x_1 x_2, \qquad \dot{x}_2 = \beta w - \gamma x_2, \qquad (18) $$
where $x_2$ catalyzes the degradation of $x_1$, and $w$ is a disturbance to the circuit. The disturbance $w$ is considered a constant parameter for the stability analysis.
To analyze this circuit's structural stability, we first write down its log derivative matrix using the basic calculation rules established in Section II-C:

$$ H = \begin{bmatrix} -1 & -1 \\ 0 & -1 \end{bmatrix}. \qquad (19) $$
Note that this is a simple birth-death system, with constant log derivatives. Immediately, from Corollary 6, as $H$ is triangular, we conclude that any positive fixed point of this circuit is structurally stable.
We emphasize the significance of this observation. Simply by a glance, we have determined that any positive fixed point of the system in Eq (18) is stable for the parameters $\alpha, \beta, \gamma$ and $w$ taking arbitrary positive values. Furthermore, the functional forms of the production and degradation rates in (18) can vary as long as the log derivative matrix is kept the same.
The fact that structural stability is robust to changes in functional forms can, for example, be used to find alternative implementations of the same structure. A variant of the IFF motif is the following (adapted from [31]):

$$ \dot{x}_1 = \frac{w}{x_2} - \alpha x_1, \qquad \dot{x}_2 = \beta w - \gamma x_2, \qquad (20) $$
where $x_2$ inhibits the production of $x_1$ instead of degrading it. We see that the log derivative matrix for this system is the same as $H$ in Eq (19), therefore sharing the same fixed point stability properties. Note how the structural stability property is robust to changes in the functional form of the production and degradation rates of $x_1$ here. In Section IV-D we show that their homeostasis properties are the same as well.
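As a quick numerical illustration of the structural stability claim (a sketch with randomly drawn parameters, not from the paper), both implementations settle to the same positive fixed point when integrated from arbitrary positive initial conditions.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def sniffer(t, x, w, a, b, g):             # Eq (18): x2 degrades x1
    return [w - a * x[0] * x[1], b * w - g * x[1]]

def sniffer_variant(t, x, w, a, b, g):     # Eq (20): x2 inhibits production of x1
    return [w / x[1] - a * x[0], b * w - g * x[1]]

for f in (sniffer, sniffer_variant):
    w, a, b, g = 10.0 ** rng.uniform(-0.3, 0.3, size=4)   # random positive parameters
    x0 = 10.0 ** rng.uniform(-0.3, 0.3, size=2)           # random positive initial state
    sol = solve_ivp(f, (0.0, 1000.0), x0, args=(w, a, b, g), rtol=1e-8)
    x_fp = np.array([g / (a * b), b * w / g])             # positive fixed point of both circuits
    print(f.__name__, np.allclose(sol.y[:, -1], x_fp, rtol=1e-3))
```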
The above examples of IFF circuits are both simple birth-death systems, where production and degradation rates have constant log derivatives. When attempting to write more realistic models, Hill functions that capture saturation are often used. For example, the $x_1$ dynamics in Eq (20) could be modified as

$$ \dot{x}_1 = \frac{w}{x_2 + k} - \alpha x_1, \qquad (21) $$
where $k$ signifies the threshold for saturation of $x_2$'s inhibition of $x_1$. $k$ can be physically interpreted as the binding strength between the molecule $X_2$ and the enzyme that catalyzes the production of $X_1$, which may correspond to $w$.
From Section II-C, we know that the log derivative of the production rate of $x_1$ then satisfies $H^+_{12} = -\frac{x_2}{x_2 + k} \in (-1, 0)$, therefore the log derivative at a positive fixed point is

$$ H = \begin{bmatrix} -1 & -\frac{x_2}{x_2 + k} \\ 0 & -1 \end{bmatrix} \in \left\{ \begin{bmatrix} -1 & -a \\ 0 & -1 \end{bmatrix} : a \in (0, 1) \right\}. \qquad (22) $$
Although the exact value of $H$ depends on $x_2$ and $k$, since all $H$ in the set in Eq (22) satisfy Corollary 6, we can still conclude that any positive fixed point is structurally stable.
2) Sequestration negative feedback circuit: [32] proposed a circuit design based on molecular sequestration that can serve as a control module achieving homeostasis for general classes of plants connected to it. This architecture and its tradeoffs are analyzed in [33], [34], and [35] successfully implemented it in bacterial cells. We consider a simple example below containing the sequestration controller, to illustrate how structural stability can be used to analyze this control module with generic plants:

$$ \dot{x}_1 = \mu - (\eta x_1 x_2 + \gamma_1 x_1), \quad \dot{x}_2 = \theta x_3 - (\eta x_1 x_2 + \gamma_2 x_2), \quad \dot{x}_3 = \alpha \frac{x_1}{x_1 + k} - \gamma_3 x_3, \qquad (23) $$
where $x_3$ is to be controlled, and $x_1$ and $x_2$ sequester each other into a complex to be degraded. $x_2$ senses $x_3$'s concentration, as $x_3$ catalyzes the production of $x_2$, while $x_1$ actuates $x_3$ to track the reference $\mu$ by catalyzing the production of $x_3$. Here we added self-degradation or dilution of $x_1$ and $x_2$ and used a Hill function for the production rate of $x_3$ to make the model more realistic.
Using the facts from Section II-C, we directly write that the log derivative at a positive fixed point satisfies

$$ H \in \left\{ H_a := \begin{bmatrix} -1 & -a_1 & 0 \\ -a_2 & -1 & 1 \\ a_3 & 0 & -1 \end{bmatrix} : a_i \in (0, 1) \right\}, \qquad (24) $$
where $i = 1, 2, 3$. The particular values of $a_i$ depend on the fixed point and the parameters, e.g. $a_1 = \frac{\eta x_1 x_2}{\eta x_1 x_2 + \gamma_1 x_1}$. Scanning through values of $a_i$ to find an $H_a$ that is structurally stable can quickly characterize the physical conditions that yield structural stability, providing guidelines for circuit design and implementation. For example, [32] argued that homeostasis is guaranteed by the circuit when $a_1 = a_2 = 1$, which corresponds to no dilution or degradation of the controller molecules $X_1$ and $X_2$. However, under this condition, $H_a$ is not diagonally stable for any value of $a_3$, as tested by numerical computations. In contrast, if $a_1, a_2 < 1$, such as $0.99$, then $\mathrm{Re}\, H_a$ can easily become negative definite for a wide range of $a_3$ around $1$. This simple computation demonstrates that dilution of the controller molecules,
albeit damaging to the disturbance rejection property of the controller, significantly improves the stability of the closed-loop system. This enhances the observations in [33], [34].
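The scan described above takes only a few lines (an illustrative sketch, not the authors' code). It checks, over a grid of $a_3$, whether the symmetrization $\mathrm{Re}\, H_a$ is negative definite (condition 3 of Corollary 6) for $a_1 = a_2 = 1$ versus $a_1 = a_2 = 0.99$.

```python
import numpy as np

def H_a(a1, a2, a3):                       # the matrix of Eq (24)
    return np.array([[-1.0, -a1, 0.0],
                     [-a2, -1.0, 1.0],
                     [ a3, 0.0, -1.0]])

def sym_negdef(H):                         # Corollary 6, condition 3: Re H < 0
    return np.max(np.linalg.eigvalsh(0.5 * (H + H.T))) < 0

a3_grid = np.linspace(0.01, 0.999, 300)
for a12 in (1.0, 0.99):
    ok = [a3 for a3 in a3_grid if sym_negdef(H_a(a12, a12, a3))]
    if ok:
        print(f"a1 = a2 = {a12}: Re H_a < 0 for a3 in [{min(ok):.2f}, {max(ok):.2f}]")
    else:
        print(f"a1 = a2 = {a12}: the symmetrization test never passes")
```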
IV. CONTROL THROUGH STRUCTURE
We have shown that fixed points’ stability could be quickly
determined from structure. In the following, we show that
robust perfect adaptation (RPA), biologists’ term for step
disturbance rejection, can be implemented through structures.
A. Birth-death control systems
We start by extending closed birth-death dynamical systems to open birth-death control systems.
Definition 8: A birth-death control system is

$$ \dot{x} = f(x, w) = f^+(x, w) - f^-(x, w), \qquad y = h(x, w), \qquad (25) $$

where $x \in \mathbb{R}^n_{\ge 0}$ is the state, $w \in \mathbb{R}^d_{\ge 0}$ is the disturbance input, and $y \in \mathbb{R}^p$ is the output. The analytic functions $f^\pm : \mathbb{R}^n_{\ge 0} \times \mathbb{R}^d_{\ge 0} \to \mathbb{R}^n_{\ge 0}$ are the production and degradation rates, and $h : \mathbb{R}^n_{\ge 0} \times \mathbb{R}^d_{\ge 0} \to \mathbb{R}^p_{\ge 0}$ is the output function.
Note that to keep the biological interpretation of the disturbance and output as coming from rates and concentrations, the variables $w$ and $y$ are also assumed to be non-negative. In the following, we restrict our attention to the single-input-single-output (SISO) case, so $w$ and $y$ are scalars.
B. Linearized dynamics in terms of log derivatives
We assume the birth-death control system admits a positive reference point $(x, w) \in \mathbb{R}^n_{>0} \times \mathbb{R}_{>0}$ such that $f(x, w) = 0$, with positive outputs and rates: $f^\pm(x, w) > 0$, $y := h(x, w) > 0$. Linearizing the system at this reference point then yields the following system:

$$ \dot{\varepsilon} = A \varepsilon + b \delta, \qquad z = c \varepsilon + d \delta, \qquad (26) $$
where $\varepsilon_i$, $\delta$ and $z$ are the linearized variables of $x_i$, $w$ and $y$, i.e. their deviations from the reference values. The matrices are $A = \partial_x f(x, w) \in \mathbb{R}^{n \times n}$, $b = \partial_w f(x, w) \in \mathbb{R}^{n \times 1}$, $c = \partial_x h(x, w) \in \mathbb{R}^{1 \times n}$, and $d = \partial_w h(x, w)$.
To express the linearized dynamics in log derivatives, we change variables into fold-changes instead of additive differences:

$$ \dot{\tilde{\varepsilon}} = \Lambda^{-1}_\tau \left( H_A \tilde{\varepsilon} + H_b \tilde{\delta} \right), \qquad \tilde{z} = H_c \tilde{\varepsilon} + H_d \tilde{\delta}, \qquad (27) $$
where $\tilde{\varepsilon}_i = \varepsilon_i / x_i$, $\tilde{\delta} = \delta / w$, $\tilde{z} = z / y$ are the fold-change linearized variables of $x_i$, $w$ and $y$. The log derivative matrices are

$$ \begin{bmatrix} H_A & H_b \\ H_c & H_d \end{bmatrix} = \begin{bmatrix} \frac{\partial (\log f^+_i - \log f^-_i)}{\partial \log x_j} & \frac{\partial (\log f^+_i - \log f^-_i)}{\partial \log w} \\ \frac{\partial \log h}{\partial \log x_j} & \frac{\partial \log h}{\partial \log w} \end{bmatrix} \qquad (28) $$

with the right hand side functions evaluated at $(x, w)$. $\tau_i := \frac{x_i}{f_i^\pm(x, w)}$ is the reference time-scale, as before. In particular, the following relates the two versions of linearized dynamics:
$$ \begin{bmatrix} A & b \\ c & d \end{bmatrix} = \begin{bmatrix} \Lambda^{-1}_\tau \Lambda_x H_A \Lambda^{-1}_x & \frac{1}{w} \Lambda^{-1}_\tau \Lambda_x H_b \\ y\, H_c \Lambda^{-1}_x & \frac{y}{w} H_d \end{bmatrix}. \qquad (29) $$
From this, we see that the transfer function for the fold-change linearized system is

$$ \tilde{G}(s) = H_d + H_c \left( s \Lambda_\tau - H_A \right)^{-1} H_b. \qquad (30) $$
In fact, the fold-change transfer function is proportional to the traditional transfer function of the additive variables: $\frac{y}{w}\tilde{G}(s) = G(s) := d + c(sI - A)^{-1}b$. These calculations prepare us to discuss robust perfect adaptation based on structure.
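The proportionality $G(s) = \frac{y}{w}\tilde{G}(s)$ is easy to confirm numerically at $s = 0$. The sketch below (illustrative; parameters are arbitrary and the plant is the sniffer of Eq (18) with the non-adapting output $y = x_2$, chosen so that both quantities are nonzero) builds $(A, b, c, d)$ and $(H_A, H_b, H_c, H_d)$ at the same reference point and compares the two static gains.

```python
import numpy as np

w, a, b, g = 1.5, 2.0, 0.8, 0.6
x = np.array([g / (a * b), b * w / g])             # positive fixed point of Eq (18)
y = x[1]                                           # output y = x2 (deliberately not the RPA output)

# additive linearization (A, b, c, d) at (x, w)
A = np.array([[-a * x[1], -a * x[0]], [0.0, -g]])
bvec = np.array([1.0, b])
c, d = np.array([0.0, 1.0]), 0.0

# log-derivative data (H_A, H_b, H_c, H_d) at the same point
HA = np.array([[-1.0, -1.0], [0.0, -1.0]])
Hb, Hc, Hd = np.array([1.0, 1.0]), np.array([0.0, 1.0]), 0.0

G0  = d - c @ np.linalg.solve(A, bvec)             # G(0)  = d - c A^{-1} b
Gt0 = Hd - Hc @ np.linalg.solve(HA, Hb)            # G~(0) = H_d - H_c H_A^{-1} H_b, Eq (30) at s = 0
print(np.isclose(G0, (y / w) * Gt0))               # True: G(0) = (y/w) * G~(0)
```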
C. Structural robust perfect adaptation
Maintaining homeostasis despite uncertainties and distur-
bances is an essential function for biological organisms.
Because of this, biomolecular circuits that achieve robust
perfect adaptation (RPA) have been actively studied in sys-
tems biology [27]–[29], [36] and synthetic biology [30],
[31]. In particular, one implementation of RPA through
molecular sequestration has been proposed and implemented
successfully in bacterial cells [32], [35], signifying important
progress in principled design of biomolecular circuits.
On the other hand, our theoretical understanding of RPA in biomolecular systems is far from complete. From established tools of control theory, RPA as step disturbance rejection is thoroughly understood for linear dynamical systems, and the internal model principle can be used as a guideline for nonlinear systems [37]. However, biological constraints on implementable dynamics, such as the requirement that variables be positive, make the design and implementation of RPA biomolecular circuits a challenging problem in general [30], [35]. Although significant progress has been made in RPA design from nonlinear analysis of biomolecular circuits, such approaches rely sensitively on the functional form of the production and degradation rates assumed in the model. More fundamentally, most biomolecular circuits are known to have desired properties like RPA only under certain parameter and state conditions, yet RPA based on the internal model principle requires RPA to hold globally. In comparison, RPA for linearized dynamics is local in nature, and it is described in a way that is independent of the rates' functional forms. Below, we show how RPA in linearized dynamics can be robustified by implementing it through structure.
RPA, i.e. step disturbance rejection, in a linear system corresponds to the transfer function evaluating to zero at zero frequency. This means $G(0) = d - cA^{-1}b = 0$, which can be written as the determinant of a matrix, giving the following definition and proposition.
Definition 9: A birth-death control system has RPA at a reference point $(x, w)$ if its linearized dynamics at this reference point in Eq (26) satisfies $z(t) \to 0$ as $t \to \infty$ for all constant disturbances $\delta \in \mathbb{R}$.
Proposition 10 ([18], [30], [36]): A birth-death control system has RPA at reference point $(x, w)$ if and only if

$$ \det \begin{bmatrix} A & b \\ c & d \end{bmatrix} = 0. \qquad (31) $$
With Eq (27) in mind, we see that log derivatives again nicely isolate the dependence on rates into the time-scales $\tau$, prompting the following definition of structural RPA.
Definition 11: A birth-death control system has structural RPA at reference point $(x, w)$ if it has RPA at this point for all non-negative analytic rate functions $f^\pm$ and $h$ that keep the log derivatives $(H_A, H_b, H_c, H_d)$ invariant at $(x, w)$.
Practically, the variations and uncertainties described in the above definition come down to the time-scales $\tau$ taking all positive values.
Theorem 12: A birth-death control system has structural RPA at reference point $(x, w)$ if and only if

$$ \det \begin{bmatrix} H_A & H_b \\ H_c & H_d \end{bmatrix} = 0. \qquad (32) $$
Proof: It is necessary that the fold-change transfer function satisfies $\tilde{G}(0) = H_d - H_c H_A^{-1} H_b = 0$, which is equivalent to Eq (32). Then, since this condition is independent of $\tau$ and depends only on the log derivative matrices, we see it is also sufficient for structural RPA.
D. Examples
We continue the IFF and sequestration examples discussed in Section III-B. Let us consider the IFF system in Eq (18) with output $y = x_1$, which states that we desire $x_1$'s steady state concentration to be independent of the disturbance $w$. Then the log derivative matrices satisfy
$$ \begin{bmatrix} H_A & H_b \\ H_c & H_d \end{bmatrix} = \left[ \begin{array}{cc|c} -1 & -1 & 1 \\ 0 & -1 & 1 \\ \hline 1 & 0 & 0 \end{array} \right], \qquad (33) $$
where horizontal and vertical rules are added for clarity. We
calculate the determinant to be zero, concluding that the
circuit is structurally RPA.
We emphasize the significance of this statement. After a glance at the system, we can write down the log derivatives and calculate the determinant, immediately concluding that $x_1$'s steady state concentration is independent of step disturbances in $w$, for the parameters $\alpha, \beta, \gamma$ taking arbitrary positive values. A simple calculation of log derivatives is able to provide powerful conclusions.
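The structural RPA test of Theorem 12 is a one-line determinant. For the matrix of Eq (33) (a sketch, not the authors' code):

```python
import numpy as np

M = np.array([[-1.0, -1.0, 1.0],      # [H_A, H_b]
              [ 0.0, -1.0, 1.0],
              [ 1.0,  0.0, 0.0]])     # [H_c, H_d], output y = x1
print(np.isclose(np.linalg.det(M), 0.0))   # True: the sniffer has structural RPA in x1
```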
Structural RPA can be robust to variations in the functional forms of the model as well. This can be used to suggest alternative implementations of the same RPA structure. Considering the variant in Eq (20), we see that the log derivative matrices are the same with regard to the disturbance $w$ and the output $y = x_1$, therefore it has the same structural RPA property.
Furthermore, the structural mindset can help us understand how properties like RPA are valid in one regime while invalid in other regimes. Consider the variant in Eq (21) that captures saturation through a Hill function. When $x_2 \gg k$, the log derivative matrices in Eq (33) hold asymptotically. So the regime with parameter condition $x_2 \gg k$ could be considered the structural RPA regime. For any variations in parameters and functional forms of the system, as long as the system stays inside this regime, the structural RPA property holds. This suggests the view that a general birth-death system consists of several regimes, each with properties such as RPA implemented in its structure. This view is further explained and illustrated in [15].
Lastly, for the sequestration feedback circuit in Eq (23), since the goal is to make the concentration of $x_3$ asymptotically track $\mu$, we choose the output to be $y = x_3/\mu$ and treat $\mu$ as the input $w$. This yields the log derivative matrices
$$ \begin{bmatrix} H_A & H_b \\ H_c & H_d \end{bmatrix} = \left[ \begin{array}{ccc|c} -1 & -a_1 & 0 & 1 \\ -a_2 & -1 & 1 & 0 \\ a_3 & 0 & -1 & 0 \\ \hline 0 & 0 & 1 & -1 \end{array} \right], \qquad (34) $$
where $a_i \in (0, 1)$ for $i = 1, 2, 3$. As argued in [32], making $a_1 = a_2 = 1$ achieves RPA. Indeed, the determinant of the above matrix is $1 - a_1 a_2 - a_3(1 - a_1)$. When $a_1 = a_2 = 1$, this is zero. Therefore, $a_1 = a_2 = 1$ achieves structural RPA that is independent of $a_3$ and of all variations in parameters and in the rates' functional forms, as long as the log derivatives are kept invariant.
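The symbolic determinant of Eq (34) can be checked in a few lines (an illustrative sketch using sympy, not part of the paper):

```python
import sympy as sp

a1, a2, a3 = sp.symbols("a1 a2 a3", positive=True)
M = sp.Matrix([[-1, -a1,  0,  1],
               [-a2, -1,  1,  0],
               [ a3,  0, -1,  0],
               [  0,  0,  1, -1]])            # the matrix of Eq (34)
det = sp.expand(M.det())
print(det)                                    # -a1*a2 + a1*a3 - a3 + 1, i.e. 1 - a1*a2 - a3*(1 - a1)
print(det.subs({a1: 1, a2: 1}))               # 0: structural RPA when a1 = a2 = 1
```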
V. DISCUSSION
In this work, we argued that structures of biomolecular
circuits could be captured through log derivatives. We also
demonstrated that fixed point stability and step disturbance
rejection can be analyzed and designed through structure,
independent of large variations in parameters and functional
forms of circuit models.
This work builds on a train of thought that can be traced back to the very beginning of systems biology. Michaelis and Menten showed that time-scale separation and large concentration differences reveal distinct operating regimes of enzymatic regulation [8], [9]. In his pioneering work at the early days of systems biology [38], Savageau argued for the use of log derivatives as sensitivities of steady state concentrations to parameters, in order to study robustness. Furthermore, Savageau championed the view that a complex biomolecular circuit consists of several operating regimes. In particular, he proposed the concept of power-law systems that directly motivated the definition of birth-death systems and simple birth-death systems in this work. These pioneering ideas were later continued in gene regulation networks by Alon [29], and in stochasticity by Paulsson [24]. Works on these fronts became the foundational concepts and tools for systems biology.
This work, as well as several ongoing works, is an attempt at formalizing many of the inspiring ideas from this train of thought, borrowing and creating tools from control theory, chemical reaction networks, and mathematics in the process. This formalization process also reveals further implications and connections. For example, Section II-B argues that log derivatives have their meaning rooted in the structures of biomolecular circuits, which can be formalized through time-scale separation with the application of the implicit function theorem. This not only reveals the fascinating observation that log derivatives might form polytopes in general, but also provides a formal connection between powers or exponents, vertices of log derivative polytopes, and operating regimes of biomolecular circuits. Sections III and IV demonstrate in a formal way that the structures captured by log derivatives have strong robustness properties with respect to uncertainties in the rates and functional forms used in the model, significantly extending the scope of the structural view for analysis and design of biomolecular circuits.
Another train of thought from which this work borrows much is the theory of chemical reaction networks (CRNs) [16], [39]. With strong mathematical rigor, many fascinating developments on robustness and stability based on graphical structures of CRNs have recently appeared from this community [21], [40]–[42]. One challenge of CRN theory is to identify a class of biological CRNs that avoids pathologies from extreme-case CRNs [41]. This work suggests that a candidate for a biological subset of CRNs could be the set of binding and catalysis reactions. From the analysis in Section II-B, a time-scale separation argument connects the biological structures underlying systems biology models to the graphical structures of CRN theories.
Lastly, there are many exciting questions left unanswered. Although the idea that regimes and simple birth-death systems could be viewed as approximations of the full system is introduced here through heuristic arguments, it could be rigorously developed using the framework of dissipative control [43]. Also, a foundational question is what classes of chemical reaction networks admit polytopic log derivatives. These questions are worth investigating, and we hope to answer them on another occasion in the near future.
ACKNOWLEDGEMENT
The authors would like to thank Daniele Cappelletti
and John Marken for constructive discussions. F. X. and
J. C. D. are partially funded by the Defense Advanced
Research Projects Agency (Agreement HR0011-16-2-0049
and HR0011-17-2-0008).
REFERENCES
[1] M. Csete and J. C. Doyle, “Reverse engineering of biological com-
plexity,”
Science
, vol. 295, no. 5560, pp. 1664–1669, 2002.
[2] J. Doyle and M. Csete, “Motifs, control, and stability,”
PLoS Biology
,
vol. 3, no. 11, pp. 1872–1868, 2005.
[3] H. Kitano, “Towards a theory of biological robustness,”
Molecular
Systems Biology
, vol. 3, no. 1, p. 137, 2007.
[4] K. Zhou, J. C. Doyle, and K. Glover,
Robust and Optimal Control
.
Pearson, 1995.
[5] B. Ø. Palsson,
Systems Biology: Constraint-based Reconstruction and
Analysis
, stu - student edition ed. Cambridge University Press, 2015.
[6] F. A. Chandra, G. Buzi, and J. C. Doyle, “Glycolytic oscillations and
limits on robust efficiency,”
science
, vol. 333, no. 6039, pp. 187–192,
2011.
[7] R. Phillips and R. Milo,
Cell Biology by the Numbers
.
New York:
Garland Science, 2015.
[8] J. Gunawardena, “Time-scale separation – michaelis and menten’s old
idea, still bearing fruit,”
The FEBS Journal
, vol. 281, no. 2, pp. 473–
488, 2014.
[9] K. A. Johnson and R. S. Goody, “The original michaelis con-
stant: Translation of the 1913 michaelis–menten paper,”
Biochemistry
,
vol. 50, no. 39, pp. 8264–8269, Oct 2011.
[10] J. Keener and J. Sneyd,
Mathematical Physiology I
. Springer-Verlag
New York, 2009.
[11] D. Del Vecchio and R. M. Murray,
Biomolecular feedback systems
.
Princeton University Press Princeton, NJ, 2015.
[12] J. Gunawardena,
Models in Systems Biology: The Parameter Problem
and the Meanings of Robustness
. John Wiley Sons, Ltd, 2010, ch. 2,
pp. 19–47.
[13] ——, “Models in biology: ‘accurate descriptions of our pathetic
thinking’,”
BMC Biology
, vol. 12, no. 1, p. 29, Apr 2014.
[14] J. C. Doyle, “Even noisy responses can be perfect if integrated
properly,”
Cell Systems
, vol. 2, no. 2, pp. 73 – 75, 2016.
[15] J. P. Marken, F. Xiao, and R. M. Murray, “A geometric and structural
approach to the analysis and design of biological circuit dynamics: a
theory tailored for synthetic biology,”
bioRxiv
, 2020.
[16] M. Feinberg,
Foundations of Chemical Reaction Network Theory
.
Springer, 2020.
[17] E. O. Voit, H. A. Martens, and S. W. Omholt, "150 years of the mass action law," PLOS Computational Biology, vol. 11, no. 1, 2015.
[18] D. D. Vecchio and R. M. Murray,
Biomolecular Feedback Systems
,
stu - student edition ed.
Princeton University Press, 2015.
[19] K. R. Albe, M. H. Butler, and B. E. Wright, “Cellular concentrations
of enzymes and their substrates,”
Journal of Theoretical Biology
, vol.
143, no. 2, pp. 163 – 195, 1990.
[20] D. Del Vecchio, A. J. Ninfa, and E. D. Sontag, “Modular cell biology:
retroactivity and insulation,”
Molecular Systems Biology
, vol. 4, no. 1,
p. 161, 2008.
[21] A. Dickenstein, M. P. Millán, A. Shiu, and X. Tang, "Multistationarity in structured reaction networks," Bulletin of Mathematical Biology, vol. 81, no. 5, pp. 1527–1581, May 2019.
[22] T. Bajbar and O. Stein, “Coercive polynomials: stability, order of
growth, and newton polytopes,”
Optimization
, vol. 68, no. 1, pp. 99–
124, 2019.
[23] R. Murray, V. Chandrasekaran, and A. Wierman, “Newton polytopes
and relative entropy optimization,” 2020.
[24] J. Paulsson, “Models of stochastic gene expression,”
Physics of Life
Reviews
, vol. 2, pp. 157–175, 2005.
[25] O. Y. Kushel, “Unifying matrix stability concepts with a view to
applications,”
SIAM Review
, vol. 61, no. 4, pp. 643–729, 2019.
[26] A. M. Ostrowski, “Note on a theorem by hans schneider,”
Journal
of the London Mathematical Society
, vol. s1-37, no. 1, pp. 225–234,
1962.
[27] U. Alon, “Network motifs: theory and experimental approaches,”
Nature Reviews Genetics
, vol. 8, no. 6, pp. 450–461, 2007.
[28] J. E. Ferrell, “Perfect and near-perfect adaptation in cell signaling,”
Cell Systems
, vol. 2, no. 2, pp. 62–67, 2016.
[29] U. Alon,
An Introduction to Systems Biology, Design Principles of
Biological Circuits
.
London: CRC, 2006.
[30] F. Xiao and J. C. Doyle, “Robust perfect adaptation in biomolecular
reaction networks,” in
Proceedings of the 57th IEEE Conference on
Decision and Control
, in-press.
[31] T. H. Segall-Shapiro, E. D. Sontag, and C. A. Voigt, “Engineered
promoters enable constant gene expression at any copy number in
bacteria,”
Nature Biotechnology
, vol. 36, no. 4, pp. 352–358, Apr 2018.
[32] C. Briat, A. Gupta, and M. Khammash, “Antithetic integral feedback
ensures robust perfect adaptation in noisy biomolecular networks,”
Cell Systems
, vol. 2, no. 1, pp. 15 – 26, 2016.
[33] N. Olsman, F. Xiao, and J. C. Doyle, "Architectural principles for characterizing the performance of antithetic integral feedback networks," iScience, vol. 14, pp. 277–291, 2019.
[34] N. Olsman, A. Baetica, F. Xiao, Y. P. Leong, R. Murray, and J. C.
Doyle, “Hard limits and performance tradeoffs in a class of antithetic
integral feedback networks,”
Cell systems
, vol. 9, 2019.
[35] S. K. Aoki, G. Lillacci, A. Gupta, A. Baumschlager, D. Schweingruber,
and M. Khammash, “A universal biomolecular integral feedback
controller for robust perfect adaptation,”
Nature
, vol. 570, no. 7762,
pp. 533–537, Jun 2019.
[36] T.-M. Yi, Y. Huang, M. I. Simon, and J. Doyle, "Robust perfect adaptation in bacterial chemotaxis through integral feedback control," Proceedings of the National Academy of Sciences, vol. 97, no. 9, 2000.
[37] J. Huang, A. Isidori, L. Marconi, M. Mischiati, E. Sontag, and W. M.
Wonham, “Internal models in control, biology and neuroscience,” in
2018 IEEE Conference on Decision and Control (CDC)
, 2018, pp.
5370–5390.
[38] M. Savageau,
Biochemical systems analysis. A study of function and
design in molecular biology
. ADDISON WESLEY, 1976.
[39] J. Gunawardena, "Chemical reaction network theory for in-silico biologists," 2003. [Online]. Available: http://vcp.med.harvard.edu/papers/crnt.pdf
[40] F. Blanchini and E. Franco, “Structurally robust biological networks,”
BMC Systems Biology
, vol. 5, no. 1, p. 74, May 2011.
[41] A. Sadeghimanesh and E. Feliu, “The multistationarity structure of
networks with intermediates and a binomial core network,”
Bulletin
of Mathematical Biology
, vol. 81, no. 7, pp. 2428–2462, Jul 2019.
[42] M. Ali Al-Radhawi, D. Angeli, and E. D. Sontag, “A computational
framework for a lyapunov-enabled analysis of biochemical reaction
networks,”
PLOS Computational Biology
, vol. 16, no. 2, pp. 1–37, 02
2020.
[43] M. Arcak, C. Meissen, and A. Packard,
Networks of Dissipative
Systems
.
Springer, 2011.