Supplementary Material
I. TWO PHOTON FORMALISM
Assuming an initial phase of $0$, an EM field can be written as [1]:
$$\hat{E}(t) = \left(A(t) + \hat{a}_1(t)\right)\cos(\omega_0 t) + \hat{a}_2(t)\sin(\omega_0 t),$$
where $\hat{a}_{1,2}(t)$ are the Hermitian amplitude and phase quadrature operators, respectively. They describe the amplitude and phase modulation of the field ($\omega_0$ is the carrier frequency). They satisfy $\langle\hat{a}_{1,2}(t)\rangle = 0$ and their commutation relations are given by:
$$\left[\hat{a}_1(t), \hat{a}_2(t')\right] = i\,\delta(t - t').$$
We can further define $\hat{a}_{1,2}(\Omega)$ as the Fourier transform of $\hat{a}_{1,2}(t)$:
$$\hat{a}_{1,2}(\Omega) = \frac{1}{\sqrt{2\pi}}\int \hat{a}_{1,2}(t)\,e^{i\Omega t}\,dt.$$
Here, we observe that $\hat{a}_{1,2}(\Omega)$ are not Hermitian, but they commute with their own Hermitian conjugates. This means that they have an orthonormal eigenbasis but their eigenvalues are complex. Therefore, the commutation relations between them are given by:
$$\left[\begin{pmatrix}\hat{a}_1\\ \hat{a}_2\end{pmatrix}, \begin{pmatrix}\hat{a}_1^\dagger & \hat{a}_2^\dagger\end{pmatrix}\right] := \begin{pmatrix}[\hat{a}_1,\hat{a}_1^\dagger] & [\hat{a}_1,\hat{a}_2^\dagger]\\ [\hat{a}_2,\hat{a}_1^\dagger] & [\hat{a}_2,\hat{a}_2^\dagger]\end{pmatrix} = i\begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix},$$
where the $\Omega$ dependence is suppressed, and we used the following notation for the commutation relations matrix: $\left[\hat{\mathbf{Q}},\hat{\mathbf{Q}}^\dagger\right] := \left(\left[\hat{Q}_i,\hat{Q}_j^\dagger\right]\right)_{i,j}$. Hereafter we will use this notation.
Therefore, we can interpret the above as two harmonic oscillators with:
$$\hat{X}_R(\Omega) = \sqrt{2}\,\mathrm{Re}(\hat{a}_1), \qquad \hat{P}_R(\Omega) = \sqrt{2}\,\mathrm{Re}(\hat{a}_2),$$
$$\hat{X}_I(\Omega) = \sqrt{2}\,\mathrm{Im}(\hat{a}_1), \qquad \hat{P}_I(\Omega) = \sqrt{2}\,\mathrm{Im}(\hat{a}_2).$$
It follows that the commutation relations of the two harmonic oscillators are given by:
$$\left[\begin{pmatrix}\hat{X}_R\\ \hat{P}_R\\ \hat{X}_I\\ \hat{P}_I\end{pmatrix}, \begin{pmatrix}\hat{X}_R & \hat{P}_R & \hat{X}_I & \hat{P}_I\end{pmatrix}\right] = \begin{pmatrix}\sigma_y & 0\\ 0 & \sigma_y\end{pmatrix},$$
where $\sigma_{x/y}$ are the Pauli $X/Y$ matrices and the $\Omega$ dependence is suppressed.
In a multichannel interferometer, we have a vector of input quadratures $\hat{\mathbf{Q}} = \left(\hat{\mathbf{a}}_1, \hat{\mathbf{a}}_2\right)^T$, which defines a vector of Hermitian quadratures (position and momentum operators) given by $\hat{\mathbf{S}} = \left(\hat{\mathbf{X}}_R, \hat{\mathbf{P}}_R, \hat{\mathbf{X}}_I, \hat{\mathbf{P}}_I\right)^T$.
II. QUANTUM FISHER INFORMATION MATRIX
This section derives equation (3) in the main text and generalizes it to the case of multi-parameter estimation of $\mathbf{h}$. This requires using the Quantum Fisher Information Matrix (QFIM), which lower bounds the covariance matrix of the estimators of $\mathbf{h}$: $\mathrm{COV}(\mathbf{h}) \geq \mathcal{I}^{-1}$.
In a multichannel interferometer, the output sideband fields are given by $\hat{\mathbf{Q}}_{\rm out} = \begin{pmatrix}\hat{\mathbf{b}}_1(\Omega)\\ \hat{\mathbf{b}}_2(\Omega)\end{pmatrix}$ with commutation relations:
$$\left[\begin{pmatrix}\hat{\mathbf{b}}_1\\ \hat{\mathbf{b}}_2\end{pmatrix}, \begin{pmatrix}\hat{\mathbf{b}}_1^\dagger & \hat{\mathbf{b}}_2^\dagger\end{pmatrix}\right] = i\begin{pmatrix}0 & \mathbb{1}_k\\ -\mathbb{1}_k & 0\end{pmatrix}, \tag{1}$$
where $k$ is the number of output fields and the $\Omega$ dependence is suppressed.
As above, the Hermitian quadratures are:
$$\hat{\mathbf{S}} = \begin{pmatrix}\hat{\mathbf{X}}_R\\ \hat{\mathbf{P}}_R\\ \hat{\mathbf{X}}_I\\ \hat{\mathbf{P}}_I\end{pmatrix} = \sqrt{2}\begin{pmatrix}\mathrm{Re}\left(\hat{\mathbf{b}}_1\right)\\ \mathrm{Re}\left(\hat{\mathbf{b}}_2\right)\\ \mathrm{Im}\left(\hat{\mathbf{b}}_1\right)\\ \mathrm{Im}\left(\hat{\mathbf{b}}_2\right)\end{pmatrix},$$
which is the corresponding quadratures vector of $2k$ harmonic oscillators.
Given the input-output relations of $\hat{\mathbf{Q}}$ (eq. (1) in the main text), the corresponding relations for $\hat{\mathbf{S}}$ are given by:
$$\hat{\mathbf{S}}_{\rm out} = M'\hat{\mathbf{S}}_{\rm in} + V'\mathbf{h}' + A'\Delta\mathbf{x}', \tag{2}$$
where for all complex vectors $\mathbf{u}$ $(= \mathbf{h}, \Delta\mathbf{x})$ we have $\mathbf{u}' = \sqrt{2}\begin{pmatrix}\mathrm{Re}(\mathbf{u})\\ \mathrm{Im}(\mathbf{u})\end{pmatrix}$, and for all complex matrices $M$ $(= M, V, A)$ we have $M' = \begin{pmatrix}\mathrm{Re}(M) & -\mathrm{Im}(M)\\ \mathrm{Im}(M) & \mathrm{Re}(M)\end{pmatrix}$. $\mathbf{h}$, $\Delta\mathbf{x}$, $M$, $V$ and $A$ are as defined in the main text. Note that we expanded the complex-valued vector of parameters, $\mathbf{h}$, to the real-valued vector $\mathbf{h}'$, which consists of $4$ parameters: $\mathrm{Re}(h_+)$, $\mathrm{Im}(h_+)$, $\mathrm{Re}(h_\times)$, $\mathrm{Im}(h_\times)$.
Since we consider an initial Gaussian state and the evolution is through a Gaussian channel, the final state of the output modes is also Gaussian and can be characterized by:
$$\mathbf{d}_s(\mathbf{h}) = \langle\hat{\mathbf{S}}\rangle, \qquad \Sigma_{i,j} = \frac{1}{2}\left\langle \hat{S}_i\hat{S}_j + \hat{S}_j\hat{S}_i\right\rangle - \langle\hat{S}_i\rangle\langle\hat{S}_j\rangle,$$
where all the information about $\mathbf{h}$ is encoded in the first moment vector $\mathbf{d}_s(\mathbf{h})$ (mean vector) and $\Sigma$ is the covariance matrix.
The Quantum Fisher Information Matrix (QFIM) about $\mathbf{h}'$ can be expressed using these first two moments [2, 3]:
$$\mathcal{I}_{\mathbf{h}'} = 2\left(\partial_{\mathbf{h}'}\mathbf{d}_s\right)^T\Sigma^{-1}\left(\partial_{\mathbf{h}'}\mathbf{d}_s\right) = 2V'^\dagger\Sigma^{-1}V', \tag{3}$$
where we used the fact that the state is Gaussian, that all the information is encoded in $\mathbf{d}_s$, and that $\partial_{\mathbf{h}'}\mathbf{d}_s = V'$.
In the following subsection, we will show that the QFIM can be expressed in a more compact form involving only the mean values of $\hat{\mathbf{Q}}$ ($\mathbf{d}_q$) and the covariance matrix of $\hat{\mathbf{Q}}$ ($\Sigma_q$).
A. Complex compact form of the QFIM
In this subsection we show that eq. 3 in the main text is a compact form of supplemental eq. 3.
Following ref. [4], let us first introduce the notion of circular symmetry of real matrices. A real symmetric matrix $\mathbf{A}$ has a circular symmetry if it takes the form of:
$$\mathbf{A} = \begin{pmatrix}A & -\bar{A}\\ \bar{A} & A\end{pmatrix}.$$
If $\mathbf{A}$ satisfies this symmetry then we can define its complex-compact form: $A_c = A + i\bar{A}$.
This mapping between complex Hermitian matrices and real symmetric matrices with this symmetry,
$$M \longleftrightarrow \begin{pmatrix}\mathrm{Re}(M) & -\mathrm{Im}(M)\\ \mathrm{Im}(M) & \mathrm{Re}(M)\end{pmatrix},$$
is a homomorphism, i.e. it preserves multiplication:
$$A_c = B_cC_c \iff \begin{pmatrix}A & -\bar{A}\\ \bar{A} & A\end{pmatrix} = \begin{pmatrix}B & -\bar{B}\\ \bar{B} & B\end{pmatrix}\begin{pmatrix}C & -\bar{C}\\ \bar{C} & C\end{pmatrix}.$$
As a result, identity is mapped to identity:
$$\mathbb{1}_k \longleftrightarrow \begin{pmatrix}\mathbb{1}_k & 0\\ 0 & \mathbb{1}_k\end{pmatrix} = \mathbb{1}_{2k}, \tag{4}$$
and $\left(A^{-1}\right)_c = A_c^{-1}$.
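As a quick numerical illustration of this homomorphism, here is a minimal numpy sketch; the matrices below are random placeholders, not quantities from the text:

```python
import numpy as np

def to_real(M):
    """Map a complex matrix M to the circular-symmetric real form
    [[Re M, -Im M], [Im M, Re M]]."""
    return np.block([[M.real, -M.imag], [M.imag, M.real]])

def to_complex(A):
    """Recover the complex-compact form A_c = A + i*Abar from a
    circular-symmetric real matrix."""
    k = A.shape[0] // 2
    return A[:k, :k] + 1j * A[k:, :k]

rng = np.random.default_rng(0)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

assert np.allclose(to_real(B @ C), to_real(B) @ to_real(C))   # products are preserved
assert np.allclose(to_real(np.linalg.inv(B)),
                   np.linalg.inv(to_real(B)))                 # hence (A^-1)_c = (A_c)^-1
assert np.allclose(to_complex(to_real(B)), B)                 # the map is invertible
print("complex-compact mapping checks passed")
```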
Let us now prove the following claim: if the covariance matrix of the estimators of (real-valued) $\mathbf{h}'$ has a circular symmetry, i.e. it takes the form of:
$$\mathrm{COV}(\mathbf{h}') = \begin{pmatrix}C & -\bar{C}\\ \bar{C} & C\end{pmatrix},$$
then the covariance matrix of the estimators of (complex-valued) $\mathbf{h}$ is the complex-compact form of $\mathrm{COV}(\mathbf{h}')$:
$$\mathrm{COV}(\mathbf{h}) = C + i\bar{C}.$$
Proof: to show this we need to show that for any $h_\varphi = \cos(\varphi)\,h_+ + \sin(\varphi)\,h_\times$:
$$\frac{1}{2}\mathrm{var}\left(\mathrm{Re}(h_\varphi)\right) + \frac{1}{2}\mathrm{var}\left(\mathrm{Im}(h_\varphi)\right) = \mathbf{u}_\varphi^T\left(C + i\bar{C}\right)\mathbf{u}_\varphi, \tag{5}$$
with $\mathbf{u}_\varphi = \begin{pmatrix}\cos(\varphi) & \sin(\varphi)\end{pmatrix}^T$. Given the circular symmetry of $\mathrm{COV}(\mathbf{h}')$ we observe that $\mathrm{var}\left(\mathrm{Re}(h_\varphi)\right) = \mathrm{var}\left(\mathrm{Im}(h_\varphi)\right) = \mathbf{u}_\varphi^T C\,\mathbf{u}_\varphi$. Since $\bar{C}$ is anti-symmetric, $\mathbf{u}_\varphi^T\bar{C}\,\mathbf{u}_\varphi = 0$, and thus eq. 5 is satisfied. Note that we could omit $\bar{C}$, but we keep it for brevity of notation afterwards.
This immediately implies that if $\mathcal{I}_{\mathbf{h}'}$ satisfies a circular symmetry:
$$\mathcal{I}_{\mathbf{h}'} = \begin{pmatrix}I & -\bar{I}\\ \bar{I} & I\end{pmatrix},$$
then the Cramér-Rao bound for $\mathrm{COV}(\mathbf{h})$ is given by the complex-compact form of $\mathcal{I}_{\mathbf{h}'}$: $\mathrm{COV}(\mathbf{h}) \geq \left(I + i\bar{I}\right)^{-1}$, i.e. the QFIM about $\mathbf{h}$ is given by: $\mathcal{I} = I + i\bar{I}$.
Observe that by definition $V'$ satisfies this circular symmetry. Hence if $\Sigma$ satisfies it:
$$\Sigma = \begin{pmatrix}\Sigma & -\bar{\Sigma}\\ \bar{\Sigma} & \Sigma\end{pmatrix}, \tag{6}$$
then $\mathcal{I}_{\mathbf{h}'}$ (eq. 3) also satisfies it. Therefore, given that $\Sigma$ has a circular symmetry, the QFIM about $\mathbf{h}$ reads:
$$\mathcal{I} = 2V^\dagger\Sigma_c^{-1}V,$$
where $\Sigma_c$ is the complex-compact form of $\Sigma$: $\Sigma_c = \Sigma + i\bar{\Sigma}$.
The covariance matrix of $\hat{\mathbf{Q}}$ is defined as:
$$\left(\Sigma_q\right)_{i,j} = \frac{1}{2}\left\langle\left\{\hat{Q}_i, \hat{Q}_j^\dagger\right\}\right\rangle - \langle\hat{Q}_i\rangle\langle\hat{Q}_j^\dagger\rangle,$$
with $\left\{\hat{Q}_i, \hat{Q}_j^\dagger\right\} := \hat{Q}_i\hat{Q}_j^\dagger + \hat{Q}_j^\dagger\hat{Q}_i$ being the anti-commutator of $\hat{Q}_i, \hat{Q}_j^\dagger$.
Given that $\Sigma$ satisfies the circular symmetry, $\mathrm{COV}\left(\mathrm{Re}(\hat{Q}_i), \mathrm{Re}(\hat{Q}_j)\right) = \mathrm{COV}\left(\mathrm{Im}(\hat{Q}_i), \mathrm{Im}(\hat{Q}_j)\right)$ and $\mathrm{COV}\left(\mathrm{Re}(\hat{Q}_i), \mathrm{Im}(\hat{Q}_j)\right) = -\mathrm{COV}\left(\mathrm{Im}(\hat{Q}_i), \mathrm{Re}(\hat{Q}_j)\right)$. Hence:
$$\left(\Sigma_q\right)_{i,j} = \mathrm{Cov}\left(\hat{Q}_i, \hat{Q}_j^\dagger\right) = 2\left[\mathrm{COV}\left(\mathrm{Re}(\hat{Q}_i), \mathrm{Re}(\hat{Q}_j)\right) + i\,\mathrm{COV}\left(\mathrm{Im}(\hat{Q}_i), \mathrm{Re}(\hat{Q}_j)\right)\right] = \left(\Sigma_c\right)_{i,j}.$$
We can thus write $\mathcal{I}$ as:
$$\mathcal{I} = 2V^\dagger\Sigma_q^{-1}V. \tag{7}$$
This symmetry of the covariance matrix is satisfied in our problem, given that the displacement noise process is stationary (see section V for details). The initial state is a coherent state, hence $\Sigma_i = \frac{1}{2}\mathbb{1}$, and the symmetry is satisfied for this state. In the interferometer, it undergoes a Gaussian channel which maps the covariance matrix to:
$$\Sigma = R\,\Sigma_i\,R^\dagger + \Lambda,$$
where $R = \begin{pmatrix}\mathrm{Re}(M) & -\mathrm{Im}(M)\\ \mathrm{Im}(M) & \mathrm{Re}(M)\end{pmatrix}$, with $M$ being the transfer matrix, and $\Lambda$ is due to classical displacement noise (thermal, seismic, etc.). Given that the classical displacement noise is stationary and i.i.d., $\Lambda$ takes the form (see section V):
$$\Lambda = \frac{\delta^2}{2}\begin{pmatrix}\mathrm{Re}\left(AA^\dagger\right) & -\mathrm{Im}\left(AA^\dagger\right)\\ \mathrm{Im}\left(AA^\dagger\right) & \mathrm{Re}\left(AA^\dagger\right)\end{pmatrix},$$
with $A$ being the transfer matrix of the displacement noise. Since $R$, $\Sigma_i$, $\Lambda$ satisfy this symmetry, $\Sigma$ also satisfies it, and we can thus use:
$$\Sigma_q = \frac{1}{2}MM^\dagger + \frac{\delta^2}{2}AA^\dagger.$$
Inserting this into eq. 7 yields:
$$\mathcal{I} = 4V^\dagger\left(MM^\dagger + \delta^2AA^\dagger\right)^{-1}V.$$
We observe (from numerics) that the eigenvector of $\mathcal{I}$ with maximal eigenvalue corresponds to $h_+$, hence this is the polarization with maximal sensitivity. Focusing on this maximal-sensitivity polarization reduces the problem to single complex parameter estimation of $h_+$, and thus the quantity of interest is the QFI about $h_+$. The single-parameter QFI ($\mathcal{I}$) is a special case of the multi-parameter QFIM and thus reads:
$$\mathcal{I} = 2V_+^\dagger\Sigma_q^{-1}V_+ = 4V_+^\dagger\left(MM^\dagger + \delta^2AA^\dagger\right)^{-1}V_+.$$
These are the expressions in eqs. (3) and (5) of the main text. Hereafter, we will focus mainly on the single-parameter estimation of $h_+$ and will thus use this QFI expression.
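For reference, the sketch below evaluates this QFI expression with numpy; $M$, $A$, $V_+$ and $\delta$ are random placeholders standing in for the actual transfer matrices and signal vector of the main text:

```python
import numpy as np

def qfi_hplus(M, A, V_plus, delta):
    """Single-parameter QFI about h_+ in the complex compact form:
    I = 4 V_+^dag (M M^dag + delta^2 A A^dag)^(-1) V_+."""
    noise = M @ M.conj().T + delta**2 * (A @ A.conj().T)
    return 4 * np.real(np.vdot(V_plus, np.linalg.solve(noise, V_plus)))

rng = np.random.default_rng(1)
k = 4                                   # number of output fields (placeholder)
M = rng.normal(size=(2 * k, 2 * k)) + 1j * rng.normal(size=(2 * k, 2 * k))
A = rng.normal(size=(2 * k, k)) + 1j * rng.normal(size=(2 * k, k))
V_plus = rng.normal(size=2 * k) + 1j * rng.normal(size=2 * k)

print(qfi_hplus(M, A, V_plus, delta=0.1))   # QFI for this toy configuration
```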
III. FISHER INFORMATION WITH HOMODYNE MEASUREMENT
For the $2k$ output quadratures $\hat{\mathbf{Q}}_{\rm out} = \begin{pmatrix}\hat{\mathbf{b}}_1\\ \hat{\mathbf{b}}_2\end{pmatrix}$, let us consider a homodyne measurement of $l \leq k$ commuting quadratures, $T_h^\dagger\hat{\mathbf{Q}}_{\rm out}$, where $T_h$ is a $2k\times l$ matrix. The outcomes of this measurement have an $l$-dimensional complex Gaussian distribution with a mean vector $\sqrt{2}\,T_h^\dagger V\mathbf{h}$ and a covariance matrix $\sigma_h = T_h^\dagger\Sigma_qT_h$.
The Fisher information (FI) about $h_+$ is therefore [3]:
$$F = 2V_+^\dagger T_h\left(T_h^\dagger\Sigma_qT_h\right)^{-1}T_h^\dagger V_+. \tag{8}$$
The space of quadrature operators is a $2k$-dimensional linear space. For convenience, we can represent these operators as $2k$-dimensional column vectors. A single quadrature $\mathbf{u}^\dagger\hat{\mathbf{Q}}_{\rm out}$ is represented by the (column) vector $\mathbf{u}$, and our $l$ quadratures, $T_h^\dagger\hat{\mathbf{Q}}_{\rm out}$, are represented by the $l$ column vectors of the matrix $T_h$. We then denote the projection operator onto the $l$ measured quadratures as $\Pi_h$; observe that $\Pi_h = T_hT_h^\dagger$.
We now show that this FI can be decomposed into the sum of the FIs of different subspaces of $\Pi_h$. Let us decompose $\Pi_h$ into orthogonal subspaces, $\Pi_h = \sum_i\Pi_{h_i}$, and denote the FI given measurement of the $\Pi_{h_i}$ quadratures as $F_i$:
$$F_i = 2V_+^\dagger T_{h_i}\left(T_{h_i}^\dagger\sigma_hT_{h_i}\right)^{-1}T_{h_i}^\dagger V_+.$$
Given that $\sigma_h$ is block diagonal in this decomposition, the measurements of $\Pi_{h_i}$ are statistically independent and thus $F = \sum_iF_i$.
Formally:
$$F \overset{(*)}{=} 2V_+^\dagger\left(\sum_i\Pi_{h_i}\sigma_h\Pi_{h_i}\right)^{-1}V_+ \overset{(\#)}{=} \sum_i 2V_+^\dagger T_{h_i}\left(T_{h_i}^\dagger\sigma_hT_{h_i}\right)^{-1}T_{h_i}^\dagger V_+ = \sum_iF_i, \tag{9}$$
where $(*)$ is due to the fact that $\sigma_h$ is block diagonal and $(\#)$ is basically:
$$\begin{pmatrix}V_1^\dagger & V_2^\dagger & \cdots & V_j^\dagger\end{pmatrix}\begin{pmatrix}\sigma_1 & & & \\ & \sigma_2 & & \\ & & \ddots & \\ & & & \sigma_j\end{pmatrix}^{-1}\begin{pmatrix}V_1\\ V_2\\ \vdots\\ V_j\end{pmatrix} = \sum_iV_i^\dagger\sigma_i^{-1}V_i.$$
We can use this fact to analyze DFI schemes. For example, for phase quadratures measurement, the displacement-free subspace (DFS) is an eigenspace of the covariance matrix, and thus the covariance matrix is block diagonal in the decomposition into the coupled subspace and the DFS. We thus have $F = F_C + F_{\rm DFS}$, where $F_C$ is the information from the coupled subspace and $F_{\rm DFS}$ is the information from the DFS. The quantity
$$\eta = \frac{F_{\rm DFS}}{F_{\rm DFS} + F_C} \tag{10}$$
is the fraction of the information that comes from the DFS and thus quantifies the effectiveness of the DFI.
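The numpy sketch below illustrates eqs. 8 and 9 on placeholder matrices: for a covariance that is block diagonal with respect to two measured subspaces, the homodyne FI splits into the sum of the two subspace FIs, and a ratio analogous to eq. 10 can be read off. None of the matrices below are the actual interferometer quantities.

```python
import numpy as np

def homodyne_fi(T_h, Sigma_q, V_plus):
    """Homodyne FI about h_+ (supplemental eq. 8):
    F = 2 V_+^dag T_h (T_h^dag Sigma_q T_h)^(-1) T_h^dag V_+."""
    proj = T_h.conj().T @ V_plus
    sigma_h = T_h.conj().T @ Sigma_q @ T_h
    return 2 * np.real(np.vdot(proj, np.linalg.solve(sigma_h, proj)))

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
V_plus = rng.normal(size=6) + 1j * rng.normal(size=6)

# a covariance that is block diagonal w.r.t. the first/last three quadratures
blockA = X[:3, :3] @ X[:3, :3].conj().T + np.eye(3)
blockB = X[3:, 3:] @ X[3:, 3:].conj().T + np.eye(3)
Sigma_q = np.block([[blockA, np.zeros((3, 3))],
                    [np.zeros((3, 3)), blockB]])

T_1 = np.eye(6)[:, :2]    # two quadratures measured in the first subspace
T_2 = np.eye(6)[:, 3:5]   # two quadratures measured in the second subspace
T_h = np.hstack([T_1, T_2])

F_total = homodyne_fi(T_h, Sigma_q, V_plus)
F_1, F_2 = homodyne_fi(T_1, Sigma_q, V_plus), homodyne_fi(T_2, Sigma_q, V_plus)
assert np.isclose(F_total, F_1 + F_2)   # eq. 9: FI adds over independent subspaces
eta = F_2 / (F_1 + F_2)                 # analogue of eq. 10 for this toy split
print(F_total, eta)
```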
IV. OPTIMAL MEASUREMENT BASIS
We prove here that the optimal quadrature to be measured is $\Sigma_q^{-1}V_+$, i.e. the operator $\left(\Sigma_q^{-1}V_+\right)\cdot\hat{\mathbf{Q}}_{\rm out}$. We then extend this to the multi-parameter case, proving that measuring the two quadratures $\Sigma_q^{-1}V$ saturates the QFIM.
Consider the single-parameter estimation of $h_+$. The mean vector is $\mathbf{d}_q = V\mathbf{h}$ and the covariance matrix is $\Sigma_q$ (we use here the complex compact form). Measuring the quadrature $\mathbf{u}$ of this Gaussian state yields the following FI about $h_+$ (special case of eq. 8):
$$F = 2\,\frac{|\mathbf{u}\cdot V_+|^2}{\mathbf{u}^\dagger\Sigma_q\mathbf{u}}. \tag{11}$$
From the Cauchy-Schwarz inequality,
$$\left|\left(\sqrt{\Sigma_q}\,\mathbf{u}\right)\cdot\left(\sqrt{\Sigma_q}^{\,-1}V_+\right)\right|^2 \leq \left(\mathbf{u}^\dagger\Sigma_q\mathbf{u}\right)\left(V_+^\dagger\Sigma_q^{-1}V_+\right) \;\Rightarrow\; 2\,\frac{|\mathbf{u}\cdot V_+|^2}{\mathbf{u}^\dagger\Sigma_q\mathbf{u}} \leq 2\,V_+^\dagger\Sigma_q^{-1}V_+, \tag{12}$$
where the right-hand side of the inequality is the expression for the QFI, and equality is obtained if and only if
$$\sqrt{\Sigma_q}\,\mathbf{u} \propto \sqrt{\Sigma_q}^{\,-1}V_+ \iff \mathbf{u} \propto \Sigma_q^{-1}V_+. \tag{13}$$
Hence, the QFI is saturated given that the distributed quadrature $\Sigma_q^{-1}V_+$ is measured. In general, the QFI is saturated by measuring a set of quadratures if and only if $\Sigma_q^{-1}V_+$ is contained in the subspace spanned by them.
Regarding the multi-parameter estimation of $h_+, h_\times$ (or any other polarizations), we show that the QFIM is saturated by measuring the two quadratures $\Sigma_q^{-1}V$, and this is therefore the optimal measurement.
Proof: observe that for any projection operator $\Pi$:
$$V^\dagger\sqrt{\Sigma_q}^{\,-1}\left(\mathbb{1} - \Pi\right)\sqrt{\Sigma_q}^{\,-1}V \geq 0 \;\Rightarrow\; V^\dagger\sqrt{\Sigma_q}^{\,-1}\Pi\sqrt{\Sigma_q}^{\,-1}V \leq V^\dagger\Sigma_q^{-1}V, \tag{14}$$
with equality if and only if $\Pi\sqrt{\Sigma_q}^{\,-1}V = \sqrt{\Sigma_q}^{\,-1}V$. Taking the projection operator $\Pi = \sqrt{\Sigma_q}\,T_h\left(T_h^\dagger\Sigma_qT_h\right)^{-1}T_h^\dagger\sqrt{\Sigma_q}$ and inserting it into eq. 14, we obtain:
$$2V^\dagger T_h\left(T_h^\dagger\Sigma_qT_h\right)^{-1}T_h^\dagger V \leq 2V^\dagger\Sigma_q^{-1}V.$$
The left term is exactly the homodyne FI. Note that our $\Pi$ is the projection operator onto the span of the column vectors of $\sqrt{\Sigma_q}\,T_h$, denoted as $\mathcal{C}\left(\sqrt{\Sigma_q}\,T_h\right)$. Hence equality is obtained iff $\mathcal{C}\left(\sqrt{\Sigma_q}^{\,-1}V\right) \subseteq \mathcal{C}\left(\sqrt{\Sigma_q}\,T_h\right)$, and thus the minimal space of quadratures that saturates the inequality is $T_h = \Sigma_q^{-1}V\Lambda$, where $\Lambda$ is a normalization and orthogonalization matrix.
This was also proven in ref. [5].
Since the multi-parameter case requires commutativity of the quadratures given by the column vectors of $\Sigma_q^{-1}V$, we prove a useful claim: if the quadratures given by the column vectors of $V$ commute and $\Sigma_q$ is a conjugate symplectic matrix, then the quadratures given by $\Sigma_q^{-1}V$ commute and thus the QFIM is achievable.
Proof: since conjugate symplectic matrices form a group, $\Sigma_q$ being conjugate symplectic implies that $\Sigma_q^{-1}$ is conjugate symplectic, i.e., denoting $W = \begin{pmatrix}0 & \mathbb{1}_k\\ -\mathbb{1}_k & 0\end{pmatrix}$:
$$\left(\Sigma_q^{-1}\right)^\dagger W\,\Sigma_q^{-1} = W.$$
Hence:
$$V \text{ commute} \;\leftrightarrow\; V^\dagger WV = 0 \;\Rightarrow\; \left(\Sigma_q^{-1}V\right)^\dagger W\left(\Sigma_q^{-1}V\right) = V^\dagger WV = 0.$$
Therefore the quadratures $\Sigma_q^{-1}V$ commute.
In our problem, the column vectors of $V$ are in the phase quadratures, hence they commute. Therefore, in order to show commutativity of the optimal quadratures, it suffices to show that $\Sigma_q$ is symplectic.
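A minimal numerical check of the optimality argument (with a random placeholder $\Sigma_q$ and $V_+$): the single-quadrature FI of eq. 11 evaluated at $\mathbf{u} = \Sigma_q^{-1}V_+$ reproduces the QFI $2V_+^\dagger\Sigma_q^{-1}V_+$, while other quadratures fall below it.

```python
import numpy as np

def single_quadrature_fi(u, Sigma_q, V_plus):
    """FI of eq. 11: F = 2 |u . V_+|^2 / (u^dag Sigma_q u)."""
    return 2 * abs(np.vdot(u, V_plus))**2 / np.real(np.vdot(u, Sigma_q @ u))

rng = np.random.default_rng(3)
n = 6
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
Sigma_q = X @ X.conj().T + np.eye(n)            # Hermitian positive definite placeholder
V_plus = rng.normal(size=n) + 1j * rng.normal(size=n)

qfi = 2 * np.real(np.vdot(V_plus, np.linalg.solve(Sigma_q, V_plus)))
u_opt = np.linalg.solve(Sigma_q, V_plus)        # optimal quadrature Sigma_q^-1 V_+

assert np.isclose(single_quadrature_fi(u_opt, Sigma_q, V_plus), qfi)
for _ in range(5):                              # any other quadrature does worse
    u = rng.normal(size=n) + 1j * rng.normal(size=n)
    assert single_quadrature_fi(u, Sigma_q, V_plus) <= qfi + 1e-9
print("the quadrature Sigma_q^-1 V_+ saturates the QFI")
```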
V. QFI WITH THERMAL DISPLACEMENT NOISE
Displacement of optical components leads to a noisy displacement of the quadratures, i.e., in the Heisenberg picture: $\hat{\mathbf{Q}} \to \hat{\mathbf{Q}} + A\Delta\mathbf{x}$, where $\Delta\mathbf{x}$ is a multivariate Gaussian random variable.
The new state under the action of this noise is a Gaussian mixture of states,
$$\rho = \int p(\Delta\mathbf{x})\,\rho(\Delta\mathbf{x})\,d\Delta\mathbf{x}, \tag{15}$$
and is therefore also Gaussian.
Note that while $\Delta\mathbf{x}(t)$ is a real vector, $\Delta\mathbf{x}(\Omega)$ is complex. The transformation of the Hermitian quadratures vector $\hat{\mathbf{S}}$ is therefore given by
$$\hat{\mathbf{S}} \to \hat{\mathbf{S}} + A'\Delta\mathbf{x}', \tag{16}$$
where
$$A' = \begin{pmatrix}\mathrm{Re}(A) & -\mathrm{Im}(A)\\ \mathrm{Im}(A) & \mathrm{Re}(A)\end{pmatrix}, \qquad \Delta\mathbf{x}' = \sqrt{2}\begin{pmatrix}\mathrm{Re}(\Delta\mathbf{x})\\ \mathrm{Im}(\Delta\mathbf{x})\end{pmatrix}. \tag{17}$$
Since $\langle\Delta\mathbf{x}\rangle = 0$, the vector of first moments $\mathbf{d}_s$ is unchanged. The covariance matrix, however, changes to:
$$\Sigma = \Sigma_i + \left\langle\left(A'\Delta\mathbf{x}'\right)\left(A'\Delta\mathbf{x}'\right)^T\right\rangle = \Sigma_i + A'\Sigma_{\Delta x}A'^\dagger, \tag{18}$$
where $\Sigma_i$ is the covariance matrix of the state in the absence of displacement noise and $\Sigma_{\Delta x} = \left\langle\Delta\mathbf{x}'\left(\Delta\mathbf{x}'\right)^T\right\rangle$ is the covariance matrix of $\Delta\mathbf{x}'$.
We assume that the displacement noise $\{\Delta\mathbf{x}(t)\}_t$ is a Gaussian stationary process, where the different $\Delta x_i(t), \Delta x_j(t)$ are i.i.d. Therefore $\Delta\mathbf{x}(\Omega)$ is also a Gaussian random variable, with a covariance matrix given by:
$$\left\langle\left(\mathrm{Re}(\Delta x)\right)^2\right\rangle = \frac{2}{T}\int_0^T\!\!\int_0^T \left\langle\Delta x(t_1)\,\Delta x(t_2)\right\rangle\cos(\Omega t_1)\cos(\Omega t_2)\,dt_1\,dt_2 \xrightarrow{T\to\infty} \int_0^\infty C(\tau)\cos(\Omega\tau)\,d\tau,$$
$$\left\langle\left(\mathrm{Im}(\Delta x)\right)^2\right\rangle = \frac{2}{T}\int_0^T\!\!\int_0^T \left\langle\Delta x(t_1)\,\Delta x(t_2)\right\rangle\sin(\Omega t_1)\sin(\Omega t_2)\,dt_1\,dt_2 \xrightarrow{T\to\infty} \int_0^\infty C(\tau)\cos(\Omega\tau)\,d\tau,$$
$$\left\langle\mathrm{Re}(\Delta x)\,\mathrm{Im}(\Delta x)\right\rangle = \frac{2}{T}\int_0^T\!\!\int_0^T \left\langle\Delta x(t_1)\,\Delta x(t_2)\right\rangle\cos(\Omega t_1)\sin(\Omega t_2)\,dt_1\,dt_2 \xrightarrow{T\to\infty} 0,$$
and of course all the correlations of $\Delta x_i, \Delta x_j$ $(i\neq j)$ vanish. Therefore $\Sigma_{\Delta x} = \frac{\delta^2}{2}\mathbb{1}$, and thus $\Delta\mathbf{x}' \sim \mathcal{N}\left(0, \frac{\delta^2}{2}\mathbb{1}\right)$.
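A Monte-Carlo numpy sketch of this statement. The process below is an assumed AR(1) toy model standing in for a single component of the physical displacement noise; the point is only that stationarity makes the Re/Im variances of the finite-time Fourier component equal and their cross-correlation vanish (up to sampling error).

```python
import numpy as np

rng = np.random.default_rng(4)
T, dt = 100.0, 0.05
t = np.arange(0.0, T, dt)
Omega = 2 * np.pi * 1.3                 # analysis frequency (arbitrary, away from 0)
n_real, phi = 2000, 0.98                # number of realizations, AR(1) memory

# stationary Gaussian (AR(1)) process standing in for a single Delta x(t)
x = np.zeros((n_real, t.size))
for i in range(1, t.size):
    x[:, i] = phi * x[:, i - 1] + rng.normal(scale=np.sqrt(1 - phi**2), size=n_real)

# finite-time cosine/sine (Re/Im) Fourier components at Omega
re = np.sqrt(2 / T) * (x * np.cos(Omega * t)).sum(axis=1) * dt
im = np.sqrt(2 / T) * (x * np.sin(Omega * t)).sum(axis=1) * dt

cov = np.cov(re, im)
print("var(Re) =", cov[0, 0], "var(Im) =", cov[1, 1], "cov(Re,Im) =", cov[0, 1])
# var(Re) and var(Im) agree and cov(Re,Im) is consistent with zero,
# i.e. the covariance of Delta x(Omega) is proportional to the identity
```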
Since $\Sigma_i = \frac{1}{2}M'M'^\dagger$, we get:
$$\Sigma = \frac{1}{2}\left(M'M'^\dagger + \delta^2A'A'^\dagger\right). \tag{19}$$
The QFIM therefore reads:
$$\mathcal{I}_{\mathbf{h}'} = 2\left(\partial_{\mathbf{h}'}\mathbf{d}_s\right)^T\Sigma^{-1}\left(\partial_{\mathbf{h}'}\mathbf{d}_s\right) = 4V'^\dagger\left(M'M'^\dagger + \delta^2A'A'^\dagger\right)^{-1}V'. \tag{20}$$
Alternatively, the form of the covariance matrix can also be derived directly from the Wigner function. Due to the displacement noise, we have:
$$W[\rho] = \int p(\Delta\mathbf{x})\,W[\rho(\Delta\mathbf{x})]\,d\Delta\mathbf{x}. \tag{21}$$
For $k$ output quadratures, the Wigner function per realization of $\Delta\mathbf{x}$ is:
$$W(\Delta\mathbf{x}) = \frac{1}{(2\pi)^{2k}\sqrt{\mathrm{Det}(\Sigma_i)}}\exp\left[-\frac{1}{2}\left(\mathbf{S} - \mathbf{d}_h - \mathbf{d}_{\Delta x}\right)^T\Sigma_i^{-1}\left(\mathbf{S} - \mathbf{d}_h - \mathbf{d}_{\Delta x}\right)\right].$$
Since $p(\Delta\mathbf{x})$ is Gaussian, the averaging (eq. 21) is basically a convolution of two Gaussian distributions,
$$\mathcal{N}\left(\mathbf{d}_s, \Sigma_i\right) * \mathcal{N}\left(0, \mathrm{Cov}\left(\mathbf{d}_{\Delta x}\right)\right) = \mathcal{N}\left(\mathbf{d}_s, \Sigma_i\right) * \mathcal{N}\left(0, \frac{\delta^2}{2}A'A'^\dagger\right) = \mathcal{N}\left(\mathbf{d}_s, \Sigma_i + \frac{\delta^2}{2}A'A'^\dagger\right).$$
Therefore,
$$W = \frac{1}{(2\pi)^{2k}\sqrt{\mathrm{Det}(\Sigma)}}\exp\left[-\frac{1}{2}\left(\mathbf{S} - \mathbf{d}_h\right)^T\Sigma^{-1}\left(\mathbf{S} - \mathbf{d}_h\right)\right],$$
with $\Sigma = \Sigma_i + \frac{\delta^2}{2}A'A'^\dagger$.
We can therefore observe that the full covariance matrix, eq. 19, satisfies the symmetry of eq. 6. We have shown in sec. II A that if the covariance matrix satisfies the circular symmetry then the QFI can be written in the complex-compact form, which justifies our use of this form:
$$\mathcal{I} = 4V_+^\dagger\left(MM^\dagger + \delta^2AA^\dagger\right)^{-1}V_+. \tag{22}$$
On a brief note: a DFS is defined as the kernel of the (general) noise term $A_{\rm ph}\Sigma_{\Delta x}A_{\rm ph}^\dagger$. Note that $\ker\left(A_{\rm ph}^\dagger\right)$ is contained in this subspace; furthermore, if $\Sigma_{\Delta x}$ is a full-rank matrix, the DFS is equal to $\ker\left(A_{\rm ph}^\dagger\right)$.
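A small numpy illustration of this remark, with a random placeholder $A_{\rm ph}$ and a full-rank $\Sigma_{\Delta x}$: the kernel of the noise term $A_{\rm ph}\Sigma_{\Delta x}A_{\rm ph}^\dagger$ then coincides with $\ker(A_{\rm ph}^\dagger)$.

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis of the numerical kernel of M, via SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol * s.max()))
    return vh[rank:].conj().T

rng = np.random.default_rng(5)
A_ph = rng.normal(size=(5, 3)) + 1j * rng.normal(size=(5, 3))   # placeholder, rank 3
Sigma_dx = np.diag(rng.uniform(0.5, 2.0, size=3))               # full-rank noise covariance

noise_term = A_ph @ Sigma_dx @ A_ph.conj().T
dfs = null_space(noise_term)                # displacement-free subspace
ker = null_space(A_ph.conj().T)             # ker(A_ph^dagger)

assert dfs.shape[1] == ker.shape[1] == 2
# the two subspaces coincide: compare the projectors onto them
assert np.allclose(dfs @ dfs.conj().T, ker @ ker.conj().T)
print("DFS equals ker(A_ph^dagger) for full-rank Sigma_dx")
```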
VI. FI AND QFI WITH RADIATION PRESSURE
A. Derivation of transfer matrix
Let us first write how Radiation Pressure Noise (RPN) enters the equations [6]. Resonance conditions are assumed. Given fields $\hat{a}, \hat{d}$ that hit a mirror, $\hat{a} = \begin{pmatrix}\hat{a}_1\\ \hat{a}_2\end{pmatrix}$, $\hat{d} = \begin{pmatrix}\hat{d}_1\\ \hat{d}_2\end{pmatrix}$, the reflected fields $\hat{b}, \hat{c}$ satisfy:
$$\begin{pmatrix}\hat{b}\\ \hat{c}\end{pmatrix} = M_{\rm mirror}\begin{pmatrix}\hat{a}\\ \hat{d}\end{pmatrix} - \sqrt{2}\,\frac{\omega_0}{c}\sqrt{R}\;\Delta\hat{x}\begin{pmatrix}\bar{D}_a\\ \bar{D}_d\end{pmatrix}, \tag{23}$$
with $M_{\rm mirror} = \begin{pmatrix}\sqrt{R} & \sqrt{T}\\ \sqrt{T} & -\sqrt{R}\end{pmatrix}$ being the mirror transformation and $\Delta\hat{x}$ the displacement due to RPN. Assuming resonance ($\omega_0L/c = 2\pi n$, where $n$ is an integer), $D_j = \sqrt{2p_j}\begin{pmatrix}1\\ 0\end{pmatrix}$ and $\bar{D}_j = \sqrt{2p_j}\begin{pmatrix}0\\ 1\end{pmatrix}$, where $p_j$ is the power of the $j$-th carrier field. Eq. 23 is the general way displacement noise is propagated. In RPN, $\Delta\hat{x}$ is an operator, given by:
$$\Delta\hat{x} = \frac{1}{m\Omega^2}\,\frac{2\omega_0}{c}\left[\begin{pmatrix}D_a^t & -D_d^t\end{pmatrix}\begin{pmatrix}\hat{a}\\ \hat{d}\end{pmatrix} + \begin{pmatrix}D_b^t & -D_c^t\end{pmatrix}\begin{pmatrix}\hat{b}\\ \hat{c}\end{pmatrix}\right] = \frac{1}{m\Omega^2}\,\frac{2\sqrt{2}\,\omega_0}{c}\left[\sqrt{p_a}\,\hat{a}_1 - \sqrt{p_d}\,\hat{d}_1 + \sqrt{p_b}\,\hat{b}_1 - \sqrt{p_c}\,\hat{c}_1\right].$$
Inserting this $\Delta\hat{x}$ into eq. 23, we get two coupled sets of equations. For the amplitude quadratures:
$$\begin{pmatrix}\hat{b}_1\\ \hat{c}_1\end{pmatrix} = M_{\rm mirror}\begin{pmatrix}\hat{a}_1\\ \hat{d}_1\end{pmatrix},$$
and for the phase quadratures:
$$\begin{pmatrix}\hat{b}_2\\ \hat{c}_2\end{pmatrix} = M_{\rm mirror}\begin{pmatrix}\hat{a}_2\\ \hat{d}_2\end{pmatrix} - \frac{2\sqrt{2}\,\omega_0\sqrt{R}}{m\Omega^2c^2}\left[\sqrt{p_a}\,\hat{a}_1 - \sqrt{p_d}\,\hat{d}_1 + \sqrt{p_b}\,\hat{b}_1 - \sqrt{p_c}\,\hat{c}_1\right]\begin{pmatrix}\sqrt{2p_a}\\ \sqrt{2p_d}\end{pmatrix}. \tag{24}$$
This implies a general structure for the multichannel case: the equations for the amplitude quadratures are closed, and their solution is given by $\hat{\mathbf{b}}_1 = M_{\rm int}\hat{\mathbf{a}}_1$, where $M_{\rm int}$ is the unitary transfer matrix of the interferometer, which does not depend on the RPN. The equations for the phase quadratures are coupled to the amplitude quadratures, and their solution is given by $\hat{\mathbf{b}}_2 = M_{\rm int}\hat{\mathbf{a}}_2 + M_{21}\hat{\mathbf{a}}_1$. Hence amplitude noise is propagated into phase noise with a transfer matrix $M_{21}$. The input-output relations therefore read:
$$\begin{pmatrix}\hat{\mathbf{b}}_1\\ \hat{\mathbf{b}}_2\end{pmatrix} = \begin{pmatrix}M_{\rm int} & 0\\ M_{21} & M_{\rm int}\end{pmatrix}\begin{pmatrix}\hat{\mathbf{a}}_1\\ \hat{\mathbf{a}}_2\end{pmatrix}. \tag{25}$$
Three observations regarding the transfer matrix of eq. 25 will be useful later:
1. $M_{21}$ can be expressed as a concatenation of two transfer matrices, $M_{21} = AD_x$, where $A$ is the transfer matrix of the mirror displacement vector $\Delta\hat{\mathbf{x}}$, and $D_x$ is the transfer matrix of the amplitude noise to the displacement vector: $\Delta\hat{\mathbf{x}} = D_x\hat{\mathbf{a}}_1$. This fact will be used in section VII.
2. Commutation relations have to be preserved (see eq. 1). This implies that $M$ is a conjugate symplectic matrix and thus $M_{\rm int}^\dagger M_{21}$ is Hermitian (see the numerical sketch after this list).
3. From eq. 24 we observe that $M_{21} \propto 1/m\Omega^2$.
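The numpy sketch below illustrates observation 2 with placeholder matrices: for a random unitary $M_{\rm int}$ and $M_{21} = M_{\rm int}H$ with $H$ Hermitian, the block transfer matrix of eq. 25 is conjugate symplectic, and breaking the Hermiticity breaks the condition.

```python
import numpy as np

rng = np.random.default_rng(6)
k = 3
# placeholder unitary interferometer transfer matrix
M_int, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
# choose M_21 = M_int H with H Hermitian, so that M_int^dag M_21 is Hermitian
H = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
M_21 = M_int @ (H + H.conj().T) / 2

Z, Id = np.zeros((k, k)), np.eye(k)
M = np.block([[M_int, Z], [M_21, M_int]])       # transfer matrix of eq. 25
W = np.block([[Z, Id], [-Id, Z]])               # symplectic form of eq. 1

assert np.allclose(M.conj().T @ W @ M, W)       # commutation relations are preserved
# breaking the Hermiticity of M_int^dag M_21 breaks the condition
M_bad = np.block([[M_int, Z], [M_int @ (H - H.conj().T), M_int]])
assert not np.allclose(M_bad.conj().T @ W @ M_bad, W)
print("M is conjugate symplectic iff M_int^dag M_21 is Hermitian")
```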
B. QFI
Given the general form of the transfer matrix $M$ (eq. 25), we can calculate the general form of the QFI and FI.
The QFI is given by $4V_+^\dagger\left(MM^\dagger\right)^{-1}V_+$, where $MM^\dagger$ $(= 2\Sigma_q)$ equals:
$$MM^\dagger = \begin{pmatrix}\mathbb{1} & M_{\rm int}M_{21}^\dagger\\ M_{21}M_{\rm int}^\dagger & \mathbb{1} + M_{21}M_{21}^\dagger\end{pmatrix}. \tag{26}$$
Since $M_{\rm int}$ is unitary, we can observe that:
$$\left(MM^\dagger\right)^{-1} = \begin{pmatrix}* & -M_{\rm int}M_{21}^\dagger\\ * & \mathbb{1}\end{pmatrix} \;\overset{(\#)}{\Longrightarrow}\; \mathcal{I} = 4V_+^\dagger\left(MM^\dagger\right)^{-1}V_+ = 4V_+^\dagger V_+ = 4V_{+,\rm ph}^\dagger V_{+,\rm ph},$$
where $(\#)$ is because $V = \begin{pmatrix}0\\ V_{\rm ph}\end{pmatrix}$. The QFI thus obtains the shot-noise limit. This is a generalization of the single-channel optimal frequency-dependent readout scheme [7]: by measuring certain quadratures we can overcome the RPN.
Using the results of section IV, we know that the optimal quadratures to be measured are (up to normalization) $\Sigma_q^{-1}V_+$. The optimal quadrature is therefore:
$$\mathbf{u} \propto \begin{pmatrix}-M_{\rm int}M_{21}^\dagger V_{+,\rm ph}\\ V_{+,\rm ph}\end{pmatrix}, \tag{27}$$
i.e. measuring the operator $\mathbf{u}\cdot\hat{\mathbf{Q}}_{\rm out}$ is optimal. This $\mathbf{u}$ is a linear combination of the $k$ column vectors of the matrix $T_{\rm dec}$:
$$T_{\rm dec} = \begin{pmatrix}-M_{\rm int}M_{21}^\dagger\\ \mathbb{1}\end{pmatrix}. \tag{28}$$
These $k$ column vectors correspond to $k$ quadratures decoupled from RPN. To see explicitly that these quadratures are decoupled from RPN, note that the covariance matrix can be written as:
$$\Sigma_q \propto \begin{pmatrix}\mathbb{1} & M_{\rm int}M_{21}^\dagger\\ M_{21}M_{\rm int}^\dagger & \mathbb{1} + M_{21}M_{21}^\dagger\end{pmatrix} = \begin{pmatrix}0 & 0\\ 0 & \mathbb{1}\end{pmatrix} + \begin{pmatrix}\mathbb{1}\\ M_{21}M_{\rm int}^\dagger\end{pmatrix}\begin{pmatrix}\mathbb{1} & M_{\rm int}M_{21}^\dagger\end{pmatrix};$$
the space spanned by these quadratures is thus decoupled from RPN. Hence any combination of these quadratures is decoupled from this noise and thus yields an FI that does not diverge in the $f\to 0$ limit; the optimal combination of eq. 27 obtains the shot-noise limit.
Two remarks are now in order:
We remark that the quadratures of eq. 28 are not orthonormal; the orthonormalized quadratures are the $k$ column vectors of the following matrix:
$$\begin{pmatrix}-M_{\rm int}M_{21}^\dagger\\ \mathbb{1}\end{pmatrix}\left(\mathbb{1} + M_{21}M_{21}^\dagger\right)^{-1/2}.$$
We note that these quadratures can be measured simultaneously only if they commute. Let us show that they indeed commute:
$$\begin{pmatrix}-M_{\rm int}M_{21}^\dagger\\ \mathbb{1}\end{pmatrix}^\dagger\begin{pmatrix}0 & \mathbb{1}\\ -\mathbb{1} & 0\end{pmatrix}\begin{pmatrix}-M_{\rm int}M_{21}^\dagger\\ \mathbb{1}\end{pmatrix} = M_{\rm int}M_{21}^\dagger - M_{21}M_{\rm int}^\dagger = 0,$$
where in the last equality we used the fact that $M_{21}M_{\rm int}^\dagger$ is Hermitian.
We can thus either measure the quadratures of eq. 28 or the optimal quadrature in eq. 27. The advantage in measuring the quadratures of eq. 28 is that they are independent of $V_+$: they are optimal for the estimation of any polarization, and thus optimal also for simultaneous estimation of $V_+, V_\times$, or any two other polarizations.
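These two remarks can be checked numerically. The sketch below uses the same placeholder construction as before ($M_{\rm int}$ unitary, $M_{\rm int}^\dagger M_{21}$ Hermitian) and verifies that the quadratures of eq. 28 mutually commute and see only shot noise:

```python
import numpy as np

rng = np.random.default_rng(7)
k = 3
M_int, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
H = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
M_21 = M_int @ (H + H.conj().T) / 2             # placeholder with M_int^dag M_21 Hermitian

Z, Id = np.zeros((k, k)), np.eye(k)
MMdag = np.block([[Id, M_int @ M_21.conj().T],
                  [M_21 @ M_int.conj().T, Id + M_21 @ M_21.conj().T]])   # eq. 26
W = np.block([[Z, Id], [-Id, Z]])
T_dec = np.block([[-M_int @ M_21.conj().T], [Id]])                       # eq. 28

# the k quadratures of eq. 28 mutually commute ...
assert np.allclose(T_dec.conj().T @ W @ T_dec, np.zeros((k, k)))
# ... and their covariance contains no RPN term: T_dec^dag (M M^dag) T_dec = 1
assert np.allclose(T_dec.conj().T @ MMdag @ T_dec, Id)
print("the quadratures of eq. 28 commute and are shot-noise limited")
```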
C. FI
Let us now consider the case of measuring the phase quadratures. The corresponding covariance matrix is obtained by applying eq. 8, i.e. keeping only the phase-quadrature terms in the full covariance matrix (eq. 26), which leaves us with $\sigma_h = \frac{1}{2}\left(\mathbb{1} + M_{21}M_{21}^\dagger\right)$, and thus the FI is
$$F = 4V_{+,\rm ph}^\dagger\left(\mathbb{1} + M_{21}M_{21}^\dagger\right)^{-1}V_{+,\rm ph}.$$
$M_{21}M_{21}^\dagger$ is basically the displacement noise term. Since $M_{21} = A_{\rm ph}D_x$, we get $M_{21}M_{21}^\dagger = A_{\rm ph}D_xD_x^\dagger A_{\rm ph}^\dagger$, hence it takes the form of displacement noise (eq. 18), with $D_xD_x^\dagger$ being the covariance matrix of the displacement vector $\Delta\mathbf{x}$. We can immediately observe that the space decoupled from this noise is $\ker\left(D_x^\dagger A_{\rm ph}^\dagger\right)$. The DFS, $\ker\left(A_{\rm ph}^\dagger\right)$, is thus contained in this subspace and therefore decoupled from this noise.
In fig. 1 we show the sensitivity profile given RPN for different homodyne measurements: optimal (eq. 27), phase quadratures, and the maximal-signal combination (basically $V$, which maximizes the signal and is thus optimal in case there is only shot noise). For the optimal frequency-dependent combination (eq. 27), the SD coincides with the shot-noise limit, as expected.
FIG. 1: Sensitivity profile with RPN for different measurement bases. The QFI corresponds to the optimal measurement (solid orange line) and saturates the shot-noise limit (black dashed line). The solid blue line corresponds to phase quadratures measurement (and thus the optimal combination of phase quadratures) and the dashed red line to the max-signal combination of phase quadratures, i.e. a combination that is optimal given only shot noise.
It is interesting to compare the behavior of the FI with phase quadratures measurement and the behavior with the max-signal combination in fig. 1. Both diverge at low frequencies, and clearly, since the max-signal combination is not optimal, its sensitivity is worse than the sensitivity of the phase quadratures measurement. While the max-signal combination diverges uniformly as $1/f^2$, the optimal phase quadratures combination has an intermediate range where the divergence stops and the sensitivity remains constant. This creates two orders of magnitude difference between the sensitivity with the max-signal combination and with the optimal phase quadratures combination.
This plateau is due to a pseudo-DFS contained in the coupled subspace, $\mathcal{M}_C$. For any frequency, $M_{21}M_{21}^\dagger$ has $3$ eigenvalues: $0, t_{\rm min}, t_{\rm max}$, where $t_{\rm min} \ll t_{\rm max}$. The phase quadratures can thus be decomposed into the corresponding eigenspaces. The eigenspace of $0$ is the DFS, $\mathcal{M}_{\rm DFS}$, and the eigenspaces of $t_{\rm min}, t_{\rm max}$ are denoted as $\mathcal{M}_{\rm min}, \mathcal{M}_{\rm max}$ respectively. Note that $\mathcal{M}_C = \mathcal{M}_{\rm min}\oplus\mathcal{M}_{\rm max}$. Since the covariance matrix is diagonal w.r.t. these subspaces, the FI is a sum of the FIs of these subspaces: $F = F_{\rm DFS} + F_C = F_{\rm DFS} + F_{\rm max} + F_{\rm min}$, where $F_{\rm max}, F_{\rm min}$ are the FIs achieved with $\mathcal{M}_{\rm max}, \mathcal{M}_{\rm min}$ respectively.
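This decomposition is easy to reproduce numerically. In the sketch below, the eigenvalues $\{0, t_{\rm min}, t_{\rm max}\}$ and the eigenbasis of $M_{21}M_{21}^\dagger$ are arbitrary placeholders; the check is that the phase-quadrature FI splits into the three eigenspace contributions:

```python
import numpy as np

rng = np.random.default_rng(8)
k = 3
V_ph = rng.normal(size=k) + 1j * rng.normal(size=k)    # placeholder V_{+,ph}

# placeholder noise term M_21 M_21^dag with eigenvalues {0, t_min, t_max}
U, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
t = np.array([0.0, 0.05, 50.0])                        # DFS, t_min, t_max
N = U @ np.diag(t) @ U.conj().T                        # stands in for M_21 M_21^dag

# full phase-quadrature FI: F = 4 V_ph^dag (1 + N)^-1 V_ph
F = 4 * np.real(np.vdot(V_ph, np.linalg.solve(np.eye(k) + N, V_ph)))

# FI of each eigenquadrature of N (the DFS, M_min and M_max subspaces)
F_parts = [4 * abs(np.vdot(U[:, i], V_ph)) ** 2 / (1 + t[i]) for i in range(k)]
assert np.isclose(F, sum(F_parts))                     # F = F_DFS + F_min + F_max
print("F_DFS, F_min, F_max =", F_parts)
```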
$V_+$ is mostly in the subspace $\mathcal{M}_{\rm max}$, hence for frequencies higher than the plateau range $F \approx F_{\rm max}$. The SD that corresponds to $F_{\rm max}$ goes as $1/f^2$ and coincides with the max-signal combination (see fig. 2). As $f$ gets smaller, $F_{\rm max}$ drops as $f^4$ while $F_{\rm min}$ remains the same (since $t_{\rm min} \ll 1$). In this regime $\mathcal{M}_{\rm min}$ functions as a pseudo-DFS, since the effect of displacement noise is much smaller than the shot noise. Therefore at some point $F_{\rm min} > F_{\rm max}$, and the FI coincides with $F_{\rm min}$, which remains the same. This is the plateau that can be observed in figs. 1, 2. $t_{\rm min}$, however, also goes as $1/f^4$, and thus for low enough frequencies $t_{\rm min} \gg 1$ and the SD continues to diverge as $1/f^2$. This is shown in fig. 2, where the FI is shown along with the contributions of $F_{\rm max}, F_{\rm min}, F_{\rm DFS}$. We can see the crossing between $F_{\rm max}$ and $F_{\rm min}$ that takes place at the beginning of the plateau.
FIG. 2: Sensitivity profile that corresponds to phase quadratures measurement ($F$) along with $F_{\rm max}$ (blue dots), $F_{\rm min}$ (green triangles), and $F_{\rm DFS}$ (brown squares), as defined in the text. As shown in the text, $F = F_{\rm max} + F_{\rm min} + F_{\rm DFS}$.
VII. FI AND QFI WITH RADIATION PRESSURE AND THERMAL DISPLACEMENT NOISE
In realistic scenarios we have both RPN and thermal displacement noise, i.e. a covariance matrix of $\Sigma_q = \frac{1}{2}\left(MM^\dagger + \delta^2AA^\dagger\right)$. The QFI in this case is the same as with only thermal displacement noise, since the RPN can be removed using the same optimal frequency-dependent readout (eqs. 27, 28). Let us show this formally:
$$\Sigma_q = \frac{1}{2}\begin{pmatrix}\mathbb{1} & M_{\rm int}M_{21}^\dagger\\ M_{21}M_{\rm int}^\dagger & \mathbb{1} + M_{21}M_{21}^\dagger + \delta^2A_{\rm ph}A_{\rm ph}^\dagger\end{pmatrix}.$$
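As a complementary numerical check of this claim, here is a minimal numpy sketch; $M_{\rm int}$, $M_{21}$, $A_{\rm ph}$, $V_{+,\rm ph}$ and $\delta$ below are placeholders. It builds $2\Sigma_q$ from the expression above and verifies that the QFI equals the thermal-noise-only value $4V_{+,\rm ph}^\dagger\left(\mathbb{1} + \delta^2A_{\rm ph}A_{\rm ph}^\dagger\right)^{-1}V_{+,\rm ph}$:

```python
import numpy as np

rng = np.random.default_rng(9)
k = 3
M_int, _ = np.linalg.qr(rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k)))
H = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))
M_21 = M_int @ (H + H.conj().T) / 2                             # placeholder RPN coupling
A_ph = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))   # placeholder thermal-noise transfer
V_ph = rng.normal(size=k) + 1j * rng.normal(size=k)             # placeholder V_{+,ph}
delta = 0.7

Id = np.eye(k)
B = M_int @ M_21.conj().T
two_Sigma_q = np.block([[Id, B],
                        [B.conj().T,
                         Id + M_21 @ M_21.conj().T + delta**2 * A_ph @ A_ph.conj().T]])
V_plus = np.concatenate([np.zeros(k), V_ph])    # signal lives in the phase quadratures

qfi_both = 4 * np.real(np.vdot(V_plus, np.linalg.solve(two_Sigma_q, V_plus)))
qfi_thermal = 4 * np.real(np.vdot(V_ph, np.linalg.solve(
    Id + delta**2 * A_ph @ A_ph.conj().T, V_ph)))
assert np.isclose(qfi_both, qfi_thermal)        # the RPN drops out of the QFI
print(qfi_both, qfi_thermal)
```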
FIG. 3: Sensitivity profile given thermal noise, with and without radiation pressure. (a) QFI: the QFI with both thermal noise and RPN (red solid line) coincides with the QFI given only thermal noise (green diamonds). The blue dashed line corresponds to the shot-noise limit. (b) FI with phase quadrature measurement: the red solid line corresponds to both thermal noise and RPN, green diamonds to thermal noise only, and the blue dashed line to the shot-noise limit.