Background information: A study on the sensitivity of astrophysical gravitational-wave background searches

Arianna I. Renzini,^{1,2,3,4,*} Tom Callister,^{5,†} Katerina Chatziioannou,^{1,2,‡} and Will M. Farr^{6,7,§}

^{1}LIGO Laboratory, California Institute of Technology, Pasadena, California 91125, USA
^{2}Department of Physics, California Institute of Technology, Pasadena, California 91125, USA
^{3}Dipartimento di Fisica "G. Occhialini," Università degli Studi di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy
^{4}INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy
^{5}Kavli Institute for Cosmological Physics, The University of Chicago, Chicago, Illinois 60637, USA
^{6}Department of Physics and Astronomy, Stony Brook University, Stony Brook, New York 11794, USA
^{7}Center for Computational Astrophysics, Flatiron Institute, New York, New York 10010, USA
(Received 21 March 2024; accepted 14 June 2024; published 9 July 2024)
The vast majority of gravitational-wave signals from stellar-mass compact binary mergers are too weak to be individually detected with present-day instruments and instead contribute to a faint, persistent background. This astrophysical background is targeted by searches that model the gravitational-wave ensemble collectively with a small set of parameters. The traditional search models the background as a stochastic field and estimates its amplitude by cross-correlating data from multiple interferometers. A different search uses gravitational-wave templates to marginalize over all individual event parameters and measure the duty cycle and population properties of binary mergers. Both searches ultimately estimate the total merger rate of compact binaries and are expected to yield a detection in the coming years. Given the conceptual and methodological differences between them, though, it is not well understood how their results should be mutually interpreted. In particular, when a detection of an astrophysical compact binary background is claimed by either approach, which portion of the population is in fact contributing to this detection? In this paper, we use the Fisher information to study the implications of a background detection in terms of which region of the Universe each approach probes. Specifically, we quantify how information about the compact binary merger rate is accumulated by each search as a function of the event redshift. For the LIGO design sensitivity and a uniform-in-comoving-volume distribution of equal-mass $30\,M_\odot$ binaries, the traditional cross-correlation search obtains 99% of its information from binaries up to redshift 2.5 (average signal-to-noise ratio $<8$), and the template-based search from binaries up to redshift 1.0 (average signal-to-noise ratio $\gtrsim 8$). While we do not calculate the total information accumulated by each search, our analysis emphasizes the need to pair any claimed detection of the stochastic background with an assessment of which binaries contribute to said detection. In the process, we also clarify the astrophysical assumptions imposed by each search.
DOI: 10.1103/PhysRevD.110.023014
I. INTRODUCTION
For every compact binary merger [1] observed by the LIGO [2] and Virgo [3] observatories, there are many more that are too distant and too weak to be directly detected. Although these distant binaries may be individually indistinguishable from instrumental noise, their population may be collectively detectable via the slight coherence it imparts across networks of widely separated gravitational-wave detectors [4–6]. The resulting collection of weak signals is colloquially referred to as the astrophysical gravitational-wave background. Current constraints based on the individually detectable tail of the total population suggest that this astrophysical background is several orders of magnitude larger than any expected cosmological stochastic background in the relevant frequency range [6–8].¹ If detected, the stochastic background will offer indirect information about the properties of compact binaries
*Contact author: arenzini@caltech.edu
†Contact author: tcallister@uchicago.edu
‡Contact author: kchatziioannou@caltech.edu
§Contact author: will.farr@stonybrook.edu, wfarr@flatironinstitute.org

¹For this reason, we will drop the "astrophysical" designation in the rest of the paper, with the understanding that unless explicitly noted otherwise we refer to the astrophysical stochastic gravitational-wave background.
PHYSICAL REVIEW D 110, 023014 (2024)
2470-0010/2024/110(2)/023014(17) © 2024 American Physical Society
beyond the horizons of present-day detectors [9–13]; it is therefore a prime target for a variety of gravitational-wave searches [6,14,15].
Traditional searches for the background (both astrophysical and cosmological) rely on the cross-correlation between detector pairs. These searches model the background gravitational-wave strain as a mean-zero stochastic field that is continuous and Gaussian and measure its variance. The steep low-frequency noise of ground-based detectors [16] suggests that any autocorrelations induced by the stochastic background are subdominant and can therefore be neglected; this is often referred to as the weak signal limit [4,5,17]. But given sufficiently long observation times, the stochastic background will manifest as excess cross power between instruments. Given current predictions for the power spectrum of the stochastic background [6,7], a detection is unlikely in the current observing run, but may be feasible during the next observing run if the Advanced LIGO sensitivity target [18] is reached.
An alternative search strategy is motivated by the fact that the background is neither continuous nor Gaussian. Given current merger rate estimates, we expect that a black hole binary (neutron star binary) merges every 5–10 min (5–60 s) in the mass range relevant for ground-based detectors [7]. As black hole binary coalescences last for $\mathcal{O}({\rm seconds})$ in the ground-based detector frequency band [1,19,20], the black hole background is expected to be composed of distinct nonoverlapping transient signals. For neutron star binaries the duration is $\mathcal{O}({\rm minutes})$ [21], so these signals overlap. However, given a low frequency cutoff of 10 or 20 Hz, it is still unlikely that the relevant background lies in the confusion noise limit [22–26], though this is subject to large uncertainty on the binary neutron star merger rate [6]. While the non-Gaussianity of the compact binary background does not bias the cross-correlation search in the low-signal limit, it does imply that the search is suboptimal [27].
Given distinct, though individually undetectable, transient signals, the author of Ref. [28] proposed a search that relies on the matched-filter technique that has successfully resulted in the detection of individually resolved binaries, e.g., [29]. This template-based search utilizes phase-coherent gravitational waveform models to marginalize over the properties of individual events that may (or may not) be present within every segment of data, regardless of whether the events rise above the threshold for direct detection. An extended version also infers and/or marginalizes over the properties of the compact-binary population [30]. In general, template-based techniques are expected to be more sensitive than cross-correlation ones as the former include (even vanishingly small) information about the waveform phase, while the latter rely solely on excess power. Whereas cross-correlation searches might require years of integration to claim a detection of a gravitational-wave background, it is argued that the template-based search might reach a detection given only days of data at design sensitivity [28]. If such an improvement is realized, the template-based search would represent a remarkable leap in sensitivity and enable imminent detection.
The stark difference in sensitivity motivates the main question of this study: when we detect a background with either search, which region of the Universe have we successfully probed, and what astrophysical assumptions have we made about this region? A direct time-to-detection estimate does not fully address this question, as the two searches make different astrophysical assumptions and are not necessarily sensitive to the same population of binaries at the same regions in the Universe. Instead, the authors of Ref. [30] examined the template-based search's ability to infer a cutoff in the binary redshift distribution and concluded that there is some information from binaries at the edge of resolvability. Additionally, the cross-correlation and template-based formalisms are constructed in terms of different physical quantities and rely on different detection statistics, specifically the gravitational-wave energy density and the event rate, respectively.

The reliance on different statistics is not merely a technical inconvenience, but also reveals a conceptual incompatibility between approaches. The energy density and event rate can be mathematically related to each other only after assuming a specific merger rate distribution for sources across the Universe. Gravitational-wave detectors measure spacetime strain, which can be trivially converted to intensity and energy density without further assumptions. This suggests that the cross-correlation search can infer the gravitational-wave energy density directly from the data, with no assumption on the source distribution. The template-based analysis, on the other hand, is based on the measurement of a nonzero rate of events. The conversion from measured strain to an event rate (or duty cycle) relies on a Bayes factor that compares the hypotheses that a data segment contains signal or noise, which in turn depends on assumptions about the prior or astrophysical distribution of sources, including the mass, spin, and redshift distribution. While Ref. [30] relaxes the mass and spin dependence, the assumption of knowledge of the redshift distribution remains. If an analysis intrinsically assumes perfect knowledge of how sources are distributed in the Universe, this raises the question: how much information is actually coming from the unresolved sources compared to what is extrapolated from the resolved ones?
The goal of our work is to study the sensitivity of the cross-correlation and template-based searches and identify the astrophysical assumptions each search makes. We use the Fisher information matrix to quantify the information accumulated by each search and study how this information is accumulated by observing binaries at different redshifts. In lieu of a technically challenging full implementation of the template-based search, we consider a simplified population of compact binaries where all parameters are known other than the distance/redshift. This simplified setup does not allow us to compare the total information accumulated by each search, but it yields the relative contribution by binaries at different redshifts. For $30\,M_\odot$ binaries, the cross-correlation (template-based) search accumulates 99% of its information from binaries up to redshift 2.5 (1.2). The same redshift estimates for $70\,M_\odot$ binaries are 1.5 (1.6). With the hopefully imminent detection of a stochastic background in LIGO data, our study emphasizes the need to carefully assess the sensitivity and assumptions of our search methods.
The rest of the paper presents the details of our calculation. We begin in Sec. II by reviewing the stochastic background and its characterization. In Sec. III we lay out the two search methodologies, highlighting the targeted observables. In Sec. IV we describe the theory and calculate the functional forms of the Fisher information for each search with respect to a common parameter, and in Sec. V we study the information functions in different scenarios. Finally, we draw our conclusions in Sec. VI.
II. THE COMPACT BINARY BACKGROUND

In this section, we introduce the basics of the gravitational-wave stochastic background and establish notation. The familiar reader can skip ahead to Sec. III, where we introduce the search methods.
The background of unresolved compact binaries is generally characterized by its dimensionless energy-density spectrum [4,8],²

$$\Omega_{\rm GW}(f) = \frac{1}{\rho_c} \frac{d\rho_{\rm GW}}{d\ln f}, \qquad (1)$$
where $\rho_{\rm GW}$ is the energy density in gravitational waves (GW) and $\rho_c$ is the Universe's closure energy density. The present-day $\Omega_{\rm GW}(f)$ in the frequency range $10\,{\rm Hz} < f < 10^3\,{\rm Hz}$ is dominated by the integrated merger history of compact binaries over all redshifts. For simplicity, we consider a single class of compact binaries with ensemble-averaged source-frame energy spectra $dE_s/df_s$ (a subscript $s$ denotes source-frame quantities), and define $R(z)$ to be the number of mergers per unit comoving volume $V_c$ per unit source-frame time $t_s$, i.e., the source-frame merger rate density

$$R(z) = \frac{dN}{dV_c\, dt_s}. \qquad (2)$$
Given gravitational-wave sources up to redshift $z_{\rm max}$, the present-day dimensionless energy-density spectrum is [6]

$$\Omega_{\rm GW}(f) = \frac{f}{\rho_c} \int_0^{z_{\rm max}} R(z)\, \frac{dE_s}{df_s}\bigg|_{f(1+z)} \frac{dt_s}{dz}\, dz = \frac{f}{\rho_c} \int_0^{z_{\rm max}} \frac{R(z)}{(1+z)\, H(z)}\, \frac{dE_s}{df_s}\bigg|_{f(1+z)}\, dz, \qquad (3)$$

where $H(z) \equiv H_0 \sqrt{\Omega_M (1+z)^3 + \Omega_\Lambda}$ is the Hubble parameter at redshift $z$ and $H_0$ is the Hubble constant.
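For intuition, Eq. (3) can be evaluated numerically for the inspiral-dominated part of the spectrum. The sketch below assumes the Newtonian-order energy spectrum $dE_s/df_s = (\pi G)^{2/3} M_c^{5/3} f_s^{-1/3}/3$ for a single chirp mass, a uniform-in-comoving-volume rate, and illustrative cosmological and rate values; it is not the paper's implementation.

```python
import numpy as np

G, C = 6.674e-11, 2.998e8              # SI units
H0 = 67.7e3 / 3.086e22                 # Hubble constant in s^-1 (assumed value)
OM, OL = 0.31, 0.69
RHO_C = 3 * H0**2 * C**2 / (8 * np.pi * G)   # closure energy density, J/m^3
MSUN = 1.989e30

def omega_gw(f, R0_si, Mc, zmax=2.0, n=500):
    """Omega_GW(f) from Eq. (3): uniform-in-V_c rate R0_si [mergers m^-3 s^-1]
    and Newtonian inspiral spectrum dE_s/df_s = (pi G)^(2/3) Mc^(5/3) fs^(-1/3)/3."""
    z = np.linspace(0.0, zmax, n)
    Hz = H0 * np.sqrt(OM * (1 + z) ** 3 + OL)
    fs = f * (1 + z)                               # source-frame frequency
    dE_dfs = (np.pi * G) ** (2 / 3) * Mc ** (5 / 3) * fs ** (-1 / 3) / 3
    integrand = R0_si * dE_dfs / ((1 + z) * Hz)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))  # trapezoid
    return f / RHO_C * integral

Mc = 26.0 * MSUN                       # chirp mass of an equal-mass 30+30 Msun binary
R0 = 20.0 / ((3.086e25) ** 3 * 3.156e7)   # assumed 20 Gpc^-3 yr^-1, in m^-3 s^-1
print(omega_gw(25.0, R0, Mc))          # dimensionless Omega_GW; scales as f^(2/3)
```

Because the inspiral spectrum factors as $f_s^{-1/3}$, the result scales exactly as $f^{2/3}$, which is the spectral shape used for compact binaries later in the paper.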
The total detector-frame merger rate $R$ is

$$R = \int_0^{z_{\rm max}} dz\, \frac{R(z)}{1+z}\, \frac{dV_c}{dz}. \qquad (4)$$

Here $dV_c/dz$ is the comoving volume per unit redshift. The factor of $1+z$ converts the source-frame rate into the detector-frame rate as measured at Earth.
Both the gravitational-wave energy density and the total merger rate depend on the source-frame rate density, $R(z)$. It is convenient to write the latter as the product $R(z) = R_0\, r(z)$, where $R_0$ is the local merger rate at $z = 0$ and $r(z)$ is the merger rate density function normalized to 1 at $z = 0$. Then, in terms of the total rate $R$, the local merger rate becomes

$$R_0 = R \left[ \int_0^{z_{\rm max}} dz\, \frac{r(z)}{1+z}\, \frac{dV_c}{dz} \right]^{-1} \equiv R\, I(z_{\rm max})^{-1}, \qquad (5)$$

where we have defined the integral $I(z_{\rm max})$, which quantifies the ratio between the total and local merger rate, given $z_{\rm max}$. We also define the normalized probability distribution for source redshift, $p(z)$,

$$p(z) = \frac{\dfrac{R(z)}{1+z}\,\dfrac{dV_c}{dz}}{\displaystyle\int_0^{z_{\rm max}} dz'\, \dfrac{R(z')}{1+z'}\,\dfrac{dV_c}{dz'}} = \frac{R_0}{R}\, \frac{r(z)}{1+z}\, \frac{dV_c}{dz} \equiv \frac{1}{R}\, \mathcal{R}(z), \qquad (6)$$

defining $\mathcal{R}(z)$ to be the event rate per unit detector-frame time.
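As a concrete illustration of Eqs. (4) and (5), the conversion between a local rate density and a total detector-frame rate can be sketched numerically. The snippet below assumes a flat ΛCDM cosmology with illustrative parameter values and $r(z) = 1$ (uniform in comoving volume); all numbers are assumptions, not the paper's exact choices.

```python
import numpy as np

H0 = 67.7e3 / 3.086e22        # Hubble constant in s^-1 (67.7 km/s/Mpc, assumed)
OM, OL = 0.31, 0.69
C = 2.998e8                   # speed of light, m/s

def I_of_zmax(zmax, r=lambda z: 1.0, n=2000):
    """I(z_max) = int_0^zmax dz r(z)/(1+z) dV_c/dz  [Eq. (5)],
    with dV_c/dz = 4 pi c D_c(z)^2 / H(z). Returns a comoving volume in m^3."""
    z = np.linspace(0.0, zmax, n)
    Hz = H0 * np.sqrt(OM * (1 + z) ** 3 + OL)
    # comoving distance D_c(z) = int_0^z c/H(z') dz' (cumulative trapezoid)
    Dc = np.concatenate(([0.0], np.cumsum(0.5 * (C / Hz[1:] + C / Hz[:-1]) * np.diff(z))))
    integrand = r(z) / (1 + z) * 4 * np.pi * C * Dc**2 / Hz
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z)))

GPC3 = (3.086e25) ** 3        # m^3 per Gpc^3
YR = 3.156e7                  # seconds per year
R0 = 20.0                     # assumed local BBH rate density, Gpc^-3 yr^-1
R_per_yr = R0 / GPC3 * I_of_zmax(2.0)     # Eq. (5) inverted: R = R0 * I(z_max)
print(f"R = {R_per_yr:.0f} mergers/yr, one every {YR / R_per_yr / 60:.0f} min")
```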
So far, searches targeting the stochastic background via $\Omega_{\rm GW}(f)$ have operated under the assumption that the signal is continuous and Gaussian. However, the background is only Gaussian in the limit of large numbers of sources that saturate the detector time stream and satisfy the central limit theorem. Given the current set of compact binary detections [7], it is clear that this is not the case [24]. As merging black holes are expected to strongly contribute to the GW energy density [31], this suggests the astrophysical stochastic background is non-Gaussian and intermittent in the detector frequency band, with a cadence dictated by the prevalence of black hole binary mergers in the Universe.

²This formalism is largely inspired by the cosmological background, but it is applied to the astrophysical background as well.
To quantify the non-Gaussianity and intermittence of the signal, and inspired by [27], Ref. [28] split the data in segments of duration $\tau$ and denoted the probability that a given segment contains a signal as $\xi$. This duty cycle $\xi$ can be related to the merger rate. Given the total detector-frame merger rate $R$, the probability that $N_s$ gravitational-wave signals are present in a data segment with duration $\tau$ is Poisson distributed,

$$p(N_s \,|\, R, \tau) = \frac{(R\tau)^{N_s}\, e^{-R\tau}}{N_s!}. \qquad (7)$$
Then the duty cycle $\xi$ is simply the probability that $N_s \geq 1$ signals are present in the data segment:

$$\xi = \sum_{N_s=1}^{\infty} p(N_s \,|\, R, \tau) = 1 - p(0 \,|\, R, \tau) = 1 - e^{-R_0 I(z_{\rm max}) \tau}. \qquad (8)$$

In the limit where $R\tau \ll 1$, this becomes simply $\xi \approx R\tau$. Assuming a segment length $\tau \sim \mathcal{O}({\rm s})$,³ the expected duty cycle of the black hole background is $\xi \sim 10^{-3}$.
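The $\xi \approx R\tau$ limit of Eq. (8) is easy to verify numerically. The rate and segment length below are illustrative values consistent with the ranges quoted above, not the paper's exact inputs.

```python
import math

# Duty cycle of the intermittent black hole background, Eqs. (7)-(8).
# Illustrative assumptions: one BBH merger every 8 minutes, tau = 1 s segments.
R = 1.0 / (8 * 60)            # total detector-frame rate, mergers per second
tau = 1.0                     # segment duration, seconds

def duty_cycle(R, tau):
    """xi = 1 - p(N_s = 0 | R, tau) = 1 - exp(-R tau), Eq. (8)."""
    return 1.0 - math.exp(-R * tau)

xi = duty_cycle(R, tau)
print(f"xi = {xi:.2e} (weak-rate limit R*tau = {R * tau:.2e})")

# Probability that a segment holds two or more overlapping signals, from Eq. (7):
p_multi = 1.0 - math.exp(-R * tau) * (1.0 + R * tau)
print(f"p(N_s >= 2) = {p_multi:.1e}")
```

The second probability shows why, at these rates, overlapping black hole signals within a short segment are rare enough to neglect.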
The cross-correlation analysis (described in Sec. III A) is typically expressed in terms of the $\Omega_{\rm GW}(f)$ spectrum, as it was conceived for Gaussian and continuous backgrounds. The template-based search (described in Sec. III B), on the other hand, was proposed with a non-Gaussian and intermittent background in mind and is hence framed in terms of $\xi$. In Sec. IV we also work in terms of independent parameters that the observables of both searches can be mapped to. Specifically, we use Eqs. (3), (4), and (8) to express the cross-correlation and template-based searches in terms of the same quantities, $R_0$ and $z_{\rm max}$, for direct comparison.
III. STOCHASTIC BACKGROUND SEARCH METHODS

We consider two searches that target the subthreshold population of binary mergers in ground-based detectors: the cross-correlation search (relevant quantities are labeled as "CC") and the template-based search ("TB").
A. Cross-correlation search

The most common search for the stochastic gravitational-wave background relies on the cross-correlation spectrum between the (frequency domain) data $\tilde{d}_1(f)$ and $\tilde{d}_2(f)$ measured by two gravitational-wave detectors to construct an optimal (i.e., unbiased minimum-variance) statistic for $\Omega_{\rm GW}(f)$ [4,5,17],

$$\hat{\Omega}_{\rm GW}(f) = \frac{2\, Q(f)}{\gamma(f)}\, \frac{{\rm Re}\big(\tilde{d}_1(f)\, \tilde{d}_2^{\,*}(f)\big)}{T_{\rm seg}}, \qquad (9)$$
where $T_{\rm seg}$ is the time segment duration over which data are measured. Here and throughout this discussion we use hatless symbols to denote physical quantities and hatted symbols to denote their estimates based on data. The data are Fourier transforms of strain data and hence have units $[\tilde{d}] = {\rm Hz}^{-1}$. The function $Q(f)$ is defined as

$$Q(f) = f^3\, \frac{10\pi^2}{3H_0^2}, \qquad (10)$$

and converts the strain power spectrum to a dimensionless energy-density spectrum. The factor $\gamma(f)$ is the overlap reduction function [32,33], which quantifies the geometrical sensitivity of the cross-correlated detector pair to an isotropic background. The factor of 2 in Eq. (9) accounts for the contribution of negative frequencies. The variance of the statistic in Eq. (9) is [6]
[6]
ˆ
σ
2
GW
ð
f
Þ¼
1
2

Q
ð
f
Þ
γ
ð
f
Þ

2
ˆ
P
1
ð
f
Þ
ˆ
P
2
ð
f
Þ
;
ð
11
Þ
where
ˆ
P
1
ð
f
Þ
and
ˆ
P
2
ð
f
Þ
are the one-sided strain power
spectra of the data in detectors 1 and 2, respectively,
defined as
ˆ
P
I
ð
f
Þ¼
2
T
seg
j
̃
d
I
ð
f
Þj
2
:
ð
12
Þ
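The normalizations in Eqs. (9), (11), and (12) can be checked by Monte Carlo in a toy single-frequency-bin model. The sketch below assumes pure noise in two detectors with unit one-sided PSDs and sets $\gamma(f) = Q(f) = 1$; these are simplifying assumptions for illustration, not realistic detector values.

```python
import numpy as np

rng = np.random.default_rng(42)

T, n = 1.0, 4096                       # segment duration (s), number of segments
P1 = P2 = 1.0                          # true one-sided noise PSDs (toy units)

# Complex Fourier amplitudes: real and imaginary parts each with variance T*P/4,
# so that <|d|^2> = T*P/2 and Eq. (12) averages to the true PSD.
d1 = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(T * P1 / 4)
d2 = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(T * P2 / 4)

P1_hat = np.mean(2 / T * np.abs(d1) ** 2)          # Eq. (12) estimate

# Eq. (9) with Q/gamma = 1: the estimator averages to zero for pure noise,
# with per-segment variance matching Eq. (11), sigma^2 = (1/2) P1 P2.
omega_hat = 2 * np.real(d1 * np.conj(d2)) / T
print(P1_hat, np.mean(omega_hat), np.var(omega_hat), 0.5 * P1 * P2)
```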
The energy density spectrum can be decomposed as

$$\Omega_{\rm GW}(f) = \Omega_{\rm ref}\, E(f/f_{\rm ref}), \qquad (13)$$

with an overall amplitude $\Omega_{\rm GW}(f_{\rm ref}) \equiv \Omega_{\rm ref}$ at reference frequency $f_{\rm ref}$. The spectral shape $E(f/f_{\rm ref})$ can be assumed known, or parametrized and inferred [34]. For example, for compact binary sources $E(f/f_{\rm ref})$ should be universal, as the inspiral frequency evolution is independent of $r(z)$. This is true up to a turnover frequency that corresponds to the redshifted merger frequency of the binaries. If we are not sensitive to the spectrum turnover, as is the case with current ground-based interferometers, we can treat the spectral shape of the signal as redshift-independent and set $E(f/f_{\rm ref}) = (f/f_{\rm ref})^{2/3}$.
In practice, cross-correlation spectra are estimated independently for a large number of short time segments, each of $\mathcal{O}(100\,{\rm s})$, and combined via a weighted average. Since a large number ($10^4$–$10^5$) of such time segments are combined, the resulting cross-correlation measurements $\hat{\Omega}_{\rm GW}(f)$ are

³With current detector sensitivity, and in order to avoid double-counting, it is reasonable to pick a segment duration such that there is at most one event per data segment. The segment duration is therefore chosen to be comparable to the time a signal spends in the detector frequency band.
well-described by Gaussian statistics even in the case of an intermittent signal. As shown in [17], this cross-correlation statistic is a sufficient statistic for the stochastic signal in the case where the signal is Gaussian and weak compared to detector noise⁴ and can be derived from the mean-zero Gaussian likelihood

$$p_{\rm CC}(\tilde{d}\, |\, \Omega_{\rm GW}, P_{n,I}) = \prod_f \frac{1}{|2\pi C(f)|}\, e^{-\frac{1}{2} \tilde{d}^\dagger C^{-1} \tilde{d}}. \qquad (14)$$
Here, $\tilde{d}$ is the data vector $\tilde{d} = (\tilde{d}_1, \tilde{d}_2)^T$, and $\Omega_{\rm GW}(f)$ appears in the data covariance as the intrinsic variance of the gravitational-wave signal [17],

$$C(f) = \frac{T_{\rm seg}}{4} \begin{pmatrix} P_1(f) & \gamma(f)\, P_{\rm GW}(f) \\ \gamma(f)\, P_{\rm GW}(f) & P_2(f) \end{pmatrix}, \qquad (15)$$
where

$$P_{\rm GW}(f) = \frac{\Omega_{\rm GW}(f)}{Q(f)} \equiv \frac{\Omega_{\rm ref}\, E(f/f_{\rm ref})}{Q(f)}. \qquad (16)$$
For stationary noise within each analysis segment the noise covariance is diagonal in frequency; hence, the total likelihood is a product of Gaussian likelihoods evaluated at each frequency. The $P_I(f)$ parameters are the expected power spectra in each detector and can be explicitly written as the sum of the noise and gravitational-wave power spectra,

$$P_I(f) = P_{n,I}(f) + P_{\rm GW}(f). \qquad (17)$$
This implies both diagonal and off-diagonal terms of the covariance in Eq. (15) depend on the gravitational-wave energy density, giving rise to autocorrelation and cross-correlation terms, respectively. The noise power spectra $P_{n,I}(f)$ are additional parameters of the search and may in principle be estimated alongside $\Omega_{\rm GW}(f)$.

In the weak signal limit, the autocorrelation terms in Eq. (14) can be neglected, such that $P_I(f) \approx P_{n,I}(f)$ and the likelihood reduces to
$$p_{\rm CC}(\hat{\Omega}_{\rm GW}\, |\, \Omega_{\rm GW}) \propto \prod_f \exp\left[ -\frac{\big(\hat{\Omega}_{\rm GW}(f) - \Omega_{\rm GW}(f)\big)^2}{2\hat{\sigma}^2_{\rm GW}(f)} \right], \qquad (18)$$
where hatted quantities are directly estimated from the data. To simplify parameter estimation, we fix $E(f/f_{\rm ref}) = (f/f_{\rm ref})^{2/3}$ such that the likelihood of Eq. (18) refers to a single parameter, $\Omega_{\rm ref}$, and depends only on quantities derived from the data as well as the assumed $E(f)$. It is possible to remove this second dependence by reexpressing the likelihood in terms of the full spectrum $\Omega_{\rm GW}(f)$ and constraining each frequency bin independently; however, $\Omega_{\rm ref}$ is typically preferred as this allows us to marginalize over the spectrum and improve detection statistics.
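Under the weak-signal likelihood of Eq. (18) with a fixed 2/3 power law, the maximum-likelihood $\Omega_{\rm ref}$ is an inverse-variance-weighted combination of the per-bin estimates. A toy sketch with synthetic numbers (the frequency band, noise levels, and injected amplitude are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-bin estimates hat{Omega}(f) with known variances, combined into a single
# hat{Omega}_ref using the power-law shape E(f) = (f/f_ref)**(2/3).
f = np.linspace(20.0, 200.0, 200)
f_ref = 25.0
E = (f / f_ref) ** (2.0 / 3.0)

omega_ref_true = 1e-9
sigma = 5e-8 * (f / f_ref) ** 3        # toy per-bin noise std, worst at high f
omega_hat = omega_ref_true * E + sigma * rng.normal(size=f.size)

# Maximizing Eq. (18) over Omega_ref gives a weighted average and its 1-sigma error:
w = E / sigma**2
omega_ref_hat = np.sum(w * omega_hat) / np.sum(E * w)
omega_ref_err = np.sum(E**2 / sigma**2) ** -0.5
print(f"Omega_ref = {omega_ref_hat:.2e} +/- {omega_ref_err:.2e}")
```

The error expression $(\sum_f E^2/\sigma^2)^{-1/2}$ is the single-parameter Fisher result for this Gaussian likelihood, which foreshadows the Fisher analysis of Sec. IV.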
B. Template-based search

The template-based search adopts a different approach that is more similar to the traditional matched-filter searches for individually detectable signals [28,30,35]. In this search, the entire strain time series measured by a gravitational-wave detector network is divided into $N_t$ time segments, each with duration $\tau$. For instance, the search for a stochastic background from black-hole binaries divides one year of data into approximately $N_t = 10^7$ segments of $\tau = 4\,{\rm s}$ duration, given the reasonable expectation that each segment contains $\lesssim 1$ signal on average [28]. Here the segment length is selected by considering the expected single-event duration at present sensitivity. This is in stark contrast with the typical choices made for the cross-correlation analysis, where the segment duration is typically on the timescale of a few minutes to access lower frequency content, handle noise nonstationarity, and minimize the computational cost of the search [36].
Within every time segment $i$, a template-based analysis computes the marginalized likelihood (or "evidence") that a compact binary merger is (hypothesis $S_i$) or is not (hypothesis $N_i$) present. The marginalized likelihood for the signal hypothesis is obtained via marginalization over all source parameters $\theta$ of the binary,

$$p_{\rm TB}(d_i \,|\, S_i) = \int p(d_i \,|\, \theta, S_i)\, p(\theta \,|\, S_i)\, d\theta, \qquad (19)$$
where $d_i$ is the data comprising segment $i$, $p(d_i \,|\, \theta, S_i)$ is the likelihood of having obtained these data in the presence of a source with parameters $\theta$ (binary masses, spins, redshift, etc.), and $p(\theta \,|\, S_i)$ is the prior on the source parameters $\theta$. With the segment-by-segment signal and noise evidences in hand, the template-based analysis then seeks to measure the duty cycle $\xi$, i.e., the fraction of time segments containing a gravitational-wave signal (see discussion in Sec. II). The relevant likelihood across all $N_t$ segments is
segments is
p
TB
ðf
d
i
gj
ξ
Þ¼
Y
i
p
ð
d
i
j
ξ
Þ
¼
Y
i
½
p
ð
d
i
j
S
i
Þ
p
ð
S
i
j
ξ
Þþ
p
ð
d
i
j
N
i
Þ
p
ð
N
i
j
ξ
Þ
¼
Y
i
½
ξ
p
ð
d
i
j
S
i
Þþð
1
ξ
Þ
p
ð
d
i
j
N
i
Þ
;
ð
20
Þ
where, by definition, $p(S_i \,|\, \xi) = \xi$ is the probability that a signal is present in segment $i$; correspondingly, $p(N_i \,|\, \xi) = 1 - \xi$ is the probability that a signal is absent. Factoring out $p(d_i \,|\, N_i)$, we can rewrite the likelihood as
⁴A weak signal is a signal that does not appreciably contribute to the power measured by a single detector [5], such that the variance of the data can be equated to the variance of the noise.
$$p_{\rm TB}(\{d_i\} \,|\, \xi) = \prod_i p(d_i \,|\, N_i)\, \big[ \xi\, b_i + (1 - \xi) \big], \qquad (21)$$

where $b_i$ is the Bayes factor between the signal and noise hypotheses in segment $i$:

$$b_i = \frac{p(d_i \,|\, S_i)}{p(d_i \,|\, N_i)}. \qquad (22)$$
Within the framework of the template-based search, observation of the stochastic background amounts to constraining $\xi$ away from zero. Equivalently, the Bayes factor $B_\xi$ between the signal hypothesis, which allows $0 \leq \xi \leq 1$, and the noise hypothesis, in which $\xi = 0$,

$$B_\xi = \frac{\int_0^1 p(\{d_i\} \,|\, \xi)\, p(\xi)\, d\xi}{p(\{d_i\} \,|\, \xi = 0)}, \qquad (23)$$

can be used as a detection statistic given some prior on the duty cycle $p(\xi)$.
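The structure of Eqs. (21) and (23) can be illustrated with synthetic per-segment Bayes factors. In the sketch below, the $b_i$ are an assumption: noise segments draw $b$ from a lognormal with mean 1, and signal segments from the correspondingly tilted lognormal, so that $b$ is a self-consistent likelihood ratio; this is a toy stand-in for the marginalization of Eq. (19), not the search's actual evidences.

```python
import numpy as np

rng = np.random.default_rng(1)

n_seg, xi_true, sig = 100_000, 1e-3, 3.0
has_signal = rng.random(n_seg) < xi_true
log_b = rng.normal(-0.5 * sig**2, sig, size=n_seg)             # noise: E[b] = 1
log_b[has_signal] = rng.normal(+0.5 * sig**2, sig, size=has_signal.sum())
b = np.exp(log_b)

def log_likelihood(xi):
    """ln p({d_i}|xi) up to the xi-independent factor prod_i p(d_i|N_i), Eq. (21)."""
    return np.sum(np.log(xi * b + (1.0 - xi)))

# Profile the likelihood over xi; log L(xi=0) = 0 in these units, so the
# maximum is also the log likelihood ratio of signal vs noise-only hypotheses.
xis = np.logspace(-5, -1, 400)
logls = np.array([log_likelihood(x) for x in xis])
xi_mle = xis[np.argmax(logls)]
print(f"xi_mle = {xi_mle:.1e}, ln L(xi_mle) - ln L(0) = {logls.max():.1f}")
```

Even though essentially none of the "signal" segments would cross an individual detection threshold, the product over many segments constrains $\xi$ away from zero, which is the sense in which the template-based search detects the background.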
In contrast to Eq. (18), which only depends on the data and the inferred quantities, Eq. (23) also depends on a collection of binary parameter priors. These include priors on compact binary masses $m$, spins $\chi$, and redshift $z$ (or, equivalently, distance). These priors encode our belief about the underlying population of compact binaries. Choosing a particular prior $p(m, \chi, z \,|\, S_i)$ when evaluating Eq. (21) amounts to assuming that the population distribution of these parameters is perfectly known. An extended version of the template-based analysis relaxes the assumption of a known population distribution for masses and spins [30]. Instead, the population prior $p(m, \chi \,|\, S_i)$ is parametrized, and the resulting set of hyperparameters is added to the search. Conceptually, this is equivalent to performing a search for compact binary mergers while simultaneously performing a population analysis to measure their ensemble properties [7,37]. However, a fixed prior on the binary redshift remains: the current formulation of the search assumes perfect knowledge of how compact binaries are distributed with redshift. This assumption, combined with the presence of resolved low-redshift binaries, raises the question of whether a measurement of $\xi > 0$ is informed by binaries at high redshift, or is instead dominated primarily by foreground binaries. This question motivates our study: below we revisit the sensitivity of stochastic searches and quantify the impact of resolved sources in the template-based case.
IV. INFORMATION CONTENT

The different assumptions, methodology, and formalism between the cross-correlation and template-based searches make a direct comparison in terms of a simple time-to-detection difficult. We instead consider the Fisher information for each search and examine how this information is gathered as a function of source redshift and for different astrophysical assumptions about the event distribution. Given a likelihood $p(d \,|\, \Lambda)$ for data $d$ conditioned on parameters $\Lambda = \{\Lambda_i\}$, the Fisher information matrix is defined as the expectation value over data realizations

$$F_{ij}(\Lambda) = -\left\langle \frac{\partial^2}{\partial \Lambda_i\, \partial \Lambda_j} \ln p(d \,|\, \Lambda) \right\rangle. \qquad (24)$$
In the high signal-to-noise ratio limit where the likelihood becomes approximately Gaussian, the inverse Fisher matrix $F^{-1}_{ij}$ corresponds to the covariance matrix quantifying the uncertainties on the parameters $\Lambda_i$. The Fisher information, meanwhile, is defined as the matrix determinant:

$$\mathcal{I}(\Lambda) = \det F_{ij}(\Lambda). \qquad (25)$$

The strong signal-to-noise assumption of the Fisher formalism is not in tension with the previously defined low-signal limit employed for the cross-correlation search. The latter refers to the fact that the autocorrelation of strain data is dominated by detector noise, rather than astrophysical signals. The former, in contrast, refers to the detectability of excess cross power due to the gravitational-wave background, after integrating over a sufficiently long period of time. In other words, even though the individual signals are subthreshold, the applicability of the Fisher formalism refers to the stochastic signal as a whole.
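Equation (24) can also be evaluated numerically as the curvature of the log-likelihood. A minimal check in a model with a known analytic answer (a Gaussian mean, where $F = n/\sigma^2$; this is an illustrative stand-in for the searches' likelihoods, not the paper's calculation):

```python
import numpy as np

def fisher_1d(loglike, lam, eps=1e-3):
    """Numerical -d^2 lnL / d lam^2 via a central second difference, Eq. (24)."""
    return -(loglike(lam + eps) - 2 * loglike(lam) + loglike(lam - eps)) / eps**2

rng = np.random.default_rng(3)
sigma, n, lam_true = 2.0, 1000, 5.0
d = rng.normal(lam_true, sigma, size=n)

def loglike(lam):
    # Gaussian-mean log-likelihood; its curvature is data-independent here,
    # so a single realization already equals the expectation in Eq. (24).
    return -0.5 * np.sum((d - lam) ** 2) / sigma**2

F_numeric = fisher_1d(loglike, lam_true)
F_analytic = n / sigma**2
print(F_numeric, F_analytic)   # both equal n/sigma^2 = 250
```

For a quadratic log-likelihood the central difference is exact up to floating-point error; for the actual search likelihoods the same finite-difference estimate would be averaged over data realizations.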
The cross-correlation and template-based searches are framed in terms of different observables; i.e., $\Lambda$ is different in each case. The cross-correlation search reports a measurement of the amplitude $\Omega_{\rm ref}$, i.e., $\Lambda_{\rm CC} = \Omega_{\rm ref}$. The template-based search, meanwhile, reports a measurement of the duty cycle $\xi$, i.e., $\Lambda_{\rm TB} = \xi$. The duty cycle of a population and its total fractional energy density can be related by combining Eqs. (3), (5), and (8); however, this requires prior knowledge of the source redshift distribution $r(z)$. Equivalently, knowledge of $r(z)$ is required to convert the fractional energy density emitted by a population to its local merger rate density $R_0$, and therefore to $\xi$. Comparing the two searches thus relies on prior knowledge of $r(z)$, which raises doubts as to what is actually being measured, as opposed to extrapolated, to produce the individual search results as well as their comparison. To make progress, our strategy here is to parametrize $r(z)$ and infer it alongside each search parameter.
Extending each search to also probe the source redshift distribution amounts to a parametrization $r(z) \rightarrow r(z; \lambda)$ with additional parameters $\lambda$. Here, we adopt the redshift distribution parametrization of Eq. (5), with $\lambda = z_{\rm max}$, similar to [30]. For the template-based search this results in $\Lambda_{\rm TB} = \{\xi, z_{\rm max}\}$. In terms of the likelihood in Eq. (21), the duty cycle $\xi$ is explicit, while $z_{\rm max}$ enters through the distance/redshift prior used in the Bayes factor calculation, $b_i(z_{\rm max})$. However, varying $z_{\rm max}$ while keeping $\xi$ constant would result in a change in the redshift distribution of the local, individually detectable sources. Since this is expected to be well-constrained by the time of detection of the stochastic background, we instead reparametrize the analysis to explicitly separate the local merger rate $R_0$ from the source redshift distribution, $\Lambda_{\rm TB} = \{R_0, z_{\rm max}\}$.
In the case of the cross-correlation search, the only parameter considered in the likelihood of Eq. (18) is $\Omega_{\rm ref}$. For direct comparison with the template-based search, we also need to express this likelihood in terms of the same $R_0$ and $z_{\rm max}$ parameters, linked to $\Omega_{\rm ref}$ through Eqs. (3) and (5). We therefore also consider a 2d version of the cross-correlation search with $\Lambda_{\rm CC} = \{R_0, z_{\rm max}\}$, though both parameters enter the likelihood only through their $\Omega_{\rm ref}$ combination. In what follows, we discuss both the case where the maximum cutoff $z_{\rm max}$ is assumed known and fixed (1d searches), and the case where it is a free parameter (2d searches). For the 1d comparison, we present results in $\Omega_{\rm ref}$, as this naturally quantifies the background amplitude; while for the 2d case, we use $R_0$ and $z_{\rm max}$ for the reasons detailed above.
A back-of-the-envelope calculation clarifies the ensuing detailed calculation in the 1d case. Here, the cross-correlation search is sensitive to the GW energy density, while the template-based one is sensitive to the local merger rate. In the cross-correlation case, the energy density of N events contained in a volume D³ scales as N × E_D, where the single-event energy density E_D ∝ D⁻². Since N ∝ D³, the total energy density scales roughly linearly with event distance. Meanwhile, for the template-based search the likelihood is dominated by the ξ × b term, which scales as ξ × b ∼ N × e^(SNR²/2) ∼ D³ × e^(D⁻²/2), where b is an event Bayes factor as in Eq. (21) and we have assumed that b ∼ e^(SNR²/2), implying a loud signal; the optimal SNR of a single event falls off as 1/D. This indicates that events at large D will contribute less to this search, as the Bayes factor decreases rapidly with distance. In what follows, to avoid taking this loud-signal approximation for events around or below the detection threshold, we evaluate Bayes factors numerically.
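The two scalings above can be made concrete with a toy numerical sketch; all quantities below are in arbitrary units, and the SNR normalization is a placeholder rather than a calibrated value:

```python
import numpy as np

# Toy illustration of the back-of-the-envelope scalings (arbitrary units;
# prefactors are placeholders, not the paper's calibrated quantities).
D = np.linspace(0.5, 5.0, 10)      # source distance
N = D**3                            # number of events enclosed in a volume ~ D^3
E_single = 1.0 / D**2               # received energy density per event ~ 1/D^2
total_energy = N * E_single         # ~ D: grows linearly with distance

snr = 2.0 / D                       # optimal SNR falls off as 1/D (toy normalization)
bayes = np.exp(snr**2 / 2.0)        # loud-signal approximation b ~ e^(SNR^2/2)

# Cross-correlation: every distance shell adds energy density, linearly in D.
assert np.allclose(total_energy, D)
# Template-based: the per-event Bayes factor decays rapidly with distance,
# approaching 1 (no evidence) for faraway, quiet events.
assert np.all(np.diff(bayes) < 0)
```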
A. Cross-correlation search
We calculate the Fisher information for the cross-correlation search starting from the full likelihood of Eq. (14), which may be expanded as [17]
$$\ln p_{\rm CC}(\tilde{d}\,|\,\Omega_{\rm ref}) \sim -N_{\rm seg} \sum_f \left[ \ln|C(f)| + \left(\frac{T_{\rm seg}}{4}\right)^2 \frac{P_1(f)\hat{P}_2(f) + P_2(f)\hat{P}_1(f) - 2\gamma^2(f)\, Q^2(f)\, \Omega_{\rm ref}\, E(f/f_{\rm ref})\, \hat{\Omega}_{\rm GW}(f)}{|C(f)|} \right], \tag{26}$$
where the determinant of the covariance is

$$|C(f)| = \left(\frac{T_{\rm seg}}{4}\right)^2 \left[ P_1(f)\, P_2(f) - \gamma^2(f)\, P_{\rm GW}^2(f) \right], \tag{27}$$

and N_seg is the total number of segments used in the analysis. In what follows, we consider the noise terms P_{n,I} entering the data power spectra P_I in Eq. (17) to be known.
We proceed to calculate the Fisher matrix in log Ω_ref and Ω_ref, F_{log Ω_ref} and F_{Ω_ref}, respectively. Here F_{log Ω_ref} corresponds to the fractional uncertainty on Ω_ref, and thus scales with the sensitivity of the search. In contrast, F_{Ω_ref} scales with both the sensitivity and the value of Ω_ref itself. In terms of log Ω_ref we have

$$F^{\rm CC}_{\log\Omega_{\rm ref}} = -\left\langle \frac{\partial^2}{\partial (\log\Omega_{\rm ref})^2} \ln p_{\rm CC}(\tilde{d}\,|\,\Omega_{\rm ref}) \right\rangle, \tag{28}$$
where the angle brackets imply calculating the expectation
value at maximum likelihood over many realizations;
hence, first derivatives of the likelihood are dropped.
Under the noise stationarity assumption, each frequency
contributes to the likelihood calculation independently;
hence, the total contribution is the sum over individual
frequencies:
$$F^{\rm CC}_{\log\Omega_{\rm ref}} = \Omega_{\rm ref}^2 \sum_f E^2(f)\, F^{\rm CC}_{\Omega_f}, \tag{29}$$

where

$$F^{\rm CC}_{\Omega_f} = -\left\langle \frac{\partial^2}{\partial \Omega_f^2} \ln p_{\rm CC}(\tilde{d}\,|\,\Omega_f) \right\rangle, \tag{30}$$

computed at the individual frequency f, where Ω_f = Ω_ref E(f). Taking the maximum likelihood limit (which amounts to setting hatted quantities equal to their unhatted counterparts, assuming these are unbiased estimators), defining β(f) = γ²(f) − 1, and dropping the explicit frequency dependence from the functions γ, β, P_{n,I}, E, and Q for conciseness, we find
$$F^{\rm CC}_{\Omega_f} = N_{\rm seg}\, \frac{2\beta^2\Omega_f^2 - 2\beta Q\,(P_{n,1} + P_{n,2})\,\Omega_f + Q^2\left(P_{n,1}^2 + 2\gamma^2 P_{n,1}P_{n,2} + P_{n,2}^2\right)}{\left[ P_{n,1} Q\left(\Omega_f + Q P_{n,2}\right) + \Omega_f\left(Q P_{n,2} - \beta\,\Omega_f\right) \right]^2}. \tag{31}$$
In accordance with the standard cross-correlation search, we take the low-signal limit and expand F^CC_{Ω_f} in ε = γΩ_f/√(P_{n,1}P_{n,2}), which yields, to first order,
$$F^{\rm CC}_{\Omega_f} = A(f) + B(f)\,\Omega_f + \mathcal{O}(\varepsilon^2), \tag{32}$$

where

$$A(f) = N_{\rm seg}\, \frac{P_{n,1}^2 + 2\gamma^2 P_{n,1}P_{n,2} + P_{n,2}^2}{Q^2\, P_{n,1}^2\, P_{n,2}^2}, \tag{33}$$

$$B(f) = -2 N_{\rm seg}\, \frac{P_{n,1}^3 + 3\gamma^2 P_{n,1}^2 P_{n,2} + 3\gamma^2 P_{n,1} P_{n,2}^2 + P_{n,2}^3}{Q^3\, P_{n,1}^3\, P_{n,2}^3}. \tag{34}$$

The term F^CC_{log Ω_ref} is then

$$F^{\rm CC}_{\log\Omega_{\rm ref}} = \Omega_{\rm ref}^2 \sum_f E^2(f)\left[ A(f) + B(f)\,E(f)\,\Omega_{\rm ref} \right]. \tag{35}$$
Since we are restricting to a single parameter here, Eq. (35) gives the Fisher information gathered in a cross-correlation search for the fractional uncertainty in Ω_ref, and it has two limiting cases. In the limit where the signal Ω_ref → 0, i.e., a Universe with no compact binaries, the absolute information per frequency bin is F^CC_{Ω_f} → A(f). Hence, A(f) quantifies the information inherent in a nondetection of the stochastic background, and only depends on the search sensitivity through P_{n,1}, P_{n,2}. The relative information, Eq. (35), is then zero, as expected for vanishingly small signals: if the stochastic background is undetectable, we have infinite fractional uncertainty on its size. This makes sense qualitatively, and though the Fisher formalism is only strictly applicable in the strong-signal limit, it provides the Cramer-Rao lower bound on the variance of the estimator. The term B(f), on the other hand, quantifies the information contributed by a measurable stochastic signal. The Fisher information F^CC_{Ω_ref} itself scales intuitively with the size of the stochastic background: given the negative sign of B(f), as Ω_GW grows F^CC_{Ω_ref} decreases. However, the Fisher information F^CC_{log Ω_ref} can in principle increase or decrease, depending on which term dominates Eq. (35). That is, as Ω_GW is increased, the absolute uncertainty δΩ_ref grows, but the fractional uncertainty δΩ_ref/Ω_ref decreases as long as we are in the weak-signal approximation.⁵ Finally, information increases as observing time grows. At fixed analysis segment length T_seg, the number of segments grows linearly with time and F^CC_{Ω_ref} ∝ N_seg. This is expected, as the variance of the measured stochastic field power scales as 1/T_obs [5].
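The structure of Eqs. (32)–(35) can be sketched numerically. The flat unit noise PSDs, the Gaussian toy overlap reduction function, and the unit transfer function below are placeholders for the paper's LIGO design inputs (everything is in dimensionless toy units); only the algebraic structure is meaningful:

```python
import numpy as np

# Numerical sketch of Eqs. (32)-(35) in dimensionless toy units: flat unit
# noise PSDs, a Gaussian toy overlap reduction function, and Q = 1. These
# stand in for the paper's LIGO design inputs purely to show the structure.
f = np.linspace(20.0, 200.0, 500)
Pn1 = Pn2 = np.ones_like(f)                  # toy noise PSDs
gamma = np.exp(-((f - 45.0) / 60.0) ** 2)    # toy overlap reduction function
Q = np.ones_like(f)                          # toy transfer function
N_seg = 5000                                 # number of analysis segments

# Eq. (33): information density of a nondetection.
A = N_seg * (Pn1**2 + 2 * gamma**2 * Pn1 * Pn2 + Pn2**2) / (Q**2 * Pn1**2 * Pn2**2)
# Eq. (34): negative first-order correction from a measurable signal.
B = -2 * N_seg * (Pn1**3 + 3 * gamma**2 * Pn1**2 * Pn2
                  + 3 * gamma**2 * Pn1 * Pn2**2 + Pn2**3) / (Q**3 * Pn1**3 * Pn2**3)

# Eq. (35) with a power-law spectral shape E(f) and a weak signal.
E = (f / 25.0) ** (2.0 / 3.0)
Omega_ref = 1e-3
F_log = Omega_ref**2 * np.sum(E**2 * (A + B * E * Omega_ref))

assert np.all(np.abs(B) * E * Omega_ref < A)   # A(f) dominates: weak-signal regime
assert F_log > 0                               # net information is positive
```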
Figure 1 shows A(f) and B(f) for the two-detector network of Advanced LIGO detectors, each at design sensitivity [38], assuming one year of data. As seen in Eqs. (33) and (34), the functional forms of these spectra depend on both the overlap reduction function and the individual detector sensitivities. The 5 orders of magnitude between the two curves imply that for typical values of the background amplitude (Ω_f < 10⁻⁸) and at this sensitivity the A(f) term dominates F^CC_{Ω_f}, verifying the weak-signal approximation. In the case in question, the majority of the information is gathered at ∼45 Hz, where A(f) is maximum for this specific configuration.⁶
As sources at cosmological distances are redshifted, this most informative frequency can be translated into a most informative redshift, given the source mass. At the top of Fig. 1 we have added axes with the merger redshift Z at which equal-mass binaries of different masses merge at the corresponding frequency on the x axis. We select three component mass values for this example, 70 M⊙, 30 M⊙, and 10 M⊙, which we reprise in Sec. V to calculate the Fisher information. Binaries with larger mass merge at lower frequencies, and conversely binaries with smaller mass contribute to lower
FIG. 1. A and B spectra from Eq. (32) as a function of frequency. The top x axes mark the redshift Z at which equal-component-mass binaries merge while emitting at the frequency on the lower x axis, for different binary masses. The frequency corresponding to maximum stochastic sensitivity, 45 Hz, is marked with a gray dashed line.
⁵This may be seen qualitatively by observing that, in Eq. (32), the B(f)Ω_f term can compete with the A(f) term when Ω_f is of the order of the noise terms P_{n,i}.
⁶As discussed, the spectral shapes of A(f) and B(f) will necessarily vary for different detector pairs. In general, the larger the separation between detectors, the lower in frequency we expect these functions to peak, due to the form of the overlap reduction function [5].
frequencies only when highly redshifted. For example, a binary composed of two 70 M⊙ black holes merges at around 60 Hz at z = 0, and can contribute to lower frequencies in the band (20 < f < 60 Hz) when redshifted by z < 1.5. On the other hand, a 10 M⊙ equal-mass binary system can only contribute to a signal at frequencies < 60 Hz if redshifted by z > 6. This implies, in general, that Ω(f) at the peak sensitivity ∼45 Hz is larger for higher-mass binary populations.
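The frequency-redshift mapping invoked here is simply f_obs = f_s/(1 + z). A minimal sketch (the 60 Hz source-frame value echoes the example above and is illustrative, not a computed merger frequency):

```python
def redshift_for_observed_frequency(f_source_hz, f_obs_hz):
    """Redshift required to move source-frame emission at f_source_hz
    down to an observed frequency f_obs_hz, via f_obs = f_s / (1 + z)."""
    return f_source_hz / f_obs_hz - 1.0

# A signal emitted at 60 Hz in the source frame reaches the bottom of the
# band considered above (20 Hz) when the source sits at z = 2.
z_needed = redshift_for_observed_frequency(60.0, 20.0)
assert abs(z_needed - 2.0) < 1e-12
```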
Finally, we calculate the Fisher matrix for the 2d extension to the cross-correlation search with parameters R_0 and z_max,

$$F^{\rm CC}_{R_0, z_{\rm max}} \equiv \begin{pmatrix} F^{\rm CC}_{R_0 R_0} & F^{\rm CC}_{R_0 z_{\rm max}} \\ F^{\rm CC}_{z_{\rm max} R_0} & F^{\rm CC}_{z_{\rm max} z_{\rm max}} \end{pmatrix}. \tag{36}$$
Each of the Fisher terms here can be related to the Fisher matrix in Ω_ref using the chain rule. The first diagonal term is

$$F^{\rm CC}_{R_0 R_0} = -\left\langle \frac{\partial^2 \ln p}{\partial R_0^2} \right\rangle = \left(\frac{\Omega_{\rm ref}}{R_0}\right)^2 \sum_f E^2(f)\, F^{\rm CC}_{\Omega_f} = R_0^{-2}\, F^{\rm CC}_{\log\Omega_{\rm ref}}, \tag{37}$$

using the fact that

$$\frac{\partial \Omega_f}{\partial R_0} = \frac{\Omega_{\rm ref}}{R_0}\, E(f), \tag{38}$$

as per Eqs. (3) and (13).
The second diagonal term is similarly derived as

$$F^{\rm CC}_{z_{\rm max} z_{\rm max}} = -\left\langle \frac{\partial^2 \ln p}{\partial z_{\rm max}^2} \right\rangle = \sum_f \left(\Omega_f'\right)^2 F^{\rm CC}_{\Omega_f}, \tag{39}$$

where

$$\Omega_f' \equiv \frac{\partial \Omega_f}{\partial z_{\rm max}} \tag{40}$$

is the integrand of Eq. (3) evaluated at z_max.
Finally, the off-diagonal terms are

$$F^{\rm CC}_{R_0 z_{\rm max}} = F^{\rm CC}_{z_{\rm max} R_0} = -\left\langle \frac{\partial^2 \ln p}{\partial R_0\, \partial z_{\rm max}} \right\rangle = \left(\frac{\Omega_{\rm ref}}{R_0}\right) \sum_f E(f)\, \Omega_f'\, F^{\rm CC}_{\Omega_f}. \tag{41}$$
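The chain-rule assembly of Eqs. (37)–(41) can be sketched numerically. All inputs below are toy arrays standing in for the quantities the paper computes from Eqs. (3) and (31):

```python
import numpy as np

# Sketch of assembling the 2x2 cross-correlation Fisher matrix of Eq. (36)
# from the per-frequency information F_Omega_f via the chain rule,
# Eqs. (37)-(41). All inputs are toy placeholders.
f = np.linspace(20.0, 200.0, 500)
F_Omega_f = 1.0 / (1.0 + ((f - 45.0) / 40.0) ** 2)   # toy per-frequency information
E = (f / 25.0) ** (2.0 / 3.0)                        # toy spectral shape E(f)
Omega_prime = 1e-11 * (f / 25.0) ** 0.5              # toy dOmega_f/dz_max, Eq. (40)
Omega_ref, R0 = 1e-9, 30.0                           # toy amplitude; rate in Gpc^-3 yr^-1

F_R0R0 = (Omega_ref / R0) ** 2 * np.sum(E**2 * F_Omega_f)        # Eq. (37)
F_zz = np.sum(Omega_prime**2 * F_Omega_f)                        # Eq. (39)
F_R0z = (Omega_ref / R0) * np.sum(E * Omega_prime * F_Omega_f)   # Eq. (41)

fisher = np.array([[F_R0R0, F_R0z], [F_R0z, F_zz]])
assert np.allclose(fisher, fisher.T)    # symmetric by construction
assert np.all(np.diag(fisher) > 0)      # both diagonal entries carry information
```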
B. Template-based search
The template-based search likelihood, Eq. (21), depends on the redshift distribution parameters R_0 and z_max explicitly through the ξ(R_0, z_max) parameter, Eq. (8), and implicitly through the Bayes factors b_i(z_max), Eq. (A2), which depend on prior distributions on source parameters that in turn rely on z_max. The likelihood is then rewritten as

$$p_{\rm TB}(\{d_i\}\,|\,R_0, z_{\rm max}) = \prod_i p(d_i\,|\,N_i) \left[ \xi(R_0, z_{\rm max})\, b_i(z_{\rm max}) + \left(1 - \xi(R_0, z_{\rm max})\right) \right], \tag{42}$$
and the Fisher matrix is

$$F^{\rm TB}_{R_0, z_{\rm max}} \equiv \begin{pmatrix} F^{\rm TB}_{R_0 R_0} & F^{\rm TB}_{R_0 z_{\rm max}} \\ F^{\rm TB}_{z_{\rm max} R_0} & F^{\rm TB}_{z_{\rm max} z_{\rm max}} \end{pmatrix}. \tag{43}$$

Starting with the diagonal term in R_0, and writing p_TB ≡ p_TB({d_i}|R_0, z_max), ξ = ξ(R_0, z_max), b_i = b_i(z_max) for conciseness, we find

$$F^{\rm TB}_{R_0 R_0} = -\left\langle \frac{\partial^2}{\partial R_0^2} \ln p_{\rm TB} \right\rangle = -\left\langle \frac{\partial \xi}{\partial R_0} \frac{\partial}{\partial \xi} \left( \frac{\partial \xi}{\partial R_0} \frac{\partial}{\partial \xi} \ln p_{\rm TB} \right) \right\rangle. \tag{44}$$
The brackets signify ensemble averaging over data real-
izations, which we interpret in practice as utilizing all
available data and evaluating the derivatives at the maxi-
mum of the likelihood. Since the first derivative vanishes at
the maximum we get
$$F^{\rm TB}_{R_0 R_0} = -\left(\frac{\partial \xi}{\partial R_0}\right)^2 \left\langle \frac{\partial^2 \ln p_{\rm TB}}{\partial \xi^2} \right\rangle. \tag{45}$$

Substituting Eq. (42) we obtain

$$F^{\rm TB}_{R_0 R_0} = A^2 \sum_i \frac{(b_i - 1)^2}{\left(1 + \xi(b_i - 1)\right)^2}, \tag{46}$$

where

$$A \equiv \frac{\partial \xi}{\partial R_0} = I(z_{\rm max})\, \tau\, e^{-R_0 I(z_{\rm max}) \tau} = I(z_{\rm max})\, \tau\, (1 - \xi), \tag{47}$$
where I(z_max) is defined as in Eq. (5). The Fisher matrix diagonal term in z_max is

$$F_{z_{\rm max} z_{\rm max}} = \sum_i \frac{\left(\xi'(b_i - 1)\right)^2 - 2\xi' b_i' + \left(\xi b_i'\right)^2}{\left(1 + \xi(b_i - 1)\right)^2}, \tag{48}$$

where

$$b' \equiv \frac{\partial b}{\partial z_{\rm max}}, \tag{49}$$
$$\xi' \equiv \frac{\partial \xi}{\partial z_{\rm max}} = \tau R_0\, I'(z_{\rm max})\, (1 - \xi). \tag{50}$$
Finally, the off-diagonal terms are

$$F_{R_0 z_{\rm max}} = F_{z_{\rm max} R_0} = A \sum_i \frac{\xi'(b_i - 1)^2 - b_i'}{\left(1 + \xi(b_i - 1)\right)^2}. \tag{51}$$
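A toy evaluation can illustrate how the template-based Fisher matrix of Eqs. (46)–(51) is assembled. The Bayes factors b_i, their z_max derivatives, the segment duration τ, and the rate integral I(z_max) below are all invented placeholders for the quantities the paper obtains from Eqs. (A2) and (5):

```python
import numpy as np

# Toy evaluation of the template-based Fisher matrix, Eqs. (46)-(51).
# Every input here is a made-up placeholder, not a measured quantity.
rng = np.random.default_rng(0)
b = np.exp(rng.normal(0.0, 2.0, size=200))    # toy per-segment Bayes factors
b_prime = 0.1 * b                              # toy db_i/dz_max
tau = 4.0 / (365 * 24 * 3600)                  # toy segment duration [yr]
R0, I_z, I_prime = 30.0, 50.0, 5.0             # toy rate, I(z_max), I'(z_max)

xi = 1.0 - np.exp(-R0 * I_z * tau)             # duty cycle implied by Eq. (47)
A = I_z * tau * (1.0 - xi)                     # Eq. (47): d(xi)/dR0
xi_prime = tau * R0 * I_prime * (1.0 - xi)     # Eq. (50): d(xi)/dz_max

g = 1.0 + xi * (b - 1.0)                       # per-event likelihood factor, Eq. (42)
F_R0R0 = A**2 * np.sum((b - 1.0) ** 2 / g**2)                          # Eq. (46)
F_zz = np.sum(((xi_prime * (b - 1.0)) ** 2 - 2 * xi_prime * b_prime
               + (xi * b_prime) ** 2) / g**2)                          # Eq. (48)
F_R0z = A * np.sum((xi_prime * (b - 1.0) ** 2 - b_prime) / g**2)       # Eq. (51)

fisher = np.array([[F_R0R0, F_R0z], [F_R0z, F_zz]])
assert np.allclose(fisher, fisher.T)   # symmetric by construction
assert F_R0R0 > 0                      # rate information is positive
```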
V. SENSITIVITY REACH
We quantify the dependence of the Fisher information on the merger rate redshift distribution with simulated populations. We adopt a constant normalized merger rate density r(z) such that the merger rate R(z) is uniform in comoving volume, a local merger rate of R_0 = 30 Gpc⁻³ yr⁻¹ [7], and vary z_max such that

$$R(z) = \begin{cases} \dfrac{R_0}{1+z} \dfrac{dV_c}{dz}, & z \le z_{\rm max}, \\[4pt] 0, & z > z_{\rm max}. \end{cases} \tag{52}$$
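Equation (52) can be sketched for a flat ΛCDM cosmology, with dV_c/dz evaluated by simple numerical quadrature. The H0 and Ω_m values below are round-number placeholders, not the paper's exact cosmology:

```python
import numpy as np

# Sketch of the merger-rate distribution of Eq. (52) for flat LambdaCDM,
# with placeholder cosmological parameters.
H0 = 67.9e3 / 3.0857e22              # Hubble constant [1/s]
c = 2.998e8                          # speed of light [m/s]
Om, OL = 0.3, 0.7                    # matter and dark-energy densities
GPC = 3.0857e25                      # one gigaparsec in meters

def comoving_distance(z, n=2000):
    """Line-of-sight comoving distance [m] via trapezoidal quadrature."""
    zz = np.linspace(0.0, z, n)
    inv_Ez = 1.0 / np.sqrt(Om * (1.0 + zz) ** 3 + OL)
    dz = zz[1] - zz[0]
    return c / H0 * dz * (inv_Ez.sum() - 0.5 * (inv_Ez[0] + inv_Ez[-1]))

def merger_rate_per_z(z, R0=30.0, z_max=5.0):
    """R(z) of Eq. (52) in yr^-1 per unit redshift: zero above z_max."""
    if z > z_max:
        return 0.0
    Dc = comoving_distance(z)
    Ez = np.sqrt(Om * (1.0 + z) ** 3 + OL)
    dVc_dz = 4.0 * np.pi * Dc**2 * c / (H0 * Ez)   # comoving shell volume [m^3]
    return R0 * (dVc_dz / GPC**3) / (1.0 + z)      # R0 is in Gpc^-3 yr^-1

assert merger_rate_per_z(6.0) == 0.0               # sharp cutoff above z_max
assert merger_rate_per_z(1.0) > merger_rate_per_z(0.1) > 0.0
```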
We consider three distinct populations, made up of equal-mass, nonspinning black hole binaries with source-frame masses of 10 M⊙, 30 M⊙, and 70 M⊙, respectively. We calculate the Fisher information for each population at varying z_max, over one year of data from a network of two LIGO detectors at design sensitivity [38].
A. Cross-correlation search
For the cross-correlation search, both the 1d and the 2d Fisher matrices in Eqs. (28) and (36) can be calculated analytically. The spectrum Ω_GW(f) is calculated for each population and at each z_max using Eq. (3). The emitted energy spectrum dE_s/df_s is a function of the source-frame chirp mass M_c [39],

$$\frac{dE_s}{df_s} = \frac{(G\pi)^{2/3}\, M_c^{5/3}}{3}\, E(f_s), \tag{53}$$
where the function E(f_s) may be found, e.g., in [39]. During the inspiral phase, i.e., for frequencies lower than the merger frequency, E(f_s) ∝ f_s^(−1/3), implying Ω_GW(f) ∝ f^(2/3), which is generally a good approximation to the low-frequency background spectral shape. To model the background spectrum at higher frequencies, we adopt an analytical inspiral-merger-ringdown approximation for E(f_s) [39,40].
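The quoted inspiral scaling can be checked in a few lines: with dE_s/df_s ∝ f^(−1/3) and the Ω_GW(f) integrand proportional to f × dE/df, a log-log fit recovers the 2/3 spectral index (normalizations arbitrary):

```python
import numpy as np

# Quick check of the inspiral scalings quoted above (arbitrary normalization).
f = np.logspace(1.0, 2.0, 50)
dEdf = f ** (-1.0 / 3.0)              # inspiral energy spectrum slope, Eq. (53)
omega_gw = f * dEdf                   # Omega_GW(f) is proportional to f * dE/df

slope = np.polyfit(np.log(f), np.log(omega_gw), 1)[0]
assert abs(slope - 2.0 / 3.0) < 1e-8  # recovers the f^(2/3) spectral index
```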
We start by considering the single-parameter, 1d case. We compute the Fisher matrix in log Ω_ref of Eq. (35), varying the cutoff z_max in the computation of Ω_ref, and plot F^CC_{log Ω_ref} as a function of z_max in the left panel of Fig. 2. This plot illustrates how the Fisher matrix (equivalently, the Fisher information, as we are in 1d) increases with z_max as further distant binaries contribute to the total Ω_ref. The information plateaus at varying z_max, depending on the mass, reaching higher values for higher masses, as expected. The z_max at which F^CC_{log Ω_ref} plateaus is a combination of the redshift at which the binaries emitting at the most sensitive frequency merge (Z in Fig. 1) and the redshift at which binaries no longer appreciably contribute to Ω_ref. For example, for z_max = 5, 99% of the background amplitude is accumulated from binaries within z < 2.9, independently of mass.
To understand how binaries in each redshift bin contribute to the total information, we consider the case of a Universe with binaries up to z_max = 5. We calculate the contribution of a redshift shell [z, z + δz] to the background, δΩ_GW(z) = Ω_GW(z + δz) − Ω_GW(z), and plug this into the
FIG. 2. Left: Single-parameter Fisher matrix F^CC_{log Ω_ref}, Eq. (35), calculated for varying z_max, for each equal-mass binary population considered. Right: Cumulative sum of the information F^CC_{log δΩ_ref}(z) contributed by binaries in a redshift shell [z, z + δz], as a function of z, for a Universe with binaries up to z_max = 5 and for each mass. The dots indicate the redshift at which 99% of the information is accumulated. Both plots indicate that the information saturates at a mass-dependent redshift ∼2.