All-sky search for continuous gravitational waves from isolated
neutron stars using Advanced LIGO O2 data
B. P. Abbott et al.*
(LIGO Scientific Collaboration and Virgo Collaboration)
*Full author list given at the end of the article.
(Received 15 March 2019; published 8 July 2019)
We present results of an all-sky search for continuous gravitational waves (CWs), which can be produced
by fast spinning neutron stars with an asymmetry around their rotation axis, using data from the second
observing run of the Advanced LIGO detectors. Three different semicoherent methods are used to search in
a gravitational-wave frequency band from 20 to 1922 Hz and a first frequency derivative from
−1 × 10⁻⁸ to +2 × 10⁻⁹ Hz/s. None of these searches has found clear evidence for a CW signal, so upper limits on the
gravitational-wave strain amplitude are calculated, which for this broad range in parameter space are the
most sensitive ever achieved.
DOI: 10.1103/PhysRevD.100.024004
I. INTRODUCTION
Eleven detections of gravitational waves from black hole
binaries and from a neutron star binary have been reported
in [1]. One of the characteristics of the signals detected so
far is that their duration ranges from a fraction of a second
to tens of seconds in the detector sensitive frequency band.
Other mechanisms, however, can produce gravitational
waves with longer durations, not yet detected. In this paper
we describe an all-sky search for continuous gravitational
waves (CWs), almost monochromatic signals which are
present at the detectors during all the observing time. The
principal sources for CW emission (see [2] for a review) are
spinning neutron stars. If a spinning neutron star (NS) has
an asymmetry with respect to its rotation axis, it will emit
CWs at twice the rotation frequency.
Fast-spinning neutron stars in the Milky Way can generate
continuous gravitational waves via various processes which
produce an asymmetry. Crustal distortions from cooling or
from binary accretion, or magnetic field energy buried below
the crust could lead to the nonaxisymmetry necessary for
detectable emission. The excitation of r-modes in a newborn
or accreting NS is another promising mechanism for the
emission of CWs. Recently, some evidence for a limiting minimum ellipticity was discussed in [3]. A comprehensive review of continuous gravitational-wave emission mechanisms from neutron stars can be found in [4]. The detection
of a CW, possibly combined with electromagnetic observa-
tions of the same source, could yield insight into the structure
of neutron stars and into the equation of state of matter under
extreme conditions.
Searches for continuous waves are usually split in three
different domains: targeted searches look for signals from
known pulsars [5–15]; directed searches look for signals from known sky locations like the Galactic Center, supernova remnants, and low-mass x-ray binaries such as Sco X-1 [16–26]; all-sky searches look for signals from unknown sources [27–40]. Since all-sky searches need to cover a large
parameter space, they are the most computationally expen-
sive. For this reason, the most sensitive coherent search
methods (e.g., matched filtering for the full observing run)
cannot be used, and semicoherent methods, which split the full observation time into shorter chunks, are needed instead.
The interest in all-sky searches stems from the fact that
they inspect regions of parameter space that no other
searches look at. Although the targeted searches are more
sensitive, they are limited to known pulsars whose expected signals lie in the sensitive frequency band of the detectors,
while all-sky searches look for neutron stars with no
electromagnetic counterpart, which could have different
or more extreme properties than the observed pulsars.
In this paper we present the results of an all-sky search for CWs by three different pipelines (FrequencyHough [41], SkyHough [42], and Time-Domain F-statistic [43]) using O2
data from the Advanced LIGO detectors. Each pipeline
uses different data analysis methods and covers different
regions of the parameter space, although there exists some
overlap between them. Overall, we search the whole sky for
gravitational wave frequencies from 20 to 1922 Hz (this
number was chosen in order to avoid the violin modes of
the test masses found at higher frequencies) and a first
frequency derivative from
−1 × 10⁻⁸ to +2 × 10⁻⁹ Hz/s
(positive frequency derivatives are possible for neutron
stars which are spun up by accretion from a companion).
No detection has been made, and upper limits on the
gravitational wave amplitude are presented.
The outline of the paper is the following: in Sec.
II, we summarize the second observing run and give some details
about the data that is used; in Sec. III, we describe the model of the signal that we want to detect; in Sec. IV, we present the different pipelines which are used; in Sec. V, we describe the results obtained by each pipeline; and in Sec. VI, we give some final remarks.
II. LIGO O2 OBSERVING RUN
The LIGO second observing run (called O2) started on
November 30, 2016 and finished on August 25, 2017.
During this time, three different gravitational-wave detec-
tors of second generation were active and producing data:
Advanced LIGO [44], consisting of two detectors with 4-km arm lengths situated in Hanford, Washington (H1) and Livingston, Louisiana (L1), and Advanced Virgo [45],
a 3-km detector located in Cascina, Pisa. Advanced Virgo
first joined the run at the beginning of August 2017, with
less sensitivity than the LIGO detectors, so we have not
considered its data for the search described in this paper.
A representative noise curve from O2 for each LIGO detector and a comparison to O1 are shown in Fig. 1. We can observe an improvement of the amplitude spectral density, and we can also observe that the spectra feature a number (greatly reduced compared to O1) of narrow lines and combs affecting several frequency bands, which contaminate the data and complicate the analysis, often raising outliers which look like the searched CW signals [46].
A cleaning procedure was applied to H1 data during
postprocessing in order to remove jitter noise and some
noise lines (more details are given in [47]). All of the searches in this paper used this cleaned dataset. The calibration of this dataset and its uncertainties in amplitude and phase are described in [48]. These searches do not use
all the data from the observing run, since times where the
detectors are known to be poorly behaving are removed
from the analysis. This means that the effective amount of
data used is smaller than the full duration of the run. As in
previous observing runs, several artificial signals (called
hardware injections) have been physically injected in the
detectors in order to test their response and to validate the
different pipelines. These hardware injections are described
in Secs. III, V, and in the Appendix.
III. SIGNAL MODEL
An asymmetric neutron star spinning around one of its
principal axes of inertia emits a CW signal at twice its
rotation frequency. This emission is circularly polarized
along the rotation axis and linearly polarized in the direc-
tions perpendicular to the rotation axis. The gravitational-
wave signal in the detector frame is given by [43]

h(t) = h_0 [ F_+(t) (1 + cos²ι)/2 · cos φ(t) + F_×(t) cos ι · sin φ(t) ],   (1)
where F_+(t) and F_×(t) are the antenna patterns of the detectors (which can be found in [43]), h_0 is the amplitude of the signal, ι is the inclination of the neutron star angular momentum vector with respect to the observer's sky plane, and φ(t) is the phase of the signal. The amplitude of the signal is given by

h_0 = (4π² G / c⁴) · (I_zz ε f²) / d,   (2)
where d is the distance from the detector to the source, f is the gravitational-wave frequency, ε is the ellipticity or asymmetry of the star, given by (I_xx − I_yy)/I_zz, and I_zz is the moment of inertia of the star with respect to the principal axis aligned with the rotation axis. These last two quantities are related to the mass quadrupole moment Q_22 of the star:
FIG. 1. Amplitude spectral density (ASD) √S_n for the L1 (left panel) and H1 (right panel) detectors during O1 (blue trace) and O2 (orange trace). The ASD is obtained by averaging over FFTs of 1800 s obtained for the entire run.
ε = √(8π/15) · Q_22 / I_zz.   (3)
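As an illustration of Eqs. (2) and (3), the following minimal Python sketch evaluates the expected strain; the fiducial values of I_zz, ε, f, and d are assumptions chosen for illustration, not parameters of this search.

```python
import math

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
C = 2.998e8      # speed of light [m/s]
KPC = 3.086e19   # one kiloparsec [m]

def h0_from_ellipticity(i_zz, eps, f_gw, d):
    """Strain amplitude of Eq. (2): h0 = 4 pi^2 G I_zz eps f^2 / (c^4 d)."""
    return 4.0 * math.pi**2 * G * i_zz * eps * f_gw**2 / (C**4 * d)

def ellipticity_from_q22(q22, i_zz):
    """Ellipticity of Eq. (3): eps = sqrt(8 pi / 15) Q22 / I_zz."""
    return math.sqrt(8.0 * math.pi / 15.0) * q22 / i_zz

# Fiducial star: I_zz = 1e38 kg m^2, eps = 1e-6, f_gw = 100 Hz, d = 1 kpc
print(h0_from_ellipticity(1e38, 1e-6, 100.0, KPC))   # ~1e-26
```

The resulting amplitude, of order 10⁻²⁶, is far below the detector noise level at any instant, which is why integration over months of data is required.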
We assume that the phase evolution of the gravitational-wave signal, which is locked to the evolution of the rotational frequency, can be approximated with a Taylor expansion (an assumption taken from electromagnetic observations of pulsars) around a fiducial reference time τ_r:

φ(τ) = φ_0 + 2π [ f_0 (τ − τ_r) + (ḟ / 2!) (τ − τ_r)² + ⋯ ],   (4)
where φ_0 is an initial phase and f_0 and ḟ are the frequency and first frequency derivative at the reference time. The relation between the time at the source τ and the time at the detector t is given by (neglecting relativistic effects like the Einstein and Shapiro delays)

τ(t) = t + r(t) · n̂ / c,   (5)
where r(t) is the vector which joins the Solar System barycenter (SSB) and the detector, and n̂ is the vector identifying the star's position in the SSB. From the previous formula the frequency evolution of the signal can be derived as

f(t) = (1/2π) dφ/dt ≃ f_0 + f_0 v(t) · n̂ / c + ḟ t.   (6)
The second term on the right-hand side of this equation describes the frequency modulation due to the Doppler effect produced by the Earth's rotation and its orbital motion around the SSB. This term, together with the spin-down/up of the source, must be properly taken into account when carrying out the search.
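As a toy illustration of Eq. (6), the sketch below replaces the true detector velocity projection v(t)·n̂/c with a single sinusoid of amplitude 10⁻⁴ (roughly the Earth's orbital speed in units of c); all numbers are assumptions for illustration, not values used by the pipelines.

```python
import numpy as np

YEAR = 365.25 * 86400.0   # seconds in one year

def observed_frequency(t, f0, fdot, v_dot_n_over_c):
    """Detector-frame frequency of Eq. (6): f(t) ~= f0 + f0 * v(t).n/c + fdot * t."""
    return f0 + f0 * v_dot_n_over_c + fdot * t

t = np.linspace(0.0, YEAR, 10000)
v_dot_n_over_c = 1e-4 * np.cos(2.0 * np.pi * t / YEAR)   # toy orbital Doppler projection
f = observed_frequency(t, f0=100.0, fdot=-1e-9, v_dot_n_over_c=v_dot_n_over_c)

# The Doppler term alone moves a 100 Hz signal by about +/-0.01 Hz over the year,
# i.e. across many of the ~1e-4 Hz coarse frequency bins of Table I; the spin-down
# adds a further secular drift fdot * t on top of it.
print(f.max() - f.min())
```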
Equations (4) through (6) assume that the neutron star is isolated. In case it is part of a binary system, the frequency evolution is complicated by the binary orbital motion, which introduces an additional frequency modulation. Such modulation, on a signal of frequency f and neglecting the orbital eccentricity of the binary, is given by (see [49])

Δf_orb ≃ (2π / P) a_p f,   (7)
where P is the binary orbital period and a_p is the projected orbital semimajor axis (in light-seconds). By imposing that the orbital frequency modulation is contained within a frequency bin δf = 1/T_FFT, where T_FFT is the duration of the data chunks which are incoherently combined in the analysis (see Sec. IV), we find that two of the search pipelines (FrequencyHough and SkyHough) used in this paper would be fully sensitive to a CW signal from a NS in a binary system if

a_p ≲ 0.076 (P / 1 day) (f / 100 Hz)⁻¹ (T_FFT / 1800 s)⁻¹ s.   (8)
For larger orbital frequency modulations the pipelines would start to lose signal-to-noise ratio, but a detailed study of this issue is outside the scope of this paper. Of the 259 pulsars in binary systems listed in the Australia Telescope National Facility catalogue, only six have such characteristics, although many undiscovered neutron stars in binary systems could also be part of systems with these properties.
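The condition of Eq. (8) is easy to evaluate for a given binary; in the sketch below the example orbit is hypothetical.

```python
def max_projected_semimajor_axis(p_days, f_hz, t_fft_s=1800.0):
    """Largest projected semimajor axis a_p [light-seconds] for which the orbital
    Doppler modulation of Eq. (7) stays within one frequency bin 1/T_FFT, Eq. (8)."""
    return 0.076 * p_days * (f_hz / 100.0) ** -1 * (t_fft_s / 1800.0) ** -1

# Hypothetical system: 10-day orbit, 150 Hz signal, 1800 s FFTs
print(max_projected_semimajor_axis(10.0, 150.0))   # ~0.51 light-seconds
```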
IV. DESCRIPTION OF THE SEARCH METHODS
In this section we introduce and summarize the three
different pipelines which have been used in this work.
A. FrequencyHough
The FrequencyHough pipeline consists of an initial multistep phase, in which interesting points (i.e., candidates) are selected in the signal parameter space, and of a subsequent follow-up stage to confirm or reject the candidates. A complete description of the method and of its fundamental implementation features is given in [41,50]. Upper limits are computed with a frequentist approach, by injecting a large number of simulated signals into the data. The pipeline has been previously used in all-sky searches of Virgo VSR2 and VSR4 data [36] and of LIGO O1 data [38].
1. Initial analysis steps
The starting point of the analysis is calibrated detector
data, used to create "short duration" fast Fourier transforms (FFTs) with coherence time depending on the frequency band being considered, according to Table I. Short time-domain disturbances are removed from the data before constructing the FFTs [51]. Next, a time-frequency map, called peakmap, is built by identifying local maxima (called peaks) above a dimensionless threshold θ_thr = 1.58 on the square root of the equalized power of the data [51] (the equalized power is defined as the ratio of the squared modulus of the FFT of the data and an autoregressive estimation of the power spectrum; see [51] for more details). The peakmap is cleaned using a line "persistency" veto [41], based on the projection of the peakmap onto the frequency axis and on the removal of the frequency bins in which the projection is higher than a given threshold.

TABLE I. Properties of the FFTs used in the FrequencyHough pipeline. The time duration T_FFT refers to the length in seconds of the data chunks on which the FFT is computed. The frequency bin width is the inverse of the time duration, while the spin-down/up bin width is computed as δḟ = δf/T_obs, where T_obs is the total run duration. In the analysis described in this paper only the first three bands have been considered; the last one will be analyzed in a future work. The spin-down/up range covered by the analysis is (+2 × 10⁻⁹ Hz/s, −1 × 10⁻⁸ Hz/s) up to 512 Hz and (+2 × 10⁻⁹ Hz/s, −2 × 10⁻⁹ Hz/s) from 512 Hz up to 1024 Hz.

Band [Hz]      T_FFT [s]   δf [Hz]         δḟ [Hz/s]
10–128         8192        1.22 × 10⁻⁴     5.26 × 10⁻¹²
128–512        4096        2.44 × 10⁻⁴     1.05 × 10⁻¹¹
512–1024       2048        4.88 × 10⁻⁴     2.10 × 10⁻¹¹
1024–2048      1024        9.76 × 10⁻⁴     4.20 × 10⁻¹¹
After defining a grid in the sky, with bin size depending on the frequency and sky location as detailed in [41], the time-frequency peaks are properly shifted, for each sky position, to compensate the Doppler effect due to the detector motion, see Eq. (6). They are then processed by the FrequencyHough algorithm [41,50], which transforms each peak to the frequency and spin-down/up plane of the source. The frequency and spin-down/up bins (which we will refer to as "coarse" bins in the following) depend on the frequency band, as indicated in Table I, and are defined, respectively, as δf = 1/T_FFT and δḟ = δf/T_obs, where T_obs = 268.37 days is the total run duration. In practice, as the transformation from the peakmap to the Hough plane is not computationally bounded by the width of the frequency bin, we have increased the nominal frequency resolution by a factor of 10 [41]. The algorithm, moreover, properly weights any noise nonstationarity and the time-varying detector response [52].
The FrequencyHough transform is computationally very demanding, so the analysis is split into tens of thousands of independent jobs, each of which computes a FrequencyHough transform covering a small portion of the parameter space. The output of a FrequencyHough transform is a 2D histogram in the frequency/spin-down plane of the source. Candidates for each sky location are selected by dividing each 1-Hz band of the corresponding FrequencyHough histogram into 20 intervals and taking, for each interval, the one or (in most cases) two candidates with the highest histogram number count. This allows us to avoid blinding by large disturbances in the data, as described in [41]. All the steps described so far are applied separately to the data of each detector involved in the analysis.
Following the same procedure used in [38], candidates from each detector are clustered and then coincident candidates among the clusters of the two detectors are found using a distance metric built in the four-dimensional parameter space of position (λ, β) (in ecliptic coordinates), frequency f and spin-down/up ḟ, defined as

d_FH = √[ (Δf/δf)² + (Δḟ/δḟ)² + (Δλ/δλ)² + (Δβ/δβ)² ],   (9)
where Δf, Δḟ, Δλ, and Δβ are the differences, for each parameter, between pairs of candidates of the two detectors, and δf, δḟ, δλ, and δβ are the corresponding bin widths. Pairs of candidates with distance d_FH < 3 are considered coincident. This value was chosen based on a study with software-simulated signals and allows us, on one hand, to reduce the false alarm probability and, on the other, to be sufficiently robust with respect to the fact that a signal can be found with slightly different parameters in the two detectors. Coincident candidates are subject to a ranking procedure, based on the value of a statistic built using the distance and the FrequencyHough histogram weighted number count of the coincident candidates, as described in [41]. In this analysis, after the ranking, the eight candidates in each 0.1-Hz band with the highest values of the statistic have been selected.
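The coincidence criterion can be summarized by the minimal sketch below; the sky bin widths are placeholders, since in the real analysis δλ and δβ depend on frequency and sky position as described in [41].

```python
import math

def coincidence_distance(cand1, cand2, bins):
    """Dimensionless distance of Eq. (9) between candidates of the two detectors.
    cand1, cand2, bins: dicts keyed by 'f', 'fdot', 'lambda', 'beta'."""
    return math.sqrt(sum(((cand1[k] - cand2[k]) / bins[k]) ** 2
                         for k in ('f', 'fdot', 'lambda', 'beta')))

# Placeholder bin widths (coarse bins of the 10-128 Hz band; sky bins illustrative)
bins = {'f': 1.22e-4, 'fdot': 5.26e-12, 'lambda': 1e-3, 'beta': 1e-3}
c_h1 = {'f': 123.45672, 'fdot': -2.0e-9, 'lambda': 1.2000, 'beta': 0.3000}
c_l1 = {'f': 123.45681, 'fdot': -2.0e-9, 'lambda': 1.2010, 'beta': 0.2995}
print(coincidence_distance(c_h1, c_l1, bins) < 3)   # kept as a coincident pair?
```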
2. Candidate follow-up
Candidates passing the ranking selection are followed up in order to confirm them as potential CW signals or to discard them, if due to noise fluctuations or detector disturbances. The follow-up consists of several steps, as described in [36]. An important implementation novelty we have introduced in the O2 analysis is the use of the Band Sampled Data framework (BSD) [53], which allows a flexible and computationally efficient management of the data. For each of the N candidates selected by the ranking procedure, a fully coherent search is done using down-sampled data from both detectors, covering a band of the order of 0.2 Hz around the candidate frequency. The coherent search is done by applying a Doppler and spin-down/up correction based on the parameters of the candidate. Although the coherent search corrects exactly for the Doppler and spin-down/up effect at a particular point in the parameter space, corresponding to the candidate, the correction is extended by linear interpolation to the neighbors of the candidate itself. In practice, this means that from the corrected and down-sampled time series, a new set of FFTs
is built, with a longer duration (by a varying factor E, depending on the frequency band; see Table II), as well as the corresponding peakmap.

TABLE II. Properties of the FFTs used in the FrequencyHough follow-up step. The second column is the increased FFT duration, the third is the enhancement factor E with respect to the original duration. The fourth and fifth columns show, respectively, the new frequency and spin-down/up bins, while the sixth is the estimated sensitivity gain G. The new durations have been chosen in such a way as to avoid the effect of the sidereal modulation, which produces a spread of the signal power in frequency sidebands [54]. For the third band we have used a shorter duration due to computer memory constraints. The last band, from 1024 to 2048 Hz, has not been considered in this work.

Band [Hz]     FFT duration [s]   E   δf [Hz]         δḟ [Hz/s]        G
10–128        24600              3   4.07 × 10⁻⁵     1.75 × 10⁻¹²     1.39
128–512       24576              6   4.07 × 10⁻⁵     1.75 × 10⁻¹²     1.65
512–1024      8192               4   1.22 × 10⁻⁴     5.26 × 10⁻¹²     1.49

Peaks are selected using a threshold θ_thr = 1.87, bigger than the initial one (see Sec. IV A). As explained in [36], this is a conservative choice
which provides a sensitivity gain and, at the same time, reduces the computational cost of the follow-up. As a result of the FFT length increase and of the new threshold for the selection of the peaks, by using Eq. (67) of [41], which is valid under the assumption of Gaussian noise, we estimate a sensitivity gain G for the detectable h_0 in the follow-up, shown in the last column of Table II. A small area, centered around the candidate position, is considered for the follow-up. It covers ±3 coarse bins, which amounts to seven bins in each dimension, and thus 49 coarse sky positions for each candidate. A refined sky grid is built over this area, with an actual number of points which depends on the frequency band and is, on average, given by 49 E². For each sky position in this refined grid, we evaluate the residual Doppler modulation (with respect to the center of the grid), which is corrected in the peakmap by properly shifting each peak. The FrequencyHough transform of the resulting ensemble of corrected peakmaps is computed over frequency and spin-down/up ranges covering ±3 coarse bins around the candidate values. As before, an over-resolution factor of 10 is used for the frequency. The absolute maximum identified over all the Hough maps provides the refined parameters of the candidate we are considering. Next, to each pair of candidates from the two detectors we apply a series of vetoes, as detailed in the following.
First, we remove the candidates whose median frequency (computed over the full observing time), after the removal of the Doppler and spin-down/up correction, overlaps a known noise line frequency band (i.e., a line due to a detector disturbance whose instrumental origin has been understood). Second, for each detector a new peakmap is computed using the data coherently corrected with the refined parameters of the candidate and then projected on the frequency axis. We take the maximum A_p of this projection in a range of ±2 coarse bins around the candidate frequency. We divide the rest of the 0.2 Hz band (which we consider the "off-source" region) into about 250 intervals of the same width, take the maximum of the peakmap projection in each of these intervals, and sort all these maxima in decreasing order. We tag the candidate as "interesting" and keep it if it ranks first or second in this list for both detectors. Surviving candidates are then subject to a consistency test based on their critical ratio, defined as CR = (A_p − μ_p)/σ_p, where μ_p and σ_p are the mean and standard deviation of the peakmap projection on the off-source region. Pairs of coincident candidates are removed if their CRs, properly weighted by the detector noise level at the candidate frequency, differ by more than a factor of 5. Further details on O2 outlier selection and properties are given in Sec. V A.
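A compact sketch of the critical-ratio consistency test follows; the peakmap projection used here is synthetic and stands in for the real data product.

```python
import numpy as np

def critical_ratio(projection, on_source, off_source):
    """CR = (A_p - mu_p) / sigma_p, with A_p the maximum of the peakmap projection
    over the on-source bins and mu_p, sigma_p taken from the off-source region."""
    a_p = projection[on_source].max()
    mu_p = projection[off_source].mean()
    sigma_p = projection[off_source].std()
    return (a_p - mu_p) / sigma_p

rng = np.random.default_rng(1)
proj = rng.exponential(1.0, size=2000)   # synthetic peakmap projection
proj[1000:1005] += 8.0                   # a loud candidate in the on-source bins
on = slice(1000, 1005)
off = np.r_[0:1000, 1005:2000]

cr_h1 = critical_ratio(proj, on, off)
cr_l1 = 0.9 * cr_h1                      # pretend the other detector gives a similar CR
print(max(cr_h1 / cr_l1, cr_l1 / cr_h1) < 5)   # passes the factor-of-5 consistency veto
```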
3. Upper limit computation
Upper limits are computed in each 1-Hz band between 20 and 1000 Hz by injecting software-simulated signals, with the same procedure used in [36]. For each 1-Hz band, 20 sets of 100 signals each are generated, with fixed amplitude within each set and random parameters (sky location, frequency, spin-down/up, and polarization parameters). These are generated in the time domain and then added to the data of both detectors in the frequency domain. For each injected signal in a set of 100, an analysis is done using the FrequencyHough pipeline over a frequency band of 0.1 Hz around the injection frequency, the full spin-down/up range used in the real analysis, and nine sky points around the injection position [36]. Candidates are selected as in the real analysis, except that no clustering is applied, as it would have been affected by the presence of too many signals. We note, however, that clustering is used in the analysis only to reduce the computational cost and does not affect the subsequent steps. After coincidences and ranking, candidates also coincident with the injected signal parameters, within the follow-up volume discussed in Sec. IV A 2, are selected. Those having a critical ratio larger than the largest critical ratio found in the real analysis in the same 1-Hz band are counted as detections. For each 1-Hz band, we build the detection efficiency curve, defined as the fraction of detected signals as a function of their amplitude. The upper limit is given by the signal amplitude such that the detection efficiency is 95%. In practice, a fit is used in order to interpolate the detection efficiency curve, as described in [38].
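The upper-limit logic reduces to fitting the measured detection efficiencies and inverting the fit at 95%; in the sketch below the injected amplitudes, efficiencies, and the sigmoid fit form are placeholders (the actual fit is the one described in [38]).

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(h0, h50, width):
    """Smooth model of the detection-efficiency curve versus injected amplitude."""
    return 1.0 / (1.0 + np.exp(-(h0 - h50) / width))

# Injected amplitudes (in units of 1e-26) for the 20 sets and placeholder efficiencies
amps = np.linspace(1.0, 20.0, 20)
eff = np.clip((amps - 4.0) / 12.0, 0.0, 1.0)
(h50, width), _ = curve_fit(sigmoid, amps, eff, p0=[8.0, 2.0])

# Amplitude at which the fitted efficiency reaches 95%: the 95% C.L. upper limit
h95 = (h50 + width * np.log(0.95 / 0.05)) * 1e-26
print(h95)
```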
B. SkyHough
The SkyHough method has been used in other searches using data from the Initial LIGO S2, S4, and S5 and Advanced LIGO O1 observing runs [27,28,32,38,40]. Its main description is given in [42]. Here we summarize its main characteristics and the new features that have been implemented in this search. The code for the main part of the search is called lalapps_DriveHoughMulti and is part of the publicly available LALSuite package [55].
1. Initial analysis steps
This pipeline uses short Fourier transforms (SFTs) of the time-domain data h(t) as its input, with the coherent duration of each chunk varying as a function of the searched frequency (as shown in Table III). It creates peak-grams, which are spectrograms with the normalized power substituted by 1s (if the power is above a certain threshold ρ_t = 1.6) and 0s, where the normalized power in a frequency bin is defined as

ρ_k = |x̃_k|² / ⟨n_k⟩²,   (10)

where ⟨n_k⟩² is estimated with a running median over 101 frequency bins.

TABLE III. Coherent times and number of SFTs for each frequency range searched by the SkyHough pipeline. The last column shows the number of SFTs per dataset, and the numbers in parentheses the SFTs used at the second step of the search.

Frequency [Hz]    T_c [s]    N_SFT
[50, 300)         3600       2544 (4755)
[300, 550)        2700       3460 (6568)
[550, 1300)       1800       5283 (10195)
[1300, 1500)      900        10801 (21200)
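Returning to the peak-gram construction of Eq. (10), a minimal sketch for a single SFT is given below; the running-median normalization (including the ln 2 bias factor) is a standard assumption and may differ in detail from the pipeline implementation, and the data are synthetic.

```python
import numpy as np
from scipy.ndimage import median_filter

def peakgram_row(sft_power, rho_threshold=1.6, window=101):
    """Binarize one SFT: 1 where the normalized power of Eq. (10) exceeds rho_t,
    0 elsewhere.  The noise level is a running-median estimate; dividing by ln 2
    converts the median of exponentially distributed power into a mean."""
    noise = median_filter(sft_power, size=window, mode='nearest') / np.log(2.0)
    rho = sft_power / noise
    return (rho > rho_threshold).astype(np.uint8)

rng = np.random.default_rng(0)
power = rng.exponential(1.0, size=4096)   # synthetic SFT power in pure noise
power[2048] += 20.0                       # plus one narrow spectral feature
row = peakgram_row(power)
print(row.mean(), row[2048])              # ~e^-1.6 ~ 0.2 of bins selected; feature kept
```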
We use the Hough transform to track the time-frequency
evolution of the signal including the Doppler modulation
of the signal at the detector. In the first stage the pipeline employs a look-up table (LUT) approach, taking into account that at a given time the same Doppler modulation is produced by an annulus of sky positions (given by Δθ), which corresponds to the width of a frequency bin Δf:

cos Δθ = [c / v(t)] · [f(t) − f̂(t)] / f̂(t) = [c / v(t)] · Δf / f̂(t),   (11)
where f(t) is the observed frequency at the detector and f̂ = f_0 + f_1 t is the searched frequency. The algorithm tracks the sky positions which produce observed frequencies with powers above the threshold. It then stacks these sky positions by following the evolution of the source frequency given by the spin-down/up term at different time stamps and produces a final histogram. The LUT approach reuses the same Doppler modulation pattern for different search frequencies (more details in [42]), which produces computational savings in exchange for not following the exact frequency-time pattern.
For each template (described by f_0, ḟ, α, δ) being searched, a detection statistic called the number count significance is calculated:

s_n = (n − ⟨n⟩) / σ_n,   (12)

where ⟨n⟩ and σ_n are the expected mean and standard deviation of the Hough number count n when only noise is present. The number count n is the weighted sum of 1s and 0s, where the weights (which are proportional to the antenna pattern functions and inversely proportional to the power spectral density) were derived in [56].
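A sketch of the weighted number count and its significance, Eq. (12), under a standard binomial noise model; the single-bin selection probability p0 = e^(−ρ_t) assumes Gaussian noise, and the weights here are placeholders for those derived in [56].

```python
import numpy as np

def number_count_significance(peaks, weights, rho_threshold=1.6):
    """Significance of Eq. (12) for the weighted number count n = sum(w_i p_i),
    p_i in {0, 1}.  In pure Gaussian noise each bin is selected with probability
    p0 = exp(-rho_t), so <n> = p0 sum(w_i) and sigma_n^2 = p0 (1 - p0) sum(w_i^2)."""
    p0 = np.exp(-rho_threshold)
    n = np.dot(weights, peaks)
    mean = p0 * weights.sum()
    sigma = np.sqrt(p0 * (1.0 - p0) * np.sum(weights ** 2))
    return (n - mean) / sigma

rng = np.random.default_rng(2)
weights = rng.uniform(0.5, 1.5, size=5000)               # stand-ins for antenna/PSD weights
peaks = (rng.random(5000) < np.exp(-1.6)).astype(float)  # pure-noise peak-gram column
print(number_count_significance(peaks, weights))         # ~0 for noise
```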
The parameter space is separated into 0.1-Hz frequency bands and small sky patches. A toplist is calculated for each of these regions, containing the top templates ordered by the number count significance. For the top templates, a second step is performed where instead of using the look-up table approach the exact frequency path is tracked. At this second step the power significance is also calculated, which is defined as

s_P = (P − ⟨P⟩) / σ_P,   (13)

where instead of summing weighted 1s and 0s the weighted normalized powers given by Eq. (10) are summed. This detection statistic improves the sensitivity of SkyHough with a very small increase in computational cost. The best 5000 templates per sky patch and 0.1-Hz band are passed to the second step, and only the best 1000 candidates per sky patch and 0.1-Hz band are used for the postprocessing. Furthermore, at the second step more SFTs are used than in the first step. This is achieved by sliding the initial times of each SFT that was used at the first step, obtaining more SFTs (approximately twice the previous amount), all of them of T_c contiguous seconds.
The grid resolution was obtained in [42] and is given by

δf = 1 / T_c,   (14)

δḟ = 1 / (T_c T_obs),   (15)

δθ = c / (v T_c f P_F),   (16)

where P_F is a parameter which controls the sky grid resolution. In this search we have set P_F = 2 for all frequencies.
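The first-stage grid spacings of Eqs. (14)-(16) can be evaluated directly; the value used below for the velocity v (the Earth's mean orbital speed) is an assumption made for illustration.

```python
C = 2.998e8      # speed of light [m/s]
V_ORB = 2.98e4   # mean orbital speed of the Earth [m/s], used here as v

def skyhough_resolutions(t_c, f_hz, t_obs_s, pixel_factor=2.0):
    """Grid spacings of Eqs. (14)-(16): frequency, spin-down/up, and sky angle."""
    df = 1.0 / t_c
    dfdot = 1.0 / (t_c * t_obs_s)
    dtheta = C / (V_ORB * t_c * f_hz * pixel_factor)   # radians
    return df, dfdot, dtheta

# 1800 s SFTs at 1000 Hz with P_F = 2 over the ~268.37-day O2 run
print(skyhough_resolutions(1800.0, 1000.0, 268.37 * 86400.0))
```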
2. Postprocessing
The postprocessing consists of several steps:
(1) The output of the main SkyHough search is one toplist for each dataset (there are two datasets, each one with data from two detectors, detailed in Table V) and each region in parameter space. We search for coincident pairs between these toplists, by calculating the distance in parameter space and selecting the pairs which are closer than a certain threshold called d_co. For the coincident pairs the centers (average locations in parameter space weighted by significance) are calculated. The parameter space distance is calculated as
d_SH = √[ (Δf/δf)² + (Δḟ/δḟ)² + (Δx/δx)² + (Δy/δy)² ],   (17)
where the quantities in the numerators represent the differences between two templates and those in the denominators represent the parameter resolutions (this distance is unitless and is given as a number of bins). The parameters x and y are the Cartesian ecliptic coordinates projected onto the ecliptic plane.
(2) Search for clusters in the obtained list of centers. This will group different outliers which can be ascribed to a unique physical cause, and will reduce the size of the final toplist. Again, we set a threshold in parameter space distance (called d_cl) and we find candidates which are closer than this distance (a minimal clustering sketch is given after this list).
(3) Finally, we calculate the centers of the clusters. This is done as a weighted (by significance) average, taking into account all the members of the cluster. We order the obtained clusters in each 0.1-Hz band by the sum of the power significances of all their members, and we select the highest ranked cluster per 0.1-Hz band, if any. This produces the final list of clusters with their parameters (f_0, ḟ, α, δ), which will be the outliers to be followed up.
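As referenced in step (2), the clustering amounts to a friends-of-friends grouping under the metric of Eq. (17); a minimal sketch is given below, with the distance function and d_cl passed in since their tuning is specific to the search.

```python
import numpy as np

def cluster_centers(centers, significances, d_cl, distance):
    """Group coincidence centers so that any two centers closer than d_cl (in bins)
    end up in the same cluster, then summarize each cluster by the sum of its
    significances and the significance-weighted average of its parameters."""
    n = len(centers)
    parent = list(range(n))

    def find(i):                      # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if distance(centers[i], centers[j]) < d_cl:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)

    summaries = []
    for members in groups.values():
        w = np.asarray([significances[k] for k in members])
        pts = np.asarray([centers[k] for k in members], dtype=float)
        summaries.append((w.sum(), (w[:, None] * pts).sum(axis=0) / w.sum()))
    return summaries   # [(summed significance, weighted-average parameters), ...]
```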
3. Follow-up
We describe a follow-up method based on the F-statistic (described in more detail in Sec. IV C) and the GCT metric method [57]. This method uses the lalapps_HierarchSearchGCT code, part of the publicly available LALSuite [55], and it is similar in spirit to the multistep follow-up methods described in [58] or [59].
The goal is to compare the F-statistic values obtained from software-injected signals to the F-statistic values obtained from the outliers. We want to compare the F-statistic obtained at different stages which scale up the coherent time. It is expected that for an astrophysical signal the F-statistic value should increase if the coherent time increases. The resolution in parameter space is given by [57]
δf = √(12 m) / (π T_c),   (18)

δḟ = √(720 m) / (π T_c² γ),   (19)

δθ = √(m_sky) / (π f τ_e),   (20)
where m and m_sky are mismatch parameters, γ is a parameter which gives the refinement between the coherent and semicoherent stages, and τ_e = 0.021 s represents the light travel time from the detector to the center of the Earth.
We now enumerate the different steps of the procedure:
(1) Calculate the semicoherent F-statistic of outliers with T_c = 7200 s in a cluster box.
(2) Add injections to the original data using a sensitivity depth (√S_n / h_0) value which returns F-statistic values similar to the values obtained with the outliers, in order to compare similar signals. We inject signals in eight different frequency bands with 200 injections per band, with a sensitivity depth of 42 Hz⁻¹/². Then, search in a small region (around ten bins in each dimension) around the true parameters of the injections with T_c = 7200 s. Finally, analyze the distances in parameter space from the top candidates to the injections.
(3) Repeat the previous step increasing the coherent time to T_c = 72 000 s.
(4) Calculate F_72000s / F_7200s for each of the 1600 injections using the top candidates. The threshold will be the minimum value.
(5) Calculate the F-statistic values of outliers with T_c = 72 000 s around the top candidate from the first stage. The size of the window to be searched is estimated from the distances found in step 2.
(6) Calculate F_72000s / F_7200s using the top candidate for all outliers. Outliers with values higher than the threshold obtained in step 4 go to the next comparison, and the process is repeated from step 2 increasing the coherent time.
Table IV summarizes the parameters that we have chosen at each stage. As we will see in the Results section, only two comparisons between three different stages were needed.

TABLE IV. Coherent times and mismatch parameters at each stage of the SkyHough follow-up.

Stage index    T_c [s]     m      m_sky
I              7200        0.1    0.01
II             72 000      0.1    0.003
III            720 000     0.1    0.0005
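For concreteness, the follow-up grid spacings of Eqs. (18)-(20) can be evaluated at the Stage I parameters of Table IV for a hypothetical 100 Hz outlier; the refinement γ is set to 1 here purely for illustration.

```python
import math

TAU_E = 0.021   # light travel time from the detector to the center of the Earth [s]

def gct_resolutions(t_c, f_hz, m=0.1, m_sky=0.01, gamma=1.0):
    """Grid spacings of Eqs. (18)-(20) used in the SkyHough follow-up."""
    df = math.sqrt(12.0 * m) / (math.pi * t_c)
    dfdot = math.sqrt(720.0 * m) / (math.pi * t_c ** 2 * gamma)
    dtheta = math.sqrt(m_sky) / (math.pi * f_hz * TAU_E)
    return df, dfdot, dtheta

# Stage I of Table IV: Tc = 7200 s, m = 0.1, m_sky = 0.01
print(gct_resolutions(7200.0, 100.0))
```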
C. Time-domain F-statistic
The Time-domain F-statistic search method uses the algorithms described in [33,43,60,61] and has been applied to an all-sky search of VSR1 data [33] and to all-sky searches of the LIGO O1 data [38,40]. The main tool is the F-statistic [43], by which one can coherently search the data over a reduced parameter space consisting of the signal frequency, its derivatives, and the sky position of the source. The F-statistic eliminates the need for a grid search over the remaining parameters [see Eqs. (1) and (4)], in particular the inclination angle ι and the polarization ψ (together with the amplitude h_0 and the initial phase φ_0). Once a signal is identified, the estimates of those four parameters are obtained from analytic formulas.
However, a coherent search over the whole LIGO O2
dataset is computationally prohibitive and we need to apply
a semicoherent method, which consists of dividing the data
into shorter time domain segments. The short time domain
data are analyzed coherently with the
F
-statistic. Then the
output from the coherent search from time domain seg-
ments is analyzed by a different, computationally manage-
able method. Moreover, to reduce the computer memory
required to do the search, the data are divided into narrow-
band segments that are analyzed separately. Thus our
search method consists primarily of two parts. The first
part is the coherent search of narrow-band, time-domain
segments. The second part is the search for coincidences
among the candidates obtained from the coherent search.
The pipeline is described in Sec. IV of [38] (see also Fig. 13 of [38] for the flow chart of the pipeline). The same
pipeline is used for the search of LIGO O2 data presented
here except that a number of parameters of the search are
different. The choice of parameters was motivated by
the requirement to make the search computationally
manageable.
As in our O1 searches, the data are divided into overlapping frequency subbands of 0.25 Hz. We analyze three frequency bands: [20–100] Hz, [100–434] Hz, and [1518–1922] Hz. As a result, the three bands have 332, 1379, and 1669 frequency subbands, respectively.
The time series is divided into segments, called frames, of 24 sidereal days, six days, and two days, respectively, for the three bands. Consequently, in each band we have 11, 44, and 134 time frames, respectively.
The O2 data have a number of nonscience data segments. The values of these bad data are set to zero. For this analysis, we choose only segments that have a fraction of bad data of less than 1/2 in both the H1 and L1 data. This requirement results in eight 24-day-long, twenty-six six-day-long, and seventy-nine two-day-long data segments for the three bands, respectively. These segments are analyzed coherently using the F-statistic defined by Eq. (9) of [33]. We set a fixed threshold for the F-statistic of F_0 = 16 and record the parameters of all threshold crossings, together with the corresponding values of the signal-to-noise ratio ρ,

ρ = √(2(F − 2)).   (21)
Parameters of the threshold crossing constitute a candi-
date signal. At this first stage we also veto candidate signals
overlapping with the instrumental lines identified by
independent analysis of the detector data.
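For reference, Eq. (21) maps the fixed threshold F_0 = 16 to a signal-to-noise ratio of about 5.3:

```python
import math

def snr_from_fstat(f_stat):
    """Signal-to-noise ratio of Eq. (21): rho = sqrt(2 (F - 2))."""
    return math.sqrt(2.0 * (f_stat - 2.0))

print(snr_from_fstat(16.0))   # ~5.29 for the F0 = 16 threshold
```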
For the search we use a four-dimensional grid of templates (parametrized by frequency, spin-down/up, and two more parameters related to the position of the source in the sky) constructed in Sec. IV of [61]. For the low-frequency band [20–434] Hz we choose a grid with minimal match MM = √3/2, whereas for the high-frequency band [1518–1922] Hz we choose a looser grid with MM = 1/2.
In the second stage of the analysis we search for coincidences among the candidates obtained in the coherent part of the analysis. We use exactly the same coincidence search algorithm as in the analysis of VSR1 data, described in detail in Sec. VIII of [33]. We search for coincidences in each of the subbands analyzed. To estimate the significance of a given coincidence, we use the formula for the false alarm probability derived in the Appendix of [33]. Sufficiently significant coincidences are called outliers and are subjected to further investigation.
The sensitivity of the search is estimated by the same procedure as in the O1 data analysis ([38], Sec. IV). The sensitivity is taken to be the amplitude h_0 of the gravitational-wave signal that can be confidently detected. We perform the following Monte Carlo simulations. For a given amplitude h_0, we randomly select the other seven parameters of the signal: f, ḟ, α, δ, φ_0, ι, and ψ. We choose the frequency and spin-down/up parameters uniformly over their range, and source positions uniformly over the sky. We choose the angles φ_0 and ψ uniformly over the interval [0, 2π] and cos ι uniformly over the interval [−1, 1]. We add the signal with the selected parameters to the O2 data. Then the data are processed through our pipeline. First, we perform a coherent F-statistic search of each of the data segments where the signal was added. Then the coincidence analysis of the candidates is performed. The signal is considered to be detected if it is coincident in more than five of the eight time frames analyzed, 14 out of 26, and 40 out of 79 for the three bands, respectively. We repeat the simulations 100 times. The ratio of the number of cases in which the signal is detected to the 100 simulations performed for a given h_0 determines the frequentist sensitivity upper limits.
We determine the sensitivity of the search in each 0.25-Hz frequency subband separately. The 95% confidence upper limits for the whole range of frequencies are given in Fig. 12; they follow very well the noise curves of the O2 data that were analyzed. The sensitivity of the search decreases with the decreasing coherence time used for the three bands of our F-statistic search. Additionally, it decreases in our high-frequency band because of the looser grid used there compared with the low-frequency bands.
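The structure of this Monte Carlo procedure is sketched below for the low-frequency band; the per-frame detection model (an exponential with a 4 × 10⁻²⁶ scale) is an arbitrary stand-in for the full F-statistic plus coincidence pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)

def detection_fraction(h0, n_trials=100, n_frames=8, frames_required=6):
    """Fraction of Monte Carlo injections recovered: a signal counts as detected when
    it is coincident in at least `frames_required` of the `n_frames` time frames."""
    p_frame = 1.0 - np.exp(-h0 / 4e-26)          # toy per-frame detection probability
    coincident = rng.random((n_trials, n_frames)) < p_frame
    return np.mean(coincident.sum(axis=1) >= frames_required)

# Scan amplitudes until 95% of the injections are recovered
for h0 in np.linspace(1e-26, 3e-25, 60):
    if detection_fraction(h0) >= 0.95:
        print(h0)
        break
```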
V. RESULTS
In this section we detail the results obtained. The region in frequency and first frequency derivative searched by each of the three different pipelines is shown in Fig. 2. Although no detections have been made, we give details on the different procedures and the outliers which were found, and we also present 95% confidence level (C.L.) upper limits on the strain h_0 given by Eq. (2), shown in Fig. 3.
FIG. 2. Regions in frequency and first frequency derivative
covered by each pipeline.