A Geodesy- and Seismicity-Based Local Earthquake Likelihood Model for Central Los Angeles

Chris Rollins¹,² and Jean-Philippe Avouac¹

¹Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA, USA
²Now at Department of Earth and Environmental Sciences, Michigan State University, East Lansing, MI, USA
Abstract

We estimate time-independent earthquake likelihoods in central Los Angeles using a model of interseismic strain accumulation and the 1932–2017 seismic catalog. We assume that on the long-term average, earthquakes and aseismic deformation collectively release seismic moment at a rate balancing interseismic loading, mainshocks obey the Gutenberg-Richter law (a log-linear magnitude-frequency distribution [MFD]) up to a maximum magnitude and a Poisson process, and aftershock sequences obey the Gutenberg-Richter and Båth laws. We model a comprehensive suite of these long-term systems, assess how likely each system would be to have produced the MFD of the instrumental catalog, and use these likelihoods to probabilistically estimate the long-term MFD. We estimate Mmax = 6.8 +1.05/−0.4 (every ~300 years) or Mmax = 7.05 +0.95/−0.4 assuming a truncated or tapered Gutenberg-Richter MFD, respectively. Our results imply that, for example, the (median) likelihood of one or more Mw ≥ 6.5 mainshocks is 0.2% in 1 year, 2% in 10 years, and 18–21% in 100 years.
Plain Language Summary

We develop a method to estimate the long-term-average earthquake hazard in a region and apply it to central Los Angeles. We start from an estimate of how quickly faults are being loaded by the gradual bending of the crust and assume that on the long-term average, they should release strain in earthquakes at this same total loading rate. We then use a well-established rule that for every Mw > 7 earthquake, there are about ten Mw > 6 earthquakes, a hundred Mw > 5 earthquakes, and so on (with some variability from an exact 1–10–100 slope), and we assume that there is a maximum magnitude that earthquakes do not exceed. We use these constraints to build long-term earthquake rate models for central LA and then evaluate each model by assessing whether an earthquake system obeying it would have produced the relative rates of small, moderate, and large earthquakes in the 1932–2017 earthquake catalog. We estimate a maximum magnitude of Mw = 6.8 +1.05/−0.4 (every ~300 years) or Mw = 7.05 +0.95/−0.4 in central LA depending on specific assumptions. Our results imply that, for example, the median likelihood of one or more Mw ≥ 6.5 mainshocks in central LA is 0.2% in 1 year, 2% in 10 years, and 18–21% in 100 years.
1. Introduction

The transpressional Big Bend of the San Andreas Fault (Figure 1c) induces north-south tectonic shortening across Los Angeles (LA) that is released in thrust earthquakes such as the damaging 1971 Mw ~ 6.7 Sylmar, 1987 Mw ~ 5.9 Whittier Narrows, and 1994 Mw = 6.7 Northridge shocks (e.g., Dolan et al., 1995). Paleoseismologic studies have also found evidence of possible Holocene Mw ≥ 7.0 earthquakes on several thrust faults in greater LA (Leon et al., 2007, 2009; Rubin et al., 1998; Figure 1a). In principle, one can quantify the likelihoods of future earthquakes on these faults by using geodetic data to assess how quickly elastic strain is accumulating on them and employing the elastic rebound hypothesis (Field et al., 2015; Reid, 1910), which implies that they should release strain at this same rate on the long-term average. The strain accumulation can also be expressed as a deficit of seismic moment, which can be assumed to be balanced over the long term by the moment released in earthquakes and aseismic slip (Avouac, 2015; Brune, 1968; Molnar, 1979). This approach has found use in several regional and global studies (e.g., Hsu et al., 2016; Michel et al., 2018; Rong et al., 2014; Shen et al., 2007; Stevens & Avouac, 2016, 2017).

Applying this approach to LA is challenging, in part because the task of assessing strain buildup rates encounters several unique hurdles there: some of the thrust faults are blind (do not break the surface), obscuring strain accumulation on them (Lin & Stein, 1989; Shaw & Suppe, 1996; Stein & Yeats, 1989); the geodetic data are affected by deformation related to aquifer and oil use (Argus et al., 2005; Riel et al., 2018); and central LA sits atop a deep sedimentary basin that introduces a first-order elastic heterogeneity (Shaw et al., 2015).
Key Points:
- We develop a method to probabilistically estimate long-term earthquake likelihoods using a strain buildup model and a seismic catalog
- We infer that the maximum-magnitude earthquake in central Los Angeles is Mw = 6.8 +1.05/−0.4 or Mw = 7.05 +0.95/−0.4 depending on assumptions
- Our results can be used, for example, to estimate the probability of having an earthquake of or exceeding any magnitude in any timespan
Supporting Information:
Supporting Information S1

Correspondence to:
C. Rollins, rollin32@msu.edu

Citation:
Rollins, C., & Avouac, J.-P. (2019). A geodesy- and seismicity-based local earthquake likelihood model for central Los Angeles. Geophysical Research Letters, 46, 3153–3162. https://doi.org/10.1029/2018GL080868
Received 10 OCT 2018
Accepted 21 FEB 2019
Accepted article online 27 FEB 2019
Published online 21 MAR 2019
Figure 1. (a) N-S shortening, seismic moment deficit buildup, and earthquakes in central LA. The blue arrows (translucent in the study area) are Global Positioning System velocities relative to the San Gabriel Mountains, corrected for anthropogenic deformation and interseismic locking on the San Andreas system (Argus et al., 2005). Paleoearthquakes on the Sierra Madre, Puente Hills, and Compton faults are respectively from Rubin et al. (1998) and Leon et al. (2007, 2009). The color shading is the geodetically inferred distribution of moment deficit buildup rate associated with these three faults (Rollins et al., 2018). The study area is defined by the three faults and an inferred master décollement (thin dashed lines). The 1932–2017 earthquake locations and magnitudes are from the Southern California Earthquake Data Center catalog. The 1933 Long Beach and 1971 Sylmar earthquakes and their aftershocks (brown circles) occurred on the periphery of the study area. The black lines are upper edges of faults, and the dashed lines are for blind faults. Faults: SGF, San Gabriel; SSF, Santa Susana; VF, Verdugo; CuF, Cucamonga; ADF, Anacapa-Dume; SMoF, Santa Monica; HF, Hollywood; RF, Raymond; UEPF, Upper Elysian Park; ChF, Chino; WF, Whittier; NIF, Newport-Inglewood; PVF, Palos Verdes; SPBF, San Pedro Basin. (b) Probability density function (PDF) of moment deficit buildup rate from Rollins et al. (2018). Folding updip of the Puente Hills and Compton faults is assumed anelastic; if it were elastic, the PDF would be the red curve. (c) Tectonic setting. The arrow pairs show slip senses of major faults. The offshore arrow is Pacific Plate velocity relative to the North American plate (Kreemer et al., 2014). SB, Santa Barbara; LA, Los Angeles; SD, San Diego. Faults: GF, Garlock; SJF, San Jacinto; EF, Elsinore.
In recent work, Rollins et al. (2018) addressed these three challenges and modeled the north-south shortening as resulting from interseismic strain buildup on the upper sections of the north-dipping Sierra Madre, Puente Hills, and Compton thrust faults (Figure 1a), implying that a deficit of seismic moment accrues at a total rate of 1.6 +1.3/−0.5 × 10^17 Nm/year (Figure 1b). This model assumes that deformation updip of the blind Compton and Puente Hills faults is anelastic and aseismic; the total moment deficit buildup rate would be 2.4 +1.3/−0.6 × 10^17 Nm/year if this deformation were instead elastic (Figure 1b), but this seems unlikely in view of the depth distribution of seismicity (Rollins et al., 2018). The 1.6 × 10^17 Nm/year moment deficit could be all released by a Mw = 7.0 earthquake every 240 years, for example, but this cannot form a basis for seismic hazard assessment as (1) the choice of magnitude is arbitrary and (2) it overlooks the contributions of smaller (and possibly larger) events and aseismic slip. Here we develop a probabilistic estimate of long-term-average earthquake likelihoods by magnitude in central LA that accounts for these factors, using the moment deficit buildup rate and the seismic catalog.
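As a quick illustration of this back-of-the-envelope balance, the recurrence interval of a single repeating earthquake that would consume the full deficit is simply its moment divided by the buildup rate. A minimal sketch, assuming the Hanks and Kanamori (1979) moment-magnitude relation (the constant 9.1 is our assumption, not a value taken from this paper):

```python
def seismic_moment(mw: float) -> float:
    """Seismic moment in N m, assuming M0 = 10**(1.5*Mw + 9.1) (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

moment_deficit_rate = 1.6e17  # N m per year (Rollins et al., 2018)
recurrence_years = seismic_moment(7.0) / moment_deficit_rate
print(f"One Mw 7.0 every {recurrence_years:.0f} years")  # ~250 years; ~240 with a slightly different constant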
2. Moment Buildup Versus Release in Earthquakes

We first assess whether this moment deficit has been balanced by the collective moment release in small, moderate, and large earthquakes over the period of the instrumental catalog (e.g., Meade & Hager, 2005; Stevens & Avouac, 2016). We use locations and magnitudes from the 1932–2017 Southern California Earthquake Data Center (SCEDC) catalog within a study area defined by the geometries of the Sierra Madre, Puente Hills, and Compton faults and an inferred master décollement (Fuis et al., 2001; Shaw et al., 2015; Rollins et al., 2018; Figure 1a, thin dashed lines). The 1933 Mw ~ 6.4 Long Beach and 1971 Mw ~ 6.7 Sylmar earthquakes occurred on the edges of the study area (Figure 1a); we handle this ambiguity by using four versions of the instrumental catalog that alternatively include or exclude them and their aftershocks (supporting information S1). (We exclude the 1994 Northridge earthquake, which occurred farther west on a fault not counted in our estimate of moment deficit buildup rate.) We compare moment buildup and release in 1932–2017 over a range of upper cutoff magnitudes for the earthquakes so as to qualitatively assess how large earthquakes need to get in central LA to collectively balance the moment budget. The answer visibly depends on whether the 1933 and 1971 earthquakes are counted or not (Figure 2). This technical issue hints at the reason why this comparison has limited predictive power: the instrumental catalog (e.g., exactly one Mw ~ 6.4 and one Mw ~ 6.7 earthquake) does not simply repeat every 86 years but rather is an 86-year realization of an underlying process. (This approach also ignores the moment released by undetected small earthquakes, which may be nonnegligible.)
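Curves like those in Figure 2 can be reproduced directly from a catalog. A minimal sketch, again assuming the Hanks and Kanamori (1979) constant and using a placeholder magnitude list in place of the real SCEDC catalog:

```python
import numpy as np

# `mags` stands in for the 1932-2017 SCEDC magnitudes inside the study area;
# the values below are illustrative placeholders, not the real catalog.
mags = np.array([6.4, 6.7, 5.9, 5.0, 4.2])
span_years = 86.0  # 1932-2017

def moment(mw):
    return 10 ** (1.5 * np.asarray(mw) + 9.1)  # N m (Hanks & Kanamori, 1979)

cutoffs = np.arange(3.0, 7.01, 0.1)
release_rate = np.array(
    [moment(mags[mags <= c]).sum() for c in cutoffs]
) / span_years
# Each entry is the moment released per year by earthquakes not exceeding that
# cutoff magnitude; compare against the 1.6e17 Nm/year buildup rate as in Figure 2.
```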
3. The Gutenberg-Richter Relation, Long-Term Models, and a New Approach

A way around these issues is to assume that on the long-term average, (1) the geodetic moment deficit buildup rate is constant and is balanced by earthquakes and aseismic deformation and (2) earthquakes obey the Gutenberg-Richter (G-R) law, meaning that their magnitude-frequency distribution (MFD) is log-linear with slope b (Gutenberg & Richter, 1954). If the G-R distribution is additionally assumed to hold up to a maximum earthquake magnitude Mmax, the long-term MFD is uniquely determined by the moment buildup rate, b, Mmax, and the aseismic contribution (Avouac, 2015; Molnar, 1979). We work with two alternate closed-form MFD solutions: a truncated G-R distribution (supporting information S2) and a tapered G-R distribution (supporting information S3). In the 2-D space of Mw versus log frequency of earthquakes of or exceeding that Mw, which we call G-R space, the truncated G-R distribution is a line that ends at Mmax (Figures S1a and S2a), while the tapered G-R distribution tapers to −∞ at Mmax (Figures S1c and S2f).
Figure 2. Comparison, over the 86-year timespan of the Southern California Earthquake Data Center catalog, of moment deficit buildup rate (mode and 16th–84th percentiles of probability density function) with moment release rate in earthquakes in Figure 1. The brown and white lines denote, at each magnitude, the cumulative moment release per year by earthquakes that do not exceed that magnitude. We consider four versions of the instrumental catalog as indicated. aft.: aftershocks.

These may be suitable end-members: the truncated G-R distribution in fact implies a mix of log-linear and characteristic behavior (Figure S1a and supporting information S2); the tapered G-R distribution (which implies no characteristic element) follows from a different use of the log-linear relation (supporting information S3) and does not require specifying a form for the tapering (e.g., Jackson & Kagan, 1999); and both are log-linear in G-R space except at or near Mmax and therefore may be reconcilable with observations in most settings.
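For concreteness, here is a minimal sketch of the moment-balance algebra for the simplest case, a G-R distribution that is log-linear up to Mmax and zero beyond, following Molnar (1979). The paper's exact truncated and tapered closed forms are given in its supporting information S2 and S3 and may differ in detail, and this sketch omits the aseismic and aftershock moment corrections introduced below, so it will overestimate mainshock rates relative to the paper's models:

```python
# Sketch: annual rates of earthquakes >= Mw from a G-R MFD that is exactly
# log-linear up to Mmax, scaled so that the earthquakes collectively release
# moment at a prescribed long-term rate (after Molnar, 1979). The 9.1 constant
# and all parameter values are illustrative assumptions.

def balanced_gr_rate(mw, b, m_max, moment_rate):
    """Rate per year of events with magnitude >= mw (requires mw <= m_max)."""
    beta = b / 1.5                        # G-R exponent in moment space
    m0_max = 10 ** (1.5 * m_max + 9.1)    # moment of Mmax, in N m
    # With N(>=M0) = A * M0**(-beta), the total moment rate is
    # moment_rate = A * beta / (1 - beta) * m0_max**(1 - beta),
    # which fixes the scale A:
    a_scale = moment_rate * (1 - beta) / (beta * m0_max ** (1 - beta))
    m0 = 10 ** (1.5 * mw + 9.1)
    return a_scale * m0 ** (-beta)

rate = balanced_gr_rate(6.5, b=1.0, m_max=6.8, moment_rate=1.6e17)
print(f"Mw >= 6.5 about every {1 / rate:.0f} years (no aseismic/aftershock discount)")
```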
However, several challenges remain in this approach. First, Mmax is unknown due to the short history of observation. Some studies iteratively estimate Mmax in cumulative magnitude-frequency space (Stevens & Avouac, 2016, 2017); others estimate it using total fault areas and scaling relations (Field et al., 2014) or assume a value for the maximum earthquake's recurrence interval (Hsu et al., 2016). Second, while some studies estimate b a priori from the catalog (Field et al., 2014; Stevens & Avouac, 2016, 2017), it is desirable to fully account for the covariances between b, Mmax, the moment deficit buildup rate, and other factors in estimating long-term earthquake rates. Third, it is uncertain whether to decluster the instrumental catalog first (Michel et al., 2018), which method to use if so, whether declustering should yield a smaller b value (Felzer, 2007; Marsan & Lengliné, 2008), and how this may affect the inferred long-term model.
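As context for the a priori catalog estimates mentioned above, such estimates are commonly variants of the Aki (1965) maximum-likelihood b-value estimator. A minimal sketch with an illustrative synthetic catalog (the paper itself does not prescribe this estimator, and the completeness magnitude here is an assumption):

```python
import numpy as np

def b_value_mle(mags: np.ndarray, mc: float, dm: float = 0.0) -> float:
    """Aki (1965) MLE b-value above completeness magnitude mc.
    Pass dm = magnitude bin width to correct for binned catalog magnitudes."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# Synthetic continuous-magnitude catalog with true b = 1 above Mw 2.0:
rng = np.random.default_rng(0)
mags = rng.exponential(1 / np.log(10), 5000) + 2.0
print(f"b = {b_value_mle(mags, mc=2.0):.2f}")  # ~1.00
```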
Here we develop a probabilistic method to estimate long-term earthquake rates in a way that handles these challenges (Figures S1 and S2 and supporting information S2–S5). We generate a large suite of moment-balancing long-term models (described by MFDs), use each to populate a set of synthetic 86-year earthquake catalogs, and compare the synthetic MFDs to that of the 86-year-long SCEDC catalog to evaluate how likely the 1932–2017 seismicity would be to arise as an 86-year realization of each long-term process. This approach is similar to the "Turing-style" tests of Page and van der Elst (2018). We generate the long-term models by iterating over a wide range of values of b and Mmax and over the probability density function (PDF) of moment deficit accumulation rate (Figure 1b) and computing the moment-balancing truncated or tapered G-R MFD under each combination of parameters (supporting information S2 and S3). Following Michel et al. (2018), we incorporate Båth's law, the observation that the largest aftershock is often ~1.2 magnitude units smaller than the mainshock (Båth, 1965). To do so, we assume that it is mainshocks (not all earthquakes) that obey the truncated or tapered G-R form described by b and that each mainshock is then individually accompanied by aftershocks obeying their own truncated G-R distribution (described by the same b) up to a single aftershock 1.2 magnitude units below the mainshock. The moment contribution of aftershocks is then a constant (supporting information S4), and the parameter b is essentially the declustered (mainshocks-only) b value, which we have also assumed governs individual aftershock sequences. We assume that each mainshock is also followed by aseismic deformation that releases 25% as much moment as the mainshock, based on inferences from the Northridge earthquake (Donnellan & Lyzenga, 1998). We then use each long-term MFD to populate a set of 25 synthetic 86-year catalogs assuming that mainshocks of each magnitude obey a Poisson process and adding their aftershocks. We compute the misfit of the 25 synthetic catalogs' cumulative MFDs to those of the four versions of the 1932–2017 catalog in G-R space (Figures S2b–S2d), convert these misfits to Gaussian likelihoods, and use these likelihoods to compute the PDFs of key parameters and long-term earthquake rates (supporting information S5). In a truncated G-R distribution, these parameters also define T(Mmax), the maximum earthquake's recurrence interval, so we estimate the 2-D PDF of Mmax and T(Mmax); in a tapered G-R distribution, T(Mmax) is infinite and so we only estimate the 1-D PDF of Mmax. This method has the advantages that (1) it directly tests long-term models based on whether the instrumental catalog is a plausible realization of each long-term process, (2) b and Mmax are estimated a posteriori with full covariance with other variables, and (3) it does not require declustering the catalog.
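A minimal sketch of the synthetic-catalog step under these assumptions: Poisson mainshock counts per magnitude bin, plus a single Båth-law largest aftershock per mainshock. The magnitude binning, toy rate model, and function name are ours; the full procedure (supporting information S2–S5) also generates complete truncated-G-R aftershock sequences and the moment bookkeeping described above:

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_catalog(bin_mags, annual_rates, span_years=86.0):
    """One synthetic catalog: Poisson mainshock counts per magnitude bin,
    plus a single largest aftershock 1.2 units below each mainshock (Bath, 1965)."""
    counts = rng.poisson(annual_rates * span_years)   # mainshocks per bin
    mainshocks = np.repeat(bin_mags, counts)
    aftershocks = mainshocks - 1.2                    # largest aftershock only
    return np.sort(np.concatenate([mainshocks, aftershocks]))[::-1]

# Illustrative incremental rates for Mw 3.0-6.8 bins (toy b = 1 model,
# not the paper's moment-balanced values):
bin_mags = np.arange(3.0, 6.81, 0.1)
annual_rates = 10 ** (3.0 - 1.0 * bin_mags)
catalog = synthetic_catalog(bin_mags, annual_rates)
# The cumulative MFD of `catalog` would then be compared to the observed
# 86-year MFD and the misfit converted to a likelihood for the model.
```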
4. Results

We first describe our two preferred long-term average earthquake likelihood models (Figure 3), which respectively assume a truncated and a tapered G-R distribution for mainshocks. In the truncated case, the 2-D PDF of Mmax and T(Mmax) peaks at a Mw = 6.75 event with a recurrence interval of ~280 years. The weighted 16th- and 84th-percentile recurrence intervals of the maximum earthquake for Mmax = 6.75 are 170 and 610 years; the 1-D PDF of Mmax (mode and same percentiles) is Mw = 6.8 +1.05/−0.4 (Figure 3a). In the tapered case, the 1-D PDF of Mmax gives Mw = 7.05 +0.95/−0.4. (Mmax is always ~0.25 larger in the tapered models because the tapering requires a larger Mmax to close the moment budget.)
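The headline likelihoods quoted in the abstract follow directly from the Poisson assumption for mainshocks. A minimal sketch; the annual rate below is back-calculated from the quoted 0.2%-in-1-year figure rather than taken from the paper's PDFs:

```python
import numpy as np

# Probability of one or more mainshocks of Mw >= 6.5 in T years under a
# Poisson process: P = 1 - exp(-rate * T). The rate here is inferred from
# the quoted 0.2%/year figure (approximate median model).
rate = 0.002  # mainshocks per year with Mw >= 6.5
for t in (1, 10, 100):
    print(f"{t:>3} yr: {1 - np.exp(-rate * t):.1%}")
# ~0.2% in 1 yr, ~2% in 10 yr, ~18% in 100 yr; the paper's 18-21% range
# at 100 years reflects spread across model versions.
```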
The aggregate mean magnitude and recurrence interval of paleoseismologically inferred Holocene