Scientific Data | (2022) 9:462 | https://doi.org/10.1038/s41597-022-01517-w | www.nature.com/scientificdata
The United States COVID-19 Forecast Hub dataset

Estee Y. Cramer 1,200, Yuxin Huang 1,200, Yijin Wang 1,200, Evan L. Ray 1, Matthew Cornell 1, Johannes Bracher 2,3, Andrea Brennen 4, Alvaro J. Castro Rivadeneira 1, Aaron Gerding 1, Katie House 1, Dasuni Jayawardena 1, Abdul Hannan Kanji 1, Ayush Khandelwal 1, Khoa Le 1, Vidhi Mody 1, Vrushti Mody 1, Jarad Niemi 5, Ariane Stark 1, Apurv Shah 1, Nutcha Wattanachit 1, Martha W. Zorn 1, Nicholas G. Reich 1 & US COVID-19 Forecast Hub Consortium*

1 Department of Biostatistics and Epidemiology, University of Massachusetts Amherst, Amherst, MA, 01003, USA. 2 Chair of Econometrics and Statistics, Karlsruhe Institute of Technology, Karlsruhe, 76185, Germany. 3 Computational Statistics Group, Heidelberg Institute for Theoretical Studies, Heidelberg, 69118, Germany. 4 IQT Labs, Waltham, MA, 02451, USA. 5 Department of Statistics, Iowa State University, Ames, IA, 50011, USA. 200 These authors contributed equally: Estee Y. Cramer, Yuxin Huang, Yijin Wang. *A list of authors and their affiliations appears at the end of the paper. e-mail: nick@umass.edu
Academic researchers, government agencies, industry groups, and individuals have produced forecasts
at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United
States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at
the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April
2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident
hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at county, state, and
national levels in the United States. Included forecasts represent a variety of modeling approaches,
data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish
a standardized and comparable set of short-term forecasts from modeling teams. These data can be
used to develop ensemble models, communicate forecasts to the public, create visualizations, compare
models, and inform policies regarding COVID-19 mitigation. These open-source data are available via
download from GitHub, through an online API, and through R packages.
Introduction
To understand how the COVID-19 pandemic would progress in the United States, dozens of academic research groups, government agencies, industry groups, and individuals produced probabilistic forecasts for COVID-19 outcomes starting in March 2020 [1]. We collected forecasts from over 90 modeling teams in a data repository, thus making forecasts easily accessible for COVID-19 response efforts and forecast evaluation. The data repository is called the US COVID-19 Forecast Hub (hereafter, Forecast Hub) and was created through a partnership between the United States Centers for Disease Control and Prevention (CDC) and an academic research lab at the University of Massachusetts Amherst.

The Forecast Hub was launched in early April 2020 and contains real-time forecasts of reported COVID-19 cases, hospitalizations, and deaths. As of May 3rd, 2022, the Forecast Hub had collected over 92 million individual point or quantile predictions contained within over 6,600 submitted forecast files from 110 unique models. The forecasts submitted each week reflected a variety of forecasting approaches, data sources, and underlying assumptions. There were no restrictions in place regarding the underlying information or code used to generate real-time forecasts. Each week, the latest forecasts were combined into an ensemble forecast (Fig. 1), and all recent forecast data were updated on an official COVID-19 Forecasting page hosted by the US CDC (https://www.cdc.gov/coronavirus/2019-ncov/science/forecasting/mathematical-modeling.html). The ensemble models were also used in the weekly reports that are posted on the Forecast Hub website, https://covid19forecasthub.org/doc/reports/.
Forecasts are quantitative predictions about data that will be observed at a future time. Forecasts differ from scenario-based projections, which examine feasible outcomes conditional on a variety of future assumptions. Because forecasts are unconditional estimates of data that will be observed in the future, they can be evaluated against eventual observed data. An important feature of the Forecast Hub is that submitted forecasts are time-stamped, so the exact time at which a forecast was made public can be verified. In this way, the Forecast Hub serves as a public, independent registration system for these forecast model outputs. Data from the Forecast Hub have served as the basis for research articles on forecast evaluation [2] and forecast combination [3-5]. These studies can be used to determine how well models have performed at various points during the pandemic, which can, in turn, guide best practices for utilizing forecasts in practice and inform future forecasting efforts [2].
Teams submitted predictions in a structured format to facilitate data validation, storage, and analysis. Teams also submitted a metadata file and license for their model's data. Forecast data, ground truth data from the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) [6], the New York Times (NYTimes) [7], USAFacts [8], and HealthData.gov [9], and model metadata were stored in the public Forecast Hub GitHub repository [10].
The forecasts were automatically synchronized with an online database called Zoltar via calls to a representational state transfer (REST) application programming interface (API) [11] every six hours (Fig. 2). Raw forecast data may be downloaded directly from GitHub or Zoltar via the covidHubUtils R package [12], the zoltr R package [13], or the zoltpy Python library [14].
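As an illustration, a minimal R sketch that reads one submitted forecast file directly from the GitHub repository's data-processed folder (the model name and date are examples; bulk retrieval is better served by the packages above):

# Read one forecast CSV straight from the raw GitHub content server.
# The data-processed/<model>/<date>-<model>.csv layout is described below.
base <- "https://raw.githubusercontent.com/reichlab/covid19-forecast-hub/master"
file <- "data-processed/COVIDhub-ensemble/2021-07-26-COVIDhub-ensemble.csv"
fc <- read.csv(file.path(base, file), colClasses = c(location = "character"))
head(fc[, c("target", "location", "type", "quantile", "value")])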
This dataset of real-time forecasts created during the COVID-19 pandemic can provide insights into the shortcomings and successes of predictions and improve forecasting efforts in years to come. Although these data are restricted to forecasts for COVID-19 in the United States, the structure of this dataset has been used to create datasets of COVID-19 forecasts in the EU and the UK, as well as longer-term scenario projections in the US [15-18]. The general structure of this data collection could be applied to additional diseases or forecasting outcomes in the future [11].
This large collaborative effort has provided data from over two years of short-term forecasting efforts. Nearly all data were collected in real time and therefore are not subject to retrospective biases. The data are also openly available to the public, thus fostering a transparent, open science approach to support public health efforts.
Results
Data acquisition.
Beginning in April 2020, the Reich Lab at the University of Massachusetts Amherst, in partnership with the US CDC, began collecting probabilistic forecasts of key COVID-19 outcomes in the United States (Table 1). The effort began by collecting forecasts of deaths and hospitalizations at the weekly and daily scales for the 50 US states and 5 territories (Washington DC, Puerto Rico, the US Virgin Islands, Guam, and the Northern Mariana Islands) as well as the aggregated US national level. In July 2020, daily-resolution forecasts for COVID-19 deaths were discontinued, and the effort expanded to include forecasts of weekly incident cases at the county, state, and national levels. Forecasts may include a point prediction and/or quantiles of a predictive distribution.

Fig. 1 Time series of reported COVID-19 outcomes at the national level and forecasts from the COVID-19 Forecast Hub ensemble model for selected weeks in 2020 and 2021. Ensemble forecasts (blue) with 50%, 80%, and 95% prediction intervals shown in shaded regions and the ground-truth data (black) for incident cases (A), incident hospitalizations (B), incident deaths (C), and cumulative deaths (D). The truth data come from JHU CSSE (panels A, C, D) and HealthData.gov (panel B).
Any team was eligible to submit data to the Forecast Hub provided they used the correct formatting. Upon initial submission of forecast data, teams were required to upload a metadata file that briefly described the methods used to create the forecasts and specified a license under which their forecast data were released. Individual model outputs are available under different licenses, as specified in the GitHub data repository. No model code was stored in the Forecast Hub.

During the first month of operation, members of the Forecast Hub team downloaded forecasts made available by teams publicly online, transformed these forecasts into the correct format (see the Forecast format section), and pushed them into the Forecast Hub repository. Starting in May 2020, all teams were required to format and submit their own forecasts.
Repository structure.
The dataset containing forecasts is stored in two locations, and all data can be accessed through either source. The first is the COVID-19 Forecast Hub GitHub repository, https://github.com/reichlab/covid19-forecast-hub, and the second is an online database, Zoltar, which can be accessed via a REST API [11]. Details about data access and format are documented in the subsequent sections.

When accessing data through the Zoltar forecast repository REST API, subsets of submitted forecasts can be queried directly from a PostgreSQL database. This eliminates the need to access individual CSV files and facilitates access to versions of forecasts in cases when they were updated.
Forecast outcomes.
The Forecast Hub dataset stores forecasts for four different outcomes: incident cases, incident hospitalizations, incident deaths, and cumulative deaths (Table 1). Incident case forecasts were first introduced as a forecast outcome several months after the Forecast Hub started and have several key differences from the other predicted outcomes. They are the only outcome for which the Forecast Hub accepts county-level forecasts in addition to state- and national-level forecasts. Since there are over 3,000 counties in the US, this required some compromises on the scale of data collected for these forecasts in other ways. Specifically, case forecasts may only be submitted for up to 8 weeks into the future, instead of up to 20 weeks for deaths, and are required to have fewer quantiles (seven) than the other outcomes, which can have up to twenty-three. This gives a coarser representation of the forecast (see the Forecast format section below).
Fig. 2 Schematic of the data storage and related infrastructure surrounding the COVID-19 Forecast Hub. (A) Forecasts are submitted to the COVID-19 Forecast Hub GitHub repository and undergo data format validation before being accepted into the system. (B) A continuous integration service ensures that the GitHub repository and PostgreSQL database stay in sync with mirrored versions of the data. (C) Truth data for visualization, evaluation, and ensemble building are retrieved once per week using both the covidHubUtils and covidData R packages. Truth data are stored in both repositories. (D) Once per week, an ensemble forecast submission is made using the covidEnsembles R package. It is submitted to the GitHub repository and undergoes the same validation as other submissions. (E) Using the covidHubUtils R package, forecast and truth data may be extracted from either the GitHub repository or the PostgreSQL database in a standard format for tasks such as scoring or plotting.

Forecast target dates.
Weekly targets follow the standard of epidemiological weeks (EW) used by the CDC, which defines a week as starting on Sunday and ending on the following Saturday [19]. Forecasts of cumulative deaths target the number of cumulative deaths reported by the Saturday ending a given week. Forecasts of weekly incident cases or deaths target the difference between reported cumulative cases or deaths on consecutive Saturdays. As an example of a forecast and the corresponding observation, forecasts submitted between Tuesday, October 6, 2020 (day 3 of EW41) and Monday, October 12, 2020 (day 2 of EW42) contained a "1 week ahead" forecast of incident deaths that corresponded to the change in cumulative reported deaths observed in EW42 (i.e., the difference between the cumulative reported deaths on Saturday, October 17, 2020, and Saturday, October 10, 2020), and a "2 week ahead" forecast that corresponded to the change in cumulative reported deaths in EW43. In this paper, we refer to the "forecast week" of a submitted forecast as the week corresponding to a "0 week ahead" horizon; in the example above, the forecast week would be EW41. Daily incident hospitalization targets are for the number of reported hospitalizations a specified number of days after the forecast was generated.
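The epiweek arithmetic above can be made concrete with a small sketch using the MMWRweek R package (this is an illustration of the convention just described, not code from the Hub):

library(MMWRweek)

# Map a forecast_date and a week-ahead horizon to the target_end_date,
# i.e., the Saturday ending the targeted epidemiological week.
target_end_date <- function(forecast_date, horizon) {
  forecast_date <- as.Date(forecast_date)
  day <- MMWRweek(forecast_date)$MMWRday      # Sunday = 1, ..., Saturday = 7
  # Forecasts made on Sunday or Monday target the Saturday ending the
  # current epiweek as "1 wk ahead"; later weekdays target the next one.
  shift <- if (day <= 2) horizon - 1 else horizon
  forecast_date + (7 - day) + 7 * shift       # Saturday of the target week
}

target_end_date("2020-10-06", 1)  # Tuesday of EW41 -> "2020-10-17" (EW42)
target_end_date("2020-10-12", 1)  # Monday of EW42 -> "2020-10-17" (EW42)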
Summary of forecast data collected.
In the initial weeks of submission, fewer than 10 models provided forecasts. As the pandemic spread, the number of teams submitting forecasts increased; as of May 3rd, 2022, 93 primary models, 9 secondary models, and 17 models with the designation "other" had been submitted to the Forecast Hub. As of May 3rd, 2022, across all weeks, a median of 30 primary models (range: 14 to 39) contributed incident case forecasts (Fig. 3a), a median of 11 primary models (range: 1 to 16) contributed incident hospitalization forecasts (Fig. 3b), a median of 37 primary models (range: 1 to 49) contributed incident death forecasts (Fig. 3c), and a median of 35 primary models (range: 3 to 46) contributed cumulative death forecasts each week (Fig. 3d). As of May 3rd, 2022, the dataset contained 6,633 forecast files with 92,426,015 point or quantile predictions for unique combinations of targets and locations.
Ensemble and baseline forecasts.
Alongside the models submitted by individual teams, there are also baseline and ensemble models generated by the Forecast Hub and the CDC.

The COVIDhub-baseline model was created by the Forecast Hub in May 2020 as a benchmarking model. Its point forecast is the most recent observed value as of the forecast creation date, with a probability distribution around that value based on weekly differences in previous observations [2]. The baseline model initially produced forecasts for case and death outcomes; hospitalization baseline forecasts were added in September 2021.

The COVIDhub-ensemble model creates a combination of forecasts submitted to the Forecast Hub. The ensemble produces forecasts of incident cases at a horizon of 1 week ahead, forecasts of incident hospitalizations at horizons up to 14 days ahead, and forecasts of incident and cumulative deaths at horizons up to 4 weeks ahead.
Outcome | Scale | Locations | Horizons stored | Quantiles | Earliest forecast date | First date of standardized truth data | Date of first ensemble forecast
Incident Cases | Weekly | County, state, national | 1-8 weeks | 7 | 2020-07-05 | 2020-03-15 | 2020-07-18
Incident Hospitalizations | Daily | State, national | 1-130 days | 23 | 2020-03-27 | 2020-11-16 | 2020-12-05
Incident Deaths | Daily | State, national | 1-130 days | 23 | 2020-03-15 | 2020-03-15 | NA
Incident Deaths | Weekly | State, national | 1-20 weeks | 23 | 2020-03-15 | 2020-03-15 | 2020-06-20
Cumulative Deaths | Daily | State, national | 1-130 days | 23 | 2020-03-15 | 2020-03-15 | NA
Cumulative Deaths | Weekly | State, national | 1-20 weeks | 23 | 2020-03-15 | 2020-03-15 | 2020-04-13

Table 1. Forecast characteristics for all four outcomes. The table shows the temporal scale, the spatial scale of locations, the horizons stored, the number of quantiles for probabilistic forecasts, and the dates of the earliest forecast, the earliest standardized truth data, and the earliest ensemble build.
Fig. 3 Number of primary forecasts submitted for each outcome per week from April 27th, 2020 through May 3rd, 2022. In the initial weeks of submission, fewer than 10 models provided forecasts. Over time, the number of teams submitting forecasts for each forecasted outcome increased into early 2021, then saw a small decline through the end of 2021, with some renewed interest in 2022.
Initially, the ensemble produced forecasts of incident cases at horizons of 1 to 4 weeks and incident hospitalizations at 1 to 28 days. However, in September 2021, due to the unreliability of incident case and hospitalization forecasts at horizons greater than 1 week (for cases) and 14 days (for hospitalizations), horizons past those respective thresholds were excluded from the COVIDhub-ensemble model, although they were still included in the COVIDhub-4_week_ensemble [20]. Other work details the methods used for determining the appropriate combination approach [3,4]. Starting in February 2021, GitHub tags were created to document the exact version of the repository used each week to create the COVIDhub-ensemble forecast. This creates an auditable trail in the repository, so that the correct version of the forecasts used can be recovered even in cases when some forecasts were subsequently updated.
The Forecast Hub also collaborates with the CDC on the production of three additional ensemble forecasts each week: the COVIDhub-4_week_ensemble, the COVIDhub-trained_ensemble, and the COVIDhub_CDC-ensemble. The COVIDhub-4_week_ensemble produces forecasts of incident cases, incident deaths, and cumulative deaths at horizons of 1 through 4 weeks ahead and forecasts of incident hospitalizations at horizons of 1 through 28 days ahead; it uses the equally weighted median of all component forecasts at each location, forecast horizon, and quantile level. The COVIDhub-trained_ensemble uses the same targets as the COVIDhub-4_week_ensemble but computes a weighted median of the ten component forecasts with the best performance as measured by their weighted interval score (WIS) in the 12 weeks prior to the forecast date. The COVIDhub_CDC-ensemble pulls forecasts of cases and hospitalizations from the COVIDhub-4_week_ensemble and forecasts of deaths from the COVIDhub-trained_ensemble. The set of horizons included is updated regularly using rules developed by the CDC based on recent forecast performance.
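A schematic sketch of the equally weighted median combination, for a long-format data frame of component quantile forecasts (illustrative R code, not the covidEnsembles implementation):

library(dplyr)

# Combine component models: at each location/target/quantile level,
# take the median of the submitted quantile values across models.
median_ensemble <- function(component_forecasts) {
  component_forecasts %>%
    filter(type == "quantile") %>%
    group_by(forecast_date, target, target_end_date, location, quantile) %>%
    summarise(value = median(value), .groups = "drop")
}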
Several other models are also combinations of some or all of the models submitted to the Forecast Hub. As of May 3rd, 2022, these models were FDANIHASU-Sweight, JHUAPL-SLPHospEns, and KITmetricslab-select_ensemble. These models are flagged in the metadata using the Boolean metadata field "ensemble_of_hub_models".
Use scenarios.
R package covidHubUtils.
We have developed the covidHubUtils R package at https://github.com/reichlab/covidHubUtils to facilitate bulk retrieval of forecasts for analysis and evaluation. Examples of how to use the covidHubUtils package and its functions can be found at https://reichlab.io/covidHubUtils/. The package supports loading forecasts from a local clone of the GitHub repository or by querying data from Zoltar. The package supports common actions for working with the data, such as loading specific subsets of forecasts, plotting forecasts, scoring forecasts, and retrieving ground truth data, along with many other utility functions that simplify working with the data.
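For example, a sketch of loading and plotting the ensemble's national incident death forecasts (function names follow the covidHubUtils documentation; exact argument names may differ across package versions):

library(covidHubUtils)

# Pull one week's ensemble forecasts of incident deaths from Zoltar.
fc <- load_forecasts(
  models = "COVIDhub-ensemble",
  forecast_dates = "2021-07-26",
  locations = "US",
  targets = paste(1:4, "wk ahead inc death"),
  source = "zoltar"
)

# Load the matching JHU CSSE ground truth and plot both together.
truth <- load_truth(truth_source = "JHU", target_variable = "inc death",
                    locations = "US")
plot_forecasts(fc, truth_data = truth)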
Visualization of forecasts in the COVID-19 Forecast Hub.
In addition to accessing forecasts through an R package, forecasts can also be viewed through our public website, https://viz.covid19forecasthub.org/. Through this tool, viewers can select the outcome, location, prediction interval, issue date of the truth data, and the models of interest to view forecasts. This tool can be used to see forecasts for the upcoming weeks, qualitatively evaluate model performance in past weeks, or visualize past performance based on the data available at the time of forecasting (Fig. 4).
Communicating results from the COVID-19 Forecast Hub.
Communication of probabilistic forecasts to the public is challenging [21,22], and best practices regarding the communication of outbreaks are still developing [23]. Starting in April 2020, the CDC published weekly summaries of these forecasts on its public website [24], and these forecasts were occasionally used in public briefings by the CDC Director [25]. Additional examples of the communication of Forecast Hub data can be viewed through weekly reports generated by the Forecast Hub team for dissemination to the general public, including state and local departments of health (https://covid19forecasthub.org/doc/reports/). On December 22nd, 2021, the CDC ceased communication of case forecasts due to the low reliability of these forecasts (https://www.cdc.gov/coronavirus/2019-ncov/science/forecasting/forecasts-cases.html).
Discussion
We present here the US COVID-19 Forecast Hub, a data repository that stores structured forecasts of COVID-19 cases, hospitalizations, and deaths in the United States. The Forecast Hub is an important asset for visualizing, evaluating, and generating aggregate forecasts, and it demonstrates the highly collaborative effort that has gone into COVID-19 modeling. This open-source data repository is beneficial for researchers, modelers, and casual viewers interested in forecasts of COVID-19. The website was viewed over half a million times in the first two years of the pandemic.
The US COVID-19 Forecast Hub is a unique, large-scale, collaborative infectious disease modeling effort. The Forecast Hub emerged from years of collaborative modeling efforts that started as government-sponsored forecasting "challenges". These collaborations are distinct from the modeling efforts of individual teams, as the Forecast Hub has created open collaborative systems that facilitate model collection, curation, comparison, and combination, often in direct collaboration with governmental public health agencies [26-28]. The Forecast Hub built on these past efforts by developing a new quantile-based data format as well as automated data submission and validation procedures. Additionally, the scale of the collaborative effort for the US COVID-19 Forecast Hub has exceeded prior COVID-19 forecasting efforts by an order of magnitude in terms of the number of participating teams and forecasts collected. Finally, the infrastructure developed for the US COVID-19 Forecast Hub has been adapted for use by a number of other modeling hubs, including the US COVID-19 Scenario Modeling Hub [17], the European COVID-19 Forecast Hub [15], the German/Polish COVID-19 Forecasting Hub [16], the German COVID-19 Hospitalization Nowcasting Hub [29], and the 2022 US CDC Influenza Hospitalization Forecasting challenge [30].
The Forecast Hub has played a critical role in collecting forecasts in a single format from over 100 different prediction models and making these data available to a wide variety of stakeholders during the COVID-19 pandemic. While some of these teams register their forecasts in other publicly available locations, many teams do not; thus, the Forecast Hub is the only location where many teams' forecasts are available. In addition to curating data from other models, the Forecast Hub has also played a central role in synthesizing model outputs. The Forecast Hub has generated an ensemble forecast, which has been used in official communications by the CDC, every week since April 2020. The ensemble model for incident deaths, a median aggregate of all other eligible models, was consistently the most accurate model when aggregated across forecast targets, weeks, and locations, even though it was rarely the single most accurate forecast for any single prediction [2].
The US COVID-19 Forecast Hub has built a specific set of open-source tools that have facilitated the development of operational stand-alone and ensemble forecasts for the pandemic. However, the structure of the tools is quite general and could be adapted for use in other real-time prediction efforts. Additionally, the Forecast Hub infrastructure and data described here represent best practices for collecting, aggregating, and disseminating forecasts [31]. The US COVID-19 Forecast Hub has developed and operationalized a standardized forecast format, time-stamped submissions, open access, and a collection of tools to facilitate working with the data.

The data in this hub will be useful in the future for continuing analyses and comparisons of forecasting methods. The data can also be used as an exploratory dataset for creating and testing novel models and methods for model analysis (e.g., new ways to create an ensemble or post hoc forecast calibration methods). Because the data serve as an open repository of the state of the art in infectious disease forecasting, they will also be helpful as a retrospective reference point for comparison when new forecasting models are developed.
Model coordination efforts occur in many fields, including climate science [32], ecology [33], and space weather [34], among others, to inform policy decisions by curating many models and synthesizing their outputs and uncertainties. Such efforts ensure that individual model outputs may indeed be easily compared to and assimilated with one another, and they thus play a role in making scientific research more rigorous and transparent. As the use of advanced computational models becomes more commonplace in a wide range of scientific fields, model coordination projects and model output standardization efforts will play an increasingly important role in ensuring that policy makers can be provided with a unified set of model outputs.
Fig. 4 The visualization tool, updated weekly by the US COVID-19 Forecast Hub, displays model forecasts and truth data at selected forecast dates, locations, forecast outcomes, and PI levels. US national-level incident death forecasts from 39 models are shown with point values and a 50% PI. These forecasts are for 1 through 4 week ahead horizons. Data used for forecasting were generated on July 24th, 2021. The visualization tool is available at https://viz.covid19forecasthub.org.
Methods
Forecast assumptions.
Forecasters used a variety of assumptions to build models and generate predictions. Forecasting approaches included statistical or machine learning models, mechanistic models incorporating disease transmission dynamics, and combinations of multiple approaches [2]. Teams also made varying assumptions regarding future changes in policies and social distancing measures, the transmissibility of COVID-19, vaccination rates, and the spread of new virus variants throughout the United States.
Weekly submissions.
A forecast submission consists of a single comma-separated value (CSV) file submitted via pull request to the GitHub repository. Forecast submissions are validated for technical accuracy and formatting (see below) using automated checks implemented by continuous integration servers before being merged. To be included in the weekly ensemble model, teams were required to submit their forecast on Sunday or prior to a deadline on Monday. The majority of teams contributing to the dataset submitted forecasts to the Forecast Hub repository on Sunday or Monday, although some teams submitted at other times depending on their model production schedule.
Exclusion criteria.
No forecasts were excluded from the dataset due to the forecast values or the background experience of the forecasters. Forecast files were only rejected if they did not meet the formatting criteria implemented through automatic GitHub checks [35]. Among other criteria, these included checks to ensure that:

- A forecast file is submitted no more than two days after it was created (to ensure forecasts submitted were truly prospective). The creation date is based on the date in the filename created by the submitting team.
- The forecast dates in the content of the file are in the format YYYY-MM-DD and match the creation date.
- Quantile forecasts do not contain any quantiles at probability levels other than the required levels (see the Forecast format section below).
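As an illustration only (the Hub's actual checks live in the covid19-forecast-hub-validations repository [35]), a minimal R sketch of the quantile-level, timing, and value rules for a submission read into a data frame fc might look like:

# Hypothetical re-implementation of three submission checks; not Hub code.
allowed_q <- round(c(0.01, 0.025, 0.05, seq(0.10, 0.90, by = 0.05),
                     0.95, 0.975, 0.99), 3)                 # 23 allowed levels

q <- round(fc$quantile[fc$type == "quantile"], 3)
stopifnot(all(q %in% allowed_q))                            # only allowed levels
stopifnot(all(as.Date(fc$forecast_date) >= Sys.Date() - 2)) # <= 2 days old
stopifnot(all(fc$value >= 0))                               # non-negative values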
Updates to files.
To ensure that forecasting is done in real time, all forecasts are required to be submitted to the Forecast Hub within 2 days of the forecast date, which is listed in a column within each forecast file. Although occasional late submissions were accepted through January 2021, the policy was updated to not accept forecasts that were late due to missed deadlines, updated modeling methods, or other reasons.

Exceptions to this policy were made if a bug affected the forecasts in the original submission or if a new team joined. If there was a bug, teams were required to submit a comment with their updated submission affirming that there was a bug and that the forecast was produced using only data that were available at the time of the original submission. In the case of updates to forecast data, both the old and updated versions of the forecasts can be accessed either through the GitHub commit history or through time-stamped queries of the forecasts in the Zoltar database. Note that an updated forecast can include "retracting" a particular set of predictions in cases when an initial forecast could not be updated. When new teams join the Forecast Hub, they can submit late forecasts if they can provide publicly available evidence that the forecasts were made in real time (e.g., a GitHub commit history).
Ground truth data.
Data from the JHU CSSE dataset [36] are used as the ground truth data for cases and deaths. Data from the HealthData.gov system for state-level hospitalizations are used for the hospitalization outcome. JHU CSSE obtained counts of cases and deaths by collecting and aggregating reports from state and local health departments. HealthData.gov contains reports of hospitalizations assembled by the U.S. Department of Health and Human Services. Teams were encouraged to use these sources to build models. Although hospitalization forecasts were collected starting in March 2020, hospitalization data from HealthData.gov only became available later, and we started encouraging teams to target these data in November 2020. Some teams used alternate data sources, including the NYTimes, USAFacts, US Census data, and other signals [2]. Versions of truth data from JHU CSSE, USAFacts, and the NYTimes are stored in the GitHub repository.

Previous reports of ground truth data for past time points were occasionally updated as new records became available, as definitions of reportable cases, deaths, or hospitalizations changed, or as errors in data collection were identified and corrected. These revisions to the data are sometimes quite substantial [35,36], and for purposes such as retrospective ensemble construction, it is necessary to use the data that would have been available in real time. The historically versioned data can be accessed through GitHub commit records, data versions released on HealthData.gov, or third-party tools such as the covidcast API provided by the Delphi group at Carnegie Mellon University or the covidData R package [37].
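For example, a sketch of retrieving versioned weekly death data with the covidData package (argument names follow the package's documentation at the time of writing and may change between versions):

library(covidData)

# Load JHU CSSE deaths as they appeared on a past date, so that
# retrospective analyses see only data available in real time.
deaths_asof <- load_data(
  as_of = "2020-12-01",
  spatial_resolution = "state",
  temporal_resolution = "weekly",
  measure = "deaths"
)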
Model designation.
Each model stored in the repository must have a classification of "primary", "secondary", or "other". Each team may have only one "primary" model. Teams submitting multiple models with similar forecasting approaches can use the designations "secondary" or "other" for the additional models. Models designated "primary" are included in evaluations, the weekly ensemble, and the visualization. The "secondary" label is designed for models that have a substantive methodological difference from a team's "primary" model; models designated "secondary" are included only in the ensemble and the visualization. The "other" label is designed for models that are small variations on a team's "primary" model; models designated "other" are not included in evaluations, the ensemble build, or the visualization.
GitHub repository data structure.
Forecasts in the GitHub repository are available in subfolders organized by model. Folders are named with a team name and model name, and each folder includes a metadata file and forecast files. Forecast CSV files are named using the format "<YYYY-MM-DD>-<team abbreviation>-<model abbreviation>.csv". In these files, each row contains data for a single outcome, location, horizon, and point or quantile prediction, as described above.

The metadata file for each team, named using the format "metadata-<team abbreviation>-<model abbreviation>.txt", contains relevant information about the team and the model that the team is using to generate forecasts.
Forecast format.
Forecasts were required to be submitted in the format of point predictions and/or quantile predictions. Point predictions represent a single "best" prediction with no uncertainty, typically a mean or median prediction from the model. Quantile predictions are an efficient format for storing predictive distributions of a wide range of outcomes.

Quantile representations of predictive distributions lend themselves to natural computations of, for example, the pinball loss or the weighted interval score (WIS), both proper scoring rules that can be used to evaluate forecasts [38]. However, they do not capture the structure of the tails of the predictive distribution beyond the reported quantiles. Additionally, the quantile format does not preserve any information on correlation structures between different outcomes.
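For reference, following the definitions in Bracher et al. [38], the interval score for a central (1 - \alpha) prediction interval [l, u] and observation y, and the WIS built from K such intervals plus the predictive median m, are:

IS_\alpha(F, y) = (u - l) + \frac{2}{\alpha}(l - y)\,\mathbf{1}\{y < l\} + \frac{2}{\alpha}(y - u)\,\mathbf{1}\{y > u\}

\mathrm{WIS}(F, y) = \frac{1}{K + 1/2}\left(\frac{1}{2}\,|y - m| + \sum_{k=1}^{K} \frac{\alpha_k}{2}\,IS_{\alpha_k}(F, y)\right)

With the 23 quantile levels described below, K = 11 central intervals can be formed around the median.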
The forecast data in this dataset are stored in seven columns:

1. forecast_date - the date the forecast was made, in the format YYYY-MM-DD.
2. target - a character string giving the number of days/weeks ahead that are being forecasted (the horizon) and the outcome. Targets must be one of the following:
   a. "N wk ahead cum death", where N is a number between 1 and 20
   b. "N wk ahead inc death", where N is a number between 1 and 20
   c. "N wk ahead inc case", where N is a number between 1 and 8
   d. "N day ahead inc hosp", where N is a number between 0 and 130
3. target_end_date - a character string representing the date of the forecast target, in the format YYYY-MM-DD. For "k day ahead" targets, target_end_date is k days after forecast_date. For "k wk ahead" targets, target_end_date is the Saturday at the end of the specified epidemic week, as described above.
4. location - a character string of Federal Information Processing Standard Publication (FIPS) codes identifying U.S. states, counties, territories, and districts, as well as "US" for national forecasts. The values for the FIPS codes are available in a CSV file in the repository and, for convenience, as a data object in the covidHubUtils R package.
5. type - a character value of "point" or "quantile" indicating whether the row corresponds to a point forecast or a quantile forecast.
6. quantile - the probability level for a quantile forecast. For death and hospitalization forecasts, forecasters can submit quantiles at 23 probability levels: 0.01, 0.025, 0.05, 0.10, 0.15, ..., 0.90, 0.95, 0.975, and 0.99. For cases, teams can submit up to 7 quantiles, at levels 0.025, 0.100, 0.250, 0.500, 0.750, 0.900, and 0.975. If the forecast "type" is "point", the value in the quantile column is "NA".
7. value - a non-negative number indicating the "point" or "quantile" prediction for the row. For a "point" prediction, it is simply the value of that point prediction for the target and location associated with that row. For a "quantile" prediction, the model predicts that the eventual observation will be less than or equal to this value with the probability given by the quantile probability level.
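For illustration, a few rows of a hypothetical submission file for a national 1-week-ahead incident death forecast (the numeric values are invented; the forecast_date 2020-10-05 is a Monday, so the target week ends Saturday, October 10):

forecast_date,target,target_end_date,location,type,quantile,value
2020-10-05,1 wk ahead inc death,2020-10-10,US,point,NA,4200
2020-10-05,1 wk ahead inc death,2020-10-10,US,quantile,0.025,3100
2020-10-05,1 wk ahead inc death,2020-10-10,US,quantile,0.500,4200
2020-10-05,1 wk ahead inc death,2020-10-10,US,quantile,0.975,5600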
Metadata format.
Each team documents their model information in a metadata file, which is required along with the first forecast submission. Each team is asked to record their model's design and assumptions, the model contributors, the team's website, information regarding the team's data sources, and a brief model description. Teams may update their metadata file periodically to keep track of minor changes to a model.

A standard metadata file should be a YAML file with the following required fields in a specific order:
1. team_name - the name of the team (fewer than 50 characters).
2. model_name - the name of the model (fewer than 50 characters).
3. model_abbr - an abbreviated and uniquely identifying name for the model that is fewer than 30 alphanumeric characters. The model abbreviation must be in the format '[team_abbr]-[model_abbr]', where '[team_abbr]' and '[model_abbr]' are each text strings of fewer than 15 alphanumeric characters that do not include a hyphen or whitespace.
4. model_contributors - a list of all individuals involved in the forecasting effort, with affiliations and email addresses. At least one contributor needs to have a valid email address. The syntax of this field should be: name1 (affiliation1) <user@address>, name2 (affiliation2) <user2@address2>.
5. website_url* - a URL to a website that has additional data about the model. We encourage teams to submit the most user-friendly version of the model, e.g., a dashboard or similar page that displays the model forecasts. If there is an additional data repository where forecasts and other model code are stored, this can be included in the methods section. If only a more technical site exists, e.g., a GitHub repository, that link should be included here.
6. license - one of the acceptable license types in the Forecast Hub. We encourage teams to submit under "cc-by-4.0" to allow the broadest possible use, including private vaccine production (which would be excluded by the "cc-by-nc-4.0" license). If the value is "LICENSE.txt", then a LICENSE.txt file must exist within the model folder and provide a license.
7. team_model_designation - upon initial submission, this field should be one of "primary", "secondary", or "other".
8. methods - a brief description of the forecasting methodology that is fewer than 200 characters.
9. ensemble_of_hub_models - a Boolean value ('true' or 'false') that indicates whether a model combines multiple hub models into an ensemble.

* In earlier versions of the metadata files, this field was named model_output.
Teams are also encouraged to add model information using the optional fields described below:

1. institution_affil - university or company names, if relevant.
2. team_funding - like an acknowledgement in a manuscript, teams can acknowledge funding here.
3. repo_url - a GitHub repository URL or something similar.
4. twitter_handles - one or more Twitter handles (without the @), separated by commas.
5. data_inputs - a description of the data sources used to inform the model and the truth data targeted by model forecasts. Common data sources are NYTimes, JHU CSSE, COVIDTracking, Google mobility, HHS hospitalization, etc. An example description could be: "case forecasts use NYTimes data and target JHU CSSE truth data; hospitalization forecasts use and target HHS hospitalization data".
6. citation - a URL (DOI link preferred) to an extended description of the model, e.g., a blog post, website, preprint, or peer-reviewed manuscript.
7. methods_long - an extended description of the methods used in the model. If the model is modified, this field can be used to provide the date of the modification and a description of the change.
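A hypothetical metadata file illustrating the required fields (all names, abbreviations, and URLs below are invented for illustration):

team_name: Example University
model_name: SEIR Baseline
model_abbr: ExampleUniv-SEIR
model_contributors: Jane Doe (Example University) <jane.doe@example.edu>
website_url: https://example.edu/covid-forecasts
license: cc-by-4.0
team_model_designation: primary
methods: Compartmental SEIR model fit to JHU CSSE death data with weekly re-estimation.
ensemble_of_hub_models: false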
Technical Validations
Two similar but distinct validation processes were used to validate data on the GitHub repository and on Zoltar.
Validations during data submission.
Validations were set up using GitHub Actions to manage the continuous integration and automated data checking [35]. Teams submitted their metadata files and forecasts through pull requests on GitHub. Each time a new pull request was submitted, a validation script ran on all new or updated files in the pull request to test for their validity. Separate checks ran on metadata file changes and forecast data file changes.

The metadata file for each team was required to be in a valid YAML format, and a set of specific checks had to pass before a new metadata file could be merged into the repository. Checks included ensuring that all metadata files follow the rules outlined in the Metadata format section, that the proposed team and model names do not conflict with existing names, that a valid license for data reuse is specified, and that a valid model designation is present. Additionally, each team must keep their files under a folder named consistently with their model_abbr, and they must have only one primary model.
New or changed forecast data files for each team were required to pass a series of checks for data formatting and validity. These checks also ensured that the forecast data files did not meet any of the exclusion criteria (see the Methods section for specific rules). Each forecast file is subject to the validation rules documented at https://github.com/reichlab/covid19-forecast-hub/wiki/Forecast-Checks.
Validations on Zoltar.
When a new forecast file is uploaded to Zoltar, unit tests are run on the file to ensure that forecast elements contain a valid structure. (For a detailed specification of the structure of forecast elements, see https://docs.zoltardata.com/validation/.) If a forecast file does not pass all unit tests, the upload will fail and the forecast file will not be added to the database; only when all tests pass will the new forecast be added to Zoltar. The validations in place on GitHub ensure that only valid forecasts will be uploaded to Zoltar.
Truth data.
Raw truth data from multiple sources, including JHU CSSE, NYTimes, USAFacts, and HealthData.gov, were downloaded and reformatted using the scripts in the R packages covidHubUtils (https://github.com/reichlab/covidHubUtils) and covidData (https://github.com/reichlab/covidData). This data-generating process is automated by GitHub Actions every week, and the results (called "truth data") are directly uploaded to the Forecast Hub repository and Zoltar. Specifically, raw case and death truth data were aggregated to a weekly level, and all three outcomes (cases, deaths, and hospitalizations) are reformatted for use within the Forecast Hub.
Data availability
The datasets generated and/or analyzed during the current study are available in the reichlab/covid19-forecast-hub GitHub repository, https://github.com/reichlab/covid19-forecast-hub. A permanent DOI for the GitHub repository for the Forecast Hub is available at https://doi.org/10.5281/zenodo.5208210 [10]. Forecast data are also available through our Zoltar forecast repository at https://zoltardata.com/project/44.
Code availability
All code for forecast data validation and storage associated with the current submission is available in the Forecast Hub GitHub repository, https://github.com/reichlab/covid19-forecast-hub-validations. Ensemble models are built with code in the covidEnsembles R package, https://github.com/reichlab/covidEnsembles. The code for forecast analysis is at https://doi.org/10.5281/zenodo.5207940 [12] (covidHubUtils R package) and https://doi.org/10.5281/zenodo.5208224 [7] (covidData R package). Any updates will also be published on Zenodo.
Received: 17 January 2022; Accepted: 29 June 2022;
Published: xx xx xxxx
References
1. Haghani, M. & Bliemer, M. C. J. Covid-19 pandemic and the unprecedented mobilisation of scholarly efforts prompted by a health crisis: Scientometric comparisons across SARS, MERS and 2019-nCoV literature. Scientometrics 125, 2695–2726 (2020).
2. Cramer, E. Y. et al. Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States. Proc. Natl. Acad. Sci. U. S. A. 119, e2113561119 (2022).
3. Brooks, L. C. et al. Comparing ensemble approaches for short-term probabilistic COVID-19 forecasts in the U.S. International Institute of Forecasters (2020).
4. Ray, E. L. et al. Comparing trained and untrained probabilistic ensemble forecasts of COVID-19 cases and deaths in the United States. arXiv [stat.ME] (2022).
5. Taylor, J. W. & Taylor, K. S. Combining probabilistic forecasts of COVID-19 mortality in the United States. Eur. J. Oper. Res. https://doi.org/10.1016/j.ejor.2021.06.044 (2021).
6. CSSEGISandData/COVID-19. GitHub https://github.com/CSSEGISandData/COVID-19.
7. Ray, E. et al. reichlab/covidData: repository release for Zenodo. Zenodo https://doi.org/10.5281/zenodo.5208224 (2021).
8. US COVID-19 cases and deaths by state. https://usafacts.org/visualizations/coronavirus-covid-19-spread-map/ (2021).
9. HealthData.gov. https://healthdata.gov/ (2022).
10. Cramer, E. et al. reichlab/covid19-forecast-hub: release for Zenodo, 20210816. Zenodo https://doi.org/10.5281/zenodo.5208210 (2021).
11. Reich, N. G., Cornell, M., Ray, E. L., House, K. & Le, K. The Zoltar forecast archive, a tool to standardize and store interdisciplinary prediction research. Sci. Data 8, 59 (2021).
12. Wang, S. Y. et al. reichlab/covidHubUtils: repository release for Zenodo. Zenodo https://doi.org/10.5281/zenodo.5207940 (2021).
13. Cornell, M., Gruson, H., Wang, S. Y. & Ray, E. reichlab/zoltr: Release for Zenodo, 20210816. Zenodo https://doi.org/10.5281/zenodo.5207856 (2021).
14. Cornell, M. et al. reichlab/zoltpy: Release for Zenodo, 20210816. Zenodo https://doi.org/10.5281/zenodo.5207932 (2021).
15. covid19-forecast-hub-europe: European COVID-19 Forecast Hub. GitHub.
16. covid19-forecast-hub-de: German and Polish COVID-19 Forecast Hub. GitHub.
17. Borchering, R. K. et al. Modeling of future COVID-19 cases, hospitalizations, and deaths, by vaccination rates and nonpharmaceutical intervention scenarios - United States, April-September 2021. MMWR Morb. Mortal. Wkly. Rep. 70, 719–724 (2021).
18. COVID-19 Scenario Modeling Hub. https://covid19scenariomodelinghub.org/.
19. MMWR Week Fact Sheet. National Notifiable Diseases Surveillance System, Division of Health Informatics and Surveillance, National Center for Surveillance, Epidemiology and Laboratory Services. Downloaded from http://wwwn.cdc.gov/nndss/document/MMWR_Week_overview.pdf.
20. Reich, N. G., Tibshirani, R. J., Ray, E. L. & Rosenfeld, R. On the predictability of COVID-19. International Institute of Forecasters https://forecasters.org/blog/2021/09/28/on-the-predictability-of-covid-19/ (2021).
21. Gigerenzer, G., Hertwig, R., van den Broek, E., Fasolo, B. & Katsikopoulos, K. V. 'A 30% chance of rain tomorrow': how does the public understand probabilistic weather forecasts? Risk Anal. 25, 623–629 (2005).
22. Raftery, A. E. Use and communication of probabilistic forecasts. Stat. Anal. Data Min. 9, 397–410 (2016).
23. Rouleau, T. L. et al. Risk Communication and Behavior: Best Practices and Research Findings. National Oceanic and Atmospheric Administration, 1–66 (2016).
24. CDC. COVID-19 Forecasts: Deaths. https://www.cdc.gov/coronavirus/2019-ncov/covid-data/forecasting-us.html (2021).
25. Waldrop, T., Andone, D. & Holcombe, M. CDC warns new Covid-19 variants could accelerate spread in US. CNN (2021).
26. Johansson, M. A. et al. An open challenge to advance probabilistic forecasting for dengue epidemics. Proc. Natl. Acad. Sci. U. S. A. 116, 24268–24274 (2019).
27. Reich, N. G. et al. Accuracy of real-time multi-model ensemble forecasts for seasonal influenza in the U.S. PLoS Comput. Biol. 15, e1007486 (2019).
28. Viboud, C. et al. The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt. Epidemics 22, 13–21 (2018).
29. hospitalization-nowcast-hub: Collecting nowcasts of the 7-day hospitalization incidence in Germany. https://github.com/KITmetricslab/hospitalization-nowcast-hub (2022).
30. CDC. FluSight: Flu Forecasting. Centers for Disease Control and Prevention https://www.cdc.gov/flu/weekly/flusight/index.html (2021).
31. Reich, N. G. et al. Collaborative hubs: making the most of predictive epidemic modeling. Am. J. Public Health e1–e4 (2022).
32. IPCC - Intergovernmental Panel on Climate Change. https://www.ipcc.ch/ (2022).
33. The Inter-Sectoral Impact Model Intercomparison Project. https://www.isimip.org/about/marine-ecosystems-fisheries/ (2022).
34. CCMC: Community Coordinated Modeling Center. https://ccmc.gsfc.nasa.gov/index.php (2022).
35. Hannan, A., Huang, Y. D. & Wang, S. Y. reichlab/covid19-forecast-hub-validations: Release for Zenodo, 20210816. Zenodo https://doi.org/10.5281/zenodo.5207934 (2021).
36. Dong, E., Du, H. & Gardner, L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 20, 533–534 (2020).
37. Reinhart, A. et al. An open repository of real-time COVID-19 indicators. Proc. Natl. Acad. Sci. USA 118 (2021).
38. Bracher, J., Ray, E. L., Gneiting, T. & Reich, N. G. Evaluating epidemic forecasts in an interval format. PLoS Comput. Biol. 17, e1008618 (2021).
Acknowledgements
This work has been supported in part by the US Centers for Disease Control and Prevention (1U01IP001122) and the National Institutes of General Medical Sciences (R35GM119582). The content is solely the responsibility of the authors and does not necessarily represent the official views of the CDC, FDA, NIGMS or the National Institutes of Health. Johannes Bracher was supported by the Helmholtz Foundation via the SIMCARD Information & Data Science Pilot Project. For teams that reported receiving funding for their work, we report the sources and disclosures below. AIpert-pwllnod: Natural Sciences and Engineering Research Council of Canada. Caltech-CS156: Gary Clinard Innovation Fund. CEID-Walk: University of Georgia. CMU-TimeSeries: CDC Center of Excellence, gifts from Google and Facebook. Covid19Sim: National Science Foundation awards 2035360 and 2035361, Gordon and Betty Moore Foundation, and Rockefeller Foundation to support the work of the Society for Medical Decision Making COVID-19 Decision Modeling Initiative. COVIDhub: This work has been supported by the US Centers for Disease Control and Prevention (1U01IP001122) and the National Institutes of General Medical Sciences (R35GM119582). The content is solely the responsibility of the authors and does not necessarily represent the official views of the CDC, NIGMS or the National Institutes of Health. Johannes Bracher was supported by the Helmholtz Foundation via the SIMCARD Information & Data Science Pilot Project. Tilmann Gneiting gratefully acknowledges support by the Klaus Tschira Foundation. CUBoulder, CUB-PopCouncil: The Population Council, and the University of Colorado Population Center (CUPC) funded by the Eunice Kennedy Shriver National Institute of Child Health & Human Development of the National Institutes of Health (P2CHD066613). CU-select: NSF DMS-2027369 and a gift from the Morris-Singer Foundation. DDS-NBDS: NSF III-1812699. epiforecasts-ensemble1: Wellcome Trust (210758/Z/18/Z). FDANIHASU: supported by the Intramural Research Program of the NIH/NIDDK. GT_CHHS-COVID19: William W. George Endowment, Virginia C. and Joseph C. Mello Endowment, NSF DGE-1650044, NSF MRI 1828187, research cyberinfrastructure resources and services provided by the Partnership for an Advanced Computing Environment (PACE) at Georgia Tech, and the following benefactors at Georgia Tech: Andrea Laliberte, Joseph C. Mello, Richard "Rick" E. & Charlene Zalesky, and Claudia & Paul Raines; CDC MInD-Healthcare U01CK000531-Supplement. GT-DeepCOVID: This work was supported in part by the NSF (Expeditions CCF-1918770, CAREER IIS-2028586, RAPID IIS-2027862, Medium IIS-1955883, Medium IIS-2106961, CCF-2115126), the CDC MInD program, ORNL, a faculty research award from Facebook, and funds/computing resources from Georgia Tech. BA was supported by CDC-MIND U01CK000594 and start-up funds from the University of Iowa. IHME: This work was supported by the Bill & Melinda Gates Foundation, as well as funding from the state of Washington and the National Science Foundation (award no. FAIN: 2031096). Imperial-ensemble1: SB acknowledges funding from the Wellcome Trust (219415). Institute of Business Forecasting: IBF. IowaStateLW-STEM: NSF DMS-1916204, Iowa State University Plant Sciences Institute Scholars Program, NSF CCF-1934884, Laurence H. Baker Center for Bioinformatics and Biological Statistics. IUPUI CIS: NSF. JHU_CSSE-DECOM: JHU CSSE: National Science Foundation (NSF) RAPID "Real-time Forecasting of COVID-19 risk in the USA", 2021-2022, Award ID: 2108526; National Science Foundation (NSF) RAPID "Development of an interactive web-based dashboard to track COVID-19 in real-time", 2020, Award ID: 2028604. JHU_IDD-CovidSP: State of California, US Dept of Health and Human Services, US Dept of Homeland Security, Johns Hopkins Health System, Office of the Dean at Johns Hopkins Bloomberg School of Public Health, Johns Hopkins University Modeling and Policy Hub, Centers for Disease Control and Prevention (5U01CK000538-03), University of Utah Immunology, Inflammation, & Infectious Disease Initiative (26798 Seed Grant). JHU_UNC_GAS-StatMechPool: NIH NIGMS: R01GM140564. JHUAPL-Bucky: US Dept of Health and Human Services. KITmetricslab-select_ensemble: Daniel Wolffram was supported by the Klaus Tschira Foundation as well as the Helmholtz Association under the joint research school "HIDSS4Health - Helmholtz Information and Data Science School for Health". Moreover, his work was funded by the German Federal Ministry of Education and Research (BMBF) and the Baden-Württemberg Ministry of Science as part of the Excellence Strategy of the German Federal and State Governments. LANL-GrowthRate: LANL LDRD 20200700ER. LosAlamos_NAU-CModel_SDVaxVar: NIH/NIGMS grant R01GM111510; LANL-Directed Research and Development Program, Defense Threat Reduction Agency; Laboratory-Directed Research and Development Program project 20220268ER. LU-compUncertLab: UMass Amherst Center of Excellence for Influenza, Institute for Data Intelligent Systems and Computation. MIT-Cassandra: MIT Quest for Intelligence. MOBS-GLEAM_COVID: COVID Supplement CDC-HHS-6U01IP001137-01; CA NU38OT000297 from the Council of State and Territorial Epidemiologists (CSTE). NCSU-COVSIM: Cooperative Agreement NU38OT000297 from the CSTE and the CDC. NotreDame-FRED: NSF RAPID DEB 2027718. NotreDame-mobility: NSF RAPID DEB 2027718. PSI-DRAFT: NSF RAPID Grant # 2031536. QJHong-Encounter: NSF DMR-2001411 and DMR-1835939. SDSC_ISG-TrendModel: The development of the dashboard was partly funded by the Fondation Privée des Hôpitaux Universitaires de Genève. UA-EpiCovDA: NSF RAPID Grant # 2028401. UChicagoCHATTOPADHYAY-UnIT: Defense Advanced Research Projects Agency (DARPA) #HR00111890043/P00004 (I. Chattopadhyay, University of Chicago). UCSB-ACTS: NSF RAPID IIS 2029626. UCSD_NEU-DeepGLEAM: Google Faculty Award, W31P4Q-21-C-0014. UMass-MechBayes: NIGMS #R35GM119582, NSF #1749854. UMich-RidgeTfReg: This project is funded by the University of Michigan Physics Department and the University of Michigan Office of Research. USC-SikJalpha: This material is based upon work supported by the National Science Foundation RAPID under Grant No. 2135784 with support from the Centers for Disease Control and Prevention (CDC). UVA-Ensemble: National Institutes of Health (NIH) Grant 1R01GM109718, NSF BIG DATA Grant IIS-1633028, NSF Grant No. OAC-1916805, NSF Expeditions in Computing Grants CCF-1918656 and CCF-1917819, NSF RAPID CNS-2028004, NSF RAPID OAC-2027541, US Centers for Disease Control and Prevention 75D30119C05935, a grant from Google, University of Virginia Strategic Investment Fund award number SIF160, Defense Threat Reduction Agency (DTRA) under Contract No. HDTRA1-19-D-0007, and Virginia Dept of Health Grant VDH-21-501-0141. Wadhwani_AI-BayesOpt: This study is made possible by the generous support of the American People through the United States Agency for International Development (USAID). The work described in this article was implemented under the TRACETB Project, managed by WIAI under the terms of Cooperative Agreement Number 72038620CA00006. The contents of this manuscript are the sole responsibility of the authors and do not necessarily reflect the views of USAID or the United States Government. WalmartLabsML-LogForecasting: The team acknowledges Walmart for supporting this study.