Influenza Hospitalization Forecasting 2017-18

Influenza (flu) is a respiratory virus that can cause illness ranging from mild to severe. Each year, millions of people get sick with influenza, hundreds of thousands are hospitalized, and thousands die from flu. Tracking influenza-related hospitalizations to inform prevention measures is an important public health function currently performed by CDC’s FluSurv-NET surveillance system, which can lag behind real-time flu activity. But what if it were possible to predict influenza hospitalizations accurately weeks or months in advance? While this is not currently possible, the goal of flu forecasting is to provide a more timely and forward-looking tool that health officials can use to target medical interventions, inform earlier public health actions, and allocate resources for communications, disease prevention, and control. The potential benefits of flu forecasting are significant.

Since 2013, the Influenza Division at the Centers for Disease Control and Prevention has worked with external researchers to improve the science and usability of influenza forecasts by coordinating seasonal influenza prediction challenges focused on the percentage of outpatient visits due to influenza-like illness (ILI). This forecasting challenge expands on that work by forecasting influenza hospitalizations, which, unlike ILI, are laboratory-confirmed influenza cases.

Multiple outside research teams have developed different flu forecasting models that will provide influenza hospitalization forecasts to CDC for the 2017-18 influenza season. This beta website houses the weekly influenza hospitalization forecasts provided by the various research teams. It’s important to note that these are not CDC forecasts and that the forecasts on this website are not endorsed by CDC. These forecasts are based on different models, can vary significantly, and may be inaccurate.

Interested in participating in the challenge? Please email flucontest@cdc.gov for more information.

Submitted Forecasts

Use the interactive tool below to explore submitted forecasts for the 2017-18 influenza season. Click throughout the season to examine forecasts received during a given week. To see the most recent forecasts, click the forecast week immediately preceding the dotted "Today" line.

Peak week and rate predictions are visualized by the stand-alone dots with confidence intervals, and week-ahead forecasts are visualized as the connected dots with confidence bands. More information on interpreting forecasts can be found in the FAQs.

Forecast Targets

For each week during the season, participants will be asked to provide overall and age-group specific probabilistic forecasts for the entire influenza season (seasonal targets) and for the next four weeks (four-week ahead targets). The seasonal targets are the peak week and the peak weekly hospitalization rate of the 2017-18 influenza season. The four-week ahead targets are the weekly rate of influenza hospitalizations one week, two weeks, three weeks, and four weeks ahead from date of the forecast.
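To make the format concrete, a probabilistic forecast for a single target can be pictured as a set of bins with probabilities that sum to 1. The sketch below is a hypothetical illustration, not the official submission format (see the FluSight package and guidance documents below); the bin range, peak location, and spread are arbitrary assumptions.

# Hypothetical illustration of a probabilistic forecast for one target:
# 0.1-wide weekly rate bins (per 100,000) with probabilities summing to 1.
bins  <- seq(0, 9.9, by = 0.1)              # lower edge of each rate bin
probs <- dnorm(bins, mean = 3.3, sd = 0.5)  # mass centered on an assumed peak rate
probs <- probs / sum(probs)                 # normalize to a proper distribution
forecast <- data.frame(bin_start = bins, probability = probs)
sum(forecast$probability)                   # 1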

Seasonal Peak Week

Definition: The peak week will be defined as the MMWR surveillance week in which the weekly FluSurv-NET hospitalization rate, rounded to one decimal place, is highest for the 2017-18 influenza season.

Motivation: Accurate and timely forecasts for the peak week can be useful for planning and promoting activities to increase influenza vaccination prior to the bulk of influenza illness. For healthcare, pharmacy, and public health authorities, a forecast for the peak week can guide efficient staff and resource allocation.

Seasonal Peak Rate

Definition: The peak rate (intensity) will be defined as the highest numeric value, rounded to one decimal place, that the weekly FluSurv-NET hospitalization rate reaches during the 2017-18 influenza season.

Motivation: Accurate and timely forecasts for the peak week and intensity of the influenza season can be useful for influenza prevention and control, including the planning and promotion of activities to increase influenza vaccination prior to the bulk of influenza illness. For healthcare, pharmacy, and public health authorities, a forecast for the peak week and intensity can help with appropriate staff and resource allocation, since a surge of patients with influenza illness can be expected to seek care and receive treatment in the weeks surrounding the peak.

Short-Term Forecasts

Definition: One- to four-week ahead forecasts will be defined as the weekly FluSurv-NET hospitalization rate, rounded to one decimal place, for one, two, three, and four weeks after the date of the forecast.

Motivation: Forecasts capable of providing reliable estimates of influenza hospitalizations over the next month are critical because they allow healthcare and public health officials to prepare for and respond to near-term changes in influenza activity, bridging the gap between reported incidence data and long-term seasonal forecasts.

FluSurv-NET Data

Data on the weekly rate of influenza hospitalizations are reported through the FluSurv-NET system for the United States as a whole, as well as for individual FluSurv-NET sites. Rates are reported for specific age groups as well as overall. These data can be accessed directly from CDC. Alternatively, the R package cdcfluview (available from CRAN or GitHub) can be used to access the data, as shown in the following example:

# Option 1: Install from CRAN
install.packages("cdcfluview")

# Option 2: Install from GitHub (most up-to-date version)
devtools::install_github("hrbrmstr/cdcfluview")

library(cdcfluview)

# FluSurv-NET data for entire network from 2009 to present
hospital <- hospitalizations(surveillance_area = "flusurv", years = 2009:2017)

Please note that while cdcfluview accesses publicly available CDC data, it is not produced, maintained, or endorsed by the CDC.
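
Once downloaded, the data can be explored directly in R. The following is a brief sketch; the column names used (age_label, sea_label, weeklyrate) and the "Overall" age group label are assumptions about the cdcfluview output and may differ across package versions, so inspect the data frame first.

# Inspect the structure of the returned data frame
str(hospital)

# Overall weekly rates for a single season (column and label names are
# assumptions -- check the str(hospital) output for your package version)
overall_2016 <- subset(hospital, age_label == "Overall" & sea_label == "2016-17")

# Plot the weekly overall hospitalization rate across the season
plot(overall_2016$weeklyrate, type = "l",
     xlab = "Week of season", ylab = "Rate per 100,000")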

Additional Data

Teams are welcome to use additional data sources beyond FluSurv-NET for model development.

Forecast Evaluation

All forecasts will be evaluated against the weighted observations pulled from the FluSurv-NET system in week 28, and the logarithmic score will be used to measure the accuracy of the probability distribution of a forecast. Logarithmic scores will be averaged across different time periods, the seasonal targets, the four-week ahead targets, and locations to provide both specific and generalized measures of model accuracy. Forecast accuracy will be measured by log score only; nonetheless, forecasters are requested to continue to submit point predictions, which should aim to minimize the absolute error (AE).

Logarithmic Score

If $\mathbf{p}$ is the set of probabilities for a given forecast, and $p_i$ is the probability assigned to the observed outcome $i$, the logarithmic score is:

$$S(\mathbf{p}, i) = \ln(p_i)$$

For peak week, the probability assigned to the correct bin (based on the observed FluSurv-NET data) plus the probabilities assigned to the preceding and following bins will be summed to determine the probability assigned to the observed outcome. In the case of multiple peak weeks, the probabilities assigned to the bins containing the peak weeks, along with their preceding and following bins, will be summed.

For peak weekly rate and 1-4 week-ahead forecasts, the probability assigned to the correct 0.1 bin plus the probabilities assigned to an age group-specific number of preceding and following bins will be summed to determine the probability assigned to the observed outcome, with a minimum of one preceding and one following bin. For each age group, and for overall rates, bins covering up to plus or minus 10% of the observed value, rounded to the nearest 0.1, will be included on each side of the observed value. For example, if the observed overall peak hospitalization rate is 3.3 per 100,000, the bins within 10% of this value (0.3) above and below 3.3 will be included. Therefore, the probabilities assigned to all bins from 3.0 to 3.6 would be summed to determine the probability assigned to the observed outcome.

In the case of very small observed values, a minimum of one bin preceding and one bin following the observed bin will be included with the observed bin. For example, if the observed weekly hospitalization rate is 0.2 per 100,000, the probabilities assigned to the bins representing 0.1 and 0.3 will also be included. For all targets, if the correct bin is near the first or last bin, the number of bins will be truncated at the respective boundary. Undefined natural logs (which occur when the probability assigned to the observed outcome is 0) will be assigned a value of -10. Forecasts that are not submitted (e.g., if a week is missed) or that are incomplete (e.g., the sum of probabilities is greater than 1.1) will also be assigned a value of -10.

Example: At the conclusion of the season, FluSurv-NET showed that the 2016-17 overall weekly hospitalization rates peaked at 5.4 per 100,000. As a result, the window of plus or minus 0.54 rounds to 0.5, which spans the probability bins from 4.9 to 5.9. If a forecast predicts there is a probability of 0.1 (i.e., a 10% chance) that hospitalization rates peak at 5.4 per 100,000, with an additional 0.3 probability that they peak between 4.9 and 5.3 and a 0.2 probability that they peak between 5.5 and 5.9, then the forecast would receive a score of $\ln(0.6) = -0.51$. If the season had peaked in another week, the score would be calculated from the probability assigned to that week plus the probabilities assigned to the preceding and following weeks.
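
The arithmetic in this example can be reproduced in a few lines of R. This is a minimal sketch of the scoring rule as described above, not the official implementation (the FluSight package below contains the functions CDC will actually use); the bin probabilities are the illustrative values from the example.

# Minimal sketch of the log score calculation from the example above
bins  <- seq(4.9, 5.9, by = 0.1)   # 0.1-wide rate bins spanning the window
probs <- c(rep(0.3 / 5, 5),        # 0.3 total spread over 4.9-5.3
           0.1,                    # probability assigned to 5.4, the observed peak
           rep(0.2 / 5, 5))        # 0.2 total spread over 5.5-5.9

observed <- 5.4
window   <- round(0.1 * observed, 1)             # +/- 10% of 5.4, rounded: 0.5
in_window <- abs(bins - observed) <= window + 1e-9

p_assigned <- sum(probs[in_window])              # 0.6
score <- max(log(p_assigned), -10)               # floor scores at -10
score                                            # ln(0.6) = -0.51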

FluSight Package

The FluSight R package contains functions to help create and format forecasts, read and verify forecast CSVs, and score forecasts. These are the functions that will be used at CDC to verify and score submitted forecasts. Teams are welcome to use these tools to ensure their forecasts fit the required template and to score their forecasts prior to receiving official scores from CDC.

The package can be downloaded from GitHub.

# Install and load package
devtools::install_github("jarad/FluSight")

library(FluSight)

# Read in entry CSV
entry <- read_entry("your_csv.csv")

# Verify entry
verify_entry(entry, challenge = "hospital")
verify_entry_file("your_csv.csv", challenge = "hospital")

# Create file of observed truth from CDC surveillance data
truth <- create_truth(fluview = TRUE, challenge = "hospital", year = 2017)

# Expand observed truth to take into account additional bins:
# +/- 1 bin for peak week, +/- 10% of the observed value for rates
exp_truth <- expand_truth(truth, week_expand = 1, challenge = "hospital", expand_by_percent = TRUE, percent_observed = 0.10)

# Score a weekly entry against the observed truth
exact_scores <- score_entry(entry, truth)
expand_scores <- score_entry(entry, exp_truth)

Guidance Documents

Guidance for the 2017-18 Influenza Hospitalization challenge is available here.

An empty copy of the official submission template is available here.

Frequently Asked Questions

How do I see the most recently received forecasts?

To see the most recent forecasts on the visualization page, click in the visualization field on the week immediately preceding the vertical dashed line marked "Today".

How do I view forecasts for a particular age group?

To see forecasts for a particular age group, use the dropdown menu in the top right corner of the visualization pane.

How do I view forecasts for influenza like illness in the entire United States or a particular HHS Region?

These forecasts are hosted on the "FluSight 2017-18" challenge. Please go to the main EPI page by clicking the "Epidemic Prediction Initiative" logo in the top left corner and select the "FluSight 2017-18" challenge.

What is the "FluSight Avg" forecast?

The FluSight average is an ensemble forecast generated by taking the arithmetic mean of all submitted forecasts. Ensemble forecasts have a record of success in both weather and infectious disease forecasting, and taking the mean of all forecasts reduces the likelihood of basing a decision on a poor individual forecast.
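
In terms of the probability bins described above, the ensemble amounts to an element-wise average of each model's bin probabilities. The sketch below is an illustration under the assumption that all models report probabilities over the same bins; it is not the code used to produce the FluSight Avg.

# Illustrative sketch: averaging three models' bin probabilities
# (assumes all models use the same bins; not the official FluSight Avg code)
model_probs <- rbind(
  model_a = c(0.1, 0.2, 0.4, 0.2, 0.1),
  model_b = c(0.2, 0.3, 0.3, 0.1, 0.1),
  model_c = c(0.0, 0.1, 0.5, 0.3, 0.1)
)

ensemble <- colMeans(model_probs)   # arithmetic mean, bin by bin
sum(ensemble)                       # still sums to 1, so it remains a valid forecast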

How do I interpret the forecasts shown?

For all of the following explanations, assume that the confidence intervals have been set to 50%. You can choose either 50% or 90% confidence intervals by clicking the corresponding number in the top right corner of the visualizations.

Forecasts for each target are displayed in different sections of the visualizations.

Peak week and rate forecasts are shown by the stand-alone dots in the main graph. They represent a model's point forecast for the timing and intensity of the season peak. Mousing over a model's point forecast will pop up a display box with the model name and specific prediction values. The confidence intervals represent the range in which the model is 50% confident the peak week or peak weekly rate will fall.

Week-ahead forecasts are shown by the connected points. The dots represent a model's point forecast for the value of the FluSurv-NET weekly rate at a given week. Mousing over one of the dots will bring up a confidence band surrounding the point forecast. This band represents the range in which the model is 50% confident the observed FluSurv-NET weekly rate will fall.

Why don't the point forecasts and confidence intervals for peak forecasts always line up?

The visualizations pull data directly from the forecast files submitted by teams. Depending on a team's methodology, point forecasts may be generated in a different way than the underlying probability distribution the confidence intervals are calculated from.

If you have additional questions, please email flucontest@cdc.gov.

Participating teams can submit their forecasts here. If you are interested in participating, please email flucontest@cdc.gov.