Human behavior is key to building a better long-term COVID forecast
From extreme weather to another wave of COVID-19, forecasts give decision-makers valuable time to prepare. When it comes to COVID, though, long-term forecasting is a challenge because it involves human behavior.
While it can sometimes seem like there is no logic to human behavior, new research is working to improve COVID forecasts by incorporating that behavior into prediction models.
Ran Xu, a researcher in the UConn College of Agriculture, Health and Natural Resources Department of Allied Health Sciences, along with collaborators Hazhir Rahmandad from the Massachusetts Institute of Technology and Navid Ghaffarzadegan from Virginia Tech, has a paper out today in PLOS Computational Biology. In it, they detail how they applied relatively simple but nuanced variables to enhance modeling capabilities, with the result that their approach outperformed a majority of the models currently used to inform decisions made by the federal Centers for Disease Control and Prevention (CDC).
Xu explains that he and his collaborators are methodologists, and they were interested in examining which parameters affected the forecasting accuracy of COVID prediction models. To begin, they turned to the CDC prediction hub, which serves as a repository of models from across the United States.
“Currently there are over 70 different models, mostly from universities and some from companies, that are updated weekly,” says Xu. “Each week, these models give predictions for cases and numbers of deaths in the next couple of weeks. The CDC uses this information to inform their decisions; for example, where to strategically focus their efforts or whether to advise social distancing.”
The Human Factor
The data comprised over 490,000 point forecasts of weekly incident deaths across 57 US locations over the course of one year. The researchers analyzed how accurate the predictions were relative to one another across prediction horizons of up to 14 weeks. On further analysis, Xu says they noticed something interesting when they categorized the models based on their methodologies: More