The science community has responded to the COVID-19 pandemic with such a flurry of research studies that it is hard for anyone to digest them all, underscoring a long-standing need to make scientific publication more accessible, transparent and accountable, two artificial intelligence experts assert in a data science journal.
The rush to publish results has resulted in missteps, say Ganesh Mani, an investor, technology entrepreneur and adjunct faculty member in Carnegie Mellon University’s Institute for Software Research, and Tom Hope, a post-doctoral researcher at the Allen Institute for AI. In an opinion article in today’s issue of the journal Patterns, they argue that new policies and technologies are needed to ensure relevant, reliable information is properly recognized.
Those potential solutions include combining human expertise with AI to keep pace with a knowledge base that is expanding geometrically. AI might be used to collect and summarize research on a topic, for instance, while humans curate the findings.
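The article stays at the level of ideas, but as a rough sketch of what such a human-machine division of labor could look like, the snippet below pairs an off-the-shelf summarization model with a human accept/reject step. The model choice and the workflow are assumptions made for illustration, not a system the authors describe.

```python
# Illustrative sketch only: AI drafts digests, a human curator decides.
# This is not the system proposed by Mani and Hope.
from transformers import pipeline  # Hugging Face Transformers

# Off-the-shelf abstractive summarizer; the model is an arbitrary choice.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def draft_digests(abstracts):
    """Yield machine-drafted summaries that a human has approved."""
    for text in abstracts:
        summary = summarizer(text, max_length=60, min_length=20,
                             do_sample=False)[0]["summary_text"]
        # Human-in-the-loop step: the curator keeps or discards each draft.
        if input(f"Draft: {summary}\nKeep? [y/n] ").lower().startswith("y"):
            yield summary
```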
“Given the ever-increasing research volume, it will be hard for humans alone to keep pace,” they write.
In the case of COVID-19 and other new diseases, “you have a tendency to rush things because the clinicians are asking for guidance in treating their patients,” Mani said. Scientists certainly have responded: by mid-August, more than 8,000 preprints of scientific papers related to the novel coronavirus had been posted in online medical, biology and chemistry archives. Even more papers had been posted on such topics as quarantine-induced depression and the impact of decreased transportation emissions on climate change.
At the same time, the average time to review and publish new articles has shrunk; in virology, for example, it dropped from 117 days to 60.
This surge of information is what the World Health Organization calls an “infodemic” — an overabundance of information, ranging from accurate to demonstrably false. Not surprisingly, problems such as the hydroxychloroquine controversy have erupted as research has been rushed to publication and subsequently withdrawn.
“We’re going to have that same conversation with vaccines,” Mani predicted. “We’re going to have a lot of debates.”
Problems in scientific publication are nothing new, he said. As a grad student 30 years ago, he proposed an electronic archive for scientific literature that would better organize research and make it easier to find relevant information. Many ideas continue to circulate about how to improve scientific review and publication, but COVID-19 has exacerbated the situation.
Some of the speed bumps and guard rails that Mani and Hope propose are new policies. For instance, scientists usually emphasize experiments and therapies that work; highlighting negative results, on the other hand, is important for clinicians and discourages other scientists from going down the same blind alleys. Identifying the best reviewers, sharing review comments and linking papers to related papers, retraction sites or legal rulings are among other ideas they explore.
Greater use of AI to digest and consolidate research is a major focus. Previous attempts to use AI this way have faltered in part because of the figurative and sometimes ambiguous language humans use, Mani noted. It may be necessary to write two versions of each research paper: one crafted to draw the attention of human readers, and another written in a plain, uniform style that machines can parse more reliably.
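To make that idea concrete, here is a minimal, hypothetical sketch of what a machine-readable companion to a paper might contain: claims broken into explicit, uniform fields rather than figurative prose. The schema and field names are invented for illustration and are not proposed in the article.

```python
# Hypothetical sketch of a machine-readable companion to a paper.
# The schema is an assumption for illustration, not a proposed standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class Claim:
    intervention: str      # drug, method, or exposure studied
    outcome: str           # what was measured
    effect_direction: str  # "positive", "negative", or "null"
    evidence: str          # study design backing the claim

paper_claims = [
    Claim("hypothetical-drug-X", "28-day mortality", "null",
          "randomized controlled trial, n=400"),
]

# Emit the structured version alongside the human-readable paper.
print(json.dumps([asdict(c) for c in paper_claims], indent=2))
```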
Mani said he and Hope have no illusions that their paper will settle the debate about improving scientific literature, but hope that it will spur changes in time for the next global crisis.
“Putting such infrastructure in place will help society with the next strategic surprise or grand challenge, which is likely to be equally, if not more, knowledge intensive,” they concluded.
Story Source:
Materials provided by Carnegie Mellon University. Original written by Byron Spice.