
Could This A.I. Tool Help Prevent the Next Pandemic Nightmare?

MANAN VATSYAYANA/AFP via Getty

Despite a vocal cohort suspecting otherwise, many scientists continue to believe the novel coronavirus that has killed nearly five million people emerged when the deadly pathogen made the jump from animals to humans, a process called zoonosis.

Suffice it to say that after 20 months of lockdowns and despair, the world desperately wants to avoid another global-health crisis like this one. Now, a trio of scientists at the University of Glasgow in the U.K. think they have just the tool: an A.I. model that can identify animal viruses with a high risk of one day infecting humans.

Early tests, detailed in a new study in PLOS Biology, even purport to show how this very technology might have helped identify SARS-CoV-2, the technical name for the virus that causes COVID-19, before its documented emergence in Wuhan, China, in late 2019.

This is far from the first time A.I. has been dangled as a tool to help us understand how animal viruses can lead to human infections. Just this year, scientists at the University of Liverpool said they used A.I. to predict tens of thousands of previously unknown links that help explain how animal viruses may spill over into humans.

Armed with that information and other insights from the coronavirus pandemic, the Glasgow team developed its own A.I. model to evaluate zoonotic risk soon after a virus is first discovered—when hardly any other information about it is available.

“The ability to predict whether a virus can infect humans from just a genome sequence, while still working reliably for completely new viruses not seen by the model, sets it apart from other approaches,” Nardus Mollentze, a viral ecologist at the University of Glasgow and the lead author of the new study, told The Daily Beast.


Using a dataset of 861 virus species from 36 families that are known to be zoonotic, the model was trained to look at features in viral genomes that would suggest a potential to infect humans, and to then assess the probability that human infection could actually occur. According to Mollentze, the model performed as well as, or better than, similar models developed by other groups over the past year.
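To make the idea concrete, here is a minimal toy sketch of genome-composition-based risk scoring. It is not the authors' method: the feature (dinucleotide frequencies), the similarity measure, the two-sequence "training set," and every name in it are illustrative assumptions, standing in for the far richer genomic features and trained classifier described in the study.

```python
from collections import Counter

def dinucleotide_freqs(genome: str) -> dict:
    """Fraction of each overlapping dinucleotide in a genome sequence."""
    pairs = [genome[i:i + 2] for i in range(len(genome) - 1)]
    counts = Counter(pairs)
    total = len(pairs)
    return {p: counts[p] / total for p in counts}

def similarity(f1: dict, f2: dict) -> float:
    """Overlap between two frequency profiles (higher = more alike)."""
    keys = set(f1) | set(f2)
    return sum(min(f1.get(k, 0.0), f2.get(k, 0.0)) for k in keys)

# Hypothetical, tiny training set: toy sequences labelled by whether the
# (imaginary) virus is known to infect humans. Real genomes are millions
# of times longer and the real model used many more features.
TRAINING = [
    ("ATGCGCGCGCATATGCGC", True),   # "human-infecting"
    ("ATATATATATGCATATAT", False),  # "animal-only"
]

def human_infection_score(genome: str) -> float:
    """Positive score: genome composition resembles human-infecting viruses;
    negative score: it resembles the animal-only examples."""
    feats = dinucleotide_freqs(genome)
    pos = [similarity(feats, dinucleotide_freqs(g)) for g, y in TRAINING if y]
    neg = [similarity(feats, dinucleotide_freqs(g)) for g, y in TRAINING if not y]
    return sum(pos) / len(pos) - sum(neg) / len(neg)
```

The key property the sketch shares with the real model is that it needs nothing but a genome sequence as input, which is why this kind of approach can be applied to a brand-new virus before anything else is known about it.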

When the model was deployed on 645 additional animal viruses, it found that 272 exhibited a high risk of zoonotic spillover from animals to humans, and 41 were deemed “very high-risk” candidates. That doesn’t necessarily mean these viruses can already infect humans, but rather that, like SARS-CoV-2, a few mutations here and there might allow them to do so down the road.

Most scientists currently think the novel coronavirus circulated in bat or pangolin populations before infecting humans, even as some experts think more investigation is needed into the possibility of some kind of lab leak at a much-scrutinized research facility in Wuhan. For their part, the study’s authors say that additional tests of their model suggest it would have flagged SARS-CoV-2 as a high-risk coronavirus strain before the first human cases, even in the absence of knowledge about closely related zoonotic viruses, like the first SARS virus.

At the very least, the researchers behind the new study think they have enough evidence to suggest the model could be an inexpensive tool to apply to growing databases of animal viruses. That, in turn, could help initially identify moments when humans might need to be cautious if and when an animal population were to experience an outbreak.

The model isn’t perfect, and false negative and false positive rates during training were sometimes as high as one in four, depending on which parts of the model were given more weight. Mollentze emphasized this is a symptom of how little we know about the huge diversity of animal viruses—training the model on 861 viruses is not bad, but it’s just a sliver of the millions of viruses we’ve yet to even identify. As one opinion paper published in Philosophical Transactions of the Royal Society B pointed out, “no matter how sophisticated [A.I.] approaches become, they all face the fundamental task of overcoming data limitations.”

And while this model may only get better as time goes on (it’s already being applied to new virus discoveries), we might still have only a fuzzy idea of how we should use its predictions to protect ourselves. It’s easy to warn people which viruses might be a threat; it’s harder to actually reduce the risk of exposure (anti-maskers, anyone?).

But as Mollentze countered, “getting an initial risk assessment using our approach comes with little extra cost.”

In other words, given how crushing the coronavirus pandemic has been to the world, even an imperfect defensive measure is better than nothing.

Read more at The Daily Beast.
