Twitter bots have limited success spreading anti-vaccination messages

Photograph: Oscar Wong/Getty Images

Twitter users rarely see or retweet anti-vaccination content generated by bots, a study of millions of tweets found, suggesting the role of bots in spreading vaccine misinformation is limited.

The study was led by the University of Sydney’s associate professor Adam Dunn, who said that despite growing concern about the influence of bots in spreading misinformation, they appeared to be ineffective at shaping discourse around vaccines.

Dunn said that while many previous studies had examined how many bots there were and who was behind them, it was far more important to measure their impact.

His research team examined 53,188 randomly selected active Twitter users in the United States, monitoring their interactions with more than 20m vaccine-related tweets posted by both human-operated and bot-operated accounts between January 2017 and December 2019.



They found a typical Twitter user saw a median of 757 vaccine-related posts and a median of 27 posts critical of vaccination. Fewer than 0.5% of those critical tweets originated from bots, the study found; people were more likely to retweet anti-vaccination content that came from other people.

This is despite estimates that up to 15% of all Twitter accounts are bots – accounts operated automatically by software to post, retweet or reply to users. Bot accounts vary in sophistication, from simply reposting links to (often malicious) web pages to masquerading as humans.

“There is an assumption that the more bots that post, the more impact they have, but that’s not true if you’re not measuring what reaches people,” Dunn, who heads the university’s school of biomedical informatics and digital health, said.

“Thousands of tweets may never be seen if those accounts have no human followers. If you’re simply counting up bots, you’re not measuring reach or impact.”

The paper found there was a small group of users for whom vaccine-critical tweets made up at least half of their vaccine-related retweets during the study period.

“Engagement with any vaccine-related tweets, vaccine-critical tweets, and bots was higher in the 5.8% of users who were embedded in communities where vaccine-critical content was common,” the study found.

“The overwhelming majority of the vaccine-related content seen by typical users in the US is generated by human-operated accounts, not bots.”

Dunn said that “the concentration of misinformation is in this small community and that is where the problem lies”.

“Rather than focusing on bots, we need to engage public health communication specialists whose aim is not to change the opinions of vocal critics of vaccines, but to persuade the silent observers of those critics, and fence-sitters,” he said.

Engaging with people who were anti-vaccination, and with those who listened to them, needed to be done by experts, Dunn added, because calling out their misinformation in the wrong way could do more harm than good by bringing their views out of a small community and into the mainstream.


“We need to engage respectfully, and target the misinformation and not the person,” he said.

“Previous studies looking at misinformation show we made a huge mistake by looking at who is producing the misinformation, when in fact we need to focus on those exposed to it and engaging with it accidentally but who might be influenced.”

Dr Holly Seale, a senior lecturer with the school of population health at the University of New South Wales, said the role of trolls and bots was often sensationalised.

“We didn’t really have any sense of the knock-on impact these bot systems were having, and whether content was just being put out into the echo chambers of social media or was being picked up and actually passed on. It’s critical to know that, and this study helps us to understand it,” she said.

Seale said it would be interesting for researchers to examine the vaccine-critical content that was shared more closely.

“What were the messages in those tweets?” she said. “This paper didn’t really go into that, and that’s not a criticism, but it would be important to know what about vaccines people were critical of, and to know more about how vaccine-critical was defined.”