(Reuters Health) – Malicious bots may be the source of a majority of COVID-19 misinformation spreading across social media, a new study based on Facebook sharing patterns suggests.
After identifying a subset of Facebook groups that appeared to be most affected by automated link sharing, researchers determined that these groups were more likely than others to receive posts claiming masks harm health, with links to a scientific study that, in fact, showed the opposite, according to the report published in JAMA Internal Medicine.
Social media companies have said they want to get rid of bots, the automated accounts behind this content, said study coauthor John Ayers, co-founder of the Center for Data Driven Health and vice chief of innovation in the division of infectious diseases at the University of California, San Diego. But they are going about it in a very inefficient way – scrutinizing individual posts for misinformation.
“The coronavirus pandemic sparked what the World Health Organization has called an ‘infodemic’ of misinformation,” Ayers said. “But they hadn’t talked at all about bots like those used in the 2016 election.”
“Are bots more prevalent than people think in sharing misinformation?” Ayers said. “They are. And there’s an easy fix for it. It’s easy to identify bots. You just look for coordinated behavior, like Facebook groups sharing the exact same content within seconds.”
Instead of zeroing in on bots, social media companies continue to evaluate whether individual posts contain misinformation, Ayers said. “That’s like playing Whac-A-Mole,” he added.
In an analysis of 299,925 posts made to 563 Facebook groups, Ayers and his team identified a subset of groups that seemed especially susceptible to bot influence. Those groups averaged 4.28 seconds between shares of identical links, compared with 4.35 hours for the groups least influenced by bots.
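The bot signal the researchers describe is essentially a timing heuristic: when identical links land in group after group within seconds, the sharing is coordinated rather than organic. The following is a minimal sketch of that idea in Python, assuming hypothetical post records of (group, link, timestamp); the field layout and the 10-second cutoff are illustrative assumptions, not the study's actual parameters.

from collections import defaultdict
from statistics import median

# Hypothetical post records: (group_id, url, timestamp in seconds).
posts = [
    ("group_a", "https://example.org/danmask19", 0.0),
    ("group_b", "https://example.org/danmask19", 3.5),
    ("group_c", "https://example.org/danmask19", 7.1),
    ("group_d", "https://example.org/danmask19", 16_000.0),
]

def inter_share_gaps(posts):
    """For each link, sort its shares by time and record the gap that
    precedes each group's share of that link."""
    by_url = defaultdict(list)
    for group, url, ts in posts:
        by_url[url].append((ts, group))

    gaps = defaultdict(list)  # group -> seconds since the previous share of the same link
    for shares in by_url.values():
        shares.sort()
        for (prev_ts, _), (ts, group) in zip(shares, shares[1:]):
            gaps[group].append(ts - prev_ts)
    return gaps

gaps = inter_share_gaps(posts)
for group, g in gaps.items():
    typical = median(g)
    # Near-instant resharing of identical links is the coordination signal
    # used to separate the most and least bot-influenced groups.
    label = "likely coordinated" if typical < 10 else "organic-looking"
    print(f"{group}: median gap {typical:.1f}s -> {label}")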
Then the researchers monitored how the groups shared a link to one of the most-shared studies of all time, according to Altmetric. The November 2020 publication of results from the Danish Study to Assess Face Masks for the Protection Against COVID-19 Infection (DANMASK-19) trial (https://bit.ly/3uZTVpK) was chosen because masks are an important health measure for controlling the pandemic but have also become a source of heated debate, Ayers said.
The researchers focused on the five days following the study’s online publication by the Annals of Internal Medicine (November 18 through November 22, 2020), logging a total of 712 posts with direct links to DANMASK-19 shared in the 563 public Facebook groups. Of those posts, 39% were made to the groups most influenced by bots, compared with 9% made to the groups least influenced by bots. Among posts to the most bot-influenced groups, 19.8% claimed the study showed that masks harmed the wearer, contradicting its actual findings.
Posts sharing the DANMASK-19 report in the groups most influenced by bots were 2.3 times more likely to claim masks harm the wearer and 2.5 times more likely to make conspiratorial claims than posts in the groups least influenced by bots.
“We know from our case study that bots are the primary pathogen of misinformation on social media,” Ayers said. “But we don’t know exactly what that means. But we do know that they could be undermining critical health institutions.”
Ayers doubts the social media companies will ever completely wipe out bots. In the end, bots may even benefit them by creating controversy and fanning disagreement, driving more people to the content.
The new findings are “rather spooky,” said Barun Mathema, an assistant professor of epidemiology at Columbia University’s Mailman School of Public Health in New York City. “It’s quite a significant study. The sample size is quite small but that doesn’t necessarily detract from the findings since they picked a high-profile example. It’s also telling you that this is pretty likely to be a serious underestimate of the problem.”
“Mask wearing has been a lot more polarizing than other things,” said Mathema, who wasn’t involved in the study. “I can only imagine the types of misinformation out there around vaccines. These types of subversion have real consequences, especially when you consider that there is a big chunk of individuals in the U.S. not getting vaccinated during a pandemic.”
Dealing with the problem really does require a systemic strategy, Mathema said. “The authors are right,” he added. “Playing Whac-A-Mole is a losing game.”
SOURCE: https://bit.ly/3gkw3Il JAMA Internal Medicine, online June 7, 2021.