At the height of the 2018–20 Ebola outbreak in the northeastern Democratic Republic of Congo, dozens of new cases were being identified each week. Trust in the government was exceptionally low, as officials had used Ebola as an excuse to disenfranchise opposition voters, and people were suspicious of the vast sums of money and international workers pouring into the country to combat a disease that many had never seen before.

Several months earlier, in the early stages of the outbreak, researchers had studied more than 80,000 WhatsApp messages and found that 10,400 of them contained misinformation about the disease. Over the next several months rumors spread that Ebola wasn’t real; that it was a way for foreigners to murder Congolese people and steal their organs; that it was nothing more than a pretext for corruption and embezzlement. In a separate study, researchers found that the more willing an individual was to believe misinformation, the less likely they were to take preventative measures against Ebola.

What happened in Congo was an early example of what the World Health Organization is now calling an “infodemic” — the excess of information, particularly misinformation, that often coincides with a public-health crisis and makes it difficult to distinguish credible claims from false ones. It is not uncommon for misinformation to accompany an outbreak: During the 1918 flu pandemic, which started during the final year of World War I, rumors spread on each side blaming the other for intentionally spreading the virus. But the major innovations of the past two decades — and specifically, social media — have made it easier than ever for mistruths to gain global traction.

By the time Covid-19 was declared a pandemic, it was well understood that the internet, and especially social media platforms, were ideal hosts for mis- and disinformation. The anti-vaccine movement incubated for years in Facebook groups and the feeds of wellness influencers before it went mainstream. QAnon, a conspiracy movement that believes a satanic cabal controls the world and traffics children, began on the message board 4chan before hopping, viruslike, into the mainstream of social platforms. Rumors about 5G abounded online before they became a focal point of coronavirus conspiracy theories. In a 2018 piece for Nature, Dr. Heidi J. Larson, the director of the Vaccine Confidence Project at the London School of Hygiene and Tropical Medicine, noted that the combination of bad science, nefarious actors, and “super-spreaders” — individuals who promote harmful content across platforms — was likely to make the next pandemic (that is, our current one) far more difficult to control. When mixed with a lack of public trust in institutions, all the elements were in place for an infodemic.

Part of the promise of social platforms is to connect us with people we already know or who share our affinities (even if they happen to include a suspicion of vaccines). The surge in these kinds of connections, says author Rachel Botsman, has been part of a wider, technology-enabled shift to an era of “distributed trust” in which we’re more likely to place our faith in individuals than in institutions. Of course, this has consequences. According to Patrick Vinck, a professor of global health at Harvard’s T.H. Chan School of Public Health, when institutional trust is low, people are more likely to believe misinformation. That, in turn, can let destructive phenomena — like pandemics or extremism — spread unchecked.

The most commonly offered solution for this crisis of trust is a return to reliable sources, such as local news outlets or community leaders. And this has proven effective. A recent study of coronavirus misinformation in Zimbabwe found that people who received coronavirus information from trusted local civil-society organizations via WhatsApp were more likely to abide by lockdown protocols than those who didn’t get the messages.

But as we’ve seen over the past year, relying on local sources is not a cure-all. In fact, it sometimes has disastrous effects.

Zapan Barua, a professor of marketing at the University of Chittagong in Bangladesh, singled out local religious leaders in Bangladesh as being particularly effective at swaying individual behavior — but not always for the better. At the beginning of the pandemic, “many faith leaders were telling people [over social media] that if they came to the mosque or the pagoda, then God would save them,” said Barua. Similar examples have surfaced around the world: In Mexico, First Draft reported that religious leaders were promoting chlorine dioxide, a potentially lethal form of bleach, as a coronavirus preventative, and in India, a doctor told his nearly one million YouTube subscribers that Covid-19 could be cured by making small dietary changes. In her recent research, Irene Pasquetto, an assistant professor of information at the University of Michigan, found that most groups on closed messaging apps like WhatsApp or Telegram are small, and usually made up of people who know each other. These trust relationships make information shared in these contexts especially believable.

In the real world, our understanding of information is contextual. When somebody knows that the neighbor critiquing their outdated decor is an interior designer looking for new clients, they’ll consider what they hear with that in mind. But online, the algorithms that decide what to show us next don’t have this kind of context. All they “know” is that if you’ve recently bought a set of tarot cards online or shared an astrology meme, you might be interested in seeing anti-vaccine content. There’s no understanding of the reasons why you might have done this — a joke, a gift — only the outcome. (Though algorithmic amplification is not an issue with closed messaging apps, context and intent are often erased when images and blocks of text are copied and shared from group to group.)

In the absence of these social cues, it’s worth considering how trust is developed in the first place: through consistent and repeated demonstrations of accountability. Social media platforms were initially praised for giving regular people a means to demand accountability from traditional institutions — a move that was especially revolutionary in parts of the world where governments and leaders have historically been able to act with impunity. But having given the people a digital megaphone, the platforms left it to individual users to regulate how they used it. Years on, we’ve seen how the platforms’ refusal to accept accountability for what is published on them has created vacuums of authority that various actors are now fighting to fill.

Even before a global outbreak strained their systems and sent (some of) their workers home, social media companies were already struggling to live up to their own moderation standards. Enforcement varies across regions and languages, and misinformation stamped out on one platform is often able to flourish on another. There’s a logic to this — social media companies are for-profit enterprises, and in general, any traffic is good traffic. These companies answer to their shareholders, and to their users only when the bottom line is at risk. Our current infodemic has compelled people around the world to question those in authority, including social media companies. Perhaps it will also help us remember what institutions are supposed to do in the first place.

Patrick Vinck, the public-health researcher, said it’s important to remember that, often, false information reflects very real fears and grievances. If institutions are smart, they can turn a crisis into an opportunity to show that they really are acting in people’s interests. “And when they do that, then they can regain some level of trust.”