Wearing a plastic face shield and his orange monastic robes, Ashin Wirathu walked into a Yangon police station on November 2 after more than a year on the run. Until he went into hiding in June of last year, Wirathu, a nationalist preacher with links to the Tatmadaw — the Burmese military — was one of Myanmar’s most prominent Islamophobes, spreading bigotry and disinformation online. In May 2019, he was charged with sedition, but he never showed up for his court date.
Wirathu’s surrender came in the final stages of an election campaign that once again exposed Myanmar’s racial and religious tensions, three years after a surge in hate speech and nationalist sentiment drove a campaign of ethnic cleansing targeting the country’s Muslim Rohingya minority.
The monk’s act was a piece of theater designed to put him and his anti-Muslim ideology back in the public eye and feed the rumors and misinformation that were circulating online. Activists braced for a surge in hate speech on social media. It never came. The hard news of his arrest was covered by the mainstream press, but online, his supporters were strangely muted. Facebook pages sharing his views were flagged by civil society groups and quickly shut down by the platform.
“He didn’t get anything like the megaphone that he had hoped to get,” says Victoire Rio, a Yangon-based researcher and advisor to the Myanmar Tech Accountability Network, a consortium of nonprofit groups working on disinformation and hate speech. “The whole thing got fairly well contained, and that’s in big part because Facebook was able to take action.”
Facebook has faced years of criticism for its failings in Myanmar. In 2018, the United Nations alleged the platform allowed the proliferation of hate speech that incited the genocide. Campaigners in the U.S. and in Myanmar had been pressuring the company to prepare for the 2020 elections to prevent further violence.
In response, Facebook began trialing a new, more active approach to dealing with hate speech and political disinformation in the run-up to the vote on November 8. It involved limiting users’ ability to reshare old pictures without context — a common misinformation tactic — and working with local partners to verify political parties’ pages and fact-check stories.
Most significantly, the “community standards” Facebook uses to police content on its platforms were expanded. Usually, the company does not remove misinformation unless it creates a threat of imminent harm, but for a few months in Myanmar, that policy was broadened to include anything that could “suppress the vote or damage the integrity of the electoral process” — for example, a baseless rumor that the country’s de facto leader, Aung San Suu Kyi, had died from Covid-19, which circulated before the election. Normally, the company’s community standards define “hate speech” narrowly as attacks on people, not concepts — Muslims, but not Islam. That was changed to prevent users from targeting the religion itself or attacking “invaders” or “illegals,” derogatory terms that refer to the Rohingya.
It is the first time the company has created country-specific community standards.
“That was a huge deal,” Rio says. “A lot of the problematic content is not framed as hate speech or as a call to violence in the way that Facebook likes it to be, like Cluedo — the weapon, the victim, the location, the whole thing.”
Myanmar has a well-documented problem with online hate speech and malicious falsehoods, which often attack the country’s Muslims. Nationalists and ultra-conservative Buddhists have successfully portrayed Muslims as a threat to the country’s culture and majority religion and as foreign interlopers. This hostility is so widespread that in the 2020 election, mainstream political parties competed to portray their opponents as more sympathetic to Islam.
One widely shared claim, originating from a candidate for the military-backed Union Solidarity and Development Party, was that Suu Kyi’s National League for Democracy (NLD) had 42 Muslim candidates. The true number was two. An image circulated on social media showing an NLD meeting in which Buddhist monks appeared to be sitting on the floor while Muslim attendees sat on chairs, which, presented without context, looked like a calculated insult. Normally, such falsehoods would not have breached Facebook’s community standards.

In October and November, disinformation that tried to exploit religious differences to distort the election result was taken down by Facebook, as were fake “leaked documents” feeding conspiracy theories linking Suu Kyi to the financier and human rights activist George Soros and unfounded claims by the opposition that the vote was rigged.
Several of Facebook’s “trusted partners,” civil society organizations that the company gives a kind of express lane for reporting violations of community standards, told Rest of World that in October and November, when they reported falsehoods and hate speech, their complaints were quickly escalated and the flagged content was removed. Previously, they found that their interventions carried little weight.
“To their credit, [Facebook has] thrown in much more effort, many more resources, for this election than we’ve seen them do … in the last few years,” says Jes Kaliebe Petersen, CEO of Phandeeyar, a social-technology accelerator in Yangon. “That’s been very positive. They’ve had boots on the ground; they’ve had people from their Singapore team dedicated to this. They’ve engaged with civil society.”
In October, Facebook removed dozens of accounts for “coordinated inauthentic behavior,” its term for spreading disinformation, and dismantled networks of fake users and clickbait pages that were pushing political misinformation. (Data for Facebook’s takedowns in November is not yet available.) Activists told Rest of World that the approach led to a significant reduction in disinformation and hate speech compared to the previous election five years ago.
“I’m not saying there is no hate speech in 2020, but I think it’s much less than in 2015,” says Harry Myo Lin, an interfaith activist. “They have a more active response to reports and hateful content. It’s not perfect, but I think they did better than before.”
One side effect of Facebook’s crackdown on hate speech was that some actors migrated to other platforms, including the Russian social network VK and YouTube, exposing the latter’s own deficiencies in dealing with malicious content. “YouTube, now, is the most problematic one,” says Myat Thu, founder of the Yangon-based fact-checking site Think Before You Trust.
Google-owned YouTube did not respond to a request for comment.
With the vote now over — the incumbent NLD won a decisive victory — Facebook is set to return to its pre-election policies on December 6, leaving activists concerned that the platform will revert to being a place where hate and disinformation continue to circulate.
“The big question is: What happens next?” Petersen says. “Is this a sign of a new level of consistent support and focus from Facebook, or is it like a SWAT team that they put in, and now … the SWAT team is going to leave again, and we’re back to where we were before?”
In a statement, Facebook said it considers the experiment a success and could adapt it for other countries in the future.
At the Myanmar Tech Accountability Network, Rio says that while Facebook’s experimental approach still had profound flaws, it demonstrated that the company does have the tools to deal with the corrosive misinformation and bigotry that have proliferated on the platform worldwide.
“We need to give them credit where it’s due. They did pretty well,” Rio says. “But if anything, it shows that when they want to, they can.”