The soldier looks barely out of his teens, in military fatigues and with his assault rifle on his lap. He holds his magazine up to the camera, showing the live rounds inside it. “You are going to demonstrate today? You can demonstrate peacefully,” he says. “If you cross the line and throw a stone or use a slingshot … these are real bullets. We will shoot you.”

The video, posted to TikTok on March 3, was liked more than 15,000 times before it was taken down five days later. It is one of hundreds of videos government soldiers in Myanmar have posted to the app since the military seized power in a coup d’état on February 1. Some of the clips are straightforward propaganda, employed to build sympathy for security forces and justify their brutality. Some are misinformation, used to divide and confuse anti-coup protesters. Others are outright threats: soldiers and police brandishing guns and ammunition, warning demonstrators that they are prepared to use them. 

Many of the videos have been removed since early March, after TikTok swung belatedly into action following reports by Vice World News and Reuters of the surge in hate speech and threats of violence on the platform. Activists say that, although TikTok is now starting to clean house, its approach to enforcing its own community standards has been slow and inconsistent, and new videos of this type are posted to the platform daily. In a statement to Rest of World, the company acknowledged that it had not taken a proactive approach in stopping the spread of potentially incendiary content. 

The proliferation of these videos comes at an extremely volatile time in the country: As of March 17, around 150 people have been killed in Myanmar amid protests in response to the coup, many shot in the street by police and soldiers.

“This platform is not taking action [or] taking responsibility,” said Htaike Htaike Aung, executive director at Myanmar ICT for Development Organization (MIDO), a Yangon-based internet-freedom group. “This will sooner or later lead to something like how Facebook [failed] in the Rohingya crisis, by being the platform hosting all these dangerous narratives.”

Activists and experts told Rest of World that TikTok’s failures were distressingly familiar to anyone acquainted with how Facebook was used to help drive an ethnic-cleansing campaign in Myanmar in the 2010s. Members of the Myanmar military, known as the Tatmadaw, spread misinformation across the platform, stoking division, hatred, and, eventually, violence. In 2018, United Nations human rights experts said that unchecked hate speech on Facebook contributed to the genocide against the country’s Rohingya minority. At the time, the company had too few Burmese-speaking moderators, and it failed to build an understanding of, and links to, civil society, blinding it to the language and tactics of malicious actors operating in the country. 

Now, members of Myanmar’s digital-rights community say that TikTok, a social media platform valued at $50 billion, is repeating these mistakes. These individuals described the company to Rest of World as a “black hole” that had neither made inroads with civil society nor hired enough Burmese-speaking experts to enforce its own community standards ahead of a wholly predictable surge in misinformation and hate speech. The company only began recruiting new Burmese-language moderators in December, months after the platform had already become fertile ground for military-linked propaganda.

The company’s failures highlight a fundamental issue with social media platforms, whose growth has often outpaced their ability — and willingness — to invest in properly moderating content. Facebook’s mistakes in Myanmar and elsewhere suggest that hate speech and misinformation are often coded and rooted in the place where they occur and can only be properly monitored and moderated by people with a deep understanding of that context.

These companies’ models also predispose them toward scalable technological solutions to complex social problems that are often impossible to make sense of without local knowledge and human judgment. Most are gambling that they can develop algorithms to do the nuanced jobs of human moderators, but for video, audio, and livestreaming platforms, such algorithms could be years away, assuming it is even possible to develop them at all. In the meantime, platforms are stuck in a vicious cycle of crisis, public outcry, and eventual action. Yesterday it was Facebook; today, it is TikTok.

“It’s a systemic problem within these platforms,” said Ariadna Matamoros-Fernández, chief investigator at the Digital Media Research Centre at the Queensland University of Technology. “And I think we need to be aware because tomorrow it will be another platform, and the day after, another.”

A plainclothes policeman takes a photograph during protests in Myanmar in February 2021. (Panos Pictures/Redux)

U Htein Min Khine is 58 and lives in the central city of Meiktila. He was in his final year of university when he participated in the 1988 uprising against the previous military junta, which ruled Myanmar, then known as Burma, from 1962 to 2011. That movement, in which hundreds of thousands of people participated, was met with violent military repression. Now a representative of the 88 Generation Peace and Open Society, a pro-democracy group, he remembers how, back then, the military sent agitators into protests to create chaos and confusion and help justify its brutal crackdowns. “Now,” he said, “they are trying hard to do the same.”

Over the past decade, the military has successfully adapted these tactics for social media. Its proxies and supporters, including ultranationalists and radical Buddhist groups, used Facebook and other platforms to whip up hatred against the Rohingya, creating cover for the Tatmadaw’s genocide. Ahead of the country’s 2020 general election (won easily by the incumbent National League for Democracy, led by State Counselor Aung San Suu Kyi), similar networks tried to undermine the integrity of the vote by alleging fraud, laying the groundwork for the February coup.

TikTok exploded in popularity in Myanmar in 2019 and 2020, after major telecoms networks began to bundle it with their services. Much as Facebook had done with its “Free Basics” model a few years earlier, the bundling meant users could access TikTok practically for free.

Activists began seeing military-linked content circulating on TikTok before the 2020 election, as Facebook began more actively seeking out disinformation networks on its own platform.

Following the coup, Facebook banned all Myanmar-military-linked pages from its platforms, further driving the Tatmadaw onto TikTok as it tries to disrupt the protests. “They are targeting the younger generation,” Htein Min Khine said. “[They want] to break the younger people’s minds and make them not participate in the demonstration.”

Htein Min Khine said there have been videos posted to TikTok in which people pretending to be members of the civil-disobedience movement try to dissuade citizens from taking to the streets.

“The posts that I noticed are ‘Protesting is very dangerous, you can die anytime,’ and ‘Please don’t go out, we have no weapons, so if we confront the armed forces, the only way is [death],’” Htein Min Khine said. “They pretend like they are one of us.”

Many of the videos on TikTok are just propaganda: police and soldiers asking for support and sympathy by trying to highlight the hardships that they face in putting down the protest movement. The videos come from individual accounts, rather than from official channels, making it difficult to judge whether they are coordinated or spontaneous. Some accounts have propagated rumors that “5,000 kyat protesters” — paid agitators — were circulating among the demonstrators, a lie activists fear could spread discord and divide the protest movements, according to Suu Chit, a human rights activist in Mandalay.

Still, some of the messages are more openly belligerent: soldiers brandish guns and warn protesters. The soundtracks of these videos often feature military music or protest chants in which the words have been changed to threaten demonstrators with violence.

“The posts that I remember are mostly ‘We don’t care about you citizens. We are soldiers, so we have the responsibility to crack down on all of you,’ and ‘If you come out [to protest] on the street, we will [kill you],’” Suu Chit said.

TikTok’s community guidelines prohibit the display of firearms as well as “misinformation that incites hate or prejudice, misinformation related to emergencies that induces panic, content that misleads community members about elections or other civic processes, and conspiratorial content that attacks a specific protected group or includes a violent call to action, or denies a violent or tragic event occurred.”

Activists and analysts say that some posts, using racist terminology, contain insinuations that protesters are being manipulated by foreign forces and Muslims, an alarming echo of the narratives that stoked the Rohingya crisis. The danger, said one political analyst in Yangon, who asked to remain anonymous for fear of reprisals, is that military personnel themselves will believe the misinformation and be spurred on to acts of violence.

“The troops are being led to believe that this is a national cause; they are not repressing the public for nothing. This is a national cause that they have to guard against,” the analyst said.

A police officer looks out from between riot shields during protests in Myanmar. (Panos Pictures/Redux)

Ironically, one of the legacies of Facebook’s failures in Myanmar is a thriving digital-rights ecosystem in Yangon. On the front line of weaponized social media for more than half a decade, the Burmese tech community has developed tools and tactics for dealing with Big Tech. They have become experts at using Facebook’s interface to identify and flag falsehoods and hate speech, often much more effectively than the company does itself. Facebook has since invested in this ecosystem. The company’s reckoning in Myanmar underscored that a one-size-fits-all, Anglocentric approach to content moderation simply didn’t work in a place where hate speech was heavily coded and tied to the country’s unique context: At the height of the Rohingya crisis, Facebook notoriously had only a handful of Burmese-speaking moderators.

In 2020, Facebook trialed a new system for identifying and removing problematic content. It expanded its local staff and gave its “trusted partners” — NGOs and fact checkers in Myanmar — an unprecedented mandate to identify hate speech and election-related misinformation. The strategy has attracted qualified praise from activists.

TikTok, by contrast, has not built links with local organizations. “They’ve not even reached out to the digital rights community,” Htaike said. “We don’t know who they are, what they are doing, or even if they are doing anything at all.”

Rest of World contacted TikTok to clarify how many Burmese speakers it has on its moderation teams, how it intends to strengthen enforcement of its community standards in Myanmar, and how it engages with civil society. Via its public relations agency in Singapore, the company declined to provide specific details. 

In a statement, it acknowledged that its approach had been reactive rather than predictive: “When we identified the rapidly escalating situation in Myanmar, we quickly expanded our dedicated resources and further stepped up efforts to remove violative content. We aggressively banned numerous accounts and devices that we identified promoting dangerous content at scale. Additionally, we’ve also been working closely with local partners and civil groups to identify and address concerning new trends and take appropriate and timely action. We have global fact-checking partners working aggressively to verify and remove inaccurate or false information.”

According to two people with knowledge of TikTok’s hiring process, ByteDance, TikTok’s parent company, was actively trying to recruit Burmese-speaking “QAs,” or quality-assurance staff, in late December 2020, more than a month after the election in Myanmar. Those roles, based in Malaysia, involve coordinating with an external contractor to enforce community standards in Myanmar. As of March 15, TikTok was advertising three Burmese-speaker roles in Singapore, Malaysia, and Thailand, including a “community content management specialist,” as well as one other listing, for a Singapore-based “Program Manager, Trust and Safety,” which says, “Fluency in Burmese and/or Cantonese is a plus.”

Researchers and activists have reacted with exasperation to how the Myanmar military is weaponizing TikTok.

“You would think there would be more of a playbook by this point,” said Emma Llansó, director of the free-expression project at the Center for Democracy and Technology in Washington, D.C. “In a country like Myanmar, where … there’s the very recent history of a genocide fueled by social media activity, that should be at the top of any company’s list, if they know there is a significant population on their service. That’s something that should be prioritized because the indicators are so obvious.”

This speaks to a wider problem at social media platforms, Llansó said. The companies generally try to keep their structures lean, scaling through technology rather than by adding people. They outsource human moderation where they can, buying it as a service instead of investing in staff. 

“It would be ideal to have everyone in-house to oversee everything, but if you try to be realistic, I don’t think it is likely in the near future,” said Nuurrianti Jalli, an assistant professor of communication studies at Northern State University in South Dakota, who researches information disorder and social media in Southeast Asia. “But it’s a resources thing.”

For large social media companies, being reactive to crises as they emerge, rather than putting resources in place proactively, makes financial sense. “That will cost less money,” Jalli said. “Something happens in Myanmar, let’s find someone who can look into this.”

In private conversations with journalists and researchers, which these businesses prefer to on-the-record briefings, they often talk up the potential of automated-moderation systems to replace human operators altogether. But while the technology for parsing text and still images has developed rapidly, video and audio content is far more complex. Experts say that TikTok and others need to invest heavily in human moderation and in building connections with civil society, both to deal with the immediate problem and to ensure that their AI systems do not encode the same blind spots they have as outsiders.

“Obviously there is overt racism, there is overt hate, but most of the time, [perpetrators] use coded language. And this coded language evolves,” said Matamoros-Fernández of the Queensland University of Technology. An algorithm would need to do more than just understand hashtags and obviously offensive or threatening terminology; it would have to be able to adapt dynamically. In Myanmar today, for example, it would need to distinguish between the chants used by the nonviolent pro-democracy movement and the co-opted versions of those chants, whose lyrics call for killings, both of which appear on the platform.

“There’s memes, there’s sounds, there’s GIFs, stickers. It can encode hate, all this visual material. But these algorithms are not really fine-tuned to identify hate and humor or other kinds of memetic practices,” Matamoros-Fernández said. “This idea of efficiency and scale without really knowing context and language, it’s not working. … Automation is not the only way to go.”

Myanmar activists said that TikTok has to act now if it wants to avoid sparking the same type of violence that Facebook did just a few years ago. 

“We don’t believe that TikTok is a responsible business,” Suu Chit, the activist in Mandalay, said. “We have nothing to say about [TikTok]. If people want to use it, I would encourage them to use it just for fun, only because they are not a responsible company.”