When Facebook whistleblower Frances Haugen was asked what motivated her to share a trove of internal company documents with 18 news organizations and researchers in India and the Middle East, her response was straightforward. “The reason I wanted to do this project is because I think the Global South is in danger,” she said.

Now, the publication of the so-called “Facebook Papers” confirms what activists, journalists, and civil society organizations in the Global South have been alleging for years: that Facebook chronically underinvests in non-Western countries, leaving millions of users exposed to disinformation, hate speech, and violent content. 

Over the last 72 hours, Rest of World combed through more than 60 pieces of reporting from the New York Times, CNN, the Washington Post, and the rest of the Facebook Papers consortium to offer a country-by-country view of how Facebook’s repeated underinvestment around the world has helped spur hate, violence, and even armed conflict.

Ethiopia

Misinformation and hate speech circulate widely on Facebook in Ethiopia, inflaming ethnic violence in the country’s ongoing civil war, according to multiple stories based on the Facebook Papers.

In June 2020, before the beginning of the country’s ongoing civil war, an internal report by Facebook cited Ethiopia as one of the platform’s most at-risk countries. The report indicated the company still did not have automated systems, called classifiers, that could adequately detect harmful posts in Amharic and Oromo, the two most-spoken languages in Ethiopia, according to CNN. Facebook’s failure to hire moderators and to develop AI systems to adequately identify hate speech and misinformation in regional languages and dialects has been well-documented in countries across the Global South, and is one of the most consistent findings from the Facebook Papers. One Facebook team wrote back in December 2020 that the company’s integrity operation “doesn’t work in much of the world,” and cited high-risk countries like Ethiopia as an example, according to a Wall Street Journal story from last month.

Since the Tigray War between the Ethiopian government and the Tigray People’s Liberation Front began in November 2020, multiple human rights groups have documented ethnic cleansing across the northern Tigray region, where the minority Tigrayan population has been targeted in mass killings by armed ethnic Amhara forces and by troops from neighboring Eritrea.


A report distributed in March 2021 by Facebook employees included evidence that an ethnic Amhara militia group was operating a network of accounts on Facebook to fundraise and recruit, as well as “seed calls for violence,” according to CNN. The militia, called Fano, has been linked to killings of civilians, rape, and looting in Tigray, and has fanned ethnic tension through Facebook posts. Regarding the moderation of ethnic hate speech on Facebook in Ethiopia and coordinated misinformation campaigns involving Tigrayans, the team wrote at the time, “current mitigation strategies are not enough.”

In response to the Journal’s reporting on Facebook operations around the world, Facebook said that it has since increased its capacity to review content in Ethiopian languages and improved its automated systems for flagging harmful content. This included hiring a dedicated team focused on risks in Ethiopia. In June, for example, Facebook announced it had removed a network of coordinated accounts operating in the lead-up to the reelection of Prime Minister Abiy Ahmed.

Still, Haugen said in her congressional testimony that Facebook’s role in inciting violence in Ethiopia was a driving factor in her decision to go public, drawing parallels with the platform’s role in the genocide of Rohingya Muslims in Myanmar. “What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying, no one wants to read the end of it,” she said.

India

India is Facebook’s largest market, with over 340 million users. It’s here that the Facebook Papers provide the most damning example of the consequences of the platform’s underinvestment in moderating harmful content and failure to enforce its own community standards. 

In 2019, a Facebook researcher set up a “test account” as a user in the southern state of Kerala and began watching videos and joining groups recommended by Facebook’s algorithm. The account was soon inundated with hate speech, misinformation, and posts glorifying violence, according to the New York Times. “Following this test user’s news feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the researcher wrote.

Although Facebook has been operating in India since 2006, the platform did not deploy resources to handle anti-Muslim hate speech until very recently. One document outlined that Facebook did not develop algorithms to detect hate speech in Hindi until 2018 and in Bengali until 2020, even though they are two of the most widely spoken languages in the country. According to the Washington Post, Facebook only added systems to detect incitement and violence in both languages this year.


The documents presented damning evidence of how Facebook catered to India’s ruling Hindu nationalist government in an effort to forestall a political backlash. A December 2020 Facebook internal deck on political influence over content policy acknowledged the company “routinely makes exceptions for powerful actors when enforcing content policy,” and cited India as an example.

One case study focused on groups and pages of the Rashtriya Swayamsevak Sangh, an organization associated with the Hindu nationalist Bharatiya Janata Party (BJP), which compared Muslims to “pigs” and shared misinformation suggesting that the Quran urged Muslims to rape female family members. The organization’s Facebook groups had not been flagged due to “political sensitivities,” per the documents. One document titled “Lotus Mahal” showed that members with links to the BJP created multiple Facebook accounts to amplify anti-Muslim content and promote “love jihad,” a conspiracy theory accusing Muslim men of using interfaith marriage to convert Hindu women to Islam. The research found that such content was never flagged because of underinvestment in moderating content in Hindi and Bengali.

Israel and Palestine

When violence broke out in May 2021 across Gaza and Israel, Palestinians began reporting that posts were being removed from Facebook’s platforms. This included one egregious case in which Instagram removed posts and banned hashtags mentioning the Al-Aqsa Mosque, one of Islam’s holiest sites. Facebook later said that its algorithms had confused “Al-Aqsa Mosque” with a similarly named militant group.

Internal documents from the Facebook Papers revealed that the company knowingly failed to allocate sufficient resources to address hate speech and content related to terrorism in the region. According to the Associated Press, Facebook lacked moderators who could speak key languages and understand local contexts around the world, a gap that the company’s AI system could not fill. This led to the suppression of speech in Palestine, as well as in other countries such as Afghanistan and Syria, where the company instituted blanket bans for common words and erased content that did not violate the platform’s guidelines.


One document from the Facebook Papers acknowledged that Facebook was “incorrectly enforcing counterterrorism content in Arabic,” and that it “limits users from participating in political speech, impeding their right to freedom of expression.” Arabic is Facebook’s third-most used language. 

Although Facebook told the AP that it had invested in hiring staff knowledgeable about local dialects and topics over the past two years, the lack of language localization extends beyond Arabic. The Washington Post reported that Mark Zuckerberg opposed a tool that would promote voting information in Spanish on WhatsApp ahead of the 2020 US elections, fearing that it was not “politically neutral.”

The Philippines

Content promoting international human trafficking and domestic servitude has been an issue on Facebook for years. CNN reported that in 2019, after Apple threatened to remove Facebook-owned apps from the App Store following a BBC investigation, Facebook took action against more than 130,000 pieces of Arabic-language content related to domestic servitude. The investigation found that Facebook and Instagram were the primary platforms through which workers were recruited, and that the Philippines was the primary country of origin for the exploited workers.

CNN’s report also suggested that Facebook’s efforts to expand its policies around domestic servitude, human trafficking, and modern-day slavery simply pushed traffickers to change their strategies. An internal report from February 2021 found that traffickers were no longer advertising openly on Facebook, but instead messaging potential victims directly. CNN said that it was able to identify domestic workers for sale on Instagram as recently as last week.

The report also detailed documents that revealed that Facebook’s products — its flagship platform, Messenger, Instagram, and WhatsApp — were all used in the labor trafficking industry, and that the company lacked “robust proactive detection methods … of domestic servitude in English and Tagalog to prevent recruitment.”

Vietnam

In late 2020, Facebook faced a crossroads in Vietnam. The government threatened to shut down the platform entirely if it did not comply with requests to censor “anti-state” content. Rather than lose access to an estimated $1 billion market, the company complied. According to the Washington Post, Zuckerberg directly authorized this decision. Despite Zuckerberg’s decision to play ball in Vietnam, government trolls weaponized Facebook’s own systems for reporting abuse to deplatform activists, journalists, and members of civil society, some of whom were later arrested.

In justifying its decision to moderate “anti-state” content, Zuckerberg reportedly argued that complying with censorship was the price of keeping Facebook online in Vietnam, saying that a total shutdown would be worse for free speech.

Vietnam is one of a handful of countries with what experts call “hostage laws,” which require platforms operating in the country to have local representation. Facebook has so far not complied with this requirement.
