In November, Saif al-Islam Gaddafi, son of the deposed former Libyan leader Muammar Gaddafi, formalized his candidacy for the country’s upcoming presidential elections. Two days later, the High National Election Commission (HNEC), the government body in charge of administering the vote, appeared to reject his bid — on its Facebook page.
The post, which said Gaddafi was disqualified for reasons of “instability,” was quickly deleted, and replaced by another, which said that HNEC’s Facebook page had been hacked. The confusion caused an uproar.
“A commission that cannot protect a page, how to protect a legitimate election??” read a post on a Libyan Facebook page with more than 23,000 followers.
Facebook is the most popular social platform in Libya, with more than 5 million users in a country of 7 million people, according to DataReportal, an organization that tracks internet and social media usage worldwide. The apparent HNEC hack was just the latest example of how the platform has become a focal point for disinformation, misinformation, and inflammatory speech leading up to the Libyan presidential election — the first in the country’s history.
Twelve experts, including social media and conflict analysts, told Rest of World that in the months leading up to the December 24th election, Facebook appears woefully unprepared to manage Libya’s complex combination of hate speech, polarized media, active conflict, and fragile electoral politics. They also fear that the country might erupt into conflict at any time.
Earlier this year, Facebook labeled Libya a “Tier 3” country, meaning that the company has likely not been taking a proactive, “war-room” style approach to the upcoming election. Instead, the company would be taking action only in response to reports from users or “trusted partners,” civil society organizations that have a direct line to the company — an approach, experts said, that fails to reckon with the severity of the circumstances.
“[The] situation in Libya is illustrative of a bigger problem,” a social media analyst, who requested anonymity due to the sensitive nature of their work around Libya, told Rest of World. “The system that Facebook has in place right now to determine which countries are high risk is leading to an underinvestment in countries where the risks of violence are some of the highest.”
The 2011 fall of Muammar Gaddafi created a power vacuum inside Libya, sparking a series of civil conflicts, including a six-year civil war from 2014 to 2020. Today, Libya is divided between the internationally recognized government currently led by Prime Minister Abdul Hamid Dbeibeh in the west, and a competing government led by Khalifa Haftar, the head of the Libyan National Army, in the east. The conflict between the two parties has become a regional proxy war, fueled by foreign powers, mercenaries, and local militias.
A ceasefire agreement in 2020 and the subsequent Libyan Political Dialogue Forum — a U.N.-brokered peace conference in Tunisia — led to the establishment of an interim government in 2021, but left many loose ends, including new eligibility requirements for who could run in the elections and a lack of a formalized and agreed-upon constitution.
The upcoming presidential elections have the support of the U.N. and international community, though rights groups have flagged that the country’s institutions remain weak, making it hard to ensure a free and fair vote. There are currently five candidates: Gaddafi, Haftar, the former Minister of the Interior, the interim Prime Minister, and the Speaker of the House of Representatives.
The ongoing conflict and the lack of clarity around election rules have made Libya particularly susceptible to influence campaigns.
Much of this political maneuvering has played out over Facebook. Last year, the Stanford Internet Observatory identified a pro-Gaddafi network that repeatedly shared content from the Facebook Page Oya Agency Press, which has administrators in Egypt, and pro-LNA networks linked to Russia. Another paper from Stanford Internet Observatory researchers found that from April 2019 to June 2020, more than half the Facebook posts about Haftar’s siege of Tripoli originated outside Libya. And Rest of World identified several pages supporting candidates for Libya’s presidential election with page administrators based elsewhere in and beyond the Middle East.
Many experts and analysts who spoke to Rest of World said they were most concerned about social media narratives aimed at undermining the already tenuous elections — the kind of content that might not lead to immediate harm, but could lay the foundation for contesting the election results, and subsequent violence.
That includes content like a recent post on the Facebook page of a popular blogger and war photographer with more than 50,000 followers, who claimed that the Muslim Brotherhood had either kidnapped or killed the HNEC chairman in Tripoli. The post was quickly deleted, but not before the claim went viral, spreading quickly to other pages and to news outlets in Libya’s heavily polarized media landscape. Facebook pages that appear to support Haftar, who has vowed to crush Islamist groups like the Muslim Brotherhood, helped amplify the narrative.
“These networks are painting the picture that the HNEC is not secure and it’s not independent because it’s under constant threat,” which could weaken public trust in the results of the elections, said Jake Hazen, an analyst with the analytics and intelligence firm Novetta. Hazen has been monitoring the Libyan media landscape in the lead-up to the elections.
Elections have proven to be difficult moments for Facebook. In the Philippines, the 2016 elections were marked by rampant misinformation and trolling, and India’s 2019 elections saw the platform mired in hate speech. “Things like elections and any kind of political instability deepen divisions, and it leads to an increase in hate speech and disinformation,” said Jacqueline Lacroix, an analyst with Moonshot and the co-author of a 2019 lexicon of hate speech in the Libyan social media ecosystem.
Recently released internal Facebook documents shed light on how Facebook prioritizes moderation and enforcement resources for countries around the world through a tiered system.
In a document contained in the Facebook Papers, titled “Integrity Inputs – Country Prioritization for 2021,” Facebook outlined that the platform classifies countries as Tier 1 if they have weak local media systems, social cohesion, and protections for freedom of expression, as well as upcoming elections, a vulnerability to COVID-19, or the possibility of an outbreak of violence. The company also used the Tier 1 designation where Facebook had a substantial presence that was impacting communities. The document suggests the company should prioritize instances where it’s clear that Facebook “causes or directly contributes to harm.” Meanwhile, Tier 3 countries are defined as “stable and competitive democracies,” with democratic processes developing “as usual,” and having “no history of election interference.” As a result, Facebook said that it offers Tier 1 countries quarterly risk assessments, while Tier 3 countries are dependent on user complaints, although the documents indicate the company would be proactive about taking down coordinated inauthentic behavior.
Although the Tier 1 designations would seem to describe the current climate within Libya, as of earlier this year Facebook had the country labeled under Tier 3, meaning that Facebook only takes moderation action in response to reports from users or “trusted partners.”
“They haven’t been transparent about the extent to which they are prioritizing Libya,” said the social media analyst, who has been in touch with the company. “It’s a common issue that [the] people in-country don’t know where that country falls on their prioritization framework, and so don’t know what level of resourcing they’re providing.”
“The prioritization framework Facebook uses is very linear, but conflict isn’t linear,” they said. For a situation like Libya, where conflict can escalate quickly, “the way they’re choosing to allocate resources doesn’t reflect the unpredictable ebbs and flows of conflict. They’re trying to do something predictable.”
Ashraf Zeitoon, who served as Facebook’s Middle East and North Africa policy lead from 2014 to 2017, said that during his time at the company, it was employee initiative — and public pressure — that often determined which countries were prioritized for support and moderation rather than internal designations.
If a country didn’t have someone on the Facebook team to advocate for it, “then the company literally is going to do nothing about that,” said Zeitoon. “They’re going to deal with the human-trafficking aspect, the illegal immigration, because they get pressured for that from Western governments.”
Zeitoon said that when he left Facebook in 2017, there was no Arabic speaker of Libyan origin at the company. In a statement to Rest of World, a spokesperson from Meta, the newly renamed company that owns the Facebook platform, said that it now has content reviewers from Libya “to help us remove harmful content, as well as proactive detection technology to help us catch violating content at scale.”
“We have a dedicated team with experts in misinformation and hate speech working to stop abuse on our platforms in the lead-up to, during, and after the elections in Libya,” the spokesperson added.
The Facebook Papers also revealed that the company has struggled to track dangerous speech and harmful narratives across the Arabic-speaking world. This is especially difficult in countries like Libya, where the nature of the conflict means that inflammatory terms can change over time, or that hate speech can be coded so that simply pointing out someone’s regional origins can be interpreted as a slur. That means that filtering for “protected categories” like race and religion is often ineffective.
“Events on the ground obviously affect the online hate speech and the different terms that are being used,” said Lacroix. “Some terms may only be applicable for a certain period of time.” Understanding whether a term or narrative is becoming dangerous then requires consistent monitoring.
“[Facebook] is the fuel that feeds the fire,” said Taha Almsallati, community outreach coordinator for Libya at Democracy Reporting International. “There’s a huge [amount of] hate speech, against the people who support Haftar, against the Muslim Brotherhood, hate speech against the western cities and eastern cities. And no one is dealing with it.”
Libya’s ongoing political and armed conflicts make efforts to quell misinformation dangerous — even fact-checking can be a risky business. Mourad Bilal, who leads the Truth Seekers Center, an initiative that checks claims on social media in Libya, told Rest of World that his team focuses on non-political misinformation because he’s worried that debunking certain sensitive claims could anger the wrong people. “The kind of claims that an army or a militia might make,” he said. “We avoid checking those because we care about our staff’s safety.”
A spokesperson from Meta told Rest of World that it has now partnered with two third-party fact-checking organizations in Libya, AFP and Fatabyyano. “When they rate a piece of content as false,” the spokesperson said, “we add a prominent label warning people before they share it, and show it lower in people’s feed, so they are less likely to see it.”
The social media analyst underscored that the concern extends far beyond the upcoming election. “Facebook has a tendency to view this as if elections are in December, and that’s the time period,” they said. “But many risks occur after voting: the risk of delegitimizing the results, the risk of inciting violence.”
And, they added, even if Libya were to be classified as a Tier 1 country, that wouldn’t resolve the conflict on the ground.
“It’s not all on Facebook to solve the Libya social media landscape issue. There’s also a huge amount that’s on political actors, media actors from all sides.”