Meta's platforms carried political ads in the days before the August 9 Kenyan election, despite a national ban on campaigning during that period, according to a new report from the Mozilla Foundation. The report found that Facebook and Instagram facilitated violations of local election laws by political candidates.
“Advertising platforms are supposed to be the most stringent and most watched environments. There is money exchanging hands, and the liability that comes with that,” Odanga Madung, a Nairobi-based researcher and Mozilla Fellow, told Rest of World, while describing Meta’s failure to filter out these adverts. “It’s very frustrating. They are not obeying local electoral laws.”
Kenyan law bars political candidates from campaigning in the 48 hours before election day. Candidates from both major political coalitions did just that, with paid promotions on Facebook and Instagram, which are both owned by Meta. Meta itself requires advertisers to abide by these blackout periods. Some ads from the opposition Azimio la Umoja coalition garnered as many as 50,000 impressions, and one gubernatorial candidate alone ran some 17 violating ads.
The finding is one of several from the Mozilla Foundation’s new report on moderation failures in the days preceding and following Kenya’s August presidential election. The porousness of moderation filters during this time contributed to what Madung calls a “post-election twilight zone,” the report said. Despite public commitments to ramp up moderation resources before Kenyans headed to the polls, Meta, Twitter, and TikTok all saw breaches in their moderation systems, according to the report. These breaches included the circulation of misleading electoral tallies by opposing political parties and conspiracy theories about election fraud. In the days after the polls closed on August 9, election rumors on social media were exacerbated by the public release of 43,000 polling station results by the country’s Independent Electoral and Boundaries Commission (IEBC). Political parties and media companies released their own tallies of these votes, leading to conflicting declarations of the winner.
Meta, attributing a statement to an unidentified spokesperson, told Rest of World: “We prepared extensively for the Kenyan elections over the past year and implemented a number of measures to keep people safe and informed – including tools to make political ads more transparent, so people can scrutinize them and hold those responsible to account. We make clear in our Advertising Standards that advertisers must ensure they comply with the relevant electoral laws in the country they want to issue ads.”
The advertising violations occurred despite a “war room” Meta operated to tackle the challenges of the Kenyan election season. In July, the company publicized its moderation resources for Kenya, including a dedicated team of Swahili speakers. Meta did not clarify how many moderators it had hired, or whether any covered other widely spoken languages in Kenya, namely Dholuo and Kikuyu. In a statement quoted in the Mozilla Foundation report, an unnamed Meta spokesperson likewise said the company had “prepared extensively for the Kenyan elections this past year and implemented a number of measures to keep people safe and informed.” The spokesperson also cited the company’s public ad library and its third-party fact-checking partners in the country, including AFP and Africa Check.
In the coming days, these mitigation efforts will be tested in the U.S. midterms. Meta has committed to banning new ads about social issues, elections, and politics for the full week before the nationwide midterm elections on November 8. An alert for this policy even appears in Meta’s ad library, just above the archived advertisements that violated Kenya’s own blackout period.
In total, the Mozilla Foundation identified 52 ads on Meta platforms that either violated the “election silence period” or breached advertising content policies by prematurely calling Kenyan election results.
Both TikTok and Twitter have banned political advertising on their platforms since 2019. Meta, by contrast, accepts the advertising dollars of political candidates, though with restrictions on the content of those ads. For one, Meta does not allow ads to prematurely declare the winner of an election. Yet Madung identified seven ads on Meta that breached this rule, none of which carried the appropriate warning labels.
The Mozilla Foundation found Meta’s shortcomings in moderating election content were not limited to advertisements. In one instance, Dennis Itumbi, a strategist for the Kenya Kwanza coalition, prematurely announced on Facebook Live that William Ruto had won the election. The video reached 370,000 viewers, with no moderation intervention or content label from Facebook.
In the days following the election, some prominent politicians and commentators posted misinformation that Twitter labeled as misleading. Those same accounts went on to publish several other blatantly false posts that went unlabeled.
Both TikTok and Twitter said they made concerted efforts to label misinformation, with Twitter even launching a specific warning label for misleading posts about the Kenyan election. But a Mozilla Foundation dataset of election misinformation, reviewed by Rest of World, revealed spotty follow-through on this strategy.
“A case where you have repeat offenders not getting the same treatment despite releasing the same content, especially given the kind of spotlight on this election, that is not a case of slipping through the cracks,” said Madung.
He suggested that the safety measures platforms put in place in Kenya had not been properly tested against the country’s users and political landscape.
“Look, if you are going to carry out tests on the effectiveness of interventions, you must be able to take into account that election environments are incredibly unique depending on the country that you’re working with,” he said.