Around the world, internet users are caught between governments seeking to regulate online harms and social media platforms that alternate between ignoring those harms and launching broad crackdowns on user speech. Accountability challenges persist in both contexts. But while these challenges are global, governments and social media platforms alike face distinct regulatory challenges in the African context.
In the last decade, social media platforms have amplified public protests and expanded the space for public engagement, consolidating democratic development both offline and online. In African countries, however, these gains are being undercut by problematic regulation.
Cyberharassment, bullying, online gender-based violence, and other forms of targeted online abuse are already present on the continent and are often directed at vulnerable groups, like women, children, migrants, and minorities. Governments frequently conflate these harms with legitimate expression and misapply the narrow limitations on online speech permitted under international human rights standards.
Most laws enacted by governments on the continent that ostensibly seek to regulate harms, like disinformation and online violence, end up targeting legitimate online speech instead. Various sections of Nigeria’s Cybercrimes (Prohibition, Prevention, Etc) Act, Uganda’s Computer Misuse Act, Kenya’s Computer Misuse and Cybercrimes Act, and Malawi’s Electronic Transactions and Cybersecurity Act are so vaguely worded that they can be weaponized to stifle dissent. In some instances, African governments have also used these harms as a pretext to shut down internet access. In other words, governments are not only failing to adequately address real online harms; their regulatory responses are themselves damaging freedom of expression.
Social media platforms are also failing to live up to their responsibilities, especially considering their enormous influence across Africa’s struggling democracies. A recent report from the Mozilla Foundation documented how agents spread disinformation on Twitter in Kenya. In Ethiopia, hate speech on Facebook has fueled offline violence. In Uganda, women politicians use social media less because of online abuse.
Just as speech regulation is increasingly determined by private companies that often fail to protect human rights, governments are moving too fast on regulating these harms, damaging fragile democracies as they go. Conversations on human rights and the regulation of online harms are bearing fruit, but the gains come slowly. Governing online speech is becoming increasingly difficult and complex.
Despite these complexities, we must work harder to find solutions. One of the ways we can do that is to rethink our traditional approaches to governance. Governments can no longer insist on content removal, just as social media companies can no longer insist on their corporate-speak rules without complying with international human rights standards. Both approaches prioritize the interests of governments and companies, without considering the fundamental rights of the users being impacted.
The traditional roles of state and non-state actors (end users, governments, social media platforms, internet service providers, civil society, academics, and international human rights systems) will have to be rethought to meaningfully regulate online harms. And these actors don’t have to look far for inspiration. The Santa Clara Principles, Business for Social Responsibility’s paper on content governance, the Global Network Initiative’s analysis and recommendations on content moderation and human rights, the revised Declaration of Principles on Freedom of Expression and Access to Information in Africa, and the Association for Progressive Communications’ declaration on internet freedoms could all serve as background documents.
For example, Principle 17(4) of the Declaration of Principles on Freedom of Expression and Access to Information in Africa provides that: “A multi-stakeholder model of regulation shall be encouraged to develop shared principles, rules, decision-making procedures and programmes to shape the use and evolution of the internet.” This means that online content regulation should be developed by as many stakeholders as possible, including governments, social media platforms, civil society, treaty-monitoring bodies, and vulnerable groups. The Declaration also requires governments to ensure that social media platforms mainstream human rights safeguards into the rules, processes, and procedures they use to moderate content.
To prevent online harms, state and non-state actors have to change how they think about regulation: they must creatively design a body of rules that is not only anchored in international human rights standards but also governed by as many actors as possible. This body of rules might first be applied as soft law, so that actors can learn from its successes and processes. From there, actors could begin to develop hard human rights law that helps prevent online harms and promotes online expression in African contexts.
The lack of accountability from social media platforms continues to embolden African governments to harm the continent’s developing democracies. Measures to prevent online harms in Africa will therefore have to be designed beyond traditional governance roles and must respect human rights in their processes. That means greater inclusion of civil society, academia, and end users, which would not only bring diverse contributions but also ensure that the rules guiding online expression accommodate the realities of the vulnerable groups they will affect.