Since 2014, social media has become central to campaigning in India's state and national elections. Facebook, Twitter, and WhatsApp strategies are now key for political parties, as evidenced by the roughly 5 billion Indian rupees (around $66 million) spent on targeted political advertising on digital platforms that year. Several studies have pointed to the impact social media had, particularly on first-time voters, who engaged strongly with this content in the run-up to the 2019 national elections. The ruling Bharatiya Janata Party, which won the 2014 and 2019 elections, has been repeatedly lauded for its ability to organize and influence discourse in online spaces.

It may seem that social media is a boon for democracies, enabling greater participation and access to information. While this is true, a closer look is warranted. At the heart of democracy is the possibility of exchanging ideas, concerns, and possible solutions. But as several reports have pointed out, social media platforms, and the way their algorithms are deployed, create deep echo chambers in which individuals are repeatedly shown similar narratives and content that confirm their preexisting biases. Inside these echo chambers, voters lose exposure to varied thinking, which ultimately leads to polarization and intolerance. Not all of the blame can be placed on algorithms, but the lack of transparency around how they are developed and deployed remains a central challenge.

Recent revelations by a Facebook whistleblower about the company's decision to prioritize profit over human rights point to the platform monetizing hate at the cost of user safety. This confirms concerns raised by activists in India over the past few years: research shows that Facebook has been a major conduit for the spread of hate speech, misinformation, and calls for violence against Muslims and other minorities in the country.

That Facebook was aware of anti-Muslim content on its platform comes as no surprise: a Wall Street Journal investigation in 2020 showed that the company had knowingly allowed hateful content from members of India's ruling party to remain on its platform. Civil society groups, including the Association for Progressive Communications, have repeatedly called on Facebook to conduct an audit of hate speech on its platform and take immediate action against such content.

Facebook's response has remained superficial and unsatisfactory, to say the least. One example is its refusal to meaningfully engage with the Delhi Assembly's Committee on Peace and Harmony in the wake of the 2020 Delhi riots; only a court order left the company with no choice but to appear before the committee. Hate content targeting Muslims and other religious minorities continues to thrive on the platform, especially around elections, and Facebook's failure to allocate resources to moderate content in multiple languages gives offenders a sense of impunity.

This raises serious questions about what these online environments mean for minorities and other vulnerable groups, as well as for individuals who belong to the majority community but do not subscribe to these hateful narratives.

As pointed out by the U.N.'s Office of the High Commissioner for Human Rights, information and communication technologies can negatively affect public participation, for example, when disinformation and propaganda are spread “to mislead a population or to interfere with the right to seek and receive, and to impart information and ideas of all kinds.” The failure of both the government and the private sector to curb consistent attacks, such as the targeting of Muslims on social media during the pandemic for allegedly spreading the virus, shows that neither has discharged its duty to uphold human rights.

Part of the reason social media platforms do not commit to sustained reform lies in their business models, which center on monetizing users. Our content, our interests, our very presence on these platforms, and our reliance on their tools give these companies near-monopoly power to run themselves with little to no consequence for irresponsible behavior. Moreover, large companies that can buy out competitors and monopolize entire verticals of digital services leave little room for viable alternatives. The paltry resources they allocate to user safety, compared with the billions they earn in revenue, are indicative of their priorities.

As with most industries, technology-based companies will eventually come under regulatory regimes and public scrutiny. Unfortunately, recent heavy-handed regulatory measures in India, such as the 2021 Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, do little to inspire faith in real accountability. The government is failing to stop, and in some cases is actively perpetrating, the abuse of technology with the help of these companies. To make matters worse, when civil society members or platforms stand up to the ruling party, the result is severe backlash, as when Twitter refused to comply with government directions to take down the accounts of dissidents and critical media outlets. Comedians, journalists, academics, and student activists have often faced the full brunt of the government machinery for even minor expressions of dissent online, creating an atmosphere of fear and self-censorship.


We are just beginning to uncover the impact that technology and social media platforms have on democracy and democratic institutions. While whistleblower disclosures can be helpful, it may be years, if not decades, before we fully grasp the extent of the consequences for diverse and polarized societies like India. Governments and the private sector may eventually reform the way they function, if it suits their agenda or if they are forced to, but ultimately change must come from elsewhere. As the recent success of the farmers' protests in India, which resulted in the repeal of laws that posed serious challenges to farmers' livelihoods, shows, sustained movements are the only hope.

Despite coordinated and concerted social media smear campaigns riddled with hate, vilification, and disinformation, the farming community sustained pressure on the state. And while the challenges posed by technology and social media might be new, they intersect with familiar questions of human rights, public accountability, and democratic principles. In India's rich history of civic movements, the responsibility and power to hold the government and the private sector accountable for upholding democracy and protecting our institutions have always been vested in the people.

Pinning all our hopes on democratic institutions like electoral bodies and the judiciary to come to our rescue may carry a significant cost, one that people will pay with their lives and livelihoods. Instead, we must move toward a future in which people are aware of their rights and emboldened to exercise them, democratic movements are bolstered and connected across borders, and the collective strength of the people is realized.