Melissa Ingle was a senior data scientist at Twitter, working on civic integrity and political misinformation. As a contract employee, she wrote algorithms that moderated harmful content on Twitter ahead of the U.S. and Brazilian elections. Earlier this month, Ingle was one of the 4,400 contract staff who lost access to Twitter’s internal systems without being notified. Many expect the cuts to the moderation team to have a crippling effect on the health of Twitter. Ingle talked to Rest of World about her fears for the increasingly porous future of Twitter moderation.

Could you describe your role at Twitter? Which team were you part of, and what was your day-to-day like?

I was a senior data scientist, working in civic integrity and political misinformation. I wrote and monitored the algorithms that scanned Twitter for political misinformation. We continuously trained and updated our models, and we sent a subset of the tweets we flagged for human review. The core content moderation operation was a team of 30 in total; they checked for all kinds of content: hate speech, harassment, pornography, child abuse or trafficking, etc. We were mostly data scientists, and we interfaced with many other groups at Twitter.

What were the two biggest regions you oversaw, and could you talk about the nature of political misinformation in those regions compared to the U.S.?

We monitored and flagged tweets around the U.S. and Brazilian elections. Each country with a large user base had its own policies. We wrote our algorithms based on local policies and in consultation with people who knew the language and culture very well. I think we missed many countries in Africa, and we were also hampered in parts of Southeast Asia by local government interference.

How would you counter the belief that Twitter’s 4,000-person moderation operation was bloated?

It is very possible there was some bloat. I am not a CEO; I’m an individual contributor. But with 37.5 million tweets per hour, content moderation needs both algorithms and human review. As we get farther and farther from the layoffs, there is no longer anyone at the switch. Machine learning also needs constant updating and retraining as the nature of political discourse changes. We have not yet seen the negative impact of these policies.

Could you give an example of mission-critical operations or teams that countered misinformation in non-Western regions and that no longer exist?

So far, it’s the lack of human reviewers: the people who monitor timelines and look at tweets that get reported. The algorithms will also get more and more porous and let more misinfo in.

What are you most fearful about in light of the current cuts to the moderation team? Can platform integrity be maintained with automated moderation alone?

Unfortunately, automation alone is simply not enough. I’m not saying we can’t get there; maybe we can. But right now, we need both machine learning and human review. As I said, the misinfo and harassment will get worse and worse as time goes on unless something is done about it.