Imagine a world with no elephants. Resolve, a nonprofit committed to ending poaching of endangered animals, warns that this fiction could become fact in a mere decade, as an elephant is killed every 15 minutes by a poacher.
Intel Corporation’s AI for Social Good project is implementing a tech solution to the crisis of illegal poaching: TrailGuard AI captures images of suspected poachers and alerts park rangers. The system is one of several created by tech giants, including the Microsoft-supported Elephant Listening Project, Google’s Wildlife Insights AI, and Alibaba’s cloud computing for conservation. These computational networks are designed to process vast amounts of incoming security data with extraordinary speed and accuracy. Where humans fail, there is renewed faith in AI to save our planet.
Here’s the glitch. While tech saviors work hard at building AI for rangers, the rangers themselves are hard at work trying to get their basic needs met. According to a 2016 World Wildlife Fund survey of 570 rangers across 12 African countries, 82% had faced a life-threatening situation while on duty. Despite these high risks, many said they were inadequately armed and had limited access to the vehicles and training needed to combat organized crime. Other grievances included insufficient boots, shelter, and clean water. Their pay was low and intermittent, their working conditions were poor, and many reported having limited or no health and disability insurance. Tracking information on poachers is the least of their problems. Investing in humans could have even more impact than investing in machines.
Poverty, along with rampant corruption and an insatiable global demand for ivory, is what drives poachers to poach in the first place. AI solutions and the rangers’ problems are massively mismatched. The mismatch also raises a deeper question: who decides what is “good” to begin with?
When tech companies set out to do good, they bring with them centuries of baggage about what doing good even means. Both the process and the ideal have a toxic legacy. The centuries-long colonial project was perhaps the most ambitious in its agenda for doing good. “Civilizing” colonial subjects was at the heart of the modernizing mission. In many cases, Anglo-Saxon and Christian values became synonymous with the common good. Templates for compliance and conformity during those times included domesticating a “good slave” and raising a woman who was to be “virtuous, pure, resigned to her lot in life.”
Empires were built on the desire to expand and the hunger to capture new markets; they convinced the natives that technological progress and Western cultural exports would breed good societies. History doesn’t die: it gets reborn. Today, the rubric of empires syncs well with computational culture, as we try to automate the good and track the bad. We delude ourselves into thinking that AI is a mere technology and not an ideological apparatus that spans all possibilities, including what could be the “worst event in the history of our civilization,” according to physicist Stephen Hawking.
As we weigh efficiency against justice, modern decision-making is at a crossroads. We have become numbingly familiar with algorithmic discrimination, as predictive analytics and automation infiltrate welfare systems, policing, health care, and the surveillance of public spaces. While the world is in moral turmoil over deepening inequality, the climate crisis, and political upheaval, Fei-Fei Li, chief scientist at Google AI, appears unfazed. “I believe AI and its benefits have no borders,” she wrote. “Whether a breakthrough occurs in Silicon Valley, Beijing, or anywhere else, it has the potential to make everyone’s life better for the entire world.”
The AI-enabled solutionism emerging from the tech titans promises to repeat a toxic history of self-defined goodness. It is time to break away from this legacy of moral certainty. Google’s “Don’t be evil” motto now reads more like satire; it is no wonder that, in 2018, the phrase was quietly retired from the preface of the company’s code of conduct. Clay Tarver, a writer and producer for the uncomfortably funny HBO series “Silicon Valley,” recently remarked: “I’ve been told that, at some of the big companies, the PR departments have ordered their employees to stop saying ‘We’re making the world a better place,’ specifically because we have made fun of that phrase so mercilessly.” Sometimes, a Silicon Valley solution only exacerbates the problem.
Whether it is fintech or 5G networks, the playing field has diversified. Singapore leads in smart-city development, China commands the 5G space, and India’s Reliance Jio has radically disrupted the global telecom sector, making data extraordinarily accessible to the next billion users. Innovation outside the fortress isn’t new. India pioneered humane technologies such as the Jaipur Foot, among the cheapest prosthetic legs in the world, and the Aravind Eye Care System, which has shown that cataract operations can scale to millions of people at a cost of a few dollars each. The idea that the West is the sole seat of innovation is detrimental to progress as a whole. Consider former Hewlett-Packard CEO Carly Fiorina’s view: “I will tell you that, yeah, the Chinese can take a test, but what they can’t do is innovate. … They are not terribly imaginative.” It’s time to put this kind of hubris aside. The fastest way for a tech company to do good is to stop creating kill zones around innovation by blocking startups. Reverse innovation is here to stay.
All technology is innately assistive. If I were to have one instruction for the Big Tech AI labs of the world, it would be this: Invest in the human and not just the machine. We need nuance in problem-solving, which demands messiness. It’s time to move slow and build things. And the best things can be built only if the people who are experiencing the problem firsthand are invited into the room.