Since the beginning of May, the Brazilian state of Rio Grande do Sul has been devastated by heavy rain. At one point, four months’ worth of rainfall fell over the course of three days. Weeks later, much of the state capital is still flooded, and over half a million people are displaced. It’s the worst flood in Brazil’s history, and given the ongoing rainy season, it may take weeks more before the waters fully subside.
Amid the chaos of the disaster, misinformation has taken root. One image flagged by a popular Brazilian account shows a helicopter draped in the branding of a local retailer rescuing victims from floodwaters. “The photo that’s being shared on WhatsApp is actually a prompt-generated image,” @mphistoria wrote on X. “These so-called AIs must be regulated urgently.”
The Brazilian tech outlet Nucleo found 21 other AI-generated posts on Facebook and Instagram purporting to show imagery from the flood. Many were fantastical, showing Christ comforting flood victims, but others were ambiguous enough to pass for reality. This kind of AI-generated chum has become inescapable on Facebook in recent months: from the bizarre Shrimp Jesus meme to more banal manipulations of celebrity faces, AI imagery is now a fixture on social platforms.
It’s not surprising that the same tools are being turned on a disaster like the floods in Brazil, but the stakes are far higher. Photos and information from Rio Grande do Sul are scarce, which means there is both high demand for images and few ways to debunk false claims. AI-generated content is filling the gap, and the structure of social media means the most sensational content travels the farthest. The images spotted by Nucleo are relatively mild, but it’s easy to imagine a more alarming fabrication, showing a broken dam or a mass migration, for instance, that could panic viewers into putting themselves in harm’s way. There’s a reason humanitarian groups and governments treat natural disasters as one of the most dangerous settings for misinformation.
The question is whether any safeguards will stop it. Meta’s approach, announced in April, is to label AI content rather than remove it unless it violates other platform policies. In theory, that allows creative uses while still blocking harmful misinformation. In practice, it’s hard to see the policy working when the labels themselves are rarely seen on Meta’s platforms. (There’s no indication that any of the posts flagged by Nucleo have been labeled, for instance.)
There’s also little help from image generators, which have been slow to implement reliable watermarking for AI images. It’s not hard to imagine moderation systems reading those watermarks directly to establish the source of an image; given Meta’s in-house AI efforts, such a system could be built entirely within the company. It just hasn’t happened yet.
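To make that concrete, here is a minimal sketch, not Meta’s actual pipeline, of the simplest form such a check could take: scanning a file’s embedded metadata for the IPTC “trainedAlgorithmicMedia” digital-source-type term that some generators write into their output. The function name and command-line wrapper are illustrative assumptions, and a metadata-only check like this is trivially defeated, since a screenshot or re-encode strips the marker entirely.

```python
# Minimal sketch, standard library only: check whether an image file still
# carries the IPTC "trainedAlgorithmicMedia" digital-source-type marker that
# some generators embed in XMP metadata. Illustrative only; not a real
# moderation system, and useless once the metadata has been stripped.
import sys
from pathlib import Path

# IPTC NewsCodes term for content "created by a trained algorithm"
AI_SOURCE_MARKER = b"digitalsourcetype/trainedAlgorithmicMedia"

def looks_ai_labeled(image_path: str) -> bool:
    """Return True if the file's raw bytes contain the IPTC AI-generated
    source-type marker anywhere in its embedded metadata."""
    return AI_SOURCE_MARKER in Path(image_path).read_bytes()

if __name__ == "__main__":
    # Usage: python check_marker.py photo1.jpg photo2.png
    for path in sys.argv[1:]:
        verdict = "AI-labeled" if looks_ai_labeled(path) else "no marker found"
        print(f"{path}: {verdict}")
```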
All this comes as AI experts from around the world gather in Seoul this week for a global AI safety summit, and a week after the collapse of OpenAI’s safety team amid accusations that it was starved of resources. Those events generate headlines about the existential risks of AI, while far less attention goes to the more immediate problem of misinformation in a disaster zone. If I have one hope for the summit in Seoul, it’s that policymakers start taking those risks a little more seriously.