By the third day after the levees broke in August 2005, misinformation in New Orleans about lawlessness and looting was rampant. It became so pervasive that many recovery efforts following Hurricane Katrina’s landfall were halted or delayed.
In 2018, a wave of wildfires in California sparked its own misinformation surge. Marjorie Taylor Greene (R-Ga.), then two years from her election to Congress, suggested in a Facebook post that space-based lasers and the Rothschild family were to blame for the devastation, a claim now widely remembered as the "Jewish space lasers" conspiracy theory.
From rumors about lawlessness to viral posts blaming wildfires on space lasers, natural disasters reliably generate a storm of misinformation that can hamper emergency response.
A study by the International Institute for Applied Systems Analysis (IIASA), an international research institution based in Laxenburg, Austria, seeks to understand how AI tools can be leveraged to mitigate misinformation spread during emergency situations. Led by Nadejda Komendantova and Dmitry Erokhin, the study is part of a growing field of research at the intersection of machine learning and misinformation.
“This research originated from the understanding that misinformation during natural disasters poses a serious threat to public safety and the effectiveness of emergency response,” Komendantova said. “The urgency and complexity of this issue have become especially apparent in recent years,” due to the role of social media and the increased threat of extreme weather—weather that may become increasingly severe due to the staggering energy demand of AI technology itself.
Funded by the European Union, the study employs a case study analysis and narrative literature review design. Komendantova and Erokhin examine three AI tools: natural language processing (NLP), machine learning algorithms, and real-time monitoring systems. Each tool has a different role to play in identifying misinformation and mitigating its spread.
NLP allows computers to interpret and analyze human language at scale. “One of the primary applications of NLP in misinformation detection is sentiment analysis,” the researchers explain. That means these systems can assess the tone of online posts—categorizing them as positive, negative or neutral. According to the researchers, a spike in negative sentiment around a specific topic might suggest a barrage of false claims. Because NLP can scan massive amounts of content quickly, it’s a powerful tool for tracking misinformation online.
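The sentiment-tracking idea can be sketched in a few lines. This is a deliberately minimal stand-in for a real NLP model: the word lists are invented for illustration, and a production system would use a trained sentiment classifier rather than keyword matching.

```python
from collections import Counter

# Tiny illustrative sentiment lexicon. A deployed NLP system would use a
# trained model; these word lists are invented for this sketch.
NEGATIVE = {"looting", "chaos", "coverup", "hoax", "danger", "lies"}
POSITIVE = {"safe", "rescued", "helping", "recovered", "thanks"}

def sentiment(post: str) -> str:
    """Categorize one post as 'positive', 'negative' or 'neutral'."""
    words = set(post.lower().split())
    neg, pos = len(words & NEGATIVE), len(words & POSITIVE)
    if neg > pos:
        return "negative"
    if pos > neg:
        return "positive"
    return "neutral"

def negative_share(posts: list[str]) -> float:
    """Fraction of posts with negative sentiment. A sudden spike in this
    number around one topic is the signal the researchers describe."""
    counts = Counter(sentiment(p) for p in posts)
    return counts["negative"] / len(posts)
```

Because the per-post work is a constant-time set intersection, a scorer like this can sweep very large volumes of content quickly, which is the property that makes NLP attractive for misinformation tracking.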
Machine learning algorithms take things a step further. By training on large datasets, these systems learn to recognize patterns in how misinformation typically spreads. They can then flag similar content in the future and predict which false narratives or conspiracy theories might emerge before a disaster occurs. Machine learning algorithms also improve over time as they are fed new data.
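To make the "train on labeled examples, then flag similar content" loop concrete, here is a toy Naive Bayes text classifier. It is a sketch only: the training posts and labels in the usage example are invented, and the study's actual models are far larger and trained on real social media corpora.

```python
import math
from collections import Counter, defaultdict

class MisinfoClassifier:
    """Minimal Naive Bayes text classifier: a toy stand-in for the
    trained models the study describes."""

    def fit(self, posts, labels):
        """Count word frequencies per label from labeled training posts."""
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for post, label in zip(posts, labels):
            for word in post.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, post):
        """Return the label with the highest log-probability score."""
        words = post.lower().split()
        total = sum(self.label_counts.values())
        best_label, best_score = None, -math.inf
        for label, count in self.label_counts.items():
            # log prior + log likelihoods with add-one smoothing
            score = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in words:
                score += math.log((self.word_counts[label][word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

The "improves over time" property the researchers mention corresponds here to simply calling `fit` again on an enlarged dataset as newly labeled posts arrive.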
Finally, real-time monitoring systems provide constant surveillance of the digital information landscape. These tools automatically scan websites, news outlets and social media for specific keywords or types of content. “By continuously collecting data, real-time monitoring systems can ensure that they have up-to-date information on the current state of misinformation,” the researchers state. After data is collected, the system can alert necessary authorities.
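The keyword-scanning-plus-alerting pattern the researchers describe might look like the following sketch. The watchlist phrases and thresholds are assumptions invented for illustration; a deployed system would maintain these lists continuously and feed alerts to human fact-checkers.

```python
from collections import deque

# Hypothetical watchlist of rumor-prone phrases, invented for this sketch.
WATCHLIST = {"mandatory evacuation", "shelter closed", "dam failure"}

class Monitor:
    """Scan an incoming stream of posts for watchlist phrases and
    signal an alert once hits in the recent window cross a threshold."""

    def __init__(self, window: int = 100, threshold: int = 3):
        self.recent = deque(maxlen=window)  # rolling window of hit flags
        self.threshold = threshold

    def ingest(self, post: str) -> bool:
        """Record one post; return True when authorities should be alerted."""
        hit = any(phrase in post.lower() for phrase in WATCHLIST)
        self.recent.append(hit)
        return sum(self.recent) >= self.threshold
```

The rolling window is what keeps the picture "up-to-date": old posts fall out of the deque automatically, so the alert reflects only the current state of the conversation.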
Despite recent progress in AI’s ability to detect and fight misinformation, each of these tools has significant limitations.
Misinformation is often intentionally created and spread. “Constantly evolving tactics of those who create and disseminate misinformation present a persistent challenge to AI systems,” Komendantova said. “As misinformation actors adapt by developing new language, imagery, or dissemination techniques, AI models must be updated and retrained to recognize these new patterns.”
AI itself has been increasingly used to spread misinformation through bot accounts and generative imaging, but this study did not specifically focus on these dissemination techniques.
AI also has difficulty understanding cultural nuances in language and the use of irony and sarcasm by misinformation actors, limiting its detection abilities.
Despite these limitations, “the study found that AI can play a vital role in detecting and mitigating misinformation during natural disasters,” said Komendantova, potentially helping communities at the front line of the climate crisis build resilience. Their research found that machine learning algorithms trained on social media posts during Hurricane Harvey could be used to identify and predict the spread of false information online—such as inaccurate reports about mandatory evacuations and shelter availability—helping to curb panic and confusion.
Even before the rise of advanced AI, technology demonstrated its potential in combating disaster-related misinformation. After the devastating 2010 Haiti earthquake, crisis mapping proved vital in coordinating relief efforts, the study found. The nonprofit Ushahidi launched a mapping platform within hours of the 7.0 magnitude quake, using crowdsourced data from text messages, social media and news outlets to pinpoint areas in urgent need. Today, AI can automate much of that process, rapidly gathering, verifying and mapping information in real time.
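The core of crisis mapping is aggregating scattered crowdsourced reports into geographic hotspots. A bare-bones version, with made-up coordinates standing in for real crowdsourced data, could look like this (Ushahidi's actual platform is far richer, with verification and categorization layers):

```python
from collections import Counter

def grid_cell(lat: float, lon: float, size: float = 0.05):
    """Snap a coordinate to a coarse grid cell (roughly 5 km at this size)."""
    return (round(lat // size * size, 4), round(lon // size * size, 4))

def hotspot_map(reports):
    """Aggregate crowdsourced (lat, lon, text) reports into per-cell
    counts, ranked so the densest clusters of need surface first.
    The report tuples are illustrative, not real data."""
    cells = Counter(grid_cell(lat, lon) for lat, lon, _ in reports)
    return cells.most_common()
```

Ranking cells by report volume is the step AI now automates end to end: ingesting texts and posts, extracting locations, and refreshing the map in real time instead of relying on volunteers to do it by hand.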
By freeing up vital time, AI tools “allow emergency responders to prioritize their efforts and allocate resources more effectively,” Komendantova and Erokhin write.
AI chatbots—the computer programs that use AI to simulate human-like conversations and respond to users in real-time—have also been deployed in recent years to fight natural disaster misinformation.
During the COVID-19 pandemic, AI chatbots, like the Centers for Disease Control and Prevention’s “CoronaBot,” were used to help spread accurate and timely information in an attempt to combat widespread conspiracy theories and build public trust.
Similarly, the Red Cross deployed its chatbot, Clara, to counter conspiracy theories during back-to-back Hurricanes Helene and Milton in 2024. Named after Red Cross founder Clara Harlowe Barton, the chatbot provided users with accurate information about shelters, financial assistance and emergency services.
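At its simplest, a disaster chatbot routes a user's question to a vetted answer rather than leaving them to rumors. The sketch below is a bare word-overlap retriever, far simpler than chatbots like Clara, and the FAQ entries are invented for illustration:

```python
def best_answer(question: str, faq: dict[str, str]) -> str:
    """Return the vetted answer whose stored question shares the most
    words with the user's question. FAQ content here is hypothetical."""
    q_words = set(question.lower().split())

    def overlap(item):
        stored_question, _ = item
        return len(q_words & set(stored_question.lower().split()))

    key, answer = max(faq.items(), key=overlap)
    # Fall back to a human channel when nothing matches at all.
    return answer if overlap((key, answer)) > 0 else "Please call the emergency hotline."
```

Modern chatbots replace the word-overlap step with a language model, but the design goal is the same: every reply comes from a curated, accurate source.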
“Our research demonstrates that deploying AI can help improve decision-making during crises by ensuring that accurate, timely information reaches those affected,” Komendantova said.
The overall success of AI usage in disaster settings depends on tech developers and emergency management agencies’ ability to create public trust, according to Komendantova and Erokhin.
“As AI becomes more explainable and transparent, public trust in these tools is likely to grow,” Komendantova said.
But public trust may be hard to build through AI during natural disasters, said Joseph Uscinski, a professor at the University of Miami and an expert on conspiracy theories and misinformation.
“It is certainly true that AI may be able to talk people out of conspiracy theories, but it may be difficult during natural disasters,” Uscinski said. “People may be dealing with anxiety in such a way that changing their minds might be difficult.”
Those experiencing the effects of natural disasters and misinformation may not have the time or resources to access chatbots or trustworthy social media posts, he said.
Furthermore, conspiratorial thoughts are a product of people’s complex worldviews, including their group identity and ideological beliefs. These beliefs are not quickly changed, and “people don’t walk around waiting for AI to change their minds,” Uscinski said.
Despite his concerns about AI use in natural disaster settings, he said, “it is certainly worth a try.”
Komendantova advocates for a number of actions to be taken to improve public trust and AI’s ability to fight natural disaster misinformation. These include transparent, ethical AI practices, data governance, user education and AI regulation.
As AI continues to develop, misinformation tactics evolve and severe weather worsens, further research will be needed in this area.
“Future research should focus on enhancing AI’s contextual understanding in disaster scenarios, developing more robust and less-biased models and addressing privacy, transparency and fairness concerns,” Komendantova said. “The field will benefit from increased interdisciplinary collaboration, bringing together expertise from computer science, social sciences, emergency management and ethics to develop practical and holistic solutions.”
With these improvements in mind, Komendantova is confident that AI will be successfully incorporated in emergency response management and information platforms in the near future. She thinks AI will move beyond simply detecting misinformation and will be used to effectively counter it through chatbots, crisis mapping and as a “real-time information partner” for emergency responders.
By Ryan Krugman, Inside Climate News