How AI can actually be helpful in disaster response
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
We often hear big (and unrealistic) promises about the potential of AI to solve the world’s ills, and I was skeptical when I first learned that AI might be starting to aid disaster response, including following the earthquake that has devastated Turkey and Syria.
But one effort from the US Department of Defense does seem to be effective: xView2. Though it’s still in its early phases of deployment, this visual computing project has already helped with disaster logistics and on-the-ground rescue missions in Turkey.
xView2 is an open-source project, sponsored and developed in 2019 by the Pentagon’s Defense Innovation Unit and Carnegie Mellon University’s Software Engineering Institute, that has collaborated with many research partners, including Microsoft and the University of California, Berkeley. It uses machine-learning algorithms in conjunction with satellite imagery from outside providers to identify building and infrastructure damage in a disaster area and categorize its severity much faster than current methods allow.
Ritwik Gupta, the principal AI scientist at the Defense Innovation Unit and a researcher at Berkeley, tells me this means the program can directly help first responders and recovery experts on the ground quickly get an assessment that can aid in finding survivors and help coordinate reconstruction efforts over time.
In this process, Gupta often works with large organizations like the US National Guard, the United Nations, and the World Bank. Over the past five years, xView2 has been deployed by the California National Guard and the Australian Geospatial-Intelligence Organisation in response to wildfires, and more recently during recovery efforts after flooding in Nepal, where it helped identify damage created by subsequent landslides.
In Turkey, Gupta says, xView2 has been used by at least two ground teams of search and rescue personnel from the UN’s International Search and Rescue Advisory Group working in Adiyaman, a city devastated by the earthquake, where residents have been frustrated by the delayed arrival of search and rescue. xView2 has also been used elsewhere in the disaster zone, where it helped workers on the ground be “able to find areas that were damaged that they were unaware of,” he says. He notes that Turkey’s Disaster and Emergency Management Presidency, the World Bank, the International Federation of the Red Cross, and the United Nations World Food Programme have all used the platform in response to the earthquake.
“If we can save one life, that’s a good use of the technology,” Gupta tells me.
How AI can help
The algorithms employ a computer-vision technique related to object recognition, called “semantic segmentation,” which evaluates each individual pixel of an image, along with its relationship to adjacent pixels, to assign that pixel a label such as a damage class.
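To make the per-pixel idea concrete, here is a toy sketch, not xView2’s actual model: each pixel’s “change” score is averaged with its 3×3 neighborhood (the relationship to adjacent pixels) and then bucketed into severity classes. The function name, thresholds, and input format are invented for this illustration; real systems use trained neural networks rather than fixed rules.

```python
import numpy as np

def segment_damage(change_map, thresholds=(0.25, 0.5, 0.75)):
    """Toy per-pixel severity labeling: smooth each pixel with its 3x3
    neighborhood, then bucket the result into classes 0 (no damage)
    through 3 (destroyed). Purely illustrative."""
    h, w = change_map.shape
    # edge padding so border pixels also get a full 3x3 neighborhood
    padded = np.pad(change_map.astype(float), 1, mode="edge")
    smoothed = np.zeros((h, w))
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= 9.0
    # np.digitize maps each smoothed score to 0..3 via the thresholds
    return np.digitize(smoothed, thresholds)

# Example: a 6x6 scene with a "damaged" block in the middle
change = np.zeros((6, 6))
change[1:5, 1:5] = 1.0
labels = segment_damage(change)  # labels[2, 2] == 3, labels[0, 0] == 0
```

The neighborhood averaging is the key point: a pixel is judged in context, so a lone noisy pixel is downgraded while a cluster of changed pixels is labeled as serious damage.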
Below, you can see snapshots of how this looks on the platform, with satellite images of the damage on the left and the model’s assessment on the right—the darker the red, the worse the wreckage. Atishay Abbhi, a disaster risk management specialist at the World Bank, tells me that this same degree of assessment would typically take weeks and now takes hours or minutes.
This is an improvement over more traditional disaster assessment systems, in which rescue and emergency responders rely on eyewitness reports and calls to identify quickly where help is needed. In some more recent cases, fixed-wing aircraft like drones have flown over disaster areas with cameras and sensors, producing data for humans to review, but this can still take days, if not longer. The typical response is further slowed by the fact that different responding organizations often keep their own siloed data catalogues, making it challenging to create a standardized, shared picture of which areas need help. xView2 can create a shared map of the affected area in minutes, which helps organizations coordinate and prioritize responses, saving time and lives.
This technology, of course, is far from a cure-all for disaster response. There are several big challenges to xView2 that currently consume much of Gupta’s research attention.
First and most important is how reliant the model is on satellite imagery, which delivers clear photos only during the day, when there is no cloud cover, and when a satellite is overhead. The first usable images out of Turkey didn’t come until February 9, three days after the first quake. And there are far fewer satellite images taken in remote and less economically developed areas—just across the border in Syria, for example. To address this, Gupta is researching new imaging techniques like synthetic aperture radar, which creates images using microwave pulses rather than light waves.
Second, while the xView2 model is up to 85 to 90% accurate in its evaluation of damage and severity, it can’t spot damage on the sides of buildings, since satellite images offer only a top-down view.
Lastly, Gupta says getting on-the-ground organizations to use and trust an AI solution has been difficult. “First responders are very traditional,” he says. “When you start telling them about this fancy AI model, which isn’t even on the ground and it’s looking at pixels from like 120 miles in space, they’re not gonna trust it whatsoever.”
xView2 assists with multiple stages of disaster response, from immediately mapping out damaged areas to evaluating where safe temporary shelter sites could go to scoping longer-term reconstruction. Abbhi, for one, says he hopes xView2 “will be really important in our arsenal of damage assessment tools” at the World Bank moving forward.
Since the code is open source and the program is free, anyone can use it, and Gupta intends to keep it that way. “When companies come in and start saying, ‘We could commercialize this,’ I hate that,” he says. “This should be a public service that’s operated for the good of everyone.” Gupta is working on a web app so any user can run assessments; currently, organizations reach out to xView2 researchers for the analysis.
Rather than writing off or over-hyping the role that emerging technologies can play in big problems, Gupta says, researchers should focus on the types of AI that can make the biggest humanitarian impact. “How do we shift the focus of AI as a field to these immensely hard problems?” he asks. “[These are], in my opinion, much harder than—for example—generating new text or new images.”
What else I’m reading
Teenage girls are not all right. New research from the CDC shows that mental health for high school girls has significantly worsened recently—a crisis experts think has been intensified by social media and the pandemic.
- Almost 1 in 3 reported that they seriously considered suicide in 2021, which is up 60% from 2011. Girls fared worse than boys in almost every measure that the CDC tracked, including higher levels of online bullying.
- This reminds me of several reports from recent years showing that visual social media platforms like Instagram, TikTok, and Snapchat have had an outsize negative impact on how girls deal with an image-obsessed culture.
- Last year, I investigated the effects of augmented-reality technologies like face filters on young girls: there are real risks, like the increase of anxiety and challenges to healthy identity formation.
Russia has moved thousands of children out of Ukraine, according to new research based on open-source intelligence (OSINT) from the Humanitarian Research Lab at the Yale School of Public Health.
- The lab’s Conflict Observatory project identified the “systematic relocation of at least 6,000 children from Ukraine” to a network of 43 facilities in Russia, including summer camps and adoption centers that appear to conduct “political re-education.”
- OSINT, the process of gathering publicly accessible information from sources like social media sites and satellite imagery, has been massively important in chronicling war crimes throughout the now year-long conflict. The lab used a combination of firsthand accounts, photographs and information about the camps from the web, and high-resolution satellite imagery to document and research onsite activities.
What I learned this week
Speaking of Russia, I recently learned about an obscure government office called the Main Radio Frequency Center that attempts to control how the country and its occupied areas use the internet. This is the unit that the Kremlin relies on to run its sweeping efforts to censor and surveil digital spaces, and it uses surprisingly manual and blunt tools.
In an investigation published earlier this month, Daniil Belovodyev and Anton Bayev of RadioFreeEurope/RadioLiberty’s Russian Investigation Unit reviewed more than 700,000 letters from the unit and 2 million internal documents that were obtained by a Belarusian hacker organization in November 2022. They reveal how the office scours Russian social networks like VK and Odnoklassniki, as well as YouTube and Telegram, to run daily reports on user-generated content and look for signs of internal dissent among Russian citizens (which the center eerily calls “protest moods”). The office has ramped up its efforts since the beginning of the Ukrainian invasion. The Main Radio Frequency Center has invested in bots in an attempt to automate its censorship, but the office also coordinates directly with engineers at web hosting companies and search engines based in Russia, like Yandex, by flagging sites it deems problematic. The investigation reveals just how much effort Russia is putting into its attempt at a great firewall, and how unsophisticated and patchy its tactics can be.
This piece has been updated since it was sent as part of The Technocrat to more clearly reflect xView2’s level of precision and the technology’s development process.