AI system identifies buildings damaged by wildfire
People around the globe have endured the nerve-wracking anxiety of waiting weeks or months to learn whether wildfires, which now burn with increasing intensity, have damaged their homes. Now, once the smoke has cleared enough for aerial photography, researchers have found a way to identify building damage within minutes.
Through a system they call DamageMap, a team at Stanford University and the California Polytechnic State University (Cal Poly) has brought an artificial intelligence approach to building assessment: Instead of comparing before-and-after photos, they’ve trained a program using machine learning to rely solely on post-fire images. The findings appear in the International Journal of Disaster Risk Reduction.
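To make the idea concrete, below is a minimal, hypothetical sketch of what a post-fire-only classifier can look like: a single building crop from post-fire imagery is fed to a pretrained image model fine-tuned for a damaged/undamaged label, with no pre-fire reference image required. This is not the authors' released code; the backbone choice, class labels and file names are illustrative assumptions.

```python
# Illustrative sketch only: classify one building from a post-fire image crop.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone with its final layer swapped for a binary head
# (assumed setup; in practice this head would be fine-tuned on labeled crops).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # damaged vs. undamaged
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_building(crop_path: str) -> str:
    """Label a single building footprint crop using post-fire imagery only."""
    image = Image.open(crop_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return ["undamaged", "damaged"][logits.argmax(dim=1).item()]

# Hypothetical usage on a crop extracted from an aerial mosaic:
# print(classify_building("building_0421_postfire.png"))
```

Because each prediction depends only on the post-fire image, the same pipeline can run over whatever imagery is available after the smoke clears, which is what makes the minutes-scale turnaround plausible.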
“We wanted to automate the process and make it much faster for first responders or even for citizens that might want to know what happened to their house after a wildfire,” said lead study author Marios Galanis, a graduate student in the Civil and Environmental Engineering Department at Stanford’s School of Engineering. “Our model results are on par with human accuracy.”
The current method of assessing damage involves people going door-to-door to check every building. While DamageMap is not intended to replace in-person damage classification, it could serve as a scalable supplementary tool, offering immediate results and the exact locations of the buildings it identifies. The researchers tested it on a variety of satellite, aerial and drone imagery, achieving at least 92 percent accuracy.
“With this application, you could probably scan the whole town of Paradise in a few hours,” said senior author G. Andrew Fricker, an assistant professor at Cal Poly, referencing the Northern California town destroyed by the 2018 Camp Fire. “I hope this can bring more information to the decision-making process for firefighters and emergency responders, and also assist fire victims by getting information to help them file insurance claims and get their lives back on track.”
A different approach
Most computational systems cannot classify building damage efficiently because the AI must compare post-disaster photos with pre-disaster images captured by the same satellite, at the same camera angle and under the same lighting conditions, which can be expensive to obtain or simply unavailable. Current hardware is not advanced enough to capture high-resolution imagery daily, so those systems cannot rely on consistent photos, according to the researchers.
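For contrast with the sketch above, a typical change-detection setup looks roughly like the following: the network only accepts a matched, co-registered pre-fire/post-fire pair, which is exactly the data requirement the researchers describe as expensive or unavailable. The architecture here is a generic illustration, not any specific published model.

```python
# Illustrative change-detection-style classifier that requires a pre-fire image.
import torch
import torch.nn as nn

class PairedChangeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # 6 input channels: RGB of the pre-fire image stacked with RGB of the post-fire image.
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # damaged vs. undamaged

    def forward(self, pre_img: torch.Tensor, post_img: torch.Tensor) -> torch.Tensor:
        # Both images must cover the same footprint at matching angle and resolution;
        # that alignment requirement is what makes suitable pre-fire data hard to get.
        x = torch.cat([pre_img, post_img], dim=1)
        return self.head(self.features(x).flatten(1))
```

DamageMap's contribution, as the researchers frame it, is removing the paired-input constraint altogether rather than improving this kind of model.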