Machine learning model doubles accuracy of global landslide ‘nowcasts’

Image shows a map of potential landslide risk output by NASA’s Landslide Hazard Assessment Model (LHASA) in June 2021. Red indicates the highest risk and dark blue indicates the lowest risk. Credit: NASA

Every year, landslides—the movement of rock, soil, and debris down a slope—cause thousands of deaths, billions of dollars in damages, and disruptions to roads and power lines. Because terrain, characteristics of the rocks and soil, weather, and climate all contribute to landslide activity, accurately pinpointing areas most at risk of these hazards at any given time can be a challenge. Early warning systems are generally regional—based on region-specific data provided by ground sensors, field observations, and rainfall totals. But what if we could identify at-risk areas anywhere in the world at any time?

Enter NASA’s global Landslide Hazard Assessment (LHASA) model and mapping tool.

LHASA Version 2, released last month along with corresponding research, is a machine learning-based model that analyzes a collection of individual variables and satellite-derived datasets to produce customizable “nowcasts.” These timely and targeted nowcasts are estimates of potential landslide activity in near-real time for each 1-square-kilometer area between the poles. The model factors in the slope of the land (higher slopes are more prone to landslides), distance to geologic faults, the makeup of rock, past and present rainfall, and satellite-derived soil moisture and snow mass data.

“The model processes all of this data and outputs a probabilistic estimate of landslide hazard in the form of an interactive map,” said Thomas Stanley, Universities Space Research Association scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, who led the research. “This is valuable because it provides a relative scale of landslide hazard, rather than just saying there is or is not landslide risk. Users can define their area of interest and adjust the categories and probability threshold to suit their needs.”

In order to “teach” the model, researchers input a table with all of the relevant landslide variables and many locations that have recorded landslides in the past. The machine learning algorithm takes the table and tests out different possible scenarios and outcomes, and when it finds the one that fits the data most accurately, it outputs a decision tree. It then identifies the errors in the decision tree and calculates another tree that fixes those errors. This process continues until the model has “learned” and improved 300 times.
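The procedure described above is the standard recipe for gradient-boosted decision trees. Below is a minimal sketch using scikit-learn; the feature names, the toy table, and the hyperparameters are illustrative assumptions, not the actual LHASA training data or settings.

```python
# Illustrative sketch of the boosted-decision-tree training described above.
# The feature names and toy data are assumptions for demonstration only.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# One row per grid cell and date: predictor variables plus a label recording
# whether a landslide occurred there.
table = pd.DataFrame({
    "slope_deg":      [32.0, 5.0, 28.0, 41.0],
    "fault_dist_km":  [1.2, 55.0, 3.4, 0.8],
    "rain_7day_mm":   [180.0, 12.0, 95.0, 240.0],
    "soil_moisture":  [0.42, 0.15, 0.33, 0.48],
    "snow_mass_kgm2": [0.0, 0.0, 15.0, 4.0],
    "landslide":      [1, 0, 0, 1],
})

# n_estimators=300 mirrors the 300 successive trees mentioned in the article:
# each new tree is fit to correct the errors of the ensemble built so far.
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.1)
model.fit(table.drop(columns="landslide"), table["landslide"])

# The output is a probabilistic estimate of landslide hazard per grid cell.
print(model.predict_proba(table.drop(columns="landslide"))[:, 1])
```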

“The result is that this version of the model is roughly twice as accurate as the first version of the model, making it the most accurate global nowcasting tool available,” said Stanley. “While the accuracy is highest—often 100%—for major landslide events triggered by tropical cyclones, it improved significantly across all inventories.”

Version 1, released in 2018, was not a machine learning model. It combined satellite precipitation data with a global landslide susceptibility map to produce its nowcasts. It made its predictions using one decision tree largely based on rainfall data from the preceding week and categorized each grid cell as low, moderate, or high risk.
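For contrast, here is a minimal sketch of the single-decision-tree logic described for Version 1, with made-up thresholds and categories; the real model’s rules and cutoffs differ.

```python
# Toy stand-in for Version 1's single decision tree: one susceptibility value
# plus the past week's rainfall decide a coarse hazard category per grid cell.
# Thresholds are invented for illustration.
def nowcast_v1(susceptibility, rain_7day_mm):
    if susceptibility == "low":
        return "low"
    if rain_7day_mm > 200:
        return "high"
    if rain_7day_mm > 80:
        return "moderate"
    return "low"

print(nowcast_v1("high", 150))  # -> "moderate"
```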

This image shows a landslide “nowcast” for Nov. 18, 2020 during the passage of Hurricane Iota through Nicaragua and Honduras. Credit: NASA

“In this new version, we have 300 trees of better and better information compared with the first version, which was based on just one decision tree,” Stanley said. “Version 2 also incorporates more variables than its predecessor, including soil moisture and snow mass data.”

Generally speaking, soil can only absorb so much water before it becomes saturated and, combined with other conditions, poses a landslide risk. By incorporating soil moisture data, the model can discern how much water is already present in the soil and how much additional rainfall would push it past that threshold. Likewise, if the model knows the amount of snow present in a given area, it can factor in the additional water entering the soil as the snow melts. This data comes from the Soil Moisture Active Passive (SMAP) satellite, which is managed by NASA’s Jet Propulsion Laboratory in Southern California. It launched in 2015 and provides continuous coverage.
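As a rough illustration of that reasoning, the sketch below combines an assumed soil-storage capacity with incoming rain and snowmelt; the porosity, soil depth, and threshold logic are illustrative assumptions, not LHASA parameters.

```python
def additional_rain_to_saturation(soil_moisture, porosity=0.45, soil_depth_mm=500.0):
    """Estimate how much more water (mm) the soil column can absorb.

    Toy relationship: available storage is the unfilled pore space in a soil
    column of fixed depth. Porosity and depth values are illustrative.
    """
    available_fraction = max(porosity - soil_moisture, 0.0)
    return available_fraction * soil_depth_mm

def effective_water_input(rain_mm, snowmelt_mm):
    # Snowmelt adds to the water entering the soil, as described above.
    return rain_mm + snowmelt_mm

storage = additional_rain_to_saturation(soil_moisture=0.40)
incoming = effective_water_input(rain_mm=120.0, snowmelt_mm=30.0)
print("Soil likely saturated" if incoming > storage else "Below saturation")
```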

LHASA Version 2 also adds a new exposure feature that analyzes the distribution of roads and population in each grid cell to calculate the number of people or infrastructure exposed to landslide hazards. The exposure data is downloadable and has been integrated into the interactive map. Adding this type of information about exposed roads and populations vulnerable to landslides helps improve situational awareness and actions by stakeholders from international organizations to local officials.
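A minimal sketch of the kind of exposure calculation described here, assuming hypothetical per-cell hazard, population, and road-length grids; the array values and the probability threshold are purely illustrative.

```python
import numpy as np

# Hypothetical per-cell arrays on the same grid as the nowcast.
hazard_prob = np.array([[0.02, 0.65], [0.80, 0.10]])  # landslide hazard
population  = np.array([[1200,  300], [4500,   50]])  # people per cell
road_km     = np.array([[3.5,   1.0], [7.2,   0.4]])  # road length per cell

# Cells whose hazard exceeds a user-chosen probability threshold.
threshold = 0.5
exposed = hazard_prob > threshold

print("People exposed:", int(population[exposed].sum()))
print("Road km exposed:", float(road_km[exposed].sum()))
```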

Building on years of research and applications, LHASA Version 2 was tested by the NASA Disasters program and stakeholders in real-world situations leading up to its formal release. In November 2020, when hurricanes Eta and Iota struck Central America within a span of two weeks, researchers working with NASA’s Earth Applied Sciences Disasters program used LHASA Version 2 to generate maps of predicted landslide hazard for Guatemala and Honduras. The researchers overlaid the model with district-level population data so they could better assess the proximity between potential hazards and densely populated communities. Disasters program coordinators shared the information with national and international emergency response agencies to provide better insight of the hazards to personnel on the ground.

While it is a useful tool for planning and risk mitigation purposes, Stanley says the model is meant to be used with a global perspective in mind rather than as a local emergency warning system for any specific area. However, future research may expand that goal.

“We are working on incorporating a precipitation forecast into LHASA Version 2, and we hope it will provide further information for advanced planning and actions prior to major rainfall events,” said Stanley. One challenge, Stanley notes, is obtaining a long-enough archive of forecasted precipitation data from which the model can learn.

In the meantime, governments, relief agencies, emergency responders, and other stakeholders (as well as the general public) have access to a powerful risk assessment tool in LHASA Version 2.



Citation:
Machine learning model doubles accuracy of global landslide ‘nowcasts’ (2021, June 10)
retrieved 10 June 2021
from https://phys.org/news/2021-06-machine


New take on machine learning helps us ‘scale up’ phase transitions

A correlation configuration (top left) is reduced using a newly developed block-cluster transformation (top right). Both the original and reduced configurations have an improved estimator technique applied to give configuration pairs of different size (bottom row). Using these training pairs, a CNN can learn to convert small patterns to large ones, achieving a successful inverse RG transformation. Credit: Tokyo Metropolitan University

Researchers from Tokyo Metropolitan University have enhanced “super-resolution” machine learning techniques to study phase transitions. They identified key features of how large arrays of interacting particles behave at different temperatures by simulating small arrays and then using a convolutional neural network, trained on correlation configurations, to generate a good estimate of what a larger array would look like. The massive saving in computational cost may open up unique ways of understanding how materials behave.

We are surrounded by different states or phases of matter, i.e., gases, liquids, and solids. The study of phase transitions, how one phase transforms into another, lies at the heart of our understanding of matter in the universe, and remains a hot topic for physicists. In particular, the idea of universality, in which wildly different materials behave in similar ways thanks to a few shared features, is a powerful one. That’s why physicists study model systems, often simple grids of particles that interact via simple rules. These models distill the essence of the common physics shared by materials and, amazingly, still exhibit many of the properties of real materials, like phase transitions. Due to their elegant simplicity, these rules can be encoded into simulations that tell us what materials look like under different conditions.

However, like all simulations, the trouble starts when we want to look at lots of particles at the same time. The computation time required becomes particularly prohibitive near phase transitions, where dynamics slows down and the correlation length, a measure of how the state of one atom relates to the state of another some distance away, grows larger and larger. This is a real dilemma if we want to apply these findings to the real world: real materials contain orders of magnitude more atoms and molecules than simulated matter.

That’s why a team led by Professors Yutaka Okabe and Hiroyuki Mori of Tokyo Metropolitan University, in collaboration with researchers at the Shibaura Institute of Technology and the Bioinformatics Institute of Singapore, has been studying how to reliably extrapolate smaller simulations to larger ones using a concept known as the inverse renormalization group (RG). The renormalization group is a fundamental concept in the understanding of phase transitions and led Kenneth Wilson to the 1982 Nobel Prize in Physics. Recently, the field met a powerful ally in convolutional neural networks (CNNs), the same machine learning tool that helps computer vision identify objects and decipher handwriting. The idea is to give an algorithm the state of a small array of particles and have it estimate what a larger array would look like. There is a strong analogy to super-resolution imaging, where blocky, pixelated images are used to generate smoother images at a higher resolution.
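A minimal sketch, assuming PyTorch, of that super-resolution idea: a small CNN maps an L x L configuration to a 2L x 2L one. The architecture and layer sizes here are illustrative assumptions, not those used in the paper.

```python
# Toy CNN upscaler: learns to map small configurations to larger ones.
import torch
import torch.nn as nn

class UpscaleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2),  # L -> 2L
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = UpscaleCNN()
small = torch.rand(8, 1, 16, 16)  # batch of 16x16 configurations
large = model(small)              # predicted 32x32 configurations
print(large.shape)                # torch.Size([8, 1, 32, 32])
```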

Trends found from simulations of larger systems are faithfully reproduced by the trained CNNs for both Ising (left) and three-state Potts (right) models. (inset) Correct temperature rescaling is achieved using data at some arbitrary system size. Credit: Tokyo Metropolitan University

The team has been looking at how this applies to spin models of matter, where particles interact with other nearby particles via the direction of their spins. Previous attempts have particularly struggled to apply this to systems at temperatures above a phase transition, where configurations tend to look more random. Now, instead of using spin configurations, i.e., simple snapshots of which direction the particle spins are pointing, they considered correlation configurations, where each particle is characterized by how similar its own spin is to that of other particles, specifically those which are very far away. It turns out correlation configurations contain more subtle cues about how particles are arranged, particularly at higher temperatures.
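As a rough illustration of a correlation configuration for the 2D Ising model, the sketch below pairs each site with the maximally distant site on a periodic lattice; the paper’s actual construction uses an improved estimator, so this is a simplification.

```python
# Toy correlation configuration: each site is marked by whether its spin
# agrees with the spin of a maximally distant site on the periodic lattice.
import numpy as np

def correlation_configuration(spins):
    """spins: 2D array of +1/-1 Ising spins with periodic boundaries."""
    L = spins.shape[0]
    partner = np.roll(np.roll(spins, L // 2, axis=0), L // 2, axis=1)
    return spins * partner  # +1 where the distant pair agrees, -1 otherwise

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
corr = correlation_configuration(spins)
print(corr.mean())  # rough estimate of the long-distance correlation
```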

Like all machine learning techniques, the key is to be able to generate a reliable training set. The team developed a new algorithm called the block-cluster transformation for correlation configurations to reduce these down to smaller patterns. Applying an improved estimator technique to both the original and reduced patterns, they had pairs of configurations of different size based on the same information. All that’s left is to train the CNN to convert the small patterns to larger ones.
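A hedged sketch of how such training pairs might be assembled; here a plain 2x2 block average stands in for the paper’s block-cluster transformation, purely to illustrate pairing reduced patterns with the originals.

```python
# Build (small, large) training pairs by reducing each configuration.
import numpy as np

def block_reduce(config):
    """Reduce an L x L configuration to L/2 x L/2 by 2x2 block averaging."""
    L = config.shape[0]
    return config.reshape(L // 2, 2, L // 2, 2).mean(axis=(1, 3))

# Each training pair is (reduced pattern, original pattern); the CNN then
# learns the inverse map from small patterns back to large ones.
originals = [np.sign(np.random.randn(32, 32)) for _ in range(100)]
pairs = [(block_reduce(c), c) for c in originals]
print(pairs[0][0].shape, pairs[0][1].shape)  # (16, 16) (32, 32)
```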

The group considered two systems, the 2D Ising model and the three-state Potts model, both key benchmarks for studies of condensed matter. For both, they found that their CNN could use a simulation of a very small array of points to reproduce how a measure of the correlation, g(T), changed across a phase transition point in much larger systems. Compared with direct simulations of larger systems, the CNN reproduced the same trends for both models, combined with a simple temperature rescaling based on data at an arbitrary system size.

A successful implementation of inverse RG transformations promises to give scientists a glimpse of previously inaccessible system sizes, and help physicists understand the larger scale features of materials. The team now hopes to apply their method to other models which can map more complex features such as a continuous range of spins, as well as the study of quantum systems.



More information:
Kenta Shiina et al, Inverse renormalization group based on image super-resolution using deep convolutional networks, Scientific Reports (2021). DOI: 10.1038/s41598-021-88605-w

Provided by
Tokyo Metropolitan University

Citation:
New take on machine learning helps us ‘scale up’ phase transitions (2021, May 31)
retrieved 31 May 2021
from https://phys.org/news/2021-05-machine-scale-phase-transitions.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
