More stories


    To find the right network model, compare all possible histories

    Two family members test positive for COVID-19 — how do we know who infected whom? In a perfect world, network science could provide a probable answer to such questions. It could also tell archaeologists how a shard of Greek pottery came to be found in Egypt, or help evolutionary biologists understand how a long-extinct ancestor metabolized proteins.
    As the world is, scientists rarely have the historical data they need to see exactly how nodes in a network became connected. But a new paper published in Physical Review Letters offers hope for reconstructing the missing information, using a new method to evaluate the rules that generate network models.
    “Network models are like impressionistic pictures of the data,” says physicist George Cantwell, one of the study’s authors and a postdoctoral researcher at the Santa Fe Institute. “And there have been a number of debates about whether the real networks look enough like these models for the models to be good or useful.”
    Normally when researchers try to model a growing network — say, a group of individuals infected with a virus — they build up the model network from scratch, following a set of mathematical instructions to add a few nodes at a time. Each node could represent an infected individual, and each edge a connection between those individuals. When the clusters of nodes in the model resemble the data drawn from the real-world cases, the model is considered to be representative — a problematic assumption when the same pattern can result from different sets of instructions.
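A minimal sketch can make this growth procedure concrete. The snippet below is illustrative only (the rules, the degree-plus-one weighting, and the network size are invented for the example, not taken from the study); it grows a network one node at a time under either a uniform or a preferential attachment rule:

```python
import random

rng = random.Random(0)

def grow_network(n, rule="preferential"):
    """Build a network one node at a time; each new node attaches to one
    existing node chosen according to the given rule."""
    edges = []
    degree = {0: 0}
    for new in range(1, n):
        existing = list(degree)
        if rule == "uniform":
            target = rng.choice(existing)
        else:  # preferential: pick proportionally to degree + 1
            weights = [degree[v] + 1 for v in existing]
            target = rng.choices(existing, weights=weights)[0]
        edges.append((target, new))
        degree[target] += 1
        degree[new] = 1
    return edges

print(grow_network(6))
```

Each run yields one possible history; different rules tend to produce networks with different degree patterns, which is exactly why matching a single snapshot to a single rule is ambiguous.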
    Cantwell and co-authors Guillaume St-Onge (Université Laval, Quebec) and Jean-Gabriel Young (University of Vermont) wanted to bring a shot of statistical rigor to the modeling process. Instead of comparing features from a snapshot of the network model against the features from the real-world data, they developed methods to calculate the probability of each possible history for a growing network. Given competing sets of rules, which could represent real-world processes such as contact, droplet, or airborne transmission, the authors can apply their new tool to determine the probability of specific rules resulting in the observed pattern.
    “Instead of just asking ‘Does this picture look more like the real thing?’” Cantwell says, “we can now ask material questions like, ‘Did it grow by these rules?’” Once the most likely network model is found, it can be rewound to answer questions such as who was infected first.
    In their current paper, the authors demonstrate their algorithm on three simple networks that correspond to previously documented datasets with known histories. They are now working to apply the tool to more complicated networks, which could find applications across any number of complex systems.
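The idea of comparing rules by summing over possible histories can be sketched by brute force on a toy network. The example below scores a five-node network under two candidate attachment rules by enumerating every arrival order; it is a schematic of the approach under the simplifying assumption that each new node attached to exactly one existing node, not the authors' algorithm, which handles larger networks without enumerating permutations:

```python
from itertools import permutations

# Toy "observed" network (a small tree).
edges = [(0, 1), (0, 2), (1, 3), (1, 4)]
nodes = {0, 1, 2, 3, 4}
adj = {n: set() for n in nodes}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def history_probability(order, rule):
    """Probability of the attachment choices along one arrival order,
    under a candidate growth rule ('uniform' or 'preferential')."""
    present = [order[0]]
    degree = {order[0]: 0}
    p = 1.0
    for new in order[1:]:
        targets = adj[new] & set(present)
        if len(targets) != 1:   # inconsistent with single-attachment growth
            return 0.0
        target = targets.pop()
        if rule == "uniform":
            p *= 1.0 / len(present)
        else:                   # attach proportionally to degree + 1
            total = sum(degree[n] + 1 for n in present)
            p *= (degree[target] + 1) / total
        degree[target] += 1
        degree[new] = 1
        present.append(new)
    return p

# Unnormalized likelihood of the observed network under each rule,
# summed over every possible history (feasible only for tiny networks).
likelihoods = {
    rule: sum(history_probability(o, rule) for o in permutations(nodes))
    for rule in ("uniform", "preferential")
}
print(likelihoods)
```

The rule with the higher total is the better explanation of how the toy network grew; the point of the new method is to make this kind of comparison tractable when enumeration is impossible.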

    Story Source:
    Materials provided by Santa Fe Institute. Note: Content may be edited for style and length.


    Cell 'bones' mystery solved with supercomputers

    Our cells are filled with ‘bones,’ in a sense. Thin, flexible protein strands called actin filaments help support and move around the bulk of the cells of eukaryotes, which includes all plants and animals. Always on the go, actin filaments constantly grow, shrink, bind with other things, and branch off when cells move.
    Supercomputer simulations have helped solve the mystery of how actin filaments polymerize, or chain together. This fundamental research could be applied to treatments to stop cancer spread, develop self-healing materials, and more.
    “The major findings of our paper explain a property of actin filaments that has been known for about 50 years or so,” said Vilmos Zsolnay, co-author of a study published in November 2020 in the Proceedings of the National Academy of Sciences.
    Researchers have known for decades that one end of the actin filament is very different from the other end, and that this polarization is needed for the actin filament to function as it needs to. The mystery has been how the filaments use this polarization to grow, shrink, and bind.
    “One end of the actin filament elongates much, much faster than the other at a physiological actin protein concentration,” Zsolnay said. “What our study shows is that there is a structural basis for the different polymerization kinetics.” Vilmos Zsolnay is a graduate student in the Department of Biophysical Sciences at the University of Chicago, developing simulations in the group of Gregory Voth.
    “Actin filaments are what give the cell its shape and many other properties,” said Voth, the corresponding author of the study and the Haig P. Papazian Distinguished Service Professor at the University of Chicago. Over time, a cell’s shape changes in a dynamic process.


    “When a cell wants to move forward, for instance, it will polymerize actin in a particular direction. Those actin filaments then push on the cell membrane, which allows the cell to move in that direction,” Voth said. What’s more, other proteins in the cell help align the actin ends that polymerize, or build up, more quickly, so that they push in the exact same direction, directing the cell’s path.
    “We find that one end of the filament has a very loose connection between actin subunits,” Zsolnay said. “That’s the fast end. The loose connection within the actin polymer allows incoming actin monomers to have access to a binding site, such that it can make a new connection quickly and the polymerization reaction can continue.” He contrasted this to the slow end with very tight connections between actin subunits that block an incoming monomer’s ability to polymerize onto that end.
    Zsolnay developed the study’s all-atom molecular dynamics simulation with the Voth Group on the Midway2 computing cluster at the University of Chicago Research Computing Center. He used GROMACS and NAMD software to investigate the equilibrium conformations of the subunits at the filament ends. “This was one of my first projects using a high performance computing cluster,” he said.
    XSEDE, the NSF-funded Extreme Science and Engineering Discovery Environment, then awarded the scientists allocations on the Stampede2 supercomputer at the Texas Advanced Computing Center. “It was very straightforward to test the code on our local cluster here, and then drop a couple of files onto the machines at Stampede2 to start running again within a day,” Zsolnay said.
    “The high performance computing clusters of Stampede2 are really what allowed this work to take place,” he added. “They were able to reach the time and length scales in our simulations that we were interested in. Without the resources provided by XSEDE, we would not have been able to analyze as large of a dataset or have had as much confidence in our findings.”
    They ran nine simulations, each of roughly a million atoms propagated for about a microsecond. “There are three nucleotide states that we were interested in — the ATP, the ADP plus the gamma phosphate, and once phosphate is released, it’s in an ADP nucleotide state,” Zsolnay said.


    The simulations showed the smoking gun of the mystery — distinct equilibrium conformations between the barbed end and the pointed end subunits, which led to meaningful differences in the contacts between neighboring actin monomer subunits.
    An actin monomer in solution has a conformation that’s a little wider than when it’s part of a longer actin polymer. The previous model, said Zsolnay, assumed that the wide shape transitions into the flattened shape once it polymerizes, almost discretely.
    “What we saw when we started the filament with all of the subunits in the flattened state, the ones at the end relaxed to resemble the monomeric state characterized by a wider shape,” Zsolnay explained. “At both of the ends, that same mechanism of the widening of the actin molecule led to very different contacts between subunits.” At the fast, barbed end there was a separation between the two molecules, whereas at the pointed end there was a very tight connection between them.
    Research into actin filaments could find wide-ranging applications, such as improving therapeutics. “What’s in the news right now is coronavirus,” Voth said, referring to the role of the innate immune system. It involves white blood cells called neutrophils that gobble up bacteria or other pathogens in one’s blood stream. “What’s critical to their ability to sniff out and seek pathogens is their ability to move through an environment and find the pathogens wherever they are. In the immune response, it’s very important,” he added.
    And then there’s metastatic cancer, where one or a couple of tumor cells will start to migrate, spreading to other parts of the body. “If you could disrupt that in some way, or make it so that it’s not as reliable for your cancerous cells, then you could make a cancer treatment based off of that information,” Voth said.
    “One angle that Prof. Voth and I find particularly interesting is from a materials science standpoint,” said Zsolnay. The amino acids in the actin molecule are roughly the same throughout plants, animals, and yeasts. “That gives a hint to us that there’s something special about the material properties of actin molecules that can’t be reproduced using a different set of amino acids,” he added.
    This understanding could help advance development of biomimetic materials that repair themselves. “You can imagine, in the future, a new type of material that heals itself. For instance, if a bucket gets a hole in it, the material could sense that a wound has occurred and heal itself, just like human tissue would,” Zsolnay added.
    Said Voth: “People are really very keen on biomimetic materials — things that behave like these polymers. Our work is explaining a critical thing, which is the polarization of actin filaments.”


    AI used to predict early symptoms of schizophrenia in relatives of patients

    University of Alberta researchers have taken another step forward in developing an artificial intelligence tool to predict schizophrenia by analyzing brain scans.
    In recently published research, the tool was used to analyze functional magnetic resonance images of 57 healthy first-degree relatives (siblings or children) of schizophrenia patients. It accurately identified the 14 individuals who scored highest on a self-reported schizotypal personality trait scale.
    Schizophrenia, which affects 300,000 Canadians, can cause delusions, hallucinations, disorganized speech, trouble with thinking and lack of motivation, and is usually treated with a combination of drugs, psychotherapy and brain stimulation. First-degree relatives of patients have up to a 19 per cent risk of developing schizophrenia during their lifetime, compared with the general population risk of less than one per cent.
    “Our evidence-based tool looks at the neural signature in the brain, with the potential to be more accurate than diagnosis by the subjective assessment of symptoms alone,” said lead author Sunil Kalmady Vasu, senior machine learning specialist in the Faculty of Medicine & Dentistry.
    Kalmady Vasu noted that the tool is designed to be a decision support tool and would not replace diagnosis by a psychiatrist. He also pointed out that while having schizotypal personality traits may cause people to be more vulnerable to psychosis, it is not certain that they will develop full-blown schizophrenia.
    “The goal is for the tool to help with early diagnosis, to study the disease process of schizophrenia and to help identify symptom clusters,” said Kalmady Vasu, who is also a member of the Alberta Machine Intelligence Institute.
    The tool, dubbed EMPaSchiz (Ensemble algorithm with Multiple Parcellations for Schizophrenia prediction), was previously used to predict a diagnosis of schizophrenia with 87 per cent accuracy by examining patient brain scans. It was developed by a team of researchers from U of A and the National Institute of Mental Health and Neurosciences in India. The team also includes three members of the U of A’s Neuroscience and Mental Health Institute — computing scientist and Canada CIFAR AI Chair Russ Greiner from the Faculty of Science, and psychiatrists Andrew Greenshaw and Serdar Dursun, who are authors on the latest paper as well.
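The ensemble-of-parcellations idea can be sketched in a few lines. The toy example below is not EMPaSchiz — the features are synthetic Gaussians, the per-parcellation classifier is a simple nearest-centroid rule, and the region counts are made up — but it shows how classifiers trained on different parcellations of the same scan can vote on a prediction:

```python
import random

rng = random.Random(0)

def make_subject(label, n_feats):
    """Hypothetical per-parcellation feature vector for one scan."""
    shift = 0.8 if label == 1 else 0.0
    return [rng.gauss(shift, 1.0) for _ in range(n_feats)]

parcellations = [20, 50, 100]   # regions per scheme (invented numbers)
train = [(lab, [make_subject(lab, p) for p in parcellations])
         for lab in [0, 1] * 30]

def centroid(vectors):
    n = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(n)]

# Fit one nearest-centroid classifier per parcellation scheme.
models = []
for k in range(len(parcellations)):
    c0 = centroid([feats[k] for lab, feats in train if lab == 0])
    c1 = centroid([feats[k] for lab, feats in train if lab == 1])
    models.append((c0, c1))

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(feats):
    # Ensemble step: each parcellation's classifier votes; majority wins.
    votes = sum(1 if dist2(f, c1) < dist2(f, c0) else 0
                for f, (c0, c1) in zip(feats, models))
    return 1 if votes * 2 > len(models) else 0

test_subject = [make_subject(1, p) for p in parcellations]
print("predicted class:", predict(test_subject))
```

Pooling over parcellations hedges against any single brain atlas being the "wrong" way to carve up the scans, which is one motivation for ensembling in this setting.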
    Kalmady Vasu said next steps for the research will test the tool’s accuracy on non-familial individuals with schizotypal traits, and to track assessed individuals over time to learn whether they develop schizophrenia later in life.
    Kalmady Vasu is also using the same principles to develop algorithms to predict outcomes such as mortality and readmissions for heart failure in cardiovascular patients through the Canadian VIGOUR Centre.
    “Severe mental illness and cardiovascular problems cause functional disability and impair quality of life,” Kalmady Vasu said. “It is very important to develop objective, evidence-based tools for these complex disorders that afflict humankind.”

    Story Source:
    Materials provided by University of Alberta Faculty of Medicine & Dentistry. Original written by Gillian Rutherford.


    Anonymous cell phone data can quantify behavioral changes for flu-like illnesses

    Cell phone data that is routinely collected by telecommunications providers can reveal changes of behavior in people who are diagnosed with a flu-like illness, while also protecting their anonymity, a new study finds. The Proceedings of the National Academy of Sciences (PNAS) published the research, led by computer scientists at Emory University and based on data drawn from a 2009 outbreak of H1N1 flu in Iceland.
    “To our knowledge, our project is the first major, rigorous study to individually link passively-collected cell phone metadata with actual public health data,” says Ymir Vigfusson, assistant professor in Emory University’s Department of Computer Science and a first author of the study. “We’ve shown that it’s possible to do so without compromising privacy and that our method could potentially provide a useful tool to help monitor and control infectious disease outbreaks.”
    The researchers collaborated with a major cell phone service provider in Iceland, along with public health officials of the island nation. They analyzed data for more than 90,000 encrypted cell phone numbers, which represents about a quarter of Iceland’s population. They were permitted to link the encrypted cell phone metadata to 1,400 anonymous individuals who received a clinical diagnosis of a flu-like illness during the H1N1 outbreak.
    “The individual linkage is key,” Vigfusson says. “Many public-health applications for smartphone data have emerged during the COVID-19 pandemic but tend to be based around correlations. In contrast, we can definitively measure the differences in routine behavior between the diagnosed group and the rest of the population.”
    The results showed that, on average, those who received a flu-like diagnosis changed their cell phone usage behavior from the day before their diagnosis through two to four days afterward: they made fewer calls, from fewer unique locations. On average, they also spent longer than usual on the calls they made on the day following their diagnosis.
    The study, which began long before the COVID-19 pandemic, took 10 years to complete. “We were going into new territory and we wanted to make sure we were doing good science, not just fast science,” Vigfusson says. “We worked hard and carefully to develop protocols to protect privacy and conducted rigorous analyses of the data.”
    Vigfusson is an expert on data security and developing software and programming algorithms that work at scale.


    He shares first authorship of the study with two of his former students: Thorgeir Karlsson, a graduate student at Reykjavik University who spent a year at Emory working on the project, and Derek Onken, a Ph.D. student in the Department of Computer Science. Senior author Leon Danon — from the University of Bristol, and the Alan Turing Institute at the British Library — conceived of the study.
    While only about 40 percent of humanity has access to the Internet, cell phone ownership is ubiquitous, even in lower- and middle-income countries, Vigfusson notes. And cell phone service providers routinely collect billing data that provide insights into the routine behaviors of a population, he adds.
    “The COVID pandemic has raised awareness of the importance of monitoring and measuring the progression of an infectious disease outbreak, and how it is essentially a race against time,” Vigfusson says. “More people also realize that there will likely be more pandemics during our lifetimes. It is vital to have the right tools to give us the best possible information quickly about the state of an epidemic outbreak.”
    Privacy concerns are a major reason why cell phone data has not been linked to public health data in the past. For the PNAS paper, the researchers developed a painstaking protocol to minimize these concerns.
    The cell phone numbers were encrypted, and their owners were not identified by name, but by a unique numerical identifier not revealed to the researchers. These unique identifiers were used to link the cell phone data to de-identified health records.


    “We were able to maintain anonymity for individuals throughout the process,” Vigfusson says. “The cell phone provider did not learn about any individual’s health diagnosis and the health department did not learn about any individual’s phone behaviors.”
    The study encompassed 1.5 billion call record data points including calls made, the dates of the calls, the cell tower location where the calls originated and the duration of the calls. The researchers linked this data to clinical diagnoses of a flu-like illness made by health providers in a central database. Laboratory confirmation of influenza was not required.
    The analyses of the data focused on 29 days surrounding each clinical diagnosis, and looked at changes in mobility, the number of calls made and the duration of the calls. They measured these same factors during the same time period for location-matched controls.
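In outline, that comparison resembles the following sketch: for each user, contrast call activity in the days around diagnosis with activity in the rest of the 29-day window, then compare diagnosed users with controls. All the numbers here are invented for illustration; the study's actual data and statistical methodology are far more involved:

```python
import random

rng = random.Random(42)
WINDOW = range(-14, 15)          # the 29 days surrounding each diagnosis
ILL_DAYS = set(range(-1, 5))     # day before diagnosis through four days after

def simulate_daily_calls(diagnosed):
    """Hypothetical daily call counts; diagnosed users are assumed to
    make fewer calls around the diagnosis day."""
    calls = {}
    for day in WINDOW:
        base = 8
        if diagnosed and day in ILL_DAYS:
            base = 5             # behavioral change near diagnosis
        calls[day] = max(0, int(rng.gauss(base, 1.5)))
    return calls

def change_score(calls):
    """Mean calls in the illness window minus mean calls outside it."""
    inside = [calls[d] for d in WINDOW if d in ILL_DAYS]
    outside = [calls[d] for d in WINDOW if d not in ILL_DAYS]
    return sum(inside) / len(inside) - sum(outside) / len(outside)

diagnosed = [change_score(simulate_daily_calls(True)) for _ in range(200)]
controls = [change_score(simulate_daily_calls(False)) for _ in range(200)]

mean = lambda xs: sum(xs) / len(xs)
print("diagnosed:", round(mean(diagnosed), 2),
      "controls:", round(mean(controls), 2))
```

A clearly negative score for the diagnosed group, against a near-zero score for matched controls, is the kind of population-level signature the study detected.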
    “Even though individual cell phones generated only a few data points per day, we were able to see a pattern where the population was behaving differently near the time they were diagnosed with a flu-like illness,” Vigfusson says.
    While the findings are significant, they represent only a first step for the possible broader use of the method, Vigfusson adds. The current work was limited to the unique environment of Iceland: An island with only one port of entry and a fairly homogenous, affluent and small population. It was also limited to a single infectious disease, H1N1, and those who received a clinical diagnosis for a flu-like illness.
    “Our work contributes to the discussion of what kinds of anonymous data linkages might be useful for public health monitoring purposes,” Vigfusson says. “We hope that others will build on our efforts and study whether our method can be adapted for use in other places and for other infectious diseases.”
    The work was funded by the Icelandic Center for Research, Emory University, the National Science Foundation, the Leverhulme Trust, the Alan Turing Institute, the Medical Research Council and a hardware donation from NVIDIA Corporation.


    Biodegradable displays for sustainable electronics

    In the coming years, the increasing use of electronic devices in consumer products and new technologies for the Internet of Things will increase the amount of electronic scrap. To save resources and minimize waste, eco-friendlier production and a more sustainable lifecycle will be needed. Scientists at Karlsruhe Institute of Technology (KIT) are now the first to produce displays whose biodegradability has been checked and certified by an independent office. The results are reported in the Journal of Materials Chemistry.
    “For the first time, we have demonstrated that it is possible to produce sustainable displays that are largely based on natural materials with the help of industrially relevant production methods. After use, these displays are not electronic scrap, but can be composted. In combination with recycling and reuse, this might help minimize or completely prevent some of the environmental impacts of electronic scrap,” says Manuel Pietsch, first author of the publication and researcher at KIT’s Light Technology Institute (LTI), who is working at the Heidelberg InnovationLab.
    Low Energy Consumption, Simple Component Architecture
    The display is based on the so-called electrochromic effect of the organic material used: when a voltage is applied, the material’s light absorption changes and so does its color. Electrochromic displays have low energy consumption and a simple component architecture compared to commercially available displays, such as LED, LCD, and e-paper. Another advantage is that these displays can be produced by inkjet printing in a customized, inexpensive, and material-efficient way. Moreover, this process is suited for scaling to high throughput. The materials used are mainly of natural origin or biocompatible. Sealing with gelatine makes the display adhesive and flexible, so that it can be worn directly on the skin.
    Use in Medical Diagnostics and Food Packaging
    The display is generally suited for short-lifecycle applications in various sectors. In medical diagnostics, for instance, where hygiene plays an important role, sensors and their indicators have to be cleaned or disposed of after use. The newly developed display will not be dumped as electronic scrap, but is compostable. It can also be used for quality monitoring in food packaging, where reuse is not permitted. Digital printing allows the displays to be adapted to persons or complex shapes without any expensive modification of the process. This reduces the consumption of resources.
    “As far as we know, this is the first demonstration of a biodegradable display produced by inkjet printing. It will pave the way to sustainable innovations for other electronic components and to the production of eco-friendlier electronics,” says Gerardo Hernandez-Sosa, Head of LTI’s Printed Electronics Group at the Heidelberg InnovationLab.

    Story Source:
    Materials provided by Karlsruher Institut für Technologie (KIT).


    Toddlers who use touchscreens may be more distractible

    Toddlers with high daily touchscreen use are quicker to look at objects when they appear and are less able to resist distraction compared to toddlers with no or low touchscreen use — according to new research from Birkbeck, University of London, King’s College London and University of Bath.
    The research team say the findings are important for the growing debate around the role of screen time on toddlers’ development especially given the increased levels of screen time seen during the COVID-19 pandemic.
    Lead researcher Professor Tim Smith, from Birkbeck’s Centre for Brain and Cognitive Development, said: “The use of smartphones and tablets by babies and toddlers has accelerated rapidly in recent years. The first few years of life are critical for children to learn how to control their attention and ignore distraction, early skills that are known to be important for later academic achievement. There has been growing concern that toddler touchscreen use may negatively impact their developing attention but previously there was no empirical evidence to support this.”
    To provide such evidence, Professor Smith’s TABLET Project, at Birkbeck’s Centre for Brain and Cognitive Development, recruited 12-month-old infants who had different levels of touchscreen usage. The study followed them over the next 2.5 years, bringing them into the lab three times, at 12 months, 18 months and 3.5 years. During each visit the toddlers took part in computer tasks with an eye-tracker to measure their attention. Objects appeared in different screen locations. How quickly toddlers looked at the objects and how well they could ignore distracting objects were measured.
    Professor Smith states: “We found that infants and toddlers with high touchscreen use were faster to look at objects when they appeared and were less able to ignore distracting objects compared to the low users.”
    Dr Ana Maria Portugal, main researcher on the project, points out: “We are currently unable to conclude that touchscreen use caused the differences in attention, as it could also be that children who are more distractible are more attracted to the attention-grabbing features of touchscreen devices than those who are not.”
    Co-investigator Dr Rachael Bedford, from the Department of Psychology at University of Bath commented: “What we need to know next is how this pattern of increased looking to distracting objects on screens relates to attention in the real-world: is it a positive sign that the children have adapted to the multitasking demands of their complex everyday environment or does it relate to difficulties during tasks that require concentration?”

    Story Source:
    Materials provided by University of Bath.


    Domino effects and synchrony in seizure initiation

    Epilepsy, a neurological disease that causes recurring seizures with a wide array of effects, impacts approximately 50 million people across the world. This condition has been recognized for a long time — written records of epileptic symptoms date all the way back to 4000 B.C.E. But despite this long history of knowledge and treatment, the exact processes that occur in the brain during a seizure remain elusive.
    Scientists have observed distinctive patterns in the electrical activity of neuron groups in healthy brains. Networks of neurons move through states of similar behavior (synchronization) and dissimilar behavior (desynchronization) in a process that is associated with memory and attention. But in a brain with a neurological disorder like epilepsy, synchronization can grow to a dangerous extent when a collection of brain cells begins to emit excess electricity. “Synchronization is thought to be important for information processing,” Jennifer Creaser of the University of Exeter said. “But too much synchronization — such as what occurs in epileptic seizures or Parkinson’s disease — is associated with disease states and can impair brain function.”
    Measurements of epileptic seizures have revealed that desynchronization in brain networks often occurs before or during the early stages of a seizure. As the seizure progresses, networks become increasingly synchronized as additional regions of the brain get involved, leading to high levels of synchronization towards the seizure’s end. Understanding the interactions between the increased electrical activity during a seizure and changes in synchronization is an important step towards improving the diagnosis and treatment of epilepsy.
    Jennifer Creaser, Peter Ashwin (University of Exeter), and Krasimira Tsaneva-Atanasova (University of Exeter, Technical University of Munich, and Bulgarian Academy of Sciences) explored the mechanisms of synchronization that accompany seizure onset in a paper published in December in the SIAM Journal on Applied Dynamical Systems. In their study — which took place at the Engineering and Physical Science Research Council’s Centre for Predictive Modelling in Healthcare at the University of Exeter and University of Birmingham — the researchers used mathematical modeling to explore the interplay between groups of neurons in the brain that leads to transitions in synchronization changes during seizure onset. “Although this is a theoretical study of an idealized model, it is inspired by challenges posed by understanding transitions between healthy and pathological activity in the brain,” Ashwin said.
    The authors utilize an extended version of an existing mathematical model that represents the brain as a network connecting multiple nodes of neuron groups. The model network consists of bistable nodes, meaning that each node is able to switch between two stable states: resting (a quiescent state) and seizure (an active and oscillatory state). These nodes remain in their current state until they receive a stimulus that gives them a sufficient kick to escape to the other state. In the model, this stimulus comes from other connected nodes or appears in the form of “noise” — outside sources of neural activity, such as endocrine responses that are associated with an emotional state or physiological changes due to disease.
    The influence between neighboring nodes is governed by a coupling function that represents the way in which the nodes in the network communicate with each other. The first of the two possible types of coupling is amplitude coupling, which is governed by the “loudness” of the neighboring nodes. The second is phase coupling, which is related to the speed at which the neighbors are firing. Although the researchers needed to utilize a simple formulation on a small network to even make their analysis possible — a more complex and realistic system would be too computationally taxing — they expected their model to exhibit the same types of behaviors that clinical recordings of real brain activity have revealed.
    The nodes in the modeled system all begin in the healthy resting state. In previous research, the authors found that adding a small amount of noise to the system caused each node to transition to the active state — but the system’s geometry was such that returning to the resting state took much longer than leaving. Because of this, these escapes can spread sequentially as a “domino effect” when a number of nodes are connected. This leads to a cascade of escapes to the active state — much like a falling line of dominos — that spreads activity across the network.
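A minimal simulation conveys the flavor of such a cascade. Each node below follows bistable double-well dynamics; a constant stimulus (standing in for the noise-driven escape in the actual model) tips the first node into the active state, and coupling then tilts each successive neighbor's well until it escapes too. The parameter values and chain topology are illustrative, not taken from the paper:

```python
import random

rng = random.Random(1)
N, DT, T = 4, 0.01, 80.0
COUPLING, NOISE = 0.35, 0.05

# Each node sits in a double-well potential V(x) = x**4/4 - x**2/2:
# x = -1 is the quiescent state, x = +1 the active (seizure-like) state.
x = [-1.0] * N
escape_time = [None] * N

for step in range(int(T / DT)):
    t = step * DT
    new_x = []
    for i in range(N):
        drift = x[i] - x[i] ** 3          # bistable dynamics
        if i == 0:
            drift += 0.5                  # stimulus kicks the first node
        else:
            # a neighbour already in the active state tilts this node's well
            drift += COUPLING * (x[i - 1] + 1.0)
        noise = NOISE * rng.gauss(0.0, 1.0) * DT ** 0.5
        new_x.append(x[i] + drift * DT + noise)
    x = new_x
    for i in range(N):
        if escape_time[i] is None and x[i] > 0.5:
            escape_time[i] = t

print("escape times:", escape_time)
```

With these settings the recorded escape times increase along the chain, reproducing the domino-like spread of activity the authors describe: each escape lowers the barrier for the next node.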
    Creaser, Ashwin, and Tsaneva-Atanasova’s new paper builds upon this previous research on the domino effect to explore the transitions into and out of synchrony that occur during cascades of escapes. The team used their model to identify the circumstances that bring about these changes in synchrony and investigate how the type of coupling in a network affects its behavior.
    When the model incorporated only amplitude coupling, it exhibited a new phenomenon in which the domino effect could accelerate or decelerate. However, this effect had no bearing on synchronization changes in the network; all of the nodes started and remained synchronized. But when the model incorporated more general amplitude and phase coupling, the authors found that the nodes’ synchrony could change between consecutive escapes during the domino effect. They then determined which conditions would cause changes in synchrony under phase-amplitude coupling. This change in synchrony throughout the sequence of escapes was the study’s most novel result.
    The results of this work could facilitate further studies on seizures and their management. “The mathematical modeling of seizure initiation and propagation can not only help to uncover seizures’ complex underlying mechanisms, but also provide a means for enabling in silico experiments to predict the outcome of manipulating the neural system,” Tsaneva-Atanasova said. Understanding the interplay between synchronized and desynchronized dynamics in brain networks could help identify clinically-relevant measures for seizure treatment. For example, Creaser and Tsaneva-Atanasova recently served as the lead and senior author, respectively, on a paper that utilized a simpler version of the model to classify patterns of seizure onset that were recorded in a clinical setting. In the future, these kinds of modeling studies may lead to the personalization of seizure identification and treatment for individuals with epilepsy.

    Story Source:
    Materials provided by Society for Industrial and Applied Mathematics. Original written by Jillian Kunze. Note: Content may be edited for style and length.

    Simulating 800,000 years of California earthquake history to pinpoint risks

    Massive earthquakes are, fortunately, rare events. But that scarcity of information blinds us in some ways to their risks, especially when it comes to determining the risk for a specific location or structure.
    “We haven’t observed most of the possible events that could cause large damage,” explained Kevin Milner, a computer scientist and seismology researcher at the Southern California Earthquake Center (SCEC) at the University of Southern California. “Using Southern California as an example, we haven’t had a truly big earthquake since 1857 — that was the last time the southern San Andreas broke into a massive magnitude 7.9 earthquake. A San Andreas earthquake could impact a much larger area than the 1994 Northridge earthquake, and other large earthquakes can occur too. That’s what we’re worried about.”
    Researchers traditionally get around this lack of data by digging trenches to learn more about past ruptures, by collating information from earthquakes around the world into a statistical model of hazard, or by using supercomputers to simulate a specific earthquake in a specific place with a high degree of fidelity.
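The statistical approach typically rests on the Gutenberg-Richter relation, log10 N(≥M) = a − bM, fit to an observed catalog. A minimal sketch, using a synthetic catalog and made-up catalog length in place of real regional data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic catalog standing in for a real regional one (assumed values):
# Gutenberg-Richter implies magnitudes above m_min are exponentially distributed.
m_min, b_true = 4.0, 1.0
mags = m_min + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=5000)

# Maximum-likelihood b-value estimate (Aki's formula):
#   b = log10(e) / (mean(M) - m_min)
b_hat = np.log10(np.e) / (mags.mean() - m_min)
print(f"estimated b-value: {b_hat:.2f}")   # should recover roughly 1.0

# Annual rate of M >= 7 events, assuming (say) 50 catalog-years of observation
years = 50.0
rate_m7 = np.sum(mags >= 7.0) / years
print(f"observed rate of M >= 7: {rate_m7:.3f} per year")
```

The weakness the article points to is visible here: with b ≈ 1, magnitude-7 events are a thousand times rarer than magnitude-4 events, so the catalog contains very few of the quakes that matter most for hazard.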
    However, a new framework for predicting the likelihood and impact of earthquakes over an entire region, developed by a team of researchers associated with SCEC over the past decade, has found a middle ground and perhaps a better way to ascertain risk.
    A new study led by Milner and Bruce Shaw of Columbia University, published in the Bulletin of the Seismological Society of America in January 2021, presents results from a prototype Rate-State earthquake simulator, or RSQSim, that simulates hundreds of thousands of years of seismic history in California. Coupled with another code, CyberShake, the framework can calculate the amount of shaking that would occur for each quake. Their results compare well with historical earthquakes and the results of other methods, and display a realistic distribution of earthquake probabilities.
    According to the developers, the new approach improves the ability to pinpoint how big an earthquake might occur in a given location, allowing building code developers, architects, and structural engineers to design more resilient buildings that can survive earthquakes at a specific site.

    “For the first time, we have a whole pipeline from start to finish where earthquake occurrence and ground-motion simulation are physics-based,” Milner said. “It can simulate up to 100,000s of years on a really complicated fault system.”
    Applying massive computer power to big problems
    RSQSim transforms mathematical representations of the geophysical forces at play in earthquakes — the standard model of how ruptures nucleate and propagate — into algorithms, and then solves them on some of the most powerful supercomputers on the planet. The computationally intensive research was enabled over several years by government-sponsored supercomputers at the Texas Advanced Computing Center, including Frontera — the most powerful system at any university in the world — Blue Waters at the National Center for Supercomputing Applications, and Summit at the Oak Ridge Leadership Computing Facility.
    “One way we might be able to do better in predicting risk is through physics-based modeling, by harnessing the power of systems like Frontera to run simulations,” said Milner. “Instead of an empirical statistical distribution, we simulate the occurrence of earthquakes and the propagation of their waves.”
    “We’ve made a lot of progress on Frontera in determining what kind of earthquakes we can expect, on which fault, and how often,” said Christine Goulet, Executive Director for Applied Science at SCEC, also involved in the work. “We don’t prescribe or tell the code when the earthquakes are going to happen. We launch a simulation of hundreds of thousands of years, and just let the code transfer the stress from one fault to another.”
    The simulations began with the geological topography of California and simulated, over 800,000 virtual years, how stresses form and dissipate as tectonic forces act on the Earth. From these simulations, the framework generated a catalog: a record of each earthquake's location, magnitude, attributes, and time. The catalog that the SCEC team produced on Frontera and Blue Waters was among the largest ever made, Goulet said. The outputs of RSQSim were then fed into CyberShake, which again used computer models of geophysics to predict how much shaking (in terms of ground acceleration or velocity, and duration) would occur as a result of each quake.
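Conceptually, the pipeline reduces to three steps: generate a long synthetic catalog, compute the shaking each event produces at a site, and tally how often a shaking threshold is exceeded. The sketch below mimics that flow with a hypothetical three-event catalog and a crude attenuation formula standing in for CyberShake's physics-based ground-motion simulation; all names and numbers are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    """One catalog entry: when and where a simulated quake occurred."""
    year: float
    magnitude: float
    distance_km: float   # distance from the site of interest

# Tiny stand-in catalog; real RSQSim catalogs span hundreds of thousands of years.
catalog = [
    Event(120.5, 7.8, 30.0),
    Event(455.0, 6.4, 12.0),
    Event(801.2, 7.1, 55.0),
]

def toy_pga(magnitude: float, distance_km: float) -> float:
    """Crude attenuation stand-in (NOT CyberShake's wave-propagation physics):
    shaking grows with magnitude and decays with distance."""
    return math.exp(0.9 * magnitude - 1.2 * math.log(distance_km + 10.0) - 4.0)

def exceedance_rate(events, catalog_years, pga_threshold):
    """Annual rate at which site shaking exceeds the given threshold."""
    count = sum(1 for ev in events
                if toy_pga(ev.magnitude, ev.distance_km) > pga_threshold)
    return count / catalog_years

print(exceedance_rate(catalog, 1000.0, 0.2))   # prints 0.001
```

A very long simulated catalog is what makes the final division meaningful: rates of rare, high-shaking events can be estimated directly by counting rather than extrapolated from a short observed record.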

    “The framework outputs a full slip-time history: where a rupture occurs and how it grew,” Milner explained. “We found it produces realistic ground motions, which tells us that the physics implemented in the model is working as intended.” They have more work planned for validation of the results, which is critical before acceptance for design applications.
    The researchers found that the RSQSim framework produces rich, variable earthquakes overall — a sign it is producing reasonable results — while also generating repeatable source and path effects.
    “For lots of sites, the shaking hazard goes down, relative to state-of-practice estimates,” Milner said. “But for a couple of sites that have special configurations of nearby faults or local geological features, like near San Bernardino, the hazard went up. We are working to better understand these results and to define approaches to verify them.”
    The work is helping to determine the probability of an earthquake occurring along any of California’s hundreds of earthquake-producing faults, the scale of earthquake that could be expected, and how it may trigger other quakes.
    Support for the project comes from the U.S. Geological Survey (USGS), National Science Foundation (NSF), and the W.M. Keck Foundation. Frontera is NSF’s leadership-class national resource. Compute time on Frontera was provided through a Large-Scale Community Partnership (LSCP) award to SCEC that allows hundreds of U.S. scholars access to the machine to study many aspects of earthquake science. LSCP awards provide extended allocations of up to three years to support long-lived research efforts. SCEC — which was founded in 1991 and has computed on TACC systems for over a decade — is a premier example of such an effort.
    The creation of the catalog required eight days of continuous computing on Frontera and used more than 3,500 processors in parallel. Simulating the ground shaking at 10 sites across California required a comparable amount of computing on Summit, the second fastest supercomputer in the world.
    “Adoption by the broader community will be understandably slow,” said Milner. “Because such results will impact safety, it is part of our due diligence to make sure these results are technically defensible by the broader community,” added Goulet. But research results such as these are important in order to move beyond generalized building codes, which in some cases may inadequately represent the risk a region faces and in others may be too conservative.
    “The hope is that these types of models will help us better characterize seismic hazard so we’re spending our resources to build strong, safe, resilient buildings where they are needed the most,” Milner said.
    Video: https://www.youtube.com/watch?v=AdGctQsjKpU&feature=emb_logo