More stories

  • Breakthrough in mobile determination of QT prolongation

    Researchers from Mayo Clinic and AliveCor Inc. have been using artificial intelligence (AI) to develop a mobile device that can identify certain patients at risk of sudden cardiac death. This research has yielded a breakthrough in determining the health of the electrical recharging system in a patient’s heart. The researchers determined that a smartphone-enabled mobile EKG device can rapidly and accurately determine a patient’s QTc, thereby identifying patients at risk of sudden cardiac death from congenital long QT syndrome (LQTS) or drug-induced QT prolongation.
    The heart beats through a complex system of electrical signals that trigger regular, necessary contractions. Clinicians evaluate the heart’s rate-corrected QT interval, or QTc, as a vital barometer of the health of the heart’s electrical recharging system. A potentially dangerous prolonged QTc (in this study, 500 milliseconds or longer) can be caused by:
    More than 100 drugs approved by the Food and Drug Administration (FDA).
    Genetics, including congenital long QT syndrome.
    Many systemic diseases, including even SARS-CoV-2-mediated COVID-19.
    Such a prolonged QTc can predispose people to dangerously fast and chaotic heartbeats, and even sudden cardiac death. For over 100 years, QTc assessment and monitoring have relied heavily on the 12-lead electrocardiogram (EKG). But that could be about to change, according to this research.
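    For readers unfamiliar with the rate correction, the QTc is conventionally computed from the raw QT interval and the RR interval (the time between beats); Bazett’s formula is one common clinical convention. The sketch below is a generic illustration of that arithmetic, not the algorithm used in the study.

        import math

        def qtc_bazett(qt_ms: float, rr_ms: float) -> float:
            """Rate-corrected QT interval via Bazett's formula: QTc = QT / sqrt(RR in seconds)."""
            rr_s = rr_ms / 1000.0              # RR interval converted to seconds
            return qt_ms / math.sqrt(rr_s)

        # Example: QT = 400 ms at 75 beats per minute (RR = 800 ms) gives a QTc of roughly 447 ms,
        # below the 500 ms threshold discussed in the study.
        print(round(qtc_bazett(400, 800)))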
    Under the direction of Michael Ackerman, M.D., Ph.D., a genetic cardiologist at Mayo Clinic, researchers trained and validated an AI-based deep neural network to detect QTc prolongation using AliveCor’s KardiaMobile 6L EKG device. The findings, which were published in Circulation, compared the ability of an AI-enabled mobile EKG to a traditional 12-lead EKG in detecting QT prolongation.
    “This collaborative effort with investigators from academia and industry has yielded what I call a ‘pivot’ discovery,” says Dr. Ackerman, who is director of Mayo Clinic’s Windland Smith Rice Comprehensive Sudden Cardiac Death Program. “Whereby, we will pivot from the old way that we have been obtaining the QTc to this new way. Since Einthoven’s first major EKG paper in 1903, 2021 will mark the new beginning for the QT interval.”
    The team used more than 1.6 million 12-lead EKGs from over a half-million patients to train and validate an AI-based deep neural network to recognize and accurately measure the QTc. Next, this newly developed AI-based QTc assessment, the “QT meter,” was tested prospectively on nearly 700 patients evaluated by Dr. Ackerman in Mayo Clinic’s Windland Smith Rice Genetic Heart Rhythm Clinic. Half of these patients had congenital long QT syndrome.

    The objective was to compare the QTc values from a 12-lead EKG to those from the prototype hand-held EKG device used with a smartphone. Both sets of EKGs were obtained at the same clinical visit, typically within five minutes of each other.
    The AI algorithm’s ability to recognize clinically meaningful QTc prolongation on a mobile EKG device was similar to the EKG assessments made by a trained QT expert and a commercial laboratory specializing in QTc measurements for drug studies. The mobile device effectively detected a QTc value of greater than or equal to 500 milliseconds, performing with:
    80% sensitivity, meaning that fewer cases of QTc prolongation were missed.
    94.4% specificity, meaning that it was highly accurate in predicting who did not have a prolonged QTc.
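    For reference, these two figures come from a standard confusion-matrix calculation: sensitivity is the fraction of truly prolonged QTc cases the device flags, and specificity is the fraction of normal cases it correctly clears. The counts below are made up to reproduce the reported rates; they are not the study’s data.

        def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
            """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
            return tp / (tp + fn), tn / (tn + fp)

        # Illustrative counts only (true positives, false negatives, true negatives, false positives):
        sens, spec = sensitivity_specificity(tp=80, fn=20, tn=944, fp=56)
        print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")   # 80.0%, 94.4%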
    “The ability to equip mobile EKG devices with accurate AI-powered approaches capable of calculating accurately the QTc represents a potential paradigm shift regarding how and where the QT interval can be assessed,” says John Giudicessi, M.D., Ph.D., a Mayo Clinic cardiology fellow and first author of the study.
    “Currently, AliveCor’s KardiaMobile 6L EKG device is FDA-cleared for detection of atrial fibrillation, bradycardia, and tachycardia. Once FDA clearance is received for this AI-based QTc assessment, we will have a true QT meter that can enable this emerging vital sign to be obtained easily and accurately,” says Dr. Ackerman. “Akin to a glucose meter for diabetics, for example, this QT meter will provide an early warning system, enabling patients with congenital or acquired LQTS to be identified and potentially lifesaving adjustments to their medications and electrolytes to be made.”
    “This point-of-care application of artificial intelligence is massively scalable, since it is linked to a smartphone. It can save lives by telling a person that a specific medication may be harmful before he or she takes the first pill,” says Paul Friedman, M.D., chair of the Department of Cardiovascular Medicine at Mayo Clinic in Rochester. “This allows a potentially life-threatening condition to be detected before symptoms are manifest.”
    “Regularly monitoring for LQTS using KardiaMobile 6L allows for accurate, real-time data collection outside the walls of a hospital,” says David Albert, M.D., founder and chief medical officer at AliveCor Inc. “Because LQTS can be intermittent and elusive, the ability to detect this rhythm abnormality without a 12-lead EKG — which requires the patient be in-hospital — can improve patient outcomes and save on hospital resources, while still providing the reliable and timely data physicians and their patients need.”
    This research was sponsored by the Mayo Clinic Windland Smith Rice Comprehensive Sudden Cardiac Death Program. Mayo Clinic; Zachi Attia, Ph.D.; Peter Noseworthy, M.D.; Dr. Ackerman; and Dr. Friedman have a financial interest with AliveCor, Inc. related to this research.

    Story Source:
    Materials provided by Mayo Clinic. Original written by Terri Malloy. Note: Content may be edited for style and length.

  • Photonics for artificial intelligence and neuromorphic computing

    Scientists have given a fascinating new insight into the next steps to develop fast, energy-efficient, future computing systems that use light instead of electrons to process and store information — incorporating hardware inspired directly by the functioning of the human brain.
    A team of scientists, including Professor C. David Wright from the University of Exeter, has explored the future potential for computer systems by using photonics in place of conventional electronics.
    The article is published today (January 29th 2021) in the journal Nature Photonics.
    The study focuses on potential solutions to one of the world’s most pressing computing problems — how to develop computing technologies to process this data in a fast and energy efficient way.
    Contemporary computers are based on the von Neumann architecture in which the fast Central Processing Unit (CPU) is physically separated from the much slower program and data memory.
    This means computing speed is limited and power is wasted by the need to continuously transfer data to and from the memory and processor over bandwidth-limited and energy-inefficient electrical interconnects — known as the von Neumann bottleneck.
    As a result, it has been estimated that more than 50% of the power of modern computing systems is wasted simply in this moving around of data.
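    A rough way to picture the bottleneck is to compare how long one processing step spends moving data with how long it spends computing on that data. The bandwidth and compute figures below are illustrative assumptions, not measurements from the article.

        def step_times(bytes_moved: float, flops: float,
                       bandwidth_bytes_per_s: float = 100e9,   # assumed 100 GB/s memory interconnect
                       compute_flops_per_s: float = 1e12):     # assumed 1 TFLOP/s processor
            """Return (data-movement time, compute time) in seconds for one step."""
            return bytes_moved / bandwidth_bytes_per_s, flops / compute_flops_per_s

        # Streaming 1 GB through a computation that performs only 2 operations per byte:
        t_mem, t_cpu = step_times(bytes_moved=1e9, flops=2e9)
        print(f"moving data: {t_mem*1e3:.0f} ms, computing: {t_cpu*1e3:.0f} ms")
        # Data movement (10 ms) dwarfs compute (2 ms): the von Neumann bottleneck in miniature.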
    Professor C. David Wright, from the University of Exeter’s Department of Engineering and one of the co-authors of the study, explains: “Clearly, a new approach is needed — one that can fuse together the core information processing tasks of computing and memory, one that can incorporate directly in hardware the ability to learn, adapt and evolve, and one that does away with energy-sapping and speed-limiting electrical interconnects.”
    Photonic neuromorphic computing is one such approach. Here, signals are communicated and processed using light rather than electrons, giving access to much higher bandwidths (processor speeds) and vastly reducing energy losses.
    Moreover, the researchers try to make the computing hardware itself isomorphic with biological processing systems (brains) by developing devices that directly mimic the basic functions of brain neurons and synapses, then connecting these together in networks that can offer fast, parallelised, adaptive processing for artificial intelligence and machine learning applications.

    Story Source:
    Materials provided by University of Exeter. Note: Content may be edited for style and length.

  • Chumash Indians were using highly worked shell beads as currency 2,000 years ago

    As one of the most experienced archaeologists studying California’s Native Americans, Lynn Gamble knew the Chumash Indians had been using shell beads as money for at least 800 years.
    But an exhaustive review of some of the shell bead record led the UC Santa Barbara professor emerita of anthropology to an astonishing conclusion: The hunter-gatherers centered on the Southcentral Coast of Santa Barbara were using highly worked shells as currency as long as 2,000 years ago.
    “If the Chumash were using beads as money 2,000 years ago,” Gamble said, “this changes our thinking of hunter-gatherers and sociopolitical and economic complexity. This may be the first example of the use of money anywhere in the Americas at this time.”
    Although Gamble has been studying California’s indigenous people since the late 1970s, the inspiration for her research on shell bead money came from far afield: the University of Tübingen in Germany. At a symposium there some years ago, most of the presenters discussed coins and other non-shell forms of money. Some, she said, were surprised by the assumptions of California archaeologists about what constituted money.
    Intrigued, she reviewed the definitions and identifications of money in California and questioned some of the long-held beliefs. Her research led to “The origin and use of shell bead money in California” in the Journal of Anthropological Archaeology.
    Gamble argues that archaeologists should use four criteria in assessing whether beads were used for currency versus adornment: Shell beads used as currency should be more labor-intensive than those for decorative purposes; highly standardized beads are likely currency; bigger, eye-catching beads were more likely used as decoration; and currency beads are widely distributed.
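    Of those four criteria, standardization is the most readily quantified. One simple proxy (an illustration only, not Gamble’s actual analysis) is the coefficient of variation of bead diameters: the lower it is, the more uniform, and hence more currency-like, the assemblage.

        import statistics

        def coefficient_of_variation(measurements: list[float]) -> float:
            """CV = standard deviation / mean; lower values indicate more standardized beads."""
            return statistics.stdev(measurements) / statistics.mean(measurements)

        # Hypothetical diameters in millimeters for two bead assemblages:
        saucer_beads   = [4.9, 5.0, 5.1, 5.0, 4.95, 5.05]   # tightly clustered -> candidate currency
        ornament_beads = [6.2, 8.5, 7.1, 9.8, 5.4, 11.0]    # highly variable -> likely decoration
        print(round(coefficient_of_variation(saucer_beads), 2))    # ~0.01
        print(round(coefficient_of_variation(ornament_beads), 2))  # ~0.27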

    “I then compared the shell beads that had been accepted as a money bead for over 40 years by California archaeologists to another type that was widely distributed,” she said. “For example, tens of thousands were found with just one individual up in the San Francisco Bay Area. This bead type, known as a saucer bead, was produced south of Point Conception and probably on the northern [Santa Barbara] Channel Islands, according to multiple sources of data, at least most, if not all of them.
    “These earlier beads were just as standardized, if not more so, than those that came 1,000 years later,” Gamble continued. “They also were traded throughout California and beyond. Through sleuthing, measurements and comparison of standardizations among the different bead types, it became clear that these were probably money beads and occurred much earlier than we previously thought.”
    As Gamble notes, shell beads have been used for over 10,000 years in California, and there is extensive evidence for the production of some of these beads, especially those common in the last 3,000 to 4,000 years, on the northern Channel Islands. The evidence includes shell bead-making tools, such as drills, and massive amounts of shell bits — detritus — that littered the surface of archaeological sites on the islands.
    In addition, specialists have noted that the isotopic signature of the shell beads found in the San Francisco Bay Area indicates that the shells are from south of Point Conception.
    “We know that right around early European contact,” Gamble said, “the California Indians were trading for many types of goods, including perishable foods. The use of shell beads no doubt greatly facilitated this wide network of exchange.”
    Gamble’s research not only resets the origins of money in the Americas, it calls into question what constitutes “sophisticated” societies in prehistory. Because the Chumash were non-agriculturists — hunter-gatherers — it was long held that they wouldn’t need money, even though early Spanish colonizers marveled at extensive Chumash trading networks and commerce.
    Recent research on money in Europe during the Bronze Age suggests it was used there some 3,500 years ago. For Gamble, that and the Chumash example are significant because they challenge a persistent perspective among economists and some archaeologists that so-called “primitive” societies could not have had “commercial” economies.
    “Both the terms ‘complex’ and ‘primitive’ are highly charged, but it is difficult to address this subject without avoiding those terms,” she said. “In the case of both the Chumash and the Bronze Age example, standardization is a key in terms of identifying money. My article on the origin of money in California is not only pushing the date for the use of money back 1,000 years in California, and possibly the Americas, it provides evidence that money was used by non-state level societies, commonly identified as ‘civilizations.’”

  • How the brain is programmed for computer programming

    Countries around the world are seeing a surge in the number of computer science students. Enrolment in related university programs in the U.S. and Canada tripled between 2006 and 2016, and Europe has seen rising numbers as well. At the same time, people are starting to code at younger and younger ages, as governments in many countries push K-12 computer science education. Despite the increasing popularity of computer programming, little is known about how our brains adapt to this relatively new activity. A new study by researchers in Japan examined the brain activity of thirty programmers of diverse levels of expertise, finding that seven regions of the frontal, parietal and temporal cortices in expert programmers’ brains are fine-tuned for programming. The finding suggests that higher programming skill is built on fine-tuned brain activity across a network of multiple distributed brain regions.
    “Many studies have reported differences between expert and novice programmers in behavioural performance, knowledge structure and selective attention. What we don’t know is where in the brain these differences emerge,” says Takatomi Kubo, an associate professor at Nara Institute of Science and Technology, Japan, and one of the lead authors of the study.
    To answer this question, the researchers observed groups of novice, experienced and expert programmers. The programmers were shown 72 different code snippets while undergoing functional MRI (fMRI) and were asked to place each snippet into one of four functional categories. As expected, programmers with higher skills were better at correctly categorizing the snippets. A subsequent searchlight analysis revealed that the amount of information in seven brain regions increased with the skill level of the programmer: the bilateral inferior frontal gyrus pars triangularis (IFG Tri), left inferior parietal lobule (IPL), left supramarginal gyrus (SMG), left middle and inferior temporal gyri (MTG/ITG), and right middle frontal gyrus (MFG).
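    Conceptually, the searchlight step asks, region by region, how well a snippet’s functional category can be decoded from the local activity pattern, with cross-validated accuracy (or a related information measure) compared across skill levels. The sketch below uses random placeholder data and generic scikit-learn tools; it is not the authors’ pipeline.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_snippets, n_voxels = 72, 150                          # 72 code snippets, one pattern per region
        roi_patterns = rng.normal(size=(n_snippets, n_voxels))  # placeholder fMRI patterns for one region
        categories = rng.integers(0, 4, size=n_snippets)        # four functional categories

        # Cross-validated decoding accuracy for this region; sweeping this over many small
        # spherical neighborhoods of voxels is the essence of a searchlight analysis.
        accuracy = cross_val_score(SVC(kernel="linear"), roi_patterns, categories, cv=6).mean()
        print(f"decoding accuracy: {accuracy:.2f} (chance level = 0.25)")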
    “Identifying these characteristics in expert programmers’ brains offers a good starting point for understanding the cognitive mechanisms behind programming expertise. Our findings illuminate the potential set of cognitive functions constituting programming expertise,” Kubo says.
    More specifically, the left IFG Tri and MTG are known to be associated with natural language processing and, in particular, semantic knowledge retrieval in a goal-oriented way. The left IPL and SMG are associated with episodic memory retrieval. The right MFG and IFG Tri are functionally related to stimulus-driven attention control.
    “Programming is a relatively new activity in human history and the mechanism is largely unknown. Connecting the activity to other well-known human cognitive functions will improve our understanding of programming expertise. If we get more comprehensive theory about programming expertise, it will lead to better methods for learning and teaching computer programming,” Kubo says.

    Story Source:
    Materials provided by Nara Institute of Science and Technology. Note: Content may be edited for style and length.

  • 'Liquid' machine-learning system adapts to changing conditions

    MIT researchers have developed a type of neural network that learns on the job, not just during its training phase. These flexible algorithms, dubbed “liquid” networks, change their underlying equations to continuously adapt to new data inputs. The advance could aid decision making based on data streams that change over time, including those involved in medical diagnosis and autonomous driving.
    “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” says Ramin Hasani, the study’s lead author. “The potential is really significant.”
    The research will be presented at February’s AAAI Conference on Artificial Intelligence. In addition to Hasani, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), MIT co-authors include Daniela Rus, CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, and PhD student Alexander Amini. Other co-authors include Mathias Lechner of the Institute of Science and Technology Austria and Radu Grosu of the Vienna University of Technology.
    Time series data are both ubiquitous and vital to our understanding of the world, according to Hasani. “The real world is all about sequences. Even our perception — you’re not perceiving images, you’re perceiving sequences of images,” he says. “So, time series data actually create our reality.”
    He points to video processing, financial data, and medical diagnostic applications as examples of time series that are central to society. The vicissitudes of these ever-changing data streams can be unpredictable. Yet analyzing these data in real time, and using them to anticipate future behavior, can boost the development of emerging technologies like self-driving cars. So Hasani built an algorithm fit for the task.
    Hasani designed a neural network that can adapt to the variability of real-world systems. Neural networks are algorithms that recognize patterns by analyzing a set of “training” examples. They’re often said to mimic the processing pathways of the brain — Hasani drew inspiration directly from the microscopic nematode, C. elegans. “It only has 302 neurons in its nervous system,” he says, “yet it can generate unexpectedly complex dynamics.”
    Hasani coded his neural network with careful attention to how C. elegans neurons activate and communicate with each other via electrical impulses. In the equations he used to structure his neural network, he allowed the parameters to change over time based on the results of a nested set of differential equations.
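    In spirit, each unit’s state follows a small differential equation whose effective time constant is itself modulated by the input, so the cell “re-tunes” as the signal changes. The sketch below is a heavily simplified single-unit illustration of that idea, not the MIT team’s published formulation; all parameter values are arbitrary.

        import numpy as np

        def liquid_unit_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=2.0):
            """One Euler step of a liquid-style unit: the gate f depends on the input,
            and it scales both the decay rate and the drive, so the effective time
            constant changes with the incoming signal."""
            f = np.tanh(w * inp)                       # input-dependent gate
            dxdt = -(1.0 / tau + f) * x + f * A        # input-modulated dynamics
            return x + dt * dxdt

        # Drive the unit with a step input and watch its state adapt to the new regime:
        x, trace = 0.0, []
        for t in range(300):
            inp = 0.0 if t < 100 else 1.0
            x = liquid_unit_step(x, inp)
            trace.append(x)
        print(f"state before the input step: {trace[99]:.3f}, after adapting: {trace[-1]:.3f}")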

    This flexibility is key. Most neural networks’ behavior is fixed after the training phase, which means they’re bad at adjusting to changes in the incoming data stream. Hasani says the fluidity of his “liquid” network makes it more resilient to unexpected or noisy data, like if heavy rain obscures the view of a camera on a self-driving car. “So, it’s more robust,” he says.
    There’s another advantage of the network’s flexibility, he adds: “It’s more interpretable.”
    Hasani says his liquid network skirts the inscrutability common to other neural networks. “Just changing the representation of a neuron,” which Hasani did with the differential equations, “you can really explore some degrees of complexity you couldn’t explore otherwise.” Thanks to Hasani’s small number of highly expressive neurons, it’s easier to peer into the “black box” of the network’s decision making and diagnose why the network made a certain characterization.
    “The model itself is richer in terms of expressivity,” says Hasani. That could help engineers understand and improve the liquid network’s performance.
    Hasani’s network excelled in a battery of tests. It edged out other state-of-the-art time series algorithms by a few percentage points in accurately predicting future values in datasets ranging from atmospheric chemistry to traffic patterns. “In many applications, we see the performance is reliably high,” he says. Plus, the network’s small size meant it completed the tests without a steep computing cost. “Everyone talks about scaling up their network,” says Hasani. “We want to scale down, to have fewer but richer nodes.”
    Hasani plans to keep improving the system and ready it for industrial application. “We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process,” he says. “The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”
    This research was funded, in part, by Boeing, the National Science Foundation, the Austrian Science Fund, and Electronic Components and Systems for European Leadership.

  • A metalens for virtual and augmented reality

    Despite all the advances in consumer technology over the past decades, one component has remained frustratingly stagnant: the optical lens. Unlike electronic devices, which have gotten smaller and more efficient over the years, the design and underlying physics of today’s optical lenses haven’t changed much in about 3,000 years.
    This challenge has caused a bottleneck in the development of next-generation optical systems such as wearable displays for virtual reality, which require compact, lightweight, and cost-effective components.
    At the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), a team of researchers led by Federico Capasso, the Robert L. Wallace Professor of Applied Physics and Vinton Hayes Senior Research Fellow in Electrical Engineering, has been developing the next generation of lenses that promise to open that bottleneck by replacing bulky curved lenses with a simple, flat surface that uses nanostructures to focus light.
    In 2018, Capasso’s team developed achromatic, aberration-free metalenses that work across the entire visible spectrum of light. But these lenses were only tens of microns in diameter, too small for practical use in VR and augmented reality systems.
    Now, the researchers have developed a two-millimeter achromatic metalens that can focus RGB (red, green, blue) colors without aberrations, and they have built a miniaturized display for virtual and augmented reality applications.
    The research is published in Science Advances.

    “This state-of-the-art lens opens a path to a new type of virtual reality platform and overcomes the bottleneck that has slowed the progress of new optical devices,” said Capasso, the senior author of the paper.
    “Using new physics and a new design principle, we have developed a flat lens to replace the bulky lenses of today’s optical devices,” said Zhaoyi Li, a postdoctoral fellow at SEAS and first author of the paper. “This is the largest RGB-achromatic metalens to date and is a proof of concept that these lenses can be scaled up to centimeter size, mass produced, and integrated in commercial platforms.”
    Like previous metalenses, this lens uses arrays of titanium dioxide nanofins to focus all wavelengths of light equally and eliminate chromatic aberration. By engineering the shape and pattern of these nanofin arrays, the researchers could control the focal lengths of red, green and blue light. To incorporate the lens into a VR system, the team developed a near-eye display using a method called fiber scanning.
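    The design target behind such a lens can be summarized with the textbook phase profile a flat lens must impart to focus a plane wave: phi(r) = -(2*pi/lambda) * (sqrt(r^2 + f^2) - f). Because this profile depends on the wavelength, getting red, green and blue to share one focal spot is exactly the achromatic challenge the nanofins address. The wavelengths and focal length below are assumed for illustration and are not taken from the paper.

        import numpy as np

        def metalens_phase(r_um, wavelength_um, focal_um):
            """Phase (radians) a flat lens must impart at radius r to focus light at distance f."""
            return -2 * np.pi / wavelength_um * (np.sqrt(r_um**2 + focal_um**2) - focal_um)

        radii = np.linspace(0, 1000, 5)       # radial positions across a 2 mm aperture, in micrometers
        for lam in (0.633, 0.532, 0.488):     # assumed red, green and blue design wavelengths (um)
            phase = np.mod(metalens_phase(radii, lam, focal_um=10_000), 2 * np.pi)  # assumed 10 mm focal length
            print(lam, np.round(phase, 2))    # the nanofin geometry at each radius encodes this profile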
    The display, inspired by fiber-scanning-based endoscopic bioimaging techniques, uses an optical fiber threaded through a piezoelectric tube. When a voltage is applied to the tube, the fiber tip scans left and right and up and down to display patterns, forming a miniaturized display with high resolution, high brightness, high dynamic range, and a wide color gamut.
    In a VR or AR platform, the metalens would sit directly in front of the eye, and the display would sit within the focal plane of the metalens. The patterns scanned by the display are focused onto the retina, where the virtual image forms, with the help of the metalens. To the human eye, the image appears as part of the landscape in the AR mode, some distance from our actual eyes.

    “We have demonstrated how meta-optics platforms can help resolve the bottleneck of current VR technologies and potentially be used in our daily life,” said Li.
    Next, the team aims to scale up the lens even further, making it compatible with current large-scale fabrication techniques for mass production at a low cost.
    The Harvard Office of Technology Development has protected the intellectual property relating to this project and is exploring commercialization opportunities.
    The research was co-authored by Yao-Wei Huang, Joon-Suh Park, Wei Ting Chen, and Zhujun Shi from Harvard University, Peng Lin and Ji-Xin Cheng from Boston University, and Cheng-Wei Qiu from the National University of Singapore.
    The research was supported in part by the Defense Advanced Research Projects Agency under award no. HR00111810001, the National Science Foundation under award no. 1541959 and the SAMSUNG GRO research program under award no. A35924.

  • A NEAT reduction of complex neuronal models accelerates brain research

    Neurons, the fundamental units of the brain, are complex computers by themselves. They receive input signals on a tree-like structure — the dendrite. This structure does more than simply collect the input signals: it integrates and compares them to find those special combinations that are important for the neurons’ role in the brain. Moreover, the dendrites of neurons come in a variety of shapes and forms, indicating that distinct neurons may have separate roles in the brain.
    A simple yet faithful model
    In neuroscience, there has historically been a tradeoff between a model’s faithfulness to the underlying biological neuron and its complexity. Neuroscientists have constructed detailed computational models of many different types of dendrites. These models mimic the behavior of real dendrites to a high degree of accuracy. The tradeoff, however, is that such models are very complex. Thus, it is hard to exhaustively characterize all possible responses of such models and to simulate them on a computer. Even the most powerful computers can only simulate a small fraction of the neurons in any given brain area.
    Researchers from the Department of Physiology at the University of Bern have long sought to understand the role of dendrites in computations carried out by the brain. On the one hand, they have constructed detailed models of dendrites from experimental measurements, and on the other hand they have constructed neural network models with highly abstract dendrites to learn computations such as object recognition. A new study set out to find a computational method to make highly detailed models of neurons simpler, while retaining a high degree of faithfulness. This work emerged from the collaboration between experimental and computational neuroscientists from the research groups of Prof. Thomas Nevian and Prof. Walter Senn, and was led by Dr Willem Wybo. “We wanted the method to be flexible, so that it could be applied to all types of dendrites. We also wanted it to be accurate, so that it could faithfully capture the most important functions of any given dendrite. With these simpler models, neural responses can more easily be characterized and simulation of large networks of neurons with dendrites can be conducted,” Dr Wybo explains.
    This new approach exploits an elegant mathematical relation between the responses of detailed dendrite models and of simplified dendrite models. Due to this mathematical relation, the objective that is optimized is linear in the parameters of the simplified model. “This crucial observation allowed us to use the well-known linear least squares method to find the optimized parameters. This method is very efficient compared to methods that use non-linear parameter searches, but also achieves a high degree of accuracy,” says Prof. Senn.
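    A minimal sketch of the linearity being exploited: if the simplified model’s response is a linear combination of its unknown parameters, fitting those parameters to the detailed model’s responses reduces to ordinary least squares. The design matrix and target trace below are synthetic placeholders, not quantities produced by the Bern group’s actual models.

        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_params = 500, 6
        # Columns stand in for the simplified dendrite model's response components
        # evaluated at the retained compartments; rows are time points or stimuli.
        design = rng.normal(size=(n_samples, n_params))
        true_params = np.array([1.5, -0.3, 0.8, 0.0, 2.1, -1.0])
        # Target: the detailed biophysical model's responses (here simulated with noise).
        target = design @ true_params + 0.05 * rng.normal(size=n_samples)

        # Because the objective is linear in the parameters, least squares recovers them directly.
        fitted, *_ = np.linalg.lstsq(design, target, rcond=None)
        print(np.round(fitted, 2))   # close to true_params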
    Tools available for AI applications
    The main result of the work is the methodology itself: a flexible yet accurate way to construct reduced neuron models from experimental data and morphological reconstructions. “Our methodology shatters the perceived tradeoff between faithfulness and complexity, by showing that extremely simplified models can still capture much of the important response properties of real biological neurons,” Prof. Senn explains. “Which also provides insight into ‘the essential dendrite’, the simplest possible dendrite model that still captures all possible responses of the real dendrite from which it is derived,” Dr Wybo adds.
    Thus, in specific situations, hard bounds can be established on how much a dendrite can be simplified while retaining its important response properties. “Furthermore, our methodology greatly simplifies deriving neuron models directly from experimental data,” highlights Prof. Senn, who is also a member of the steering committee of the Center for Artificial Intelligence in Medicine (CAIM) of the University of Bern. The methodology has been compiled into NEAT (NEural Analysis Toolkit) — an open-source software toolbox that automates the simplification process. NEAT is publicly available on GitHub.
    The neurons used currently in AI applications are exceedingly simplistic compared to their biological counterparts, as they don’t include dendrites at all. Neuroscientists believe that including dendrite-like operations in artificial neural networks will lead to the next leap in AI technology. By enabling the inclusion of very simple, but very accurate dendrite models in neural networks, this new approach and toolkit provide an important step towards that goal.
    This work was supported by the Human Brain Project, by the Swiss National Science Foundation and by the European Research Council.

    Story Source:
    Materials provided by University of Bern. Note: Content may be edited for style and length.

  • Mira's last journey: Exploring the dark universe

    A team of physicists and computer scientists from the U.S. Department of Energy’s (DOE) Argonne National Laboratory performed one of the five largest cosmological simulations ever. Data from the simulation will inform sky maps to aid leading large-scale cosmological experiments.
    The simulation, called the Last Journey, follows the distribution of mass across the universe over time — in other words, how gravity causes a mysterious invisible substance called “dark matter” to clump together to form larger-scale structures called halos, within which galaxies form and evolve.
    The scientists performed the simulation on Argonne’s supercomputer Mira. The same team of scientists ran a previous cosmological simulation called the Outer Rim in 2013, just days after Mira turned on. After running simulations on the machine throughout its seven-year lifetime, the team marked Mira’s retirement with the Last Journey simulation.
    The Last Journey demonstrates how far observational and computational technology has come in just seven years, and it will contribute data and insight to experiments such as the Stage-4 ground-based cosmic microwave background experiment (CMB-S4), the Legacy Survey of Space and Time (carried out by the Rubin Observatory in Chile), the Dark Energy Spectroscopic Instrument and two NASA missions, the Roman Space Telescope and SPHEREx.
    “We worked with a tremendous volume of the universe, and we were interested in large-scale structures, like regions of thousands or millions of galaxies, but we also considered dynamics at smaller scales,” said Katrin Heitmann, deputy division director for Argonne’s High Energy Physics (HEP) division.
    The code that constructed the cosmos
    The six-month span for the Last Journey simulation and major analysis tasks presented unique challenges for software development and workflow. The team adapted some of the same code used for the 2013 Outer Rim simulation with some significant updates to make efficient use of Mira, an IBM Blue Gene/Q system that was housed at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.

    Specifically, the scientists used the Hardware/Hybrid Accelerated Cosmology Code (HACC) and its analysis framework, CosmoTools, to enable incremental extraction of relevant information at the same time as the simulation was running.
    “Running the full machine is challenging because reading the massive amount of data produced by the simulation is computationally expensive, so you have to do a lot of analysis on the fly,” said Heitmann. “That’s daunting, because if you make a mistake with analysis settings, you don’t have time to redo it.”
    The team took an integrated approach to carrying out the workflow during the simulation. HACC would run the simulation forward in time, determining the effect of gravity on matter during large portions of the history of the universe. Once HACC determined the positions of trillions of computational particles representing the overall distribution of matter, CosmoTools would step in to record relevant information — such as finding the billions of halos that host galaxies — to use for analysis during post-processing.
    “When we know where the particles are at a certain point in time, we characterize the structures that have formed by using CosmoTools and store a subset of data to make further use down the line,” said Adrian Pope, physicist and core HACC and CosmoTools developer in Argonne’s Computational Science (CPS) division. “If we find a dense clump of particles, that indicates the location of a dark matter halo, and galaxies can form inside these dark matter halos.”
    The scientists repeated this interwoven process — where HACC moves particles and CosmoTools analyzes and records specific data — until the end of the simulation. The team then used features of CosmoTools to determine which clumps of particles were likely to host galaxies. For reference, around 100 to 1,000 particles represent single galaxies in the simulation.
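    In outline, the interleaved workflow looks like the toy loop below: advance the particles, periodically hand them to an analysis routine that finds dense clumps, and store only a compact summary rather than the raw particle data. The functions here are crude stand-ins for HACC and CosmoTools, whose real interfaces are not described in the article.

        import numpy as np

        rng = np.random.default_rng(2)

        def advance_particles(pos, vel, dt=0.1):
            """Stand-in for the gravity solver: here just a drift step (HACC computes real forces)."""
            return pos + dt * vel, vel

        def find_halos(pos, cell=1.0, min_particles=100):
            """Stand-in for the halo finder: bin particles on a grid and call dense cells 'halos'."""
            cells, counts = np.unique(np.floor(pos / cell).astype(int), axis=0, return_counts=True)
            return cells[counts >= min_particles]   # roughly 100-1,000 particles marks a galaxy-hosting clump

        # A clumpy toy universe: 20 overdense regions plus a diffuse background.
        centers = rng.uniform(0, 50, size=(20, 3))
        clumps = (centers[:, None, :] + rng.normal(scale=0.2, size=(20, 500, 3))).reshape(-1, 3)
        pos = np.vstack([clumps, rng.uniform(0, 50, size=(20_000, 3))])
        vel = rng.normal(scale=0.05, size=pos.shape)

        stored = []
        for step in range(10):                       # "move particles, do analysis, move particles..."
            pos, vel = advance_particles(pos, vel)
            if step % 2 == 0:                        # analyze on selected steps only
                stored.append(len(find_halos(pos)))  # keep a summary, not the raw particles
        print(stored)                                # roughly the 20 seeded clumps at each analysis step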

    “We would move particles, do analysis, move particles, do analysis,” said Pope. “At the end, we would go back through the subsets of data that we had carefully chosen to store and run additional analysis to gain more insight into the dynamics of structure formation, such as which halos merged together and which ended up orbiting each other.”
    Using the optimized workflow with HACC and CosmoTools, the team ran the simulation in half the expected time.
    Community contribution
    The Last Journey simulation will provide data necessary for other major cosmological experiments to use when comparing observations or drawing conclusions about a host of topics. These insights could shed light on topics ranging from cosmological mysteries, such as the role of dark matter and dark energy in the evolution of the universe, to the astrophysics of galaxy formation across the universe.
    “This huge data set they are building will feed into many different efforts,” said Katherine Riley, director of science at the ALCF. “In the end, that’s our primary mission — to help high-impact science get done. When you’re able to not only do something cool, but to feed an entire community, that’s a huge contribution that will have an impact for many years.”
    The team’s simulation will address numerous fundamental questions in cosmology and is essential for enabling the refinement of existing models and the development of new ones, impacting both ongoing and upcoming cosmological surveys.
    “We are not trying to match any specific structures in the actual universe,” said Pope. “Rather, we are making statistically equivalent structures, meaning that if we looked through our data, we could find locations where galaxies the size of the Milky Way would live. But we can also use a simulated universe as a comparison tool to find tensions between our current theoretical understanding of cosmology and what we’ve observed.”
    Looking to exascale
    “Thinking back to when we ran the Outer Rim simulation, you can really see how far these scientific applications have come,” said Heitmann, who performed Outer Rim in 2013 with the HACC team and Salman Habib, CPS division director and Argonne Distinguished Fellow. “It was awesome to run something substantially bigger and more complex that will bring so much to the community.”
    As Argonne works towards the arrival of Aurora, the ALCF’s upcoming exascale supercomputer, the scientists are preparing for even more extensive cosmological simulations. Exascale computing systems will be able to perform a billion billion calculations per second — 50 times faster than many of the most powerful supercomputers operating today.
    “We’ve learned and adapted a lot during the lifespan of Mira, and this is an interesting opportunity to look back and look forward at the same time,” said Pope. “When preparing for simulations on exascale machines and a new decade of progress, we are refining our code and analysis tools, and we get to ask ourselves what we weren’t doing because of the limitations we have had until now.”
    The Last Journey was a gravity-only simulation, meaning it did not consider interactions such as gas dynamics and the physics of star formation. Gravity is the major player in large-scale cosmology, but the scientists hope to incorporate other physics in future simulations to observe the differences they make in how matter moves and distributes itself through the universe over time.
    “More and more, we find tightly coupled relationships in the physical world, and to simulate these interactions, scientists have to develop creative workflows for processing and analyzing,” said Riley. “With these iterations, you’re able to arrive at your answers — and your breakthroughs — even faster.”