More stories

  • These weird, thin ice crystals are springy and bendy

    Try to bend an icicle and it’ll snap in two. With its tendency to crack into shards, ice’s reputation for being stiff and brittle seems well-established. But thin, pristine threads of ice are bendy and elastic, scientists report in the July 9 Science.

    To create the flexible ice, Peizhen Xu of Zhejiang University in Hangzhou, China, and colleagues used a needle with an electric voltage applied to it, which attracted water vapor within a chilled chamber. The resulting ice whiskers were a few micrometers in diameter or less, a fraction of the width of a typical human hair.

    Usually, ice contains defects: tiny cracks, pores or misaligned sections of crystal. But the specially grown ice threads consisted of near-perfect ice crystals with atypical properties. When manipulated at temperatures of –70° Celsius and –150° C, the ice could be curved into a partial circle with a radius of tens of micrometers. When the bending force was released, the fibers sprang back to their original shape.
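
    The elasticity is remarkable because bending a fiber strains its surface by roughly the fiber radius divided by the bending radius. The short Python sketch below shows that back-of-the-envelope estimate; the numbers are illustrative values chosen only to match the length scales quoted above (a few micrometers in diameter, a bending radius of tens of micrometers), not measurements from the paper.

        # Illustrative estimate of the surface strain in a bent fiber: strain ≈ r_fiber / R_bend.
        # The values are hypothetical, chosen to match the length scales quoted in the article.
        fiber_diameter_um = 4.0   # assumed fiber diameter, micrometers
        bend_radius_um = 20.0     # assumed bending radius, micrometers

        max_strain = (fiber_diameter_um / 2) / bend_radius_um
        print(f"Maximum surface strain: {max_strain:.1%}")  # about 10 percent, far beyond what bulk ice tolerates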

    [Video] Researchers bent a tiny fiber of ice (thin white line) into a loop, showing that the usually brittle material can be flexible under certain conditions.

    Bending the fibers compresses the ice on its inside edge. The new measurements indicate that the compression induces the ice to take on a different structure. That’s to be expected for ice, which is known to morph into a variety of phases depending on pressure and temperature (SN: 1/11/09). The discovery could give researchers a new way to study ice’s properties when squeezed.

    Thin ice strands form naturally in snowflakes. Unlike the ice in the experiment, snowflakes don’t consist of single, flawless ice crystals. But small sections of the flakes could be single crystals, the researchers say, suggesting that tiny bits of snowflakes could also bend.

  • Handwriting beats typing and watching videos for learning to read

    Though writing by hand is increasingly being eclipsed by the ease of computers, a new study finds we shouldn’t be so quick to throw away the pencils and paper: handwriting helps people learn certain skills surprisingly faster and significantly better than learning the same material through typing or watching videos.
    “The question out there for parents and educators is why should our kids spend any time doing handwriting,” says senior author Brenda Rapp, a Johns Hopkins University professor of cognitive science. “Obviously, you’re going to be a better hand-writer if you practice it. But since people are handwriting less, then maybe who cares? The real question is: Are there other benefits to handwriting that have to do with reading and spelling and understanding? We find there most definitely are.”
    The work appears in the journal Psychological Science.
    Rapp and lead author Robert Wiley, a former Johns Hopkins University Ph.D. student who is now a professor at the University of North Carolina, Greensboro, conducted an experiment in which 42 people were taught the Arabic alphabet, split into three groups of learners: writers, typers and video watchers.
    Everyone learned the letters one at a time by watching videos of them being written along with hearing names and sounds. After being introduced to each letter, the three groups would attempt to learn what they just saw and heard in different ways. The video group got an on-screen flash of a letter and had to say if it was the same letter they’d just seen. The typers would have to find the letter on the keyboard. The writers had to copy the letter with pen and paper.
    At the end, after as many as six sessions, everyone could recognize the letters and made few mistakes when tested. But the writing group reached this level of proficiency faster than the other groups — a few of them in just two sessions.

  • Simulations of turbulence's smallest structures

    When you pour cream into a cup of coffee, the viscous liquid seems to lazily disperse throughout the cup. Take a mixing spoon or straw to the cup, though, and the cream and coffee seem to quickly and seamlessly combine into a lighter color and, at least for some, a more enjoyable beverage.
    The science behind this relatively simple anecdote actually speaks to a larger truth about complex fluid dynamics, one that underpins many of the advances made in transportation, power generation, and other technologies since the industrial era: the seemingly random, chaotic motions known as turbulence play a vital role in the chemical and industrial processes that rely on effective mixing of different fluids.
    While scientists have long studied turbulent fluid flows, their inherently chaotic nature has prevented researchers from developing an exhaustive list of reliable “rules,” or universal models, for accurately describing and predicting turbulence. That difficulty has left turbulence as one of the last major unsolved “grand challenges” in physics.
    In recent years, high-performance computing (HPC) resources have played an increasingly important role in gaining insight into how turbulence influences fluids under a variety of circumstances. Recently, researchers from RWTH Aachen University and the CORIA (CNRS UMR 6614) research facility in France have been using HPC resources at the Jülich Supercomputing Centre (JSC), one of the three HPC centres comprising the Gauss Centre for Supercomputing (GCS), to run high-resolution direct numerical simulations (DNS) of turbulent setups, including jet flames. While extremely computationally expensive, DNS of turbulence allows researchers to develop better models that run on more modest computing resources and help academic or industrial researchers account for turbulence’s effects on a given fluid flow.
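    The defining feature of DNS is that it resolves every scale of the flow directly from the governing equations instead of modeling the small scales. As a toy illustration of that idea only, the Python sketch below advances the one-dimensional viscous Burgers equation on a fine periodic grid with no turbulence model; it is a pedagogical stand-in, not the team’s three-dimensional reacting-flow code.

        import numpy as np

        # Toy stand-in for direct numerical simulation: the 1-D viscous Burgers equation,
        # advanced with an explicit finite-difference scheme and no turbulence model.
        # Illustrative only; the study's DNS solves the full 3-D reacting Navier-Stokes equations.
        nx, nu, dt, nsteps = 512, 0.05, 1e-4, 10000
        x = np.linspace(0.0, 2 * np.pi, nx, endpoint=False)
        dx = x[1] - x[0]
        u = np.sin(x)                                  # smooth initial velocity field

        for _ in range(nsteps):
            dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)            # central first derivative
            d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2     # central second derivative
            u = u + dt * (-u * dudx + nu * d2udx2)     # one explicit Euler step

        print("mean kinetic energy after t = 1:", 0.5 * np.mean(u**2))
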
    “The goal of our research is to ultimately improve these models, specifically in the context of combustion and mixing applications,” said Dr. Michael Gauding, CORIA scientist and researcher on the project. The team’s recent work was just named the distinguished paper from the “Turbulent Flames” colloquium, which happened as part of the 38th International Symposium on Combustion.
    Starts and stops
    Despite turbulence’s seemingly random, chaotic character, researchers have identified some important properties that are universal, or at least very common, under specific conditions. Researchers studying how fuel and air mix in a combustion reaction, for instance, rely on turbulence to ensure a high mixing efficiency. Much of that important turbulent motion may stem from what happens in a thin area near the edge of the flame, where its chaotic motions collide with the smoother-flowing fluids around it. This area, the turbulent-non-turbulent interface (TNTI), has big implications for understanding turbulent mixing.

  • Human-driven climate change sent Pacific Northwest temperatures soaring

    The deadly heat wave that baked the Pacific Northwest in late June would have been “virtually impossible” without human-caused climate change, an international team of scientists announced July 7.

    In fact, the temperatures were so extreme — Portland, Ore., reached a staggering 47° Celsius (116° Fahrenheit) on June 29, while Seattle surged to 42° C (108° F) — that initial analyses suggested they were impossible even with climate change, Geert Jan van Oldenborgh, a climate scientist with the Royal Netherlands Meteorological Institute in De Bilt, said at a news conference to announce the team’s findings. “This was an extraordinary event. I don’t know what English word covers it.”

    Climate change due to greenhouse gas emissions made the region’s heat wave at least 150 times more likely to occur, the team found. As emissions and global temperatures continue to rise, such extreme heat events could happen in the region as often as every five to 10 years by the end of the century.  

    It’s not just that numerous temperature records were broken, van Oldenborgh said. It’s that the observed temperatures were so far outside of historical records, breaking those records by as much as 5 degrees C in many places — and a full month before usual peak temperatures for the region. The observations were also several degrees higher than the upper temperature limits predicted by most climate simulations for the heat waves, even taking global warming into account.

    Coming just about a week after the heat wave broke, the new study is the latest real-time climate attribution effort by scientists affiliated with the World Weather Attribution network. Van Oldenborgh and University of Oxford climate scientist Friederike Otto founded the group in 2014 to conduct quick analyses of extreme events such as the 2020 Siberian heat wave (SN: 7/15/20).

    In the current study, 27 researchers focused on how the observed temperatures from June 27 to June 29 compared with annual maximum temperatures over the last 50 years for locations across the northwestern United States and southwestern Canada. The team then used 21 different climate simulations of temperatures to analyze the intensity of such a heat wave in the region with and without the influence of greenhouse gas warming.
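
    The heart of such an attribution analysis is a probability ratio: how likely is the observed extreme in simulations of today’s climate versus a counterfactual climate without human greenhouse gas emissions? The Python sketch below shows that arithmetic with invented numbers; the actual study fits extreme-value distributions to observations and model ensembles, and none of its parameters or thresholds are reproduced here.

        from math import erfc, sqrt

        # Illustrative risk-ratio calculation for event attribution, with made-up numbers.
        # Real studies fit extreme-value distributions to observations and to climate-model
        # ensembles run with and without human greenhouse-gas forcing; this toy uses
        # Gaussian annual-maximum temperatures purely to show the arithmetic.

        def exceedance_prob(threshold, mean, std):
            """Probability that a normally distributed annual maximum exceeds the threshold."""
            z = (threshold - mean) / std
            return 0.5 * erfc(z / sqrt(2))

        threshold_c = 44.0                                          # hypothetical extreme, in degrees C
        p_now = exceedance_prob(threshold_c, mean=36.0, std=2.0)    # warmed, present-day climate (assumed)
        p_pre = exceedance_prob(threshold_c, mean=34.8, std=2.0)    # counterfactual, 1.2 degrees C cooler (assumed)

        print(f"P(event | current climate)       = {p_now:.1e}")
        print(f"P(event | preindustrial climate) = {p_pre:.1e}")
        print(f"Risk ratio: {p_now / p_pre:.0f}")                   # about 15x with these invented numbers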

    Earth has already warmed by about 1.2 degrees C relative to preindustrial times. That warming, the researchers determined, increased the intensity of the heat wave by about 2 degrees C. Once global warming increases to 2 degrees C, future heat waves may become even more intense (SN: 12/17/18). Those heat waves could be another 1.3 degrees C hotter, the researchers found.

    That poses a real danger. The late June heat wave took a painful toll (SN: 6/29/21), killing several hundred people — “almost certainly” an underestimate, the researchers say. On June 29, Lytton, a small village in British Columbia, set an all-time Canadian temperature record of 49.6° C (121.3° F). The heat may have exacerbated wildfires that, a day later, swept through British Columbia’s Fraser Canyon region, burning 90 percent of the village, according to local officials. Meanwhile, the U.S. West and southwestern Canada are already bracing for another round of soaring temperatures.

    One possible reason for the startling intensity of this heat wave is that, while climate change amped up the temperatures, what happened was still a very rare, unlucky event for the region. How rare isn’t easy to say, given that the observed temperatures were so far off the charts, the researchers say. Under current climate conditions, simulations suggest that such a heat wave might occur once every 1,000 years — but these events will become much more common in the future as the climate changes.

    By the end of June 2021, more than 40 wildfires burned across Canada’s British Columbia, exacerbated by extreme dryness and the intense heat. One fire burned 90 percent of the town of Lytton, which had set a new temperature record for the country the day before. The fire also generated a massive storm-producing plume of smoke called a pyrocumulonimbus cloud. NASA

    Another possibility is grimmer: Climate simulations may not accurately capture what really happens during extreme heat waves. “Climate science has been a bit complacent” about simulating heat waves, assuming that heat wave temperatures would increase linearly along with rising global temperatures, Otto said. But now, Earth’s climate system may have entered a new state in which other climatic factors, such as drier soils or changes to jet stream circulation, are exacerbating the heat in more difficult-to-predict, less linear ways.

    The new study didn’t seek to determine which of these possibilities is true, though the team plans to tackle this question over the next few months. However, many scientists have already noted the inability of current climate models to capture what’s really going on.  

    “I agree that it is virtually impossible that the [Pacific Northwest] heat wave would have occurred with the observed intensity in the absence of climate change,” Michael Mann, a climate scientist at Penn State who wasn’t involved in the attribution study, commented via e-mail. “But the models used don’t capture the jet stream phenomenon … that WE KNOW played an important role in this event.”

    Disproportionate warming of the Arctic region alters temperature gradients high in the atmosphere, which can lead to a wavier jet stream, Mann wrote in the New York Times June 29. That waviness can exacerbate and prolong extreme weather events, such as the heat dome centered over the Pacific Northwest in late June.

    This recent heat wave wasn’t just a major disaster, but also posed major scientific questions, van Oldenborgh said. Such an event “would have been judged impossible last year. All of us have just dialed down our certainty of how heat waves behave,” he added. “[We] are much less certain of how the climate affects heat waves than we were two weeks ago.”

  • Researchers record brainwaves to measure 'cybersickness'

    If a virtual world has ever left you feeling nauseous or disoriented, you’re familiar with cybersickness, and you’re hardly alone. The intensity of virtual reality (VR) — whether that’s standing on the edge of a waterfall in Yosemite or engaging in tank combat with your friends — creates a stomach-churning challenge for 30-80% of users.
    In a first-of-its-kind study, researchers at the University of Maryland recorded VR users’ brain activity using electroencephalography (EEG) to better understand and work toward solutions to prevent cybersickness. The research was conducted by Eric Krokos, who received his Ph.D. in computer science in 2018, and Amitabh Varshney, a professor of computer science and dean of UMD’s College of Computer, Mathematical, and Natural Sciences.
    Their study, “Quantifying VR cybersickness using EEG,” was recently published in the journal Virtual Reality.
    The term cybersickness derives from motion sickness, but instead of physical movement, it’s the perception of movement in a virtual environment that triggers physical symptoms such as nausea and disorientation. While there are several theories about why it occurs, the lack of a systematic, quantified way of studying cybersickness has hampered progress that could help make VR accessible to a broader population.
    Krokos and Varshney are among the first to use EEG — which records brain activity through sensors on the scalp — to measure and quantify cybersickness for VR users. They were able to establish a correlation between the recorded brain activity and self-reported symptoms of their participants. The work provides a new benchmark — helping cognitive psychologists, game developers and physicians as they seek to learn more about cybersickness and how to alleviate it.
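    In practice, the kind of analysis described here comes down to asking whether an EEG-derived measure rises and falls with participants’ self-reported sickness. The Python snippet below is only a rough sketch of such a correlation check, using synthetic stand-in numbers rather than the study’s recordings or its actual EEG features.

        import numpy as np

        # Rough sketch of correlating an EEG-derived measure with self-reported cybersickness.
        # The arrays are synthetic stand-ins, not data or features from the UMD study.
        rng = np.random.default_rng(42)

        n_participants = 30
        eeg_feature = rng.normal(size=n_participants)        # e.g., a per-person band-power measure (made up)
        sickness = 0.7 * eeg_feature + rng.normal(scale=0.5, size=n_participants)  # questionnaire scores (made up)

        r = np.corrcoef(eeg_feature, sickness)[0, 1]         # Pearson correlation coefficient
        print(f"Correlation between EEG measure and reported cybersickness: r = {r:.2f}")
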
    “Establishing a strong correlation between cybersickness and EEG-measured brain activity is the first step toward interactively characterizing and mitigating cybersickness, and improving the VR experience for all,” Varshney said.

  • Machine learning tool sorts the nuances of quantum data

    An interdisciplinary team of Cornell and Harvard University researchers developed a machine learning tool to parse quantum matter and make crucial distinctions in the data, an approach that will help scientists unravel the most confounding phenomena in the subatomic realm.
    The Cornell-led project’s paper, “Correlator Convolutional Neural Networks as an Interpretable Architecture for Image-like Quantum Matter Data,” was published June 23 in Nature Communications. The lead author is doctoral student Cole Miles.
    The Cornell team was led by Eun-Ah Kim, professor of physics in the College of Arts and Sciences, who partnered with Kilian Weinberger, associate professor of computing and information science in the Cornell Ann S. Bowers College of Computing and Information Science and director of the TRIPODS Center for Data Science for Improved Decision Making.
    The collaboration with the Harvard team, led by physics professor Markus Greiner, is part of the National Science Foundation’s 10 Big Ideas initiative, “Harnessing the Data Revolution.” Their project, “Collaborative Research: Understanding Subatomic-Scale Quantum Matter Data Using Machine Learning Tools,” seeks to address fundamental questions at the frontiers of science and engineering by pairing data scientists with researchers who specialize in traditional areas of physics, chemistry and engineering.
    The project’s central aim is to find ways to extract new information about quantum systems from snapshots of image-like data. To that end, they are developing machine learning tools that can identify relationships among microscopic properties in the data that otherwise would be impossible to determine at that scale.
    Convolutional neural networks, a kind of machine learning often used to analyze visual imagery, scan an image with a filter to find characteristic features in the data irrespective of where they occur — a step called “convolution.” The output of the convolution is then passed through nonlinear functions, which is what lets a convolutional neural network learn all sorts of correlations among the features.
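    In code, the scan-and-activate step described above looks roughly like the Python sketch below. It is generic NumPy written for illustration, not the correlator convolutional architecture from the paper.

        import numpy as np

        # Minimal sketch of one convolution step followed by a nonlinearity.
        # Generic NumPy for illustration, not the paper's correlator convolutional network.
        rng = np.random.default_rng(0)

        image = rng.random((8, 8))      # a small image-like snapshot of data
        kernel = rng.random((3, 3))     # a 3x3 filter (learned in a real network; random here)

        h, w = image.shape
        kh, kw = kernel.shape
        feature_map = np.zeros((h - kh + 1, w - kw + 1))

        # Slide the same filter across the whole image, so a feature is detected
        # no matter where in the snapshot it occurs.
        for i in range(feature_map.shape[0]):
            for j in range(feature_map.shape[1]):
                feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

        activated = np.maximum(feature_map, 0.0)   # ReLU nonlinearity applied to the convolution output
        print(activated.shape)                     # (6, 6) map of feature strengths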

  • Scientists use artificial intelligence to detect gravitational waves

    When gravitational waves were first detected in 2015 by the advanced Laser Interferometer Gravitational-Wave Observatory (LIGO), they sent a ripple through the scientific community, as they confirmed another of Einstein’s theories and marked the birth of gravitational wave astronomy. Five years later, numerous gravitational wave sources have been detected, including the first observation of two colliding neutron stars in gravitational and electromagnetic waves.
    As LIGO and its international partners continue to upgrade their detectors’ sensitivity to gravitational waves, they will be able to probe a larger volume of the universe, thereby making the detection of gravitational wave sources a daily occurrence. This discovery deluge will launch the era of precision astronomy that takes into consideration extrasolar messenger phenomena, including electromagnetic radiation, gravitational waves, neutrinos and cosmic rays. Realizing this goal, however, will require a radical re-thinking of existing methods used to search for and find gravitational waves.
    Recently, Eliu Huerta, computational scientist and lead for translational artificial intelligence (AI) at the U.S. Department of Energy’s (DOE) Argonne National Laboratory, working with collaborators from Argonne, the University of Chicago, the University of Illinois at Urbana-Champaign, NVIDIA and IBM, developed a new production-scale AI framework that allows for accelerated, scalable and reproducible detection of gravitational waves.
    This new framework indicates that AI models could be as sensitive as traditional template matching algorithms, but orders of magnitude faster. Furthermore, these AI algorithms would only require an inexpensive graphics processing unit (GPU), like those found in video gaming systems, to process advanced LIGO data faster than real time.
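    For context, “template matching” means sliding a bank of precomputed waveform templates across the detector strain and looking for a strong correlation. The Python snippet below is a bare-bones illustration of that idea with a synthetic signal buried in white noise; real searches whiten the data, work in the frequency domain and use very large template banks, and the AI models in this study replace the template bank with trained neural networks.

        import numpy as np

        # Bare-bones illustration of template matching: cross-correlate a known waveform
        # with noisy data and look for a peak. The signal, noise and numbers are synthetic;
        # this is not the LIGO pipeline or the AI framework described above.
        rng = np.random.default_rng(1)

        fs = 4096                                    # sample rate, Hz
        t = np.arange(0, 0.25, 1 / fs)
        template = np.sin(2 * np.pi * (30 * t + 200 * t**2)) * np.exp(-((t - 0.2) / 0.05) ** 2)

        data = rng.normal(size=8 * fs)               # 8 seconds of white noise
        inject_at = 3 * fs
        data[inject_at:inject_at + template.size] += 2.0 * template   # hide a weak signal

        # Slide the template across the data (cross-correlation) and normalize by its energy.
        corr = np.correlate(data, template, mode="valid") / np.sqrt(np.sum(template**2))
        peak = int(np.argmax(np.abs(corr)))
        print(f"Best match at sample {peak} (injected at {inject_at}), score {abs(corr[peak]):.1f}")
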
    The AI ensemble used for this study processed an entire month — August 2017 — of advanced LIGO data in less than seven minutes, distributing the dataset over 64 NVIDIA V100 GPUs. The ensemble identified all four binary black hole mergers previously found in that dataset and reported no misclassifications.
    “As a computer scientist, what’s exciting to me about this project,” said Ian Foster, director of Argonne’s Data Science and Learning (DSL) division, “is that it shows how, with the right tools, AI methods can be integrated naturally into the workflows of scientists — allowing them to do their work faster and better — augmenting, not replacing, human intelligence.”
    Bringing disparate resources to bear, this interdisciplinary and multi-institutional team of collaborators has published a paper in Nature Astronomy showcasing a data-driven approach that combines the team’s collective supercomputing resources to enable reproducible, accelerated, AI-driven gravitational wave detection.
    “In this study, we’ve used the combined power of AI and supercomputing to help solve timely and relevant big-data experiments. We are now making AI studies fully reproducible, not merely ascertaining whether AI may provide a novel solution to grand challenges,” Huerta said.
    Building upon the interdisciplinary nature of this project, the team looks forward to new applications of this data-driven framework beyond big-data challenges in physics.
    “This work highlights the significant value of data infrastructure to the scientific community,” said Ben Blaiszik, a research scientist at Argonne and the University of Chicago. “The long-term investments that have been made by DOE, the National Science Foundation (NSF), the National Institutes of Standards and Technology and others have created a set of building blocks. It is possible for us to bring these building blocks together in new and exciting ways to scale this analysis and to help deliver these capabilities to others in the future.”
    Huerta and his research team developed their new framework through the support of the NSF, Argonne’s Laboratory Directed Research and Development (LDRD) program and DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.
    “These NSF investments contain original, innovative ideas that hold significant promise of transforming the way scientific data arriving in fast streams are processed. The planned activities are bringing accelerated and heterogeneous computing technology to many scientific communities of practice,” said Manish Parashar, director of the Office of Advanced Cyberinfrastructure at NSF.

  • Study gauges hospital preparedness for the next national medical crisis

    As the COVID-19 pandemic wanes in the U.S., a new study from the University of Maryland School of Medicine (UMSOM) and University of Maryland Medical Center (UMMC) finds that hospitals nationwide may not be adequately prepared for the next pandemic. A 10-year analysis of hospitals’ preparedness for pandemics and other mass casualty events found only marginal improvements in a measurement to assess preparedness during the years leading up to the COVID-19 pandemic. The study was published last month in the Journal of Healthcare Management.
    “Our work links objective healthcare data to a hospital score that assesses the ability to save lives in a disaster,” said study lead author David Marcozzi, MD, Professor of Emergency Medicine at UMSOM and Chief Clinical Officer/Senior Vice President at UMMC. “It attempts to fill a glaring gap in the national conversation on the need for improved assessments of and the opportunity for better hospital planning to assure readiness.”
    To conduct the research, Dr. Marcozzi, who is also the COVID-19 Incident Commander for the University of Maryland Medical System, and his colleagues first developed and published a surge index tool that linked standard reported hospital information to healthcare preparedness elements. The tool, called the Hospital Medical Surge Preparedness Index (HMSPI), used data from 2005 to 2014 to produce a score designed to predict how well a hospital can handle a sudden influx in patients due to a mass shooting or infectious disease outbreak. Such data included the size of the medical staff, the number of hospital beds, and the amount of equipment and supplies.
    Medical surge capacity is an important measure of a hospital’s ability to expand quickly beyond normal services to meet an increased demand for healthcare. The Las Vegas mass shooting in 2017, for example, sent more than 500 concertgoers to local hospitals. During the early weeks of the COVID-19 pandemic, New York City hospitals were under siege, with 4,000 patients hospitalized. To calculate the HMSPI, researchers input data from four important metrics:
    • Staff: doctors, nurses, pharmacists, respiratory technicians and others
    • Supplies: personal protective equipment, cardiac monitors, sterile bandages, and ventilators
    • Space: total beds and number of beds that current staff can handle
    • Systems: a framework for enabling electronic sharing of files and information between departments and multiple hospitals
    In the new study, Dr. Marcozzi and his colleagues used data from the American Hospital Association’s annual surveys of more than 6,200 hospitals nationwide that were collected from 2005 to 2014. They also employed data from the U.S. Census Bureau to determine population estimates in cities and the Dartmouth Atlas Project to establish the geographic service area of each hospital. They combined the hospital metrics gleaned from the AHA’s annual surveys with the geographic data to calculate HMSPI composite scores for hospitals in each state.
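    The published index rolls those four categories into a single score. The exact HMSPI weighting and normalization are not described here, so the Python snippet below is only a hypothetical illustration of how a composite preparedness score can be built by normalizing each category against a benchmark and averaging; every metric, benchmark and weight in it is invented.

        # Hypothetical illustration of a composite surge-preparedness score built from the
        # four categories above. The metrics, benchmarks and equal weighting are invented
        # for illustration; this is NOT the published HMSPI formula.
        hospital = {"staff": 1200, "supplies": 850, "space": 400, "systems": 3}
        benchmark = {"staff": 2000, "supplies": 1000, "space": 500, "systems": 5}  # assumed reference values

        # Normalize each category to its benchmark (capped at 1.0), then average equally.
        normalized = {k: min(hospital[k] / benchmark[k], 1.0) for k in hospital}
        composite = 100 * sum(normalized.values()) / len(normalized)

        print({k: round(v, 2) for k, v in normalized.items()})
        print(f"Composite preparedness score: {composite:.1f} / 100")
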
    Their evaluation found varying levels of increases in HMSPI scores from 2005 to 2014 in every state, which could indicate that states are becoming better prepared to handle a medical surge. The scores also indicated that ideal readiness had not yet been achieved in any state before the COVID-19 pandemic.
    “This is just the starting point. We need to better understand the ability of our nation’s hospitals to save lives in times of crisis,” said Dr. Marcozzi. This information, and follow-up studies building from this work, will be key to better matching states’ healthcare resources to their populations and assuring that optimal care is delivered. One impactful follow-up study, Dr. Marcozzi said, would use data from the COVID-19 pandemic to test whether the index predicted, based on patient outcomes, which hospitals were best prepared for the pandemic surge.
    “This pioneering work is a needed advancement that could allow for a transparent assessment of a hospital’s ability to save lives in a large-scale emergency,” Dr. Marcozzi said. “The COVID-19 pandemic demonstrated that there is still plenty of room for improvement in the ability of our nation’s healthcare system to triage and manage multiple patients in a crisis and that translates into lives lost, unnecessarily. Our research is dedicated to those who lost their lives in this tragedy and other mass casualty events. We can do better.”
    National health leadership organizations, such as the U.S. Centers for Medicare and Medicaid Services, the Assistant Secretary for Preparedness and Response, the Joint Commission, and the American Medical Association, as well as state and local emergency planners, could all potentially benefit from the use of HMSPI scores, according to Dr. Marcozzi. The tool could be used to support data-driven policy development and resource allocation to close gaps and assure that individuals get the care they need, when they need it, during a crisis.
    Ricardo Pietrobon, MD, PhD, MBA, Adjunct Associate Professor of Emergency Medicine at UMSOM, Nicole Baehr, Manager of Operations at UMMC, and Brian J. Browne, MD, Professor and Chair of the Department of Emergency Medicine, were co-authors on this study. Researchers from the University of Nebraska Medical Center, University of Miami, and the U.S. Department of Veterans Affairs also participated in this research. The study was funded by the Bipartisan Commission on Biodefense.
    “The COVID-19 pandemic taught us that we need to be better prepared for the unexpected crisis,” said E. Albert Reece, MD, PhD, MBA, Executive Vice President for Medical Affairs, UM Baltimore, and the John Z. and Akiko K. Bowers Distinguished Professor and Dean, University of Maryland School of Medicine. “Having an important metric like the HMSPI could be a game changer that ultimately saves lives during a surge by helping hospitals identify and fix their vulnerabilities.”