More stories

  • Computer model predicts dominant SARS-CoV-2 variants

    Scientists at the Broad Institute of MIT and Harvard and the University of Massachusetts Medical School have developed a machine learning model that can analyze millions of SARS-CoV-2 genomes and predict which viral variants will likely dominate and cause surges in COVID-19 cases. The model, called PyR0 (pronounced “pie-are-nought”), could help researchers identify which parts of the viral genome will be less likely to mutate and hence be good targets for vaccines that will work against future variants. The findings appear today in Science.
    The researchers trained the machine-learning model using 6 million SARS-CoV-2 genomes that were in the GISAID database in January 2022. They also showed how their tool can estimate the effect of genetic mutations on the virus’s fitness — its ability to multiply and spread through a population. When the team tested their model on viral genomic data from January 2022, it predicted the rise of the BA.2 variant, which became dominant in many countries in March 2022. PyR0 would have also identified the alpha variant (B.1.1.7) by late November 2020, a month before the World Health Organization listed it as a variant of concern.
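    The core idea of a growth-advantage model can be illustrated with a toy calculation. The sketch below is my minimal approximation, not the published PyR0 code (which is a hierarchical Bayesian model built in Pyro): it fits a multinomial logistic growth model to weekly variant counts, so each lineage's relative fitness shows up as its estimated growth rate. The counts and variable names are invented for the example.

    ```python
    # Minimal sketch (not the published PyR0 model): estimate relative lineage
    # fitness from weekly variant counts via multinomial logistic growth.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    # counts[t, v]: genomes of variant v observed in week t (toy data).
    counts = np.array([
        [90, 10,  0],
        [70, 25,  5],
        [40, 40, 20],
        [15, 35, 50],
    ], dtype=float)
    T, V = counts.shape
    weeks = np.arange(T)

    def neg_log_lik(params):
        # params = [a_1..a_V, r_1..r_V]: initial log-abundance and growth rate.
        a, r = params[:V], params[V:]
        logits = a[None, :] + r[None, :] * weeks[:, None]          # shape (T, V)
        log_p = logits - logsumexp(logits, axis=1, keepdims=True)  # multinomial log-probs
        return -(counts * log_p).sum()

    res = minimize(neg_log_lik, np.zeros(2 * V), method="L-BFGS-B")
    growth = res.x[V:] - res.x[V:].mean()  # growth rates, identifiable up to a shift
    print("relative weekly growth advantage per variant:", np.round(growth, 3))
    ```

    A lineage with a higher fitted growth rate is the one such a model would flag as likely to dominate.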
    The research team includes first author Fritz Obermeyer, a machine learning fellow at the Broad Institute when the study began, and senior authors Jacob Lemieux, an instructor of medicine at Harvard Medical School and Massachusetts General Hospital, and Pardis Sabeti, an institute member at Broad, a professor at the Center for Systems Biology and the Department of Organismic and Evolutionary Biology at Harvard University, and a professor in the Department of Immunology and Infectious Disease at the Harvard T. H. Chan School of Public Health. Sabeti is also a Howard Hughes Medical Institute investigator.
    PyR0 is based on a machine learning framework called Pyro, which was originally developed by a team at Uber AI Labs. In 2020, three members of that team including Obermeyer and Martin Jankowiak, the study’s second author, joined the Broad Institute and began applying the framework to biology.
    “This work was the result of biologists and geneticists coming together with software engineers and computer scientists,” Lemieux said. “We were able to tackle some really challenging questions in public health that no single disciplinary approach could have answered on its own.”
    “This kind of machine learning-based approach that looks at all the data and combines that into a single prediction is extremely valuable,” said Sabeti. “It gives us a leg up on identifying what’s emerging and could be a potential threat.”

  • Traveling wave of light sparks simple liquid crystal microposts to perform complex dance

    When humans twist and turn, it is the result of complex internal functions: the body’s nervous system signals our intentions; the musculoskeletal system supports the motion; and the digestive system generates the energy to power the move. The body seamlessly integrates these activities without our even being aware that coordinated, dynamic processes are taking place. Reproducing similar, integrated functioning in a single synthetic material has proven difficult: few one-component materials naturally encompass the spatial and temporal coordination needed to mimic the spontaneity and dexterity of biological behavior.
    However, through a combination of experiments and modeling, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the University of Pittsburgh Swanson School of Engineering created a single-material, self-regulating system that controllably twists and bends to undergo biomimetic motion.
    The senior author is Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS. Inspired by experiments performed in the Aizenberg lab, contributing authors at the University of Pittsburgh, Anna Balazs and James Waters, developed the theoretical and computational models to design liquid crystal elastomers (LCEs) that imitate the seamless coupling of dynamic processes observed in living systems.
    “Our movements occur spontaneously because the human body contains several interconnected structures, and the performance of each structure is highly coordinated in space and time, allowing one event to instigate the behavior in another part of the body,” explained Balazs, Distinguished Professor of Chemical Engineering and the John A. Swanson Chair of Engineering. “For example, the firing of neurons in the spine triggers a signal that causes a particular muscle to contract; the muscle expands when the neurons have stopped firing, allowing the body to return to its relaxed shape. If we could replicate this level of interlocking, multi-functionality in a synthetic material, we could ultimately devise effective self-regulating, autonomously operating devices.”
    The LCE material used in this collaborative Harvard-Pitt study was composed of long polymer chains with rod-like groups (mesogens) attached via side branches; photo-responsive crosslinkers were used to make the LCE responsive to UV light. The material was molded into micron-scale posts anchored to an underlying surface. The Harvard team then demonstrated an extremely diverse set of complex motions that the microstructures can display when exposed to light. “The coupling among microscopic units — the polymers, side chains, mesogens and crosslinkers — within this material could remind you of the interlocking of different components within a human body,” said Balazs, “suggesting that with the right trigger, the LCE might display rich spatiotemporal behavior.”
    To devise the most effective triggers, Waters formulated a model that describes the simultaneous optical, chemical and mechanical phenomena occurring over the range of length and time scales that characterize the LCE. The simulations also provided an effective means of uncovering and visualizing the complex interactions within this responsive opto-chemo-mechanical system.
    “Our model can accurately predict the spatial and temporal evolution of the posts and reveal how different behaviors can be triggered by varying the materials’ properties and features of the imposed light,” Waters said, further noting, “The model serves as a particularly useful predictive tool when the complexity of the system is increased by, for example, introducing multiple interacting posts, which can be arranged in an essentially infinite number of ways.”
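    As a rough illustration of the kind of opto-chemo-mechanical coupling such a model captures, the toy sketch below (my assumption, not Waters' actual formulation) evolves the fraction of photo-switched crosslinkers with simple on/off kinetics under a moving light stimulus and maps it linearly to a bending angle; the rate constants and strain coupling are placeholders.

    ```python
    # Toy light-driven bending of an LCE post (illustrative only): phi is the
    # fraction of photo-switched crosslinkers; bend angle is assumed ~ phi.
    dt, steps = 0.01, 2000
    k_on, k_off = 2.0, 0.5     # switching rates under light / in the dark (assumed)
    coupling = 40.0            # degrees of bend per unit phi (assumed)

    def light(t):
        return 1.0 if 2.0 <= t <= 10.0 else 0.0   # light on between t=2 and t=10

    phi, history = 0.0, []
    for i in range(steps):
        t = i * dt
        dphi = light(t) * k_on * (1.0 - phi) - k_off * phi  # on/off kinetics
        phi += dt * dphi
        history.append((t, coupling * phi))                  # bend tracks phi

    print("max bend angle (deg):", round(max(angle for _, angle in history), 1))
    ```

    Even this crude picture shows the self-regulating behavior described above: the post bends while illuminated and relaxes back once the light moves on.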
    According to Balazs, these combined modeling and experimental studies pave the way for creating the next generation of light-responsive, soft machines or robots that begin to exhibit life-like autonomy. “Light is a particularly useful stimulus for activating these materials since the light source can be easily moved to instigate motion in different parts of the post or collection of posts,” she said.
    In future studies, Waters and Balazs will investigate how arrays of posts and posts with different geometries behave under the influence of multiple or more localized beams of light. Preliminary results indicate that in the presence of multiple light beams, the LCE posts can mimic the movement and flexibility of fingers, suggesting new routes for designing soft robotic hands that can be manipulated with light.
    “The vast design space for individual and collective motions is potentially transformative for soft robotics, micro-walkers, sensors, and robust information encryption systems,” said Aizenberg.
    Story Source:
    Materials provided by University of Pittsburgh. Note: Content may be edited for style and length.

  • Emulating impossible 'unipolar' laser pulses paves the way for processing quantum information

    A laser pulse that sidesteps the inherent symmetry of light waves could manipulate quantum information, potentially bringing us closer to room temperature quantum computing.
    The study, led by researchers at the University of Regensburg and the University of Michigan, could also accelerate conventional computing.
    Quantum computing has the potential to accelerate solutions to problems that need to explore many variables at the same time, including drug discovery, weather prediction and encryption for cybersecurity. Conventional computer bits encode either a 1 or 0, but quantum bits, or qubits, can encode both at the same time. This essentially enables quantum computers to work through multiple scenarios simultaneously, rather than exploring them one after the other. However, these mixed states don’t last long, so the information processing must be faster than electronic circuits can muster.
    While laser pulses can be used to manipulate the energy states of qubits, different ways of computing are possible if charge carriers used to encode quantum information could be moved around — including a room-temperature approach. Terahertz light, which sits between infrared and microwave radiation, oscillates fast enough to provide the speed, but the shape of the wave is also a problem. Namely, electromagnetic waves are obliged to produce oscillations that are both positive and negative, which sum to zero.
    The positive cycle may move charge carriers, such as electrons. But then the negative cycle pulls the charges back to where they started. To reliably control the quantum information, an asymmetric light wave is needed.
    “The optimum would be a completely directional, unipolar ‘wave,’ so there would be only the central peak, no oscillations. That would be the dream. But the reality is that light fields that propagate have to oscillate, so we try to make the oscillations as small as we can,” said Mackillo Kira, U-M professor of electrical engineering and computer science and leader of the theory aspects of the study in Light: Science & Applications.
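    The constraint Kira describes can be made concrete: a propagating far-field pulse must have zero time-integrated electric field, so a "unipolar" pulse can only be approximated by concentrating the field in one strong half-cycle and spreading the cancelling field into a long, weak tail. The sketch below is an illustrative construction of such a waveform, not the experiment's actual pulse shape.

    ```python
    # Illustrative near-unipolar pulse: one strong half-cycle plus a long weak
    # tail of opposite sign, so the total area (time integral) is zero.
    import numpy as np

    t = np.linspace(-5.0, 45.0, 20001)   # time, arbitrary units
    dt = t[1] - t[0]

    peak = np.exp(-(t / 0.5) ** 2)                 # strong, short positive half-cycle
    tail = np.exp(-((t - 15.0) / 12.0) ** 2)       # long, weak compensating tail
    field = peak - tail * (peak.sum() / tail.sum())  # scale tail so areas cancel

    area = field.sum() * dt                      # ~0: propagating pulses integrate to zero
    asymmetry = field.max() / abs(field.min())   # how "unipolar" the pulse looks
    print(f"net area = {area:.2e}, peak asymmetry = {asymmetry:.1f}x")
    ```

    The larger that asymmetry ratio, the closer the pulse comes to the one-directional "kick" needed to move charge carriers without pulling them back.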

  • Developing next-generation superconducting cables

    Researchers at Florida State University’s Center for Advanced Power Systems (CAPS), in collaboration with Colorado-based Advanced Conductor Technologies, have demonstrated a new, ready-to-use superconducting cable system — an improvement to superconductor technology that drives the development of technologies such as all-electric ships or airplanes.
    In a paper published in Superconductor Science and Technology, the researchers demonstrated a system that uses helium gas for crucial cooling. Superconducting cables can move electrical current with no resistance, but they need very cold temperatures to function.
    “We want to make these cables smaller, with lower weight and lower volume,” said paper co-author Sastry Pamidi, a FAMU-FSU College of Engineering professor and CAPS associate director. “These are very efficient power cables, and this research is focused on improving efficiency and practicality needed to achieve the promise of next-generation superconductor technology.”
    Previous work showed that the body of superconducting cables could be cooled with helium gas, but the cable ends needed another medium for cooling, such as liquid nitrogen. In this paper, researchers overcame that obstacle and were able to cool an entire cable system with helium gas.
    The work gives engineers more design flexibility because helium remains a gas over a wider range of temperatures than other media. Liquid nitrogen, for example, isn’t a suitable cooling medium for some applications, and this research moves superconducting technology closer to practical solutions for those scenarios.
    The paper is the latest outcome of the partnership between researchers at CAPS and Advanced Conductor Technologies (ACT). Previous teamwork has led to other publications and to the development of Conductor on Round Core (CORC®) cables that were the subject of this research.
    “Removing the need for liquid nitrogen to pre-cool the current leads of the superconducting cable and instead using the same helium gas that cools the cable allowed us to make a highly compact superconducting power cable that can be operated in a continuous mode,” said Danko van der Laan, ACT’s founder. “It therefore has become an elegant system that’s small and lightweight and it allows much easier integration into electric ships and aircraft.”
    The ongoing collaboration has been funded by Small Business Innovation Research (SBIR) grants from the U.S. Navy. The grants encourage businesses to partner with universities to conduct high-level research.
    The collaboration provides benefits for all involved. Companies receive help creating new products. Students see how their classwork applies to real-life engineering problems. Taxpayers get the technical and economic benefits that come from the innovations. And faculty members receive a share of a company’s research funding and the opportunity to tackle exciting work.
    “We like challenges,” Pamidi said. “These grants come with challenges that have a clear target. The company says ‘This is what we want to develop. Can you help us with this?’ It is motivating, and it also provides students with connections. The small businesses we work with not only provide money, but they also see the skills our students are gaining.”
    CAPS researcher Chul Kim and ACT researcher Jeremy Weiss were co-authors on this work. Along with the U.S. Navy grant, this research was supported by the U.S. Department of Energy.
    Story Source:
    Materials provided by Florida State University. Original written by Bill Wellock. Note: Content may be edited for style and length.

  • Scientists use quantum computers to simulate quantum materials

    Scientists achieve important milestone in making quantum computing more effective.
    Quantum computers promise to revolutionize science by enabling computations that were once thought impossible. But for quantum computers to become an everyday reality, there is a long way to go and many challenging tests to pass.
    One of the tests involves using quantum computers to simulate the properties of materials for next-generation quantum technologies.
    In a new study from the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago, researchers performed quantum simulations of spin defects, which are specific impurities in materials that could offer a promising basis for new quantum technologies. The study improved the accuracy of calculations on quantum computers by correcting for noise introduced by quantum hardware.
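    The summary doesn't specify the correction scheme, but one widely used strategy for this kind of hardware-noise correction is zero-noise extrapolation: run the same circuit at several artificially amplified noise levels, then extrapolate the measured expectation value back to zero noise. The sketch below is a generic illustration with made-up measurements, not the Argonne/UChicago procedure.

    ```python
    # Generic zero-noise extrapolation sketch (illustrative values, not data
    # from the study): extrapolate measurements to the zero-noise limit.
    import numpy as np

    noise_scales = np.array([1.0, 2.0, 3.0])   # 1x = native hardware noise
    measured = np.array([0.71, 0.55, 0.41])    # expectation values (placeholders)

    # Fit a low-order polynomial in the noise scale and evaluate at zero.
    coeffs = np.polyfit(noise_scales, measured, deg=1)
    zero_noise_estimate = np.polyval(coeffs, 0.0)
    print(f"noise-mitigated estimate: {zero_noise_estimate:.3f} (raw: 0.71)")
    ```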
    “We want to learn how to use new computational technologies that are up-and-coming. Developing robust strategies in the early days of quantum computing is an important first step in being able to understand how to use these machines efficiently in the future.” — Giulia Galli, Argonne and University of Chicago
    The research was conducted as part of the Midwest Integrated Center for Computational Materials (MICCoM), a DOE computational materials science program headquartered at Argonne, as well as Q-NEXT, a DOE National Quantum Information Science Research Center.

  • Significant energy savings using neuromorphic hardware

    For the first time, TU Graz’s Institute of Theoretical Computer Science and Intel Labs have demonstrated experimentally that a large neural network can process sequences such as sentences while consuming four to sixteen times less energy on neuromorphic hardware than on non-neuromorphic hardware. The new research is based on Intel Labs’ Loihi neuromorphic research chip, which draws on insights from neuroscience to create chips that function similarly to the biological brain.
    The research was funded by the Human Brain Project (HBP), one of the largest research projects in the world, with more than 500 scientists and engineers across Europe studying the human brain. The results are published in the paper “A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware” (DOI: 10.1038/s42256-022-00480-w) in Nature Machine Intelligence.
    Human brain as a role model
    Smart machines and intelligent computers that can autonomously recognize and infer objects and relationships between different objects are the subjects of worldwide artificial intelligence (AI) research. Energy consumption is a major obstacle on the path to a broader application of such AI methods. It is hoped that neuromorphic technology will provide a push in the right direction. Neuromorphic technology is modelled after the human brain, which is highly efficient in using energy. To process information, its hundred billion neurons consume only about 20 watts, not much more energy than an average energy-saving light bulb.
    In the research, the group focused on algorithms that work with temporal processes. For example, the system had to answer questions about a previously told story and grasp the relationships between objects or people from the context. The hardware tested consisted of 32 Loihi chips.
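    Chips like Loihi emulate spiking neurons, which expend energy mainly when a spike occurs rather than on every clock cycle. The sketch below shows a minimal leaky integrate-and-fire neuron, the generic textbook model behind such hardware; it is not Intel's implementation, and all parameter values are illustrative.

    ```python
    # Minimal leaky integrate-and-fire neuron (generic model, not Loihi's).
    import numpy as np

    rng = np.random.default_rng(0)
    dt, tau = 1.0, 20.0        # time step and membrane time constant (ms)
    v_th, v_reset = 1.0, 0.0   # spike threshold and reset potential (arb. units)
    T = 200                    # simulate 200 ms

    input_current = 0.06 + 0.02 * rng.standard_normal(T)
    v, spikes = 0.0, []
    for t in range(T):
        v += (dt / tau) * (-v) + input_current[t]  # leak toward rest, integrate input
        if v >= v_th:                              # threshold crossing emits a spike
            spikes.append(t)
            v = v_reset                            # membrane resets after spiking
    print(f"{len(spikes)} spikes in {T} ms; first few at t =", spikes[:5])
    ```

    Because computation and communication happen only at these sparse spike events, energy use scales with activity rather than with clock rate.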
    Loihi research chip: up to sixteen times more energy-efficient than non-neuromorphic hardware
    “Our system is four to sixteen times more energy-efficient than other AI models on conventional hardware,” says Philipp Plank, a doctoral student at TU Graz’s Institute of Theoretical Computer Science. Plank expects further efficiency gains as these models are migrated to the next generation of Loihi hardware, which significantly improves the performance of chip-to-chip communication.

  • Researchers develop algorithm to divvy up tasks for human-robot teams

    As robots increasingly join people on the factory floor, in warehouses and elsewhere on the job, dividing up who will do which tasks grows in complexity and importance. People are better suited for some tasks, robots for others. And in some cases, it is advantageous to spend time teaching a robot to do a task now and reap the benefits later.
    Researchers at Carnegie Mellon University’s Robotics Institute (RI) have developed an algorithmic planner that helps delegate tasks to humans and robots. The planner, “Act, Delegate or Learn” (ADL), considers a list of tasks and decides how best to assign them. The researchers asked three questions: When should a robot act to complete a task? When should a task be delegated to a human? And when should a robot learn a new task?
    “There are costs associated with the decisions made, such as the time it takes a human to complete a task or teach a robot to complete a task and the cost of a robot failing at a task,” said Shivam Vats, the lead researcher and a Ph.D. student in the RI. “Given all those costs, our system will give you the optimal division of labor.”
    The team’s work could be useful in manufacturing and assembly plants, for sorting packages, or in any environment where humans and robots collaborate to complete several tasks. The researchers tested the planner in scenarios where humans and robots had to insert blocks into a peg board and stack parts of different shapes and sizes made of Lego bricks.
    Using algorithms and software to decide how to delegate and divide labor is not new, even when robots are part of the team. However, this work is among the first to include robot learning in its reasoning.
    “Robots aren’t static anymore,” Vats said. “They can be improved and they can be taught.”
    Often in manufacturing, a person will manually manipulate a robotic arm to teach the robot how to complete a task. Teaching a robot takes time and, therefore, has a high upfront cost. But it can be beneficial in the long run if the robot can learn a new skill. Part of the complexity is deciding when it is best to teach a robot versus delegating the task to a human. This requires the robot to predict what other tasks it can complete after learning a new task.
    Given this information, the planner converts the problem into a mixed integer program — an optimization formulation commonly used in scheduling, production planning and the design of communication networks — that can be solved efficiently by off-the-shelf software. The planner performed better than traditional models in all instances and decreased the cost of completing the tasks by 10% to 15%.
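    To illustrate that reduction, here is a minimal mixed-integer-program sketch of the act/delegate/learn choice using the open-source PuLP modeler. The task names, costs, and post-learning execution cost are invented for the example, and the real planner's formulation (where a learned skill carries over to future tasks) is richer than this one-shot version.

    ```python
    # Minimal sketch of "Act, Delegate or Learn" as a mixed integer program
    # (illustrative costs, not the authors' formulation), using PuLP.
    import pulp

    tasks = ["t1", "t2", "t3"]
    robot_cost = {"t1": 2.0, "t2": 9.0, "t3": 4.0}   # robot acting with current skills
    human_cost = {"t1": 5.0, "t2": 3.0, "t3": 6.0}   # delegating to a human
    teach_cost = {"t1": 4.0, "t2": 1.5, "t3": 5.0}   # one-time cost of teaching the robot
    exec_after_learning = 1.0                         # assumed cheap execution once taught

    prob = pulp.LpProblem("act_delegate_or_learn", pulp.LpMinimize)
    act = pulp.LpVariable.dicts("act", tasks, cat="Binary")
    delegate = pulp.LpVariable.dicts("delegate", tasks, cat="Binary")
    learn = pulp.LpVariable.dicts("learn", tasks, cat="Binary")

    # Objective: total cost of all chosen actions.
    prob += pulp.lpSum(
        act[t] * robot_cost[t]
        + delegate[t] * human_cost[t]
        + learn[t] * (teach_cost[t] + exec_after_learning)
        for t in tasks
    )

    # Each task must be handled in exactly one way.
    for t in tasks:
        prob += act[t] + delegate[t] + learn[t] == 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in tasks:
        for name, var in (("act", act), ("delegate", delegate), ("learn", learn)):
            if var[t].value() == 1:
                print(t, "->", name)
    ```

    With these toy costs the solver picks a different option for each task, showing how the trade-off between upfront teaching cost and cheap later execution falls out of the optimization.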
    Vats presented the work, “Synergistic Scheduling of Learning and Allocation of Tasks in Human-Robot Teams” at the International Conference on Robotics and Automation in Philadelphia, where it was nominated for the outstanding interaction paper award. The research team included Oliver Kroemer, an assistant professor in RI; and Maxim Likhachev, an associate professor in RI.
    The research was funded by the Office of Naval Research and the Army Research Laboratory.
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee. Note: Content may be edited for style and length.

  • Biocrusts reduce global dust emissions by 60 percent

    In the unceasing battle against dust, humans possess a deep arsenal of weaponry, from microfiber cloths to feather dusters to vacuum cleaners. But new research suggests that none of that technology can compare to nature’s secret weapon — biological soil crusts.

    These biocrusts are thin, cohesive layers of soil, glued together by dirt-dwelling organisms, that often carpet arid landscapes. Though inconspicuous, these rough soil skins prevent around 700 teragrams (30,000 times the mass of the Statue of Liberty) of dust from wafting into the air each year, researchers now estimate, reducing global dust emissions by a staggering 60 percent. Unless steps are taken to preserve and restore biocrusts, which are threatened by climate change and shifts in land use, the future will be much dustier, ecologist Bettina Weber and colleagues report online May 16 in Nature Geoscience.

    Dry-land ecosystems, such as savannas, shrublands and deserts, may appear barren, but they’re providing this important natural service that is often overlooked, says Weber, of the Max Planck Institute for Chemistry in Mainz, Germany. These findings “really call for biocrust conservation.”

    Biocrusts cover around 12 percent of the planet’s land surface and are most often found in arid regions. They are constructed by communities of fungi, lichens, cyanobacteria and other microorganisms that live in the topmost millimeters of soil and produce adhesive substances that clump soil particles together. In dry-land ecosystems, biocrusts play an important role in concentrating nutrients such as carbon and nitrogen and also help prevent soil erosion (SN: 4/12/22).

    And since most of the world’s dust comes from dry lands, biocrusts are important for keeping dust bound to the ground. Fallen dust can carry nutrients that benefit plants, but it can also reduce water and air quality, hasten glacier melting and reduce river flows. For instance, in the Upper Colorado River Basin, researchers found that dust not only decreased snow’s ability to reflect sunlight, but also shortened the duration of snow cover by weeks, reducing flows of meltwater into the Colorado River by 5 percent. That’s more water than the city of Las Vegas draws in a year, says Matthew Bowker, an ecologist from Northern Arizona University in Flagstaff who wasn’t involved in the new study.

    Experiments had already demonstrated that biocrusts strengthened soils against erosion, but Weber and her colleagues were curious how that effect played out on a global scale. So they pulled data from experimental studies that measured the wind velocities needed to erode dust from various soil types and calculated how differences in biocrust coverage affected dust generation. They found that the wind velocities needed to erode dust from soils completely shielded by biocrusts were on average 4.8 times greater than the wind velocities needed to erode bare soils.
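    To see how a higher erosion threshold translates into a large emission cut, the toy calculation below (my construction, not the study's model) samples wind speeds from a Weibull distribution and applies a simplified cubic saltation-flux law that activates only above a threshold velocity; raising the threshold by the reported factor of 4.8 suppresses nearly all of the flux from fully crusted soil. All parameter values are illustrative.

    ```python
    # Toy dust-flux comparison for bare vs. fully biocrusted soil
    # (illustrative parameters, not the study's global model).
    import numpy as np

    rng = np.random.default_rng(0)
    wind = rng.weibull(2.0, size=200_000) * 8.0   # wind speeds (m/s), assumed climate

    def mean_dust_flux(u, u_t):
        active = u[u > u_t]                        # only winds above threshold erode
        if active.size == 0:
            return 0.0
        flux = active**3 * (1.0 - (u_t / active) ** 2)  # simplified saltation law
        return flux.sum() / u.size

    u_bare = 6.0              # erosion threshold for bare soil (assumed)
    u_crust = 4.8 * u_bare    # biocrusts raise the threshold ~4.8x (from the study)
    reduction = 1.0 - mean_dust_flux(wind, u_crust) / mean_dust_flux(wind, u_bare)
    print(f"flux reduction under full biocrust cover: {reduction:.0%}")
    ```

    The global 60 percent figure is lower than this single-site toy result because biocrusts cover only part of the world's dry lands and coverage varies from place to place.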

    The researchers then incorporated their results, along with data on global biocrust coverage, into a global climate simulation, which allowed them to estimate how much dust the world’s biocrusts trapped each year.

    “Nobody has really tried to make that calculation globally before,” says Bowker. “Even if their number is off, it shows us that the real number is probably significant.”

    Using projections of future climate conditions and data on the conditions biocrusts can tolerate, Weber and her colleagues estimated that by 2070, climate change and land-use shifts may result in biocrust losses of 25 to 40 percent, which would increase global dust emissions by 5 to 15 percent.

    Preserving and restoring biocrusts will be key to mitigating soil erosion and dust production in the future, Bowker says. Hopefully, these results will help to whip up more discussions on the impacts of land-use changes on biocrust health, he says. “We need to have those conversations.”