More stories

  • Scientists engineer new material that can absorb and release enormous amounts of energy

    A team of researchers from the University of Massachusetts Amherst recently announced in the Proceedings of the National Academy of Sciences that they had engineered a new rubber-like solid with surprising qualities: it can absorb and release very large quantities of energy, and it is programmable. Taken together, these qualities give the new material great promise for a wide array of applications, from robots that deliver more power without drawing additional energy to helmets and protective gear that dissipate energy far more quickly.
    “Imagine a rubber band,” says Alfred Crosby, professor of polymer science and engineering at UMass Amherst and the paper’s senior author. “You pull it back, and when you let it go, it flies across the room. Now imagine a super rubber band. When you stretch it past a certain point, you activate extra energy stored in the material. When you let this rubber band go, it flies for a mile.”
    This hypothetical rubber band is made out of a new metamaterial — a substance engineered to have a property not found in naturally occurring materials — that combines an elastic, rubber-like substance with tiny magnets embedded in it. This new “elasto-magnetic” material takes advantage of a physical property known as a phase shift to greatly amplify the amount of energy the material can release or absorb.
    A phase shift occurs when a material moves from one state to another: think of water turning into steam or liquid concrete hardening into a sidewalk. Whenever a material shifts its phase, energy is either released or absorbed. And phase shifts aren’t just limited to changes between liquid, solid and gaseous states — a shift can occur from one solid phase to another. A phase shift that releases energy can be harnessed as a power source, but getting enough energy has always been the difficult part.
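    As a rough, generic illustration of why such a solid-to-solid phase shift releases energy, the short sketch below models a bistable "snap-through" material as a double-well energy landscape and computes the energy freed when it jumps from the higher-energy phase to the lower one. The potential and its parameters are invented for illustration and are not the team's model.
      # Illustrative only: a generic double-well ("bistable") energy landscape,
      # the textbook picture behind snap-through phase shifts. Parameters are invented.
      import numpy as np

      def double_well_energy(x, barrier=1.0, asymmetry=0.6):
          """Energy of a unit of material at configuration x.
          Two minima (two solid phases); the right-hand phase sits lower in energy."""
          return barrier * (x**2 - 1.0)**2 - asymmetry * x

      x = np.linspace(-1.5, 1.5, 2001)
      E = double_well_energy(x)

      # Locate the two phases (local minima of the landscape).
      left_phase = x[x < 0][np.argmin(E[x < 0])]    # metastable, energy-rich phase
      right_phase = x[x > 0][np.argmin(E[x > 0])]   # stable, low-energy phase
      released = double_well_energy(left_phase) - double_well_energy(right_phase)

      print(f"energy released by the phase shift: {released:.3f} (arbitrary units)")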
    “To amplify energy release or absorption, you have to engineer a new structure at the molecular or even atomic level,” says Crosby. However, this is challenging to do and even more difficult to do in a predictable way. But by using metamaterials, Crosby says that “we have overcome these challenges, and have not only made new materials, but also developed the design algorithms that allow these materials to be programmed with specific responses, making them predictable.”
    The team was inspired by some of the lightning-quick responses seen in nature, such as the snapping of Venus flytraps and trap-jaw ants. “We’ve taken this to the next level,” says Xudong Liang, the paper’s lead author, currently a professor at Harbin Institute of Technology, Shenzhen (HITSZ) in China, who completed this research while a postdoc at UMass Amherst. “By embedding tiny magnets into the elastic material, we can control the phase transitions of this metamaterial. And because the phase shift is predictable and repeatable, we can engineer the metamaterial to do exactly what we want it to do: either absorbing the energy from a large impact, or releasing great quantities of energy for explosive movement.”
    This research, which was supported by the U.S. Army Research Laboratory and the U.S. Army Research Office as well as Harbin Institute of Technology, Shenzhen (HITSZ), has applications in any scenario where either high-force impacts or lightning-quick responses are needed.
    Story Source:
    Materials provided by University of Massachusetts Amherst.

  • Predicting cell fates: Researchers develop AI solutions for next-gen biomedical research

    Data is the answer to numerous questions not only in the business world but also in biomedical research. To develop new therapies and prevention strategies for diseases, scientists need more and better data, ever faster. However, data quality is often highly variable, and integrating different datasets is frequently all but impossible. With the Computational Health Center at Helmholtz Munich, one of Europe’s largest research centers for artificial intelligence in medical science is now being established under the direction of Fabian Theis. In close cooperation with the Technical University of Munich (TUM), more than one hundred scientists are using artificial intelligence and machine learning to find solutions to precisely these problems, enabling medical innovations for a healthier society. In the latest issue of the journal Nature Methods, they present three articles with groundbreaking new solutions.
    According to Fabian Theis, Head of the Computational Health Center at Helmholtz Munich and Professor for Mathematical Modelling of Biological Systems at TUM: “It’s been a crazy four weeks, with many of our scientific stories and methods coming to fruition in that same time window. Our research group focuses on using single-cell genomics to understand the origin of disease in a mechanistic fashion — for this we leverage and develop machine learning approaches to better represent this complex data. In the three new papers, we worked on single-cell data integration, trajectory learning and spatial resolution, respectively. Beyond the applications shown in the papers, we expect these methods to support the next generation of single-cell research toward disease understanding.”
    Here are the latest solutions developed by Helmholtz Munich and TUM researchers:
    Solving the data integration challenge
    To see whether an observation made in a single dataset generalizes, one can check whether the same pattern appears in other datasets of the same system. In single-cell data, so-called batch effects complicate combining datasets in this way: these are differences in molecular profiles between samples that arise because they were generated at a different time, in a different place, or from a different person. Overcoming these effects is a central challenge in single-cell genomics, with more than 50 solutions proposed to date. But which one is best? A group of researchers led by Malte Lücken carefully curated 86 datasets and compared 16 of the most popular data integration methods on 13 tasks. After more than 55,000 hours of computation time and a detailed evaluation of 590 results, they built a guide for optimized data integration. This enables improved observations of disease processes across datasets at population scale.
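    For readers who work with such data, a minimal sketch of what integrating batched single-cell datasets looks like in practice is shown below. It is a generic scanpy workflow rather than the benchmark pipeline from the paper, and the input file name and "batch" column are placeholders.
      # Generic single-cell integration sketch with scanpy (not the paper's benchmark code).
      # "pooled_cells.h5ad" and the "batch" column are hypothetical placeholders.
      import scanpy as sc

      adata = sc.read_h5ad("pooled_cells.h5ad")        # cells from several studies/labs

      # Standard preprocessing.
      sc.pp.normalize_total(adata, target_sum=1e4)
      sc.pp.log1p(adata)

      # Select genes that are variable within each batch, not just across the pool,
      # so batch-driven genes do not dominate downstream analysis.
      sc.pp.highly_variable_genes(adata, n_top_genes=2000, batch_key="batch")

      # ComBat, one widely used method of the kind compared in the study:
      # regress the batch covariate out of the expression matrix.
      sc.pp.combat(adata, key="batch")

      # Embed and cluster the corrected data; biological signal should now align
      # across batches while batch labels mix.
      sc.pp.pca(adata, n_comps=50)
      sc.pp.neighbors(adata)
      sc.tl.umap(adata)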
    Predicting cell states with open-source software
    Many questions in biology revolve around continuous processes like development or regeneration. For any cell in such a process, single-cell RNA-sequencing measures gene expression. The method, however, is destructive to cells and scientists obtain only static snapshots. Thus, many algorithms have been developed to reconstruct continuous processes from snapshots of gene expression. A common limitation: These algorithms cannot tell us anything about the direction of the process. To overcome this limitation, Marius Lange and colleagues developed a new algorithm called CellRank. It estimates directed cell-state trajectories by combining previous reconstruction approaches with RNA velocity, a concept to estimate gene up- or down-regulation. Across in-vitro and in-vivo applications, CellRank correctly inferred fate outcomes and recovered previously known genes. In a lung regeneration example, CellRank predicted novel intermediate cell states on a dedifferentiation trajectory whose existence was validated experimentally. CellRank is an open-source software package that is already used by biologists and bioinformaticians around the world to analyze complex cellular dynamics in situations like cancer, reprogramming or regeneration.
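    The core idea of directing a cell-state graph with RNA velocity can be sketched in a few lines of NumPy: each cell's velocity vector is compared with the displacement toward each of its neighbors, and neighbors lying "downstream" of the velocity receive higher transition probability. This toy "velocity kernel" is only an illustration of the concept, not CellRank's actual implementation.
      # Toy "velocity kernel": direct a k-nearest-neighbor graph using RNA velocity.
      # Illustrative only; CellRank's real implementation differs in many details.
      import numpy as np

      rng = np.random.default_rng(0)
      n_cells, n_genes, k = 200, 30, 10
      X = rng.normal(size=(n_cells, n_genes))        # expression (e.g. in PCA space)
      V = rng.normal(size=(n_cells, n_genes))        # RNA-velocity vector per cell

      # k-nearest neighbors by Euclidean distance (the undirected "reconstruction" part).
      D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
      np.fill_diagonal(D, np.inf)
      neighbors = np.argsort(D, axis=1)[:, :k]

      # Direct the graph: score each neighbor by the cosine between the cell's
      # velocity and the displacement toward that neighbor, then normalize.
      T = np.zeros((n_cells, n_cells))
      for i in range(n_cells):
          disp = X[neighbors[i]] - X[i]
          cos = disp @ V[i] / (np.linalg.norm(disp, axis=1) * np.linalg.norm(V[i]) + 1e-12)
          w = np.exp(cos / 0.1)                      # sharper weights for aligned neighbors
          T[i, neighbors[i]] = w / w.sum()

      # Rows of T are directed transition probabilities; their long-run behavior
      # (e.g. absorbing regions) points to candidate terminal cell states.
      assert np.allclose(T.sum(axis=1), 1.0)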
    Visualizing spatial omics analysis
    Recent years have seen rapid development of technologies for measuring gene expression variation in tissue. Their advantage is that scientists can see cells in their spatial context and thus investigate principles of tissue organization and cellular communication. Researchers need flexible computational frameworks to store, integrate and visualize the growing diversity of such data. To tackle this challenge, Giovanni Palla, Hannah Spitzer, and colleagues developed a new computational framework called Squidpy, which enables analysts and developers to handle spatial gene expression data. Squidpy integrates tools for gene expression and image analysis to efficiently manipulate and interactively visualize spatial omics data. It is extensible and can be interfaced with a variety of machine learning tools in the Python ecosystem. Scientists around the world are already using it to analyze spatial molecular data.
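    A minimal sketch of the kind of spatial analysis Squidpy supports is shown below, following calls documented for the package; the input file and the "cell_type" column are placeholders, and exact function names can differ between versions.
      # Sketch of a basic Squidpy spatial-omics workflow.
      # "spatial_sample.h5ad" and the "cell_type" column are hypothetical.
      import scanpy as sc
      import squidpy as sq

      adata = sc.read_h5ad("spatial_sample.h5ad")    # AnnData with spatial coordinates

      # Build a spatial neighborhood graph from the cells' coordinates.
      sq.gr.spatial_neighbors(adata)

      # Ask which cell types are spatially enriched next to which others,
      # a simple readout of tissue organization and cellular communication.
      sq.gr.nhood_enrichment(adata, cluster_key="cell_type")
      sq.pl.nhood_enrichment(adata, cluster_key="cell_type")

      # Spatially variable genes via spatial autocorrelation (Moran's I).
      sq.gr.spatial_autocorr(adata, mode="moran")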

  • The puzzle of the 'lost' angular momentum

    In a closed physical system, the total angular momentum remains constant; this is one of physics’ fundamental conservation laws. In this context, angular momentum does not necessarily involve actual bodily rotation: magnetic materials carry angular momentum even when, seen from the outside, they are at rest. The physicists Albert Einstein and Wander Johannes de Haas demonstrated this as early as 1915.
    If such a magnetized material is bombarded with short pulses of laser light, it loses its magnetic order extremely quickly: within femtoseconds — millionths of a billionth of a second — it becomes demagnetized. The angular momentum of the electrons in the material — also called spin — thus decreases abruptly, much faster than the material can set itself in rotation. According to the conservation principle, however, the angular momentum cannot simply be lost. So where is the spin angular momentum transferred in such an extremely short time?
    The solution to the puzzle has now been published in the journal Nature. In the study, a team led by Konstanz researchers investigated the demagnetization of nickel crystals using ultrafast electron diffraction — a measuring method so precise in time and space that it can make structural changes visible at the atomic level. They showed that during demagnetization the electrons of the crystal transfer their angular momentum to the atoms of the crystal lattice within a few hundred femtoseconds. Much like passengers on a merry-go-round, the atoms are set in motion along tiny circular paths and thus balance the angular momentum. Only much later, and more slowly, does the macroscopic rotation effect named after Einstein and de Haas set in, which can be measured mechanically. These findings point to new ways of controlling angular momentum extremely quickly, opening up possibilities for improving magnetic information technologies and new research directions in spintronics.
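    In schematic form, the bookkeeping behind the experiment reads as follows (a simplified balance written here for illustration, not a formula taken from the paper):
      % Total angular momentum of the closed crystal is conserved:
      J_{\text{total}} = S_{\text{electrons}} + L_{\text{phonons}} + L_{\text{rotation}} = \text{const.}
      % Ultrafast demagnetization (a few hundred femtoseconds):
      \Delta S_{\text{electrons}} < 0 \;\Rightarrow\; \Delta L_{\text{phonons}} \approx -\Delta S_{\text{electrons}} \quad \text{(atoms move on tiny circular paths)}
      % Much later, the lattice hands the angular momentum to the sample as a whole:
      L_{\text{phonons}} \;\longrightarrow\; L_{\text{rotation}} \quad \text{(the measurable Einstein--de Haas rotation)}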
    Magnetism in metallic solids
    Magnetic phenomena have become an indispensable part of modern technology and play an important role especially in information processing and data storage. “The speed and efficiency of existing technologies is often limited by the comparatively long duration of magnetic switching processes,” explains Professor Peter Baum, experimental physicist at the University of Konstanz and one of the study’s leaders. All the more interesting for materials research, therefore, is a surprising phenomenon observed in nickel, among other materials: ultrafast demagnetization caused by bombardment with laser pulses.
    Like iron, nickel is a ferromagnetic material. Permanent magnets as we know them from everyday life, such as refrigerator magnets, can be made from such materials. The permanent magnetization results from a parallel arrangement of the magnetic moments of neighbouring particles in the material. “To illustrate this, we can imagine the magnetic moments as small arrows that all point in the same direction,” explains Professor Ulrich Nowak, theoretical physicist at the University of Konstanz and also one of the project leaders. Physically, these “arrows” and their direction arise mainly from the angular momentum, or spin, of the electrons in the ferromagnetic material.

  • Research advances technology of AI assistance for anesthesiologists

    A new study by researchers at MIT and Massachusetts General Hospital suggests the day may be approaching when advanced artificial intelligence systems could assist anesthesiologists in the operating room.
    In a special edition of Artificial Intelligence in Medicine, the team of neuroscientists, engineers and physicians demonstrated a machine learning algorithm for continuously automating dosing of the anesthetic drug propofol. Using an application of deep reinforcement learning, in which the software’s neural networks simultaneously learned how its dosing choices maintain unconsciousness and how to critique the efficacy of its own actions, the algorithm outperformed more traditional software in sophisticated, physiology-based simulations of patients. It also closely matched the performance of real anesthesiologists when showing what it would do to maintain unconsciousness given recorded data from nine real surgeries.
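    To make the setup concrete, here is a toy version of the closed-loop problem such software faces: a one-compartment drug model with a simple hand-tuned controller standing in for the learned policy. The model, parameters, reward, and controller are invented for illustration and are not the study's simulator or its reinforcement-learning agent.
      # Toy closed-loop anesthetic dosing problem (illustration only; not the MIT/MGH model).
      # State: estimated drug concentration. Action: infusion rate.
      # Reward: stay at the target unconsciousness level while using as little drug as possible.
      import numpy as np

      class ToyPropofolEnv:
          """One-compartment pharmacokinetic model with a sigmoidal effect curve."""
          def __init__(self, k_elim=0.1, dt=1.0, target_effect=0.8):
              self.k_elim, self.dt, self.target = k_elim, dt, target_effect
              self.conc = 0.0

          def effect(self):
              # Hypothetical dose-response: 0 = fully awake, 1 = deeply unconscious.
              return self.conc**2 / (1.0 + self.conc**2)

          def step(self, dose_rate):
              # Drug washes out at rate k_elim and is replenished by the infusion.
              self.conc += self.dt * (dose_rate - self.k_elim * self.conc)
              err = self.effect() - self.target
              reward = -abs(err) - 0.05 * dose_rate   # penalize both error and drug use
              return self.effect(), reward

      # A naive proportional controller as a stand-in for the learned policy:
      # a deep reinforcement-learning agent would instead adjust its dosing policy
      # (the "actor") using a learned critique of its own actions (the "critic").
      env = ToyPropofolEnv()
      effect = 0.0
      for t in range(120):
          dose = max(0.0, 0.5 * (env.target - effect))   # dose more when too "light"
          effect, reward = env.step(dose)

      print(f"final effect level: {effect:.2f} (target {env.target})")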
    The algorithm’s advances increase the feasibility of computers maintaining patient unconsciousness with no more drug than is needed, thereby freeing anesthesiologists for all their other responsibilities in the operating room, including making sure patients remain immobile, experience no pain, remain physiologically stable, and receive adequate oxygen, said co-lead authors Gabe Schamberg and Marcus Badgeley.
    “One can think of our goal as being analogous to an airplane’s auto-pilot where the captain is always in the cockpit paying attention,” said Schamberg, a former MIT postdoc who is also the study’s corresponding author. “Anesthesiologists have to simultaneously monitor numerous aspects of a patient’s physiological state, and so it makes sense to automate those aspects of patient care that we understand well.”
    Senior author Emery N. Brown, a neuroscientist at The Picower Institute for Learning and Memory and Institute for Medical Engineering and Science at MIT and an anesthesiologist at MGH, said the algorithm’s potential to help optimize drug dosing could improve patient care.
    “Algorithms such as this one allow anesthesiologists to maintain more careful, near-continuous vigilance over the patient during general anesthesia,” said Brown, the Edward Hood Taplin Professor of Computational Neuroscience and Health Sciences & Technology at MIT.

  • Artificial intelligence system rapidly predicts how two proteins will attach

    Antibodies, small proteins produced by the immune system, can attach to specific parts of a virus to neutralize it. As scientists continue to battle SARS-CoV-2, the virus that causes Covid-19, one possible weapon is a synthetic antibody that binds with the virus’ spike proteins to prevent the virus from entering a human cell.
    To develop a successful synthetic antibody, researchers must understand exactly how that attachment will happen. Proteins, with lumpy 3D structures containing many folds, can stick together in millions of combinations, so finding the right protein complex among almost countless candidates is extremely time-consuming.
    To streamline the process, MIT researchers created a machine-learning model that can directly predict the complex that will form when two proteins bind together. Their technique is between 80 and 500 times faster than state-of-the-art software methods, and often predicts protein structures that are closer to actual structures that have been observed experimentally.
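    Models of this kind treat docking as a rigid-body problem: predict matching "keypoints" on the two proteins, then compute the rotation and translation that superimpose them. That last step has a closed-form solution, the Kabsch algorithm, sketched below with random stand-in coordinates; the learned keypoint prediction itself is not shown, and none of this is the authors' code.
      # Kabsch algorithm: optimal rotation + translation aligning matched 3-D point sets.
      # In a docking model of this kind, the matched "keypoints" on the two proteins
      # would come from the neural network; random points stand in for them here.
      import numpy as np

      def kabsch(P, Q):
          """Return rotation R and translation t that best map points P onto points Q."""
          p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
          H = (P - p_mean).T @ (Q - q_mean)          # 3x3 covariance of centered points
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = q_mean - R @ p_mean
          return R, t

      # Stand-in keypoints: Q is P rotated and shifted, so kabsch() should recover the move.
      rng = np.random.default_rng(1)
      P = rng.normal(size=(10, 3))
      angle = np.pi / 5
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                         [np.sin(angle),  np.cos(angle), 0],
                         [0, 0, 1]])
      Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])

      R, t = kabsch(P, Q)
      print("max alignment error:", np.abs((P @ R.T + t) - Q).max())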
    This technique could help scientists better understand some biological processes that involve protein interactions, like DNA replication and repair; it could also speed up the process of developing new medicines.
    “Deep learning is very good at capturing interactions between different proteins that are otherwise difficult for chemists or biologists to write experimentally. Some of these interactions are very complicated, and people haven’t found good ways to express them. This deep-learning model can learn these types of interactions from data,” says Octavian-Eugen Ganea, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-lead author of the paper.
    Ganea’s co-lead author is Xinyuan Huang, a graduate student at ETH Zurich. MIT co-authors include Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in CSAIL, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering in CSAIL and a member of the Institute for Data, Systems, and Society. The research will be presented at the International Conference on Learning Representations.

  • New super-conductors could take data beyond zeroes and ones

    Remember flip-phones? Our smartphones may one day look just as obsolete thanks to spintronics, an incipient field of research promising to revolutionize the way our electronic devices send and receive signals.
    In most current technologies, data is encoded as a zero or a one, depending on the number of electrons that reach a capacitor. With spintronics, data is also transferred according to the direction in which these electrons spin.
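    In the simplest picture, adding spin doubles the number of distinguishable states per carrier: charge still encodes a conventional 0 or 1, while the spin direction contributes a second, independent bit. The tiny sketch below is a toy counting exercise, not a description of any specific device.
      # Toy illustration: charge alone gives 2 symbols per carrier; charge + spin gives 4.
      from itertools import product

      charge_states = ["0", "1"]        # conventional bit: few vs. many electrons
      spin_states = ["up", "down"]      # spintronic degree of freedom

      symbols = list(product(charge_states, spin_states))
      print(symbols)                    # [('0','up'), ('0','down'), ('1','up'), ('1','down')]
      print(len(symbols), "distinguishable states, i.e. 2 bits per carrier")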
    In a new study appearing this week in the Proceedings of the National Academy of Sciences, a team of Duke University and Weizmann Institute researchers led by Michael Therien, professor of Chemistry at Duke, report a keystone achievement in the field: the development of a conducting system that controls the spin of electrons and transmits a spin current over long distances, without the need for the ultra-cold temperatures required by typical spin-conductors.
    “The structures we present here are exciting because they define new strategies to generate large magnitude spin currents at room temperature,” said Chih-Hung Ko, first author of the paper and recent Duke chemistry Ph.D.
    Electrons are like spinning tops: spin-up electrons rotate clockwise, and spin-down electrons rotate counter-clockwise. Electrons with opposite spins can occupy the same volume, but electrons that spin in the same direction repel each other, like magnets of the same polarity.
    By controlling the way that electrons spin along a current, scientists can encode a new layer of information into an electric signal.

  • How Omicron escapes from antibodies

    A new study from MIT suggests that the dozens of mutations in the spike protein of the Omicron variant help it to evade all four of the classes of antibodies that can target the SARS-CoV-2 virus that causes Covid-19.
    This includes antibodies generated by vaccinated or previously infected people, as well as most of the monoclonal antibody treatments that have been developed, says Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Engineering and Health Sciences and Technology (HST) at MIT.
    Using a computational approach that allowed them to determine how mutated amino acids of the viral spike protein influence nearby amino acids, the researchers were able to get a multidimensional view of how the virus evades antibodies. According to Sasisekharan, the traditional approach of only examining changes in the virus’ genetic sequence reduces the complexity of the spike protein’s three-dimensional surface and doesn’t describe the multidimensional complexity of the protein surfaces that antibodies are attempting to bind to.
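    A much simpler cousin of this idea is to ask, for each mutated position, which other residues sit within a few angstroms of it on the folded spike. The sketch below does that with Biopython; the structure file, chain ID, and mutation list are placeholders, and this neighbor search is only a rough stand-in for the authors' more sophisticated surface analysis.
      # Find residues spatially close to mutated spike positions in a 3-D structure.
      # Illustration only; file name, chain ID, and mutation positions are placeholders.
      from Bio.PDB import PDBParser, NeighborSearch

      structure = PDBParser(QUIET=True).get_structure("spike", "spike_model.pdb")
      chain = structure[0]["A"]                      # one protomer of the trimer

      mutated_positions = [484, 493, 498, 501]       # example RBD sites, not a full Omicron list
      search = NeighborSearch(list(chain.get_atoms()))

      for pos in mutated_positions:
          try:
              residue = chain[pos]                   # Chain accepts plain residue numbers
          except KeyError:
              continue
          # Collect residue numbers of any atom within 5 angstroms of the mutated site.
          nearby = {atom.get_parent().get_id()[1]
                    for a in residue.get_atoms()
                    for atom in search.search(a.coord, 5.0)}
          nearby.discard(pos)
          print(pos, "->", sorted(nearby))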
    “It is important to get a more comprehensive picture of the many mutations seen in Omicron, especially in the context of the spike protein, given that the spike protein is vital for the virus’s function, and all the major vaccines are based on that protein,” he says. “There is a need for tools or approaches that can rapidly determine the impact of mutations in new virus variants of concern, especially for SARS-CoV-2.”
    Sasisekharan is the senior author of the study, which appears this week in Cell Reports Medicine. The lead author of the paper is MIT HST graduate student Nathaniel Miller. Technical associate Thomas Clark and research scientist Rahul Raman are also authors of the paper.
    Even though Omicron is able to evade most antibodies to some degree, vaccines still offer protection, Sasisekharan says.

  • Researchers develop highly accurate modeling tool to predict COVID-19 risk

    As new coronavirus variants emerge and quickly spread around the globe, both the public and policymakers are faced with a quandary: maintaining a semblance of normality, while also minimizing infections. While digital contact tracing apps offered promise, the adoption rate has been low, due in part to privacy concerns.
    At USC, researchers are advocating for a new approach to predict the chance of infection from Covid-19: combining anonymized cellphone location data with mobility patterns — broad patterns of how people move from place to place.
    To produce “risk scores” for specific locations and times, the team used a large dataset of anonymous, real-world location signals from cell phones across the US in 2019 and 2020. The system shows a 50% improvement in accuracy compared to current systems, said the researchers.
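    A stripped-down version of turning aggregated visit data into location-and-time risk scores might look like the sketch below. The input file, its columns, and the crowding-times-dwell-time weighting are invented for illustration; the USC model is considerably more elaborate.
      # Toy location-time risk score from aggregated mobility data (illustration only;
      # "visits.csv" and its columns are hypothetical, not the USC team's data or model).
      import pandas as pd

      # One row per (location, hour): how many anonymized devices were seen,
      # average minutes spent, and the venue's floor area in square meters.
      visits = pd.read_csv("visits.csv")   # columns: location_id, hour, n_devices, avg_minutes, area_m2

      # Crowding x dwell time: more people, packed more densely, for longer -> higher risk.
      visits["density"] = visits["n_devices"] / visits["area_m2"]
      visits["risk_score"] = visits["density"] * visits["avg_minutes"]

      # Normalize to 0-1 so scores are comparable across the city, then rank.
      visits["risk_score"] /= visits["risk_score"].max()
      riskiest = (visits.sort_values("risk_score", ascending=False)
                        .loc[:, ["location_id", "hour", "risk_score"]]
                        .head(10))
      print(riskiest)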
    “Our results show that it is possible to predict and target specific areas that are high-risk, as opposed to putting all businesses under one umbrella. Such risk-targeted policies can be significantly more effective, both for controlling Covid-19 and economically,” said lead author Sepanta Zeighami, a computer science Ph.D. student advised by Professor Cyrus Shahabi.
    “It’s also unlikely that Covid-19 will be the last pandemic in human history, so if we want to avoid the chaos of 2020 and the tragic losses while keeping daily life as unaffected as possible when the next pandemic happens, we need such data-driven approaches.”
    To address privacy concerns, the mobility data comes in an aggregated format, allowing the researchers to see patterns without identifying individual users. The data is not used for contact tracing, for identifying infected individuals, or for tracking where they go, the researchers said.