More stories

  • 'Digital twins,' an aid to give individual patients the right treatment at the right time

    An international team of researchers has developed advanced computer models, or “digital twins,” of diseases, with the goal of improving diagnosis and treatment. They used one such model to identify the most important disease protein in hay fever. The study, which has just been published in the open access journal Genome Medicine, underlines the complexity of disease and the necessity of using the right treatment at the right time.
    Why is a drug effective against a certain illness in some individuals, but not in others? With common diseases, medication is ineffective in 40-70 percent of patients. One reason is that diseases are seldom caused by a single “fault” that can be easily treated. Instead, in most diseases the symptoms result from altered interactions between thousands of genes in many different cell types. Timing also matters: disease processes often evolve over long periods, and because we may not notice a developing disease until symptoms appear, diagnosis and treatment are frequently delayed, which may contribute to insufficient medical efficacy.
    In a recent study, an international research team aimed to bridge the gap between this complexity and modern health care by constructing computational disease models of the altered gene interactions across many cell types at different time points. The researchers’ long-term goal is to develop such computational models into “digital twins” of individual patients’ diseases. Such medical digital twins might be used to tailor medication so that each patient could be treated with the right drug at the right time. Ideally, each twin could be matched with and treated with thousands of drugs in the computer, before actual treatment on the patient begins.
    The researchers started by developing methods to construct digital twins of patients with hay fever. They used a technique, single-cell RNA sequencing, to determine all gene activity in each of thousands of individual immune cells — more specifically white blood cells. Since these interactions between genes and cell types may differ between different time points in the same patient, the researchers measured gene activity at different time points before and after stimulating white blood cells with pollen.
    In order to construct computer models of all the data, the researchers used network analyses. Networks can be used to describe and analyse complex systems. For example, a football team could be analysed as a network based on the passes between the players. The player that passes most to other players during the whole match may be most important in that network. Similar principles were applied to construct the computer models, or “twins,” as well as to identify the most important disease protein.
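    The football analogy maps directly onto a small computation. Below is a minimal sketch of weighted degree centrality, the "passes most" idea the article describes; the players and pass counts are invented for illustration, and the study itself used far more sophisticated network analyses.

```python
# Toy pass network: weighted degree centrality identifies the most
# "connected" player, analogous to ranking proteins in a disease network.
# Players and pass counts are made up for illustration.
passes = {
    ("Alva", "Bo"): 12, ("Bo", "Alva"): 9,
    ("Bo", "Cyn"): 15, ("Cyn", "Alva"): 7,
    ("Cyn", "Dee"): 4,  ("Dee", "Bo"): 6,
}

def centrality(edges):
    """Sum of passes sent and received per player (weighted degree)."""
    score = {}
    for (src, dst), n in edges.items():
        score[src] = score.get(src, 0) + n
        score[dst] = score.get(dst, 0) + n
    return score

scores = centrality(passes)
key_player = max(scores, key=scores.get)
print(scores)
print("Most central:", key_player)  # Bo, with 42 passes in and out
```

    In the disease networks, the same logic applies with genes or proteins as nodes and measured interactions as weighted edges.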
    In the current study, the researchers found that multiple proteins and signalling cascades were important in seasonal allergies, and that these varied greatly across cell types and at different stages of the disease.
    “We can see that these are extremely complicated changes that occur in different phases of a disease. The variation between different time points means that you have to treat the patient with the right medicine at the right time,” says Dr Mikael Benson, professor at Linköping University, who led the study.
    Finally, the researchers identified the most important protein in the twin model of hay fever. They show that inhibiting this protein, called PDGF-BB, in experiments with cells was more effective than using a known allergy drug directed against another protein, called IL-4.
    The study also demonstrated that the methods could potentially be applied to give the right treatment at the right time in other immunological diseases, like rheumatism or inflammatory bowel diseases. Clinical implementation will require international collaborations between universities, hospitals and companies.
    The study is based on an interdisciplinary collaboration between 15 researchers in Sweden, the US, Korea and China. The research has received financial support from the EU, NIH, the Swedish and Nordic Research Councils, and the Swedish Cancer Society.
    Story Source:
    Materials provided by Linköping University. Original written by Karin Söderlund Leifler. Note: Content may be edited for style and length.

  • Self-propelled, endlessly programmable artificial cilia

    For years, scientists have been attempting to engineer tiny, artificial cilia for miniature robotic systems that can perform complex motions, including bending, twisting, and reversing. Building these smaller-than-a-human-hair microstructures typically requires multi-step fabrication processes and varying stimuli to create the complex movements, limiting their wide-scale applications.
    Now, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a single-material, single-stimuli microstructure that can outmaneuver even living cilia. These programmable, micron-scale structures could be used for a range of applications, including soft robotics, biocompatible medical devices, and even dynamic information encryption.
    The research is published in Nature.
    “Innovations in adaptive self-regulated materials that are capable of a diverse set of programmed motions represent a very active field, which is being tackled by interdisciplinary teams of scientists and engineers,” said Joanna Aizenberg, the Amy Smith Berylson Professor of Materials Science and Professor of Chemistry & Chemical Biology at SEAS and senior author of the paper. “Advances achieved in this field may significantly impact the ways we design materials and devices for a variety of applications, including robotics, medicine and information technologies.”
    Unlike previous research, which relied mostly on complex multi-component materials to achieve programmable movement of reconfigurable structural elements, Aizenberg and her team designed a microstructure pillar made of a single material — a photoresponsive liquid crystal elastomer. Because of the way the fundamental building blocks of the liquid crystal elastomer are aligned, when light hits the microstructure, those building blocks realign and the structure changes shape.
    As this shape change occurs, two things happen. First, the spot where the light hits becomes transparent, allowing the light to penetrate further into the material, causing additional deformations. Second, as the material deforms and the shape moves, a new spot on the pillar is exposed to light, causing that area to also change shape.

  • Replacing some meat with microbial protein could help fight climate change

    “Fungi Fridays” could save a lot of trees — and take a bite out of greenhouse gas emissions. Eating one-fifth less red meat and instead munching on microbial proteins derived from fungi or algae could cut annual deforestation in half by 2050, researchers report May 5 in Nature.

    Raising cattle and other ruminants contributes methane and nitrous oxide to the atmosphere, while clearing forests for pasture lands adds carbon dioxide (SN: 4/4/22; SN: 7/13/21). So the hunt is on for environmentally friendly substitutes, such as lab-grown hamburgers and cricket farming (SN: 9/20/18; SN: 5/2/19).

    Another alternative is microbial protein, made from cells cultivated in a laboratory and nurtured with glucose. Fermented fungal spores, for example, produce a dense, doughy substance called mycoprotein, while fermented algae produce spirulina, a dietary supplement.

    Cell-cultured foods do require sugar from croplands, but studies show that mycoprotein produces fewer greenhouse gas emissions and uses less land and water than raising cattle, says Florian Humpenöder, a climate modeler at Potsdam Institute for Climate Impact Research in Germany. However, a full comparison of foods’ future environmental impacts also requires accounting for changes in population, lifestyle, dietary patterns and technology, he says.

    So Humpenöder and colleagues incorporated projected socioeconomic changes into computer simulations of land use and deforestation from 2020 through 2050. Then they simulated four scenarios, substituting microbial protein for 0 percent, 20 percent, 50 percent or 80 percent of the global red meat diet by 2050.

    A little substitution went a long way, the team found: Just 20 percent microbial protein substitution cut annual deforestation rates — and associated CO2 emissions — by 56 percent from 2020 to 2050.

    Eating more microbial proteins could be part of a portfolio of strategies to address the climate and biodiversity crises — alongside measures to protect forests and decarbonize electricity generation, Humpenöder says.

  • Scientists observe quantum speed-up in optimization problems

    A collaboration between Harvard University and scientists at QuEra Computing, MIT, the University of Innsbruck and other institutions has demonstrated a breakthrough application of neutral-atom quantum processors to solve problems of practical use.
    The study was co-led by Mikhail Lukin, the George Vasmer Leverett Professor of Physics at Harvard and co-director of the Harvard Quantum Initiative; Markus Greiner, George Vasmer Leverett Professor of Physics; and Vladan Vuletic, Lester Wolfe Professor of Physics at MIT. The paper, titled “Quantum Optimization of Maximum Independent Set using Rydberg Atom Arrays,” was published on May 5, 2022, in Science.
    Previously, neutral-atom quantum processors had been proposed to efficiently encode certain hard combinatorial optimization problems. In this landmark publication, the authors not only deploy the first implementation of efficient quantum optimization on a real quantum computer, but also showcase unprecedented quantum hardware power.
    The calculations were performed on Harvard’s quantum processor of 289 qubits operating in the analog mode, with effective circuit depths up to 32. Unlike in previous examples of quantum optimization, the large system size and circuit depth used in this work made it impossible to use classical simulations to pre-optimize the control parameters. A quantum-classical hybrid algorithm had to be deployed in a closed loop, with direct, automated feedback to the quantum processor.
    This combination of system size, circuit depth, and outstanding quantum control culminated in a quantum leap: problem instances were found with empirically better-than-expected performance on the quantum processor versus classical heuristics. Characterizing the difficulty of the optimization problem instances with a “hardness parameter,” the team identified cases that challenged classical computers, but that were more efficiently solved with the neutral-atom quantum processor. A super-linear quantum speed-up was found compared to a class of generic classical algorithms. QuEra’s open-source packages GenericTensorNetworks.jl and Bloqade.jl were instrumental in discovering hard instances and understanding quantum performance.
    “A deep understanding of the underlying physics of the quantum algorithm as well as the fundamental limitations of its classical counterpart allowed us to realize ways for the quantum machine to achieve a speedup,” says Madelyn Cain, Harvard graduate student and one of the lead authors. The importance of match-making between problem and quantum hardware is central to this work: “In the near future, to extract as much quantum power as possible, it is critical to identify problems that can be natively mapped to the specific quantum architecture, with little to no overhead,” said Shengtao Wang, Senior Scientist at QuEra Computing and one of the coinventors of the quantum algorithms used in this work, “and we achieved exactly that in this demonstration.”
    The “maximum independent set” problem, solved by the team, is a paradigmatic hard task in computer science and has broad applications in logistics, network design, finance, and more. The identification of classically challenging problem instances with quantum-accelerated solutions paves the path for applying quantum computing to cater to real-world industrial and social needs.
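    For readers unfamiliar with the problem, a maximum independent set is the largest set of vertices in a graph with no edge between any two of them. The brute-force sketch below (on a toy 5-cycle graph, unrelated to the study's instances) illustrates why exact classical search scales poorly: it must examine exponentially many vertex subsets.

```python
# Maximum independent set (MIS) by exhaustive search: find the largest
# set of vertices with no edge between any pair. Checking subsets from
# largest to smallest takes exponential time in the worst case, which is
# what makes hard instances attractive targets for quantum approaches.
from itertools import combinations

def max_independent_set(n, edges):
    """Return one largest independent set of vertices 0..n-1."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):          # try the biggest sets first
        for subset in combinations(range(n), size):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return set(subset)
    return set()

# A 5-cycle: any 3 of its vertices contain an edge, so the MIS has size 2.
mis = max_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
print(mis, len(mis))
```

    Real instances of interest have hundreds of vertices, far beyond what this kind of enumeration can handle, which is why the study's quantum-classical comparison against optimized classical heuristics is the meaningful benchmark.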
    “These results represent the first step towards bringing useful quantum advantage to hard optimization problems relevant to multiple industries,” added Alex Keesling, CEO of QuEra Computing and co-author on the published work. “We are very happy to see quantum computing start to reach the necessary level of maturity where the hardware can inform the development of algorithms beyond what can be predicted in advance with classical compute methods. Moreover, the presence of a quantum speedup for hard problem instances is extremely encouraging. These results help us develop better algorithms and more advanced hardware to tackle some of the hardest, most relevant computational problems.”
    Story Source:
    Materials provided by Harvard University. Note: Content may be edited for style and length.

  • Mechanism 'splits' electron spins in magnetic material

    By holding the right material at the right angle, Cornell researchers have discovered a strategy to switch the magnetization in thin layers of a ferromagnet — a technique that could eventually lead to the development of more energy-efficient magnetic memory devices.
    The team’s paper, “Tilted Spin Current Generated by the Collinear Antiferromagnet Ruthenium Dioxide,” published May 5 in Nature Electronics. The paper’s co-lead authors are postdoctoral researcher Arnab Bose and doctoral students Nathaniel Schreiber and Rakshit Jain.
    For decades, physicists have tried to change the orientation of electron spins in magnetic materials by manipulating them with magnetic fields. But researchers including Dan Ralph, the F.R. Newman Professor of Physics in the College of Arts and Sciences and the paper’s senior author, have instead looked to using spin currents carried by electrons, which exist when electrons have spins generally oriented in one direction.
    When these spin currents interact with a thin magnetic layer, they transfer their angular momentum and generate enough torque to switch the magnetization 180 degrees. (The process of switching this magnetic orientation is how one writes information in magnetic memory devices.)
    Ralph’s group has focused on finding ways to control the direction of the spin in spin currents by generating them with antiferromagnetic materials. In antiferromagnets, every other electron spin points in the opposite direction, hence there is no net magnetization.
    “Essentially, the antiferromagnetic order can lower the symmetries of the samples enough to allow unconventional orientations of spin current to exist,” Ralph said. “The mechanism of antiferromagnets seems to give a way of actually getting fairly strong spin currents, too.”
    The team had been experimenting with the antiferromagnet ruthenium dioxide and measuring the ways its spin currents tilted the magnetization in a thin layer of a nickel-iron magnetic alloy called Permalloy, which is a soft ferromagnet. In order to map out the different components of the torque, they measured its effects at a variety of magnetic field angles.

  • Using AI to analyze large amounts of biological data

    Researchers at the University of Missouri are applying a form of artificial intelligence (AI) — previously used to analyze how National Basketball Association (NBA) players move their bodies — to now help scientists develop new drug therapies for medical treatments targeting cancers and other diseases.
    The type of AI, called a graph neural network, can help scientists speed up sifting through the large amounts of data generated by studying protein dynamics. This approach can provide new ways to identify target sites on proteins for drugs to work effectively, said Dong Xu, a Curators’ Distinguished Professor in the Department of Electrical Engineering and Computer Science at the MU College of Engineering and one of the study’s authors.
    “Previously, drug designers may have known about a couple places on a protein’s structure to target with their therapies,” said Xu, who is also the Paul K. and Dianne Shumaker Professor in bioinformatics. “A novel outcome of this method is that we identified a pathway between different areas of the protein structure, which could potentially allow scientists who are designing drugs to see additional possible target sites for delivering their targeted therapies. This can increase the chances that the therapy may be successful.”
    Xu said they can also simulate how proteins can change in relation to different conditions, such as the development of cancer, and then use that information to infer their relationships with other bodily functions.
    “With machine learning we can really study what are the important interactions within different areas of the protein structure,” Xu said. “Our method provides a systematic review of the data involved when studying proteins, as well as a protein’s energy state, which could help when identifying any possible mutation’s effect. This is important because protein mutations can enhance the possibility of cancers and other diseases developing in the body.”
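    As a rough illustration of the core idea behind graph neural networks, the sketch below performs one round of message passing on a toy graph: each node updates its feature vector by aggregating its own and its neighbors' features. This is a conceptual simplification with invented values; the study's actual model (neural relational inference) learns these updates from molecular dynamics simulations and is far more sophisticated.

```python
# One round of message passing, the building block of a graph neural
# network. Nodes could represent protein residues; edges, their contacts.
# All numbers here are toy values for illustration only.
adjacency = {0: [1], 1: [0, 2], 2: [1]}          # a 3-node chain graph
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}

def message_pass(adj, feats):
    """New feature per node = mean of its own and its neighbors' features."""
    new = {}
    for node, nbrs in adj.items():
        group = [feats[node]] + [feats[n] for n in nbrs]
        new[node] = [sum(col) / len(group) for col in zip(*group)]
    return new

updated = message_pass(adjacency, features)
print(updated)
```

    Stacking many such rounds, with learned weights instead of a plain mean, lets information propagate across the whole graph, which is how distant regions of a protein structure can be linked computationally.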
    “Neural relational inference to learn long-range allosteric interactions in proteins from molecular dynamics simulations” was published in Nature Communications. Juexin Wang at MU, and Jingxuan Zhu and Weiwei Han at Jilin University in China, also contributed to this study. Funding was provided by the China Scholarship Council and the Overseas Cooperation Project of Jilin Province, which were used to support Jingxuan Zhu to conduct this research at MU, as well as the National Institute of General Medical Sciences of the National Institutes of Health. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agencies.
    Story Source:
    Materials provided by University of Missouri-Columbia. Note: Content may be edited for style and length.

  • 'Metalens' could disrupt vacuum UV market

    Rice University photonics researchers have created a potentially disruptive technology for the ultraviolet optics market.
    By precisely etching hundreds of tiny triangles on the surface of a microscopic film of zinc oxide, nanophotonics pioneer Naomi Halas and colleagues created a “metalens” that transforms incoming long-wave UV (UV-A) into a focused output of vacuum UV (VUV) radiation. VUV is used in semiconductor manufacturing, photochemistry and materials science and has historically been costly to work with, in part because it is absorbed by almost all types of glass used to make conventional lenses.
    “This work is particularly promising in light of recent demonstrations that chip manufacturers can scale up the production of metasurfaces with CMOS-compatible processes,” said Halas, co-corresponding author of a metalens demonstration study published in Science Advances. “This is a fundamental study, but it clearly points to a new strategy for high-throughput manufacturing of compact VUV optical components and devices.”
    Halas’ team showed its microscopic metalens could convert 394-nanometer UV into a focused output of 197-nanometer VUV. The disc-shaped metalens is a transparent sheet of zinc oxide that is thinner than a sheet of paper and just 45 millionths of a meter in diameter. In the demonstration, a 394-nanometer UV-A laser was shined at the back of the disc, and researchers measured the light that emerged from the other side.
    Study co-first author Catherine Arndt, an applied physics graduate student in Halas’ research group, said the key feature of the metalens is its interface, a front surface that is studded with concentric circles of tiny triangles.
    “The interface is where all of the physics is happening,” she said. “We’re actually imparting a phase shift, changing both how quickly the light is moving and the direction it’s traveling. We don’t have to collect the light output because we use electrodynamics to redirect it at the interface where we generate it.”
    Violet light has the shortest wavelength visible to humans. Ultraviolet wavelengths are shorter still, ranging from 400 nanometers down to 10 nanometers. Vacuum UV, with wavelengths between 100 and 200 nanometers, is so named because it is strongly absorbed by oxygen. Using VUV light today typically requires a vacuum chamber or other specialized environment, as well as machinery to generate and focus VUV.
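    The reported numbers, 394 nanometers in and 197 nanometers out, are consistent with second-harmonic generation, in which two input photons combine into one photon with twice the energy and half the wavelength. A quick check of that arithmetic follows; note the mechanism label is an inference from the numbers, not a claim quoted from the paper.

```python
# Frequency doubling halves the wavelength: photon energy E = h*c/lambda,
# so halving lambda doubles E. Verify with the reported 394 nm -> 197 nm.
H = 6.62607015e-34   # Planck constant, J*s (exact, SI definition)
C = 299_792_458      # speed of light, m/s (exact, SI definition)

def photon_energy_j(wavelength_nm):
    """Energy in joules of one photon of the given wavelength."""
    return H * C / (wavelength_nm * 1e-9)

lam_in, lam_out = 394.0, 197.0
assert lam_in / 2 == lam_out          # output is exactly half the wavelength
ratio = photon_energy_j(lam_out) / photon_energy_j(lam_in)
print(round(ratio, 6))                # output photon has 2x the energy
```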

  • New shape memory alloy discovered through artificial intelligence framework

    Funded by the National Science Foundation’s Designing Materials to Revolutionize Our Engineering Future (DMREF) Program, researchers from the Department of Materials Science and Engineering at Texas A&M University used an Artificial Intelligence Materials Selection framework (AIMS) to discover a new shape memory alloy. The shape memory alloy showed the highest efficiency during operation achieved thus far for nickel-titanium-based materials. In addition, their data-driven framework offers proof of concept for future materials development.
    Shape memory alloys are utilized in various fields where compact, lightweight and solid-state actuation is needed, replacing hydraulic or pneumatic actuators, because they can deform when cold and then return to their original shape when heated. This unique property is critical for applications, such as airplane wings, jet engines and automotive components, that must withstand repeated, recoverable large-shape changes.
    There have been many advancements in shape memory alloys since their beginnings in the mid-1960s, but at a cost: understanding and discovering new shape memory alloys has required extensive experimentation and ad hoc trial and error. Although these efforts have been documented and have helped further shape memory alloy applications, new discoveries have come only about once a decade, with a significant composition or system emerging roughly every 10 years. Moreover, even with these advances, shape memory alloys are hindered by low energy efficiency caused by incompatibilities in their microstructure during the large shape change, and they are notoriously difficult to design from scratch.
    To address these shortcomings, Texas A&M researchers used experimental data to create an AIMS computational framework capable of determining optimal material compositions and processing routes, which led to the discovery of a new shape memory alloy composition.
    “When designing materials, sometimes you have multiple objectives or constraints that conflict, which is very difficult to work around,” said Dr. Ibrahim Karaman, Chevron Professor I and materials science and engineering department head. “Using our machine-learning framework, we can use experimental data to find hidden correlations between different materials’ features to see if we can design new materials.”
    The shape memory alloy found during the study using AIMS was predicted and proven to achieve the narrowest hysteresis ever recorded. In other words, the material showed the lowest energy loss when converting thermal energy to mechanical work. The material showcased high efficiency when subject to thermal cycling due to its extremely small transformation temperature window. The material also exhibited excellent cyclic stability under repeated actuation.
    A nickel-titanium-copper composition is typical for shape memory alloys. Nickel-titanium-copper alloys typically have titanium equal to 50% and form a single-phase material. Using machine learning, the researchers predicted a different composition, with titanium equal to 47% and copper equal to 21%. Although this composition lies in the two-phase region and forms particles, those particles help enhance the material’s properties, explained William Trehern, doctoral student and graduate research assistant in the materials science and engineering department and the publication’s first author.
    In particular, this high-efficiency shape memory alloy lends itself to thermal energy harvesting, which requires materials that can capture waste energy produced by machines and put it to use, and thermal energy storage, which is used for cooling electronic devices.
    More notably, the AIMS framework offers the opportunity to use machine-learning techniques in materials science. The researchers see potential to discover more shape memory alloy chemistries with desired characteristics for various other applications.
    “It is a revelation to use machine learning to find connections that our brain or known physical principles may not be able to explain,” said Karaman. “We can use data science and machine learning to accelerate the rate of materials discovery. I also believe that we can potentially discover new physics or mechanisms behind materials behavior that we did not know before if we pay attention to the connections machine learning can find.”
    Other contributors include Dr. Raymundo Arróyave and Dr. Kadri Can Atli, professors in the materials science and engineering department, and materials science and engineering undergraduate student Risheil Ortiz-Ayala.
    “While machine learning is now widely used in materials science, most approaches to date focus on predicting the properties of a material without necessarily explaining how to process it to achieve target properties,” said Arróyave. “Here, the framework looked not only at the chemical composition of candidate materials, but also the processing necessary to attain the properties of interest.”
    Story Source:
    Materials provided by Texas A&M University. Original written by Michelle Revels. Note: Content may be edited for style and length.