More stories

  • Artificial intelligence reduces a 100,000-equation quantum physics problem to only four equations

    Using artificial intelligence, physicists have compressed a daunting quantum problem that until now required 100,000 equations into a bite-size task of as few as four equations — all without sacrificing accuracy. The work, published in the September 23 issue of Physical Review Letters, could revolutionize how scientists investigate systems containing many interacting electrons. Moreover, if scalable to other problems, the approach could potentially aid in the design of materials with sought-after properties such as superconductivity or utility for clean energy generation.
    “We start with this huge object of all these coupled-together differential equations; then we’re using machine learning to turn it into something so small you can count it on your fingers,” says study lead author Domenico Di Sante, a visiting research fellow at the Flatiron Institute’s Center for Computational Quantum Physics (CCQ) in New York City and an assistant professor at the University of Bologna in Italy.
    The formidable problem concerns how electrons behave as they move on a gridlike lattice. When two electrons occupy the same lattice site, they interact. This setup, known as the Hubbard model, is an idealization of several important classes of materials and enables scientists to learn how electron behavior gives rise to sought-after phases of matter, such as superconductivity, in which electrons flow through a material without resistance. The model also serves as a testing ground for new methods before they’re unleashed on more complex quantum systems.
    The Hubbard model is deceptively simple, however. Even for a modest number of electrons, and even with cutting-edge computational approaches, the problem demands serious computing power. That’s because when electrons interact, their fates can become quantum mechanically entangled: Even once they’re far apart on different lattice sites, the two electrons can’t be treated individually, so physicists must deal with all the electrons at once rather than one at a time. With more electrons, more entanglements crop up, making the computational challenge exponentially harder.
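    A quick way to make that hardness concrete is to look at the smallest version of the problem. The sketch below (a minimal illustration, not the study’s code) diagonalizes the two-site Hubbard model at half filling, whose ground-state energy has a textbook closed form, and prints how fast the state space grows with lattice size:
    ```python
    import numpy as np

    # Two-site Hubbard model at half filling (Sz = 0 sector).
    # Basis: |up,down>, |down,up>, |updown,0>, |0,updown>
    t, U = 1.0, 4.0  # hopping amplitude and on-site repulsion
    H = np.array([
        [0.0, 0.0,  -t,  -t],
        [0.0, 0.0,   t,   t],
        [ -t,   t,   U, 0.0],
        [ -t,   t, 0.0,   U],
    ])
    E0 = np.linalg.eigvalsh(H).min()
    exact = (U - np.sqrt(U**2 + 16 * t**2)) / 2  # textbook analytic result
    print(f"ground-state energy: {E0:.6f}  (exact: {exact:.6f})")

    # Each site can be empty, spin-up, spin-down or doubly occupied, so the
    # Hilbert space dimension grows as 4^N -- the exponential wall described above.
    for n in (2, 4, 8, 16, 32):
        print(f"{n:>2} sites -> 4^{n} = {4**n:,} basis states")
    ```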
    One way of studying a quantum system is by using what’s called a renormalization group. That’s a mathematical apparatus physicists use to look at how the behavior of a system — such as the Hubbard model — changes when scientists modify properties such as temperature or look at the properties on different scales. Unfortunately, a renormalization group that keeps track of all possible couplings between electrons and doesn’t sacrifice anything can contain tens of thousands, hundreds of thousands or even millions of individual equations that need to be solved. On top of that, the equations are tricky: Each represents a pair of electrons interacting.
    Di Sante and his colleagues wondered if they could use a machine learning tool known as a neural network to make the renormalization group more manageable. The neural network is like a cross between a frantic switchboard operator and survival-of-the-fittest evolution. First, the machine learning program creates connections within the full-size renormalization group. The neural network then tweaks the strengths of those connections until it finds a small set of equations that generates the same solution as the original, jumbo-size renormalization group. The program’s output captured the Hubbard model’s physics even with just four equations.
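    In spirit (though not in detail; the toy system, observable, loss and dimensions below are illustrative assumptions, not the paper’s method), the compression can be sketched as fitting a small differential-equation system whose solution reproduces an observable of a much larger one:
    ```python
    import torch

    torch.manual_seed(0)

    # "Full" problem: 100 coupled linear ODEs dx/dt = A x, standing in for the
    # huge system of coupled flow equations. We track one observable, sum(x).
    N, T, dt = 100, 200, 0.01
    A = -torch.eye(N) + 0.1 * torch.randn(N, N) / N**0.5
    x = torch.randn(N)
    target = []
    for _ in range(T):                       # Euler integration of the full system
        x = x + dt * (A @ x)
        target.append(x.sum())
    target = torch.stack(target)

    # "Compressed" problem: only 4 learned equations dz/dt = B z, read out via c.
    B = torch.zeros(4, 4, requires_grad=True)
    c = torch.randn(4, requires_grad=True)
    z0 = torch.randn(4, requires_grad=True)
    opt = torch.optim.Adam([B, c, z0], lr=0.05)

    for step in range(2000):                 # tune the small system to match
        z, preds = z0, []
        for _ in range(T):
            z = z + dt * (B @ z)
            preds.append(c @ z)
        loss = ((torch.stack(preds) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

    print(f"mismatch between 100-equation and 4-equation models: {loss.item():.2e}")
    ```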
    “It’s essentially a machine that has the power to discover hidden patterns,” Di Sante says. “When we saw the result, we said, ‘Wow, this is more than what we expected.’ We were really able to capture the relevant physics.”
    Training the machine learning program required a lot of computational muscle, and the program ran for entire weeks. The good news, Di Sante says, is that now that they have their program coached, they can adapt it to work on other problems without having to start from scratch. He and his collaborators are also investigating just what the machine learning is actually “learning” about the system, which could provide additional insights that might otherwise be hard for physicists to decipher.
    Ultimately, the biggest open question is how well the new approach works on more complex quantum systems such as materials in which electrons interact at long distances. In addition, there are exciting possibilities for using the technique in other fields that deal with renormalization groups, Di Sante says, such as cosmology and neuroscience.
    Di Sante co-authored the new study with CCQ guest researcher Matija Medvidović (a graduate student at Columbia University), Alessandro Toschi of TU Wien in Vienna, Giorgio Sangiovanni of the University of Würzburg in Germany, Cesare Franchini of the University of Bologna in Italy, CCQ and Center for Computational Mathematics senior research scientist Anirvan M. Sengupta, and CCQ co-director Andy Millis. Di Sante’s time at the CCQ was supported by a Marie Curie International Fellowship, which encourages transnational scientific collaboration.
    Story Source:
    Materials provided by Simons Foundation. Original written by Thomas Sumner. Note: Content may be edited for style and length.

  • 'Placenta-on-a-chip' mimics malaria-infected nutrient exchange between mother-fetus

    Placental malaria as a consequence of Plasmodium falciparum infections can lead to severe complications for both mother and child. Each year, placental malaria causes nearly 200,000 newborn deaths, mainly due to low birth weight, as well as 10,000 maternal deaths. Placental malaria results from parasite-infected red blood cells that get stuck within tree-like branch structures that make up the placenta.
    Research on the human placenta is experimentally challenging because of ethical considerations and the inaccessibility of the living organ. The anatomy of the human placenta and the architecture of the maternal-fetal interface, such as the boundary between maternal and fetal blood, are complex and cannot be easily reconstructed in their entirety using modern in vitro models.
    Researchers from Florida Atlantic University’s College of Engineering and Computer Science and Schmidt College of Medicine have developed a placenta-on-a-chip model that mimics the nutrient exchange between the fetus and mother under the influence of placental malaria. Combining microbiology with engineering technologies, this novel 3D model uses a single microfluidic chip to study the complicated processes that take place in malaria-infected placenta as well as other placenta-related diseases and pathologies.
    The placenta-on-a-chip simulates blood flow and mimics the microenvironment of the malaria-infected placenta under this flow condition. Using this method, the researchers can closely examine what happens as infected red blood cells interact with the placental vasculature. The microdevice enables them to measure glucose diffusion across the modeled placental barrier and the effects of blood infected with a P. falciparum line that adheres to the surface of the placenta via a placenta-expressed molecule called CSA (chondroitin sulfate A).
    For the study, trophoblasts (the outer-layer cells of the placenta) and human umbilical vein endothelial cells were cultured on opposite sides of an extracellular matrix gel in a compartmental microfluidic system, forming a physiological barrier between two co-flowing tubular channels to mimic a simplified maternal-fetal interface in placental villi.
    Results, published in Scientific Reports, demonstrated that CSA-binding infected erythrocytes added resistance to the simulated placental barrier for glucose perfusion and decreased glucose transfer across it. Comparing the glucose transport rate across the barrier when uninfected versus P. falciparum-infected blood flows over the outer-layer cells helps researchers better understand this important aspect of placental malaria pathology, and the device could potentially be used as a model to study ways to treat placental malaria.
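    The transport being measured is, at bottom, diffusion across a barrier, which makes the finding easy to state quantitatively. The sketch below applies Fick’s first law with hypothetical permeability values (the story reports no such numbers; these are purely illustrative) to show how added resistance lowers glucose flux:
    ```python
    # Fick's-first-law sketch of glucose flux across a barrier: J = P * dC.
    # All parameter values are hypothetical illustrations, not measurements.

    def glucose_flux(permeability_cm_per_s: float,
                     maternal_mM: float = 5.0,
                     fetal_mM: float = 3.0) -> float:
        """Flux in nmol/(cm^2*s) for a given barrier permeability."""
        dC = (maternal_mM - fetal_mM) * 1e3   # 1 mM = 1e3 nmol/cm^3
        return permeability_cm_per_s * dC

    P_healthy = 2.0e-5    # hypothetical permeability of an uninfected barrier, cm/s
    P_infected = 1.2e-5   # lower: adhered infected red cells add resistance

    print(f"uninfected barrier: {glucose_flux(P_healthy):.3f} nmol/cm^2/s")
    print(f"infected barrier:   {glucose_flux(P_infected):.3f} nmol/cm^2/s")
    ```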
    “Despite advances in biosensing and live cell imaging, interpreting transport across the placental barrier remains challenging. This is because placental nutrient transport is a complex problem that involves multiple cell types, multi-layer structures, as well as coupling between cell consumption and diffusion across the placental barrier,” said Sarah E. Du, Ph.D., senior author and an associate professor in FAU’s Department of Ocean and Mechanical Engineering. “Our technology supports formation of microengineered placental barriers and mimics blood circulations, which provides alternative approaches for testing and screening.”
    Most of the molecular exchange between maternal and fetal blood occurs in branching tree-like structures called villous trees. Because placental malaria may start only after the beginning of the second trimester, when the intervillous space opens to infected red blood cells and white blood cells, the researchers modeled the maternal-fetal interface as it forms in the second half of pregnancy.
    “This study provides vital information on the exchange of nutrients between mother and fetus affected by malaria,” said Stella Batalama, Ph.D., dean, FAU College of Engineering and Computer Science. “Studying the molecular transport between maternal and fetal compartments may help to understand some of the pathophysiological mechanisms in placental malaria. Importantly, this novel microfluidic device developed by our researchers at Florida Atlantic University could serve as a model for other placenta-relevant diseases.”
    Study co-authors are Babak Mosavati, Ph.D., a recent graduate in FAU’s College of Engineering and Computer Science; and Andrew Oleinikov, Ph.D., a professor of biomedical science, FAU Schmidt College of Medicine.
    The research was supported by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institute of Allergy and Infectious Diseases, and the National Science Foundation.
    Story Source:
    Materials provided by Florida Atlantic University. Original written by Gisele Galoustian. Note: Content may be edited for style and length.

  • Mangrove forests expand and contract with a lunar cycle

    The glossy leaves and branching roots of mangroves are downright eye-catching, and now a study finds that the moon plays a special role in the vigor of these trees.

    Long-term tidal cycles set in motion by the moon drive, in large part, the expansion and contraction of mangrove forests in Australia, researchers report in the Sept. 16 Science Advances. This discovery is key to predicting when stands of mangroves, which are good at sequestering carbon and could help fight climate change, are most likely to proliferate (SN: 11/18/21). Such knowledge could inform efforts to protect and restore the forests.

    Mangroves are coastal trees that provide habitat for fish and buffer against erosion (SN: 9/14/22). But in some places, the forests face a range of threats, including coastal development, pollution and land clearing for agriculture. To get a bird’s-eye view of these forests, Neil Saintilan, an environmental scientist at Macquarie University in Sydney, and his colleagues turned to satellite imagery. Using NASA and U.S. Geological Survey Landsat data from 1987 to 2020, the researchers calculated how the size and density of mangrove forests across Australia changed over time.

    After accounting for persistent increases in these trees’ growth — probably due to rising carbon dioxide levels, higher sea levels and increasing air temperatures — Saintilan and his colleagues noticed a curious pattern. Mangrove forests tended to expand and contract in both extent and canopy cover in a predictable manner. “I saw this 18-year oscillation,” Saintilan says.
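
    That detection step is easy to mimic on synthetic data (the series below is made up, not the study’s Landsat record): subtract the linear growth trend, then read the dominant period off a periodogram.

    ```python
    import numpy as np

    # Synthetic stand-in for a 1987-2020 annual canopy-cover series:
    # linear growth trend + an 18.6-year oscillation + noise.
    rng = np.random.default_rng(1)
    years = np.arange(1987, 2021)
    canopy = (0.5 * (years - 1987)
              + 3.0 * np.sin(2 * np.pi * years / 18.6)
              + rng.normal(0, 0.5, years.size))

    # Remove the trend, then locate the strongest periodic component.
    detrended = canopy - np.polyval(np.polyfit(years, canopy, 1), years)
    freqs = np.fft.rfftfreq(years.size, d=1.0)       # cycles per year
    power = np.abs(np.fft.rfft(detrended)) ** 2
    dominant = 1 / freqs[1:][np.argmax(power[1:])]   # skip the zero-frequency bin
    print(f"dominant period: {dominant:.1f} years")  # ~17-19 y on a 34-year record
    ```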

    That regularity got the researchers thinking about the moon. Earth’s nearest celestial neighbor has long been known to help drive the tides, which deliver water and necessary nutrients to mangroves. A rhythm called the lunar nodal cycle could explain the mangroves’ growth pattern, the team hypothesized.

    Over the course of 18.6 years, the plane of the moon’s orbit around Earth slowly tips. When the moon’s orbit is the least tilted relative to our planet’s equator, semidiurnal tides — which consist of two high and two low tides each day — tend to have a larger range. That means that in areas that experience semidiurnal tides, higher high tides and lower low tides are generally more likely. The effect is caused by the angle at which the moon tugs gravitationally on the Earth.  
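
    The cycle itself is simple to write down. A minimal sketch (the 4% modulation depth and the 2015 reference peak are illustrative assumptions, not values from the paper):

    ```python
    import numpy as np

    # Toy model of the lunar nodal modulation of semidiurnal tidal range:
    # the range swells and shrinks over an 18.61-year cycle.
    NODAL_PERIOD = 18.61   # years
    years = np.arange(1987, 2031)
    modulation = 1 + 0.04 * np.cos(2 * np.pi * (years - 2015) / NODAL_PERIOD)

    peak = years[np.argmax(modulation)]
    print(f"largest semidiurnal range near {peak}, "
          f"repeating every {NODAL_PERIOD} years")
    ```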

    Saintilan and his colleagues found that mangrove forests experiencing semidiurnal tides tended to be larger and denser precisely when higher high tides were expected based on the moon’s orbit. The effect even seemed to outweigh other climatic drivers of mangrove growth, such as El Niño conditions. Other regions with mangroves, such as Vietnam and Indonesia, probably experience the same long-term trends, the team suggests.

    Having access to data stretching back decades was key to this discovery, Saintilan says. “We’ve never really picked up before some of these longer-term drivers of vegetation dynamics.”

    It’s important to recognize this effect on mangrove populations, says Octavio Aburto-Oropeza, a marine ecologist at the Scripps Institution of Oceanography in La Jolla, Calif., who was not involved in the research.

    Scientists now know when some mangroves are particularly likely to flourish and should make an extra effort at those times to promote the growth of these carbon-sequestering trees, Aburto-Oropeza says. That might look like added limitations on human activity nearby that could harm the forests, he says. “We should be more proactive.”

  • Researchers create single-crystal organometallic perovskite optical fibers

    Due to their very high efficiency in transporting the electric charges generated by light, perovskites are regarded as a next-generation material for solar panels and LED displays. A team led by Dr Lei Su at Queen Mary University of London has now demonstrated a brand-new application for perovskites: optical fibres.
    Optical fibres are tiny wires as thin as a human hair, in which light travels at superfast speed — 100 times faster than electrons move in cables. These tiny optical fibres transmit the majority of our internet data. At present, most optical fibres are made of glass. The perovskite optical fibre made by Dr Su’s team consists of just one piece of a perovskite crystal. The fibres have a core width as small as 50 μm (the width of a human hair) and are very flexible — they can be bent to a radius of 3.5 mm.
    Compared to their polycrystal counterparts, single-crystal organometallic perovskites are more stable, more efficient, more durable and have fewer defects. Scientists have therefore been seeking to make single-crystal perovskite optical fibres that can bring this high efficiency to fibre optics.
    Dr Su, Reader in Photonics at Queen Mary University of London, said: ‘Single-crystal perovskite fibres could be integrated into current fibre-optical networks, to substitute key components in this system — for example in more efficient lasing and energy conversions, improving the speed and quality of our broadband networks.’
    Dr Su’s team were able to grow and precisely control the length and diameter of single-crystal organometallic perovskite fibres in liquid solution (which is very cheap to run) by using a new temperature growth method. They gradually changed the heating position, line contact and temperature during the process to ensure continuous growth in the length while preventing random growth in the width. With their method, the length of the fibre can be controlled, and the cross section of the perovskite fibre core can be varied.
    In line with their predictions, thanks to the single-crystal quality, the fibres proved stable over several months and showed a small transmission loss — lower than 0.7 dB/cm, sufficient for making optical devices. They are also highly flexible (they can be bent to a radius as small as 3.5 mm) and produce larger photocurrents than a polycrystalline counterpart (a polycrystalline MAPbBr3 milliwire photodetector of similar length).
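    The quoted loss figure translates directly into how much light survives a given length of fibre, via the standard decibel relation (the lengths below are arbitrary examples):
    ```python
    # Fraction of light remaining after a fibre of given length, using the
    # standard attenuation relation P_out / P_in = 10**(-loss_dB_per_cm * L / 10).
    def transmitted_fraction(loss_db_per_cm: float, length_cm: float) -> float:
        return 10 ** (-loss_db_per_cm * length_cm / 10)

    for length in (1, 5, 10):  # example device lengths in centimetres
        frac = transmitted_fraction(0.7, length)
        print(f"{length:>2} cm: {frac:.1%} of the input light remains")
    ```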
    Dr Su said, ‘This technology could also be used in medical imaging as high-resolution detectors. The small diameter of the fibre can capture a much smaller pixel compared to the state of the art. That means that by using our fibre we can have pixels at the micrometre scale, giving a much, much higher-resolution image for doctors to make better and more accurate diagnoses. We could also use these fibres in textiles that absorb light. Then, when we wear clothes or a device with these kinds of fibre woven into the textile, they could convert solar energy into electrical power. So we could have solar-powered clothing.’
    Story Source:
    Materials provided by Queen Mary University of London. Note: Content may be edited for style and length.

  • An AI message decoder based on bacterial growth patterns

    From a box of Cracker Jack to The Da Vinci Code, everybody enjoys deciphering secret messages. But biomedical engineers at Duke University have taken the decoder ring to a place it’s never been before — the patterns created by bacterial colonies.
    Depending on the initial conditions used, such as nutrient levels and space constraints, bacteria tend to grow in specific ways. The researchers created a virtual bacterial colony and then controlled growth conditions and the numbers and sizes of simulated bacterial dots to create an entire alphabet based on how the colonies would look after they fill a virtual Petri dish. They call this encoding scheme emorfi.
    The encoding is not one-to-one, as the final simulated pattern corresponding to each letter is not exactly the same every time. However, the researchers discovered that a machine learning program could learn to distinguish between them and recognize the intended letter.
    “A friend may see many images of me over the course of time, but none of them will be exactly the same,” explained Lingchong You, professor of biomedical engineering at Duke. “But if the images are all consistently reinforcing what I generally look like, the friend will be able to recognize me even if they’re shown a picture of me they’ve never seen before.”
    To encrypt real messages, the encoder ends up creating a movie of a series of patterns, each correlating to a different letter. While they may look similar to the untrained eye, the computer algorithm can distinguish between them. So long as the receiver knows the set of initial conditions that led to their creation, an interloper should not be able to crack the code without a powerful AI of their own.
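    A toy analogue of the scheme (this is not emorfi itself; the prototype patterns, noise level and nearest-centroid decoder are all stand-ins) shows why noisy, non-identical encodings can still be decoded reliably:
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Each letter has a hidden prototype "colony pattern"; every encoding is a
    # noisy variant of it, like the non-identical photos in You's analogy.
    letters = "ABCDE"
    prototypes = {c: rng.random(64) for c in letters}

    def encode(letter: str) -> np.ndarray:
        """Simulate one noisy 'colony image' of a letter."""
        return prototypes[letter] + rng.normal(0, 0.15, 64)

    # Learn a nearest-centroid decoder from labeled noisy examples.
    train = [(c, encode(c)) for c in letters for _ in range(50)]
    centroids = {c: np.mean([x for l, x in train if l == c], axis=0)
                 for c in letters}

    def decode(pattern: np.ndarray) -> str:
        return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

    message = "BADCAB"
    received = [encode(c) for c in message]
    print("decoded:", "".join(decode(p) for p in received))  # -> BADCAB
    ```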

    This research was supported by the National Science Foundation (MCB-1937259), the Office of Naval Research (N00014-20-1-2121), the David and Lucile Packard Foundation and the Google Cloud Research Credits program.
    Story Source:
    Materials provided by Duke University. Original written by Ken Kingery. Note: Content may be edited for style and length.

  • AI-based screening method could boost speed of new drug discovery

    Developing life-saving medicines can take billions of dollars and decades of time, but University of Central Florida researchers are aiming to speed up this process with a new artificial intelligence-based drug screening process they’ve developed.
    Using a method that models drug and target protein interactions using natural language processing techniques, the researchers achieved up to 97% accuracy in identifying promising drug candidates. The results were published recently in the journal Briefings in Bioinformatics.
    The technique represents drug-protein interactions through words for each protein binding site and uses deep learning to extract the features that govern the complex interactions between the two.
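    The flavor of such a model can be sketched in a few lines (a toy, not the AttentionSiteDTI architecture; the vocabulary size, dimensions and cross-attention layout are illustrative choices): binding-site “words” attend to drug tokens, and the attention weights hint at which site words drove the prediction.
    ```python
    import torch
    import torch.nn as nn

    class ToyDTI(nn.Module):
        """Toy drug-target interaction scorer over binding-site 'words'."""
        def __init__(self, vocab_size: int = 1000, dim: int = 32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            self.score = nn.Linear(dim, 1)

        def forward(self, site_tokens, drug_tokens):
            s = self.embed(site_tokens)          # (batch, site_len, dim)
            d = self.embed(drug_tokens)          # (batch, drug_len, dim)
            ctx, weights = self.attn(s, d, d)    # site words attend to the drug
            prob = torch.sigmoid(self.score(ctx.mean(dim=1)))
            return prob.squeeze(-1), weights     # weights offer interpretability

    model = ToyDTI()
    site = torch.randint(0, 1000, (1, 12))       # 12 binding-site "words"
    drug = torch.randint(0, 1000, (1, 30))       # 30 drug tokens
    prob, attn = model(site, drug)
    print(f"predicted binding probability: {prob.item():.3f}")
    ```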
    “With AI becoming more available, this has become something that AI can tackle,” says study co-author Ozlem Garibay, an assistant professor in UCF’s Department of Industrial Engineering and Management Systems. “You can try out so many variations of proteins and drug interactions and find out which are more likely to bind or not.”
    The model they’ve developed, known as AttentionSiteDTI, is the first to be interpretable using the language of protein binding sites.
    The work is important because it will help drug designers identify critical protein binding sites along with their functional properties, which is key to determining if a drug will be effective.

  • Traditional computers can solve some quantum problems

    There has been a lot of buzz about quantum computers, and for good reason. The futuristic computers are designed to mimic what happens in nature at microscopic scales, which means they have the power to better understand the quantum realm and speed up the discovery of new materials, including pharmaceuticals, environmentally friendly chemicals, and more. However, experts say viable quantum computers are still a decade or more away. What are researchers to do in the meantime?
    A new Caltech-led study in the journal Science describes how machine learning tools, run on classical computers, can be used to make predictions about quantum systems and thus help researchers solve some of the trickiest physics and chemistry problems. While this notion has been shown experimentally before, the new report is the first to mathematically prove that the method works.
    “Quantum computers are ideal for many types of physics and materials science problems,” says lead author Hsin-Yuan (Robert) Huang, a graduate student working with John Preskill, the Richard P. Feynman Professor of Theoretical Physics and the Allen V. C. Davis and Lenabelle Davis Leadership Chair of the Institute for Quantum Science and Technology (IQIM). “But we aren’t quite there yet and have been surprised to learn that classical machine learning methods can be used in the meantime. Ultimately, this paper is about showing what humans can learn about the physical world.”
    At microscopic levels, the physical world becomes an incredibly complex place ruled by the laws of quantum physics. In this realm, particles can exist in a superposition of states, or in two states at once. And a superposition of states can lead to entanglement, a phenomenon in which particles are linked, or correlated, without even being in contact with each other. These strange states and connections, which are widespread within natural and human-made materials, are very hard to describe mathematically.
    “Predicting the low-energy state of a material is very hard,” says Huang. “There are huge numbers of atoms, and they are superimposed and entangled. You can’t write down an equation to describe it all.”
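    The idea, though not the paper’s provably efficient construction, can be illustrated by letting an ordinary regression model learn the map from Hamiltonian parameters to ground-state energy for a tiny system (the two-qubit model and all settings below are illustrative assumptions):
    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)

    # Tiny 2-qubit toy Hamiltonian: H(J, h) = J * Z(x)Z + h * (X(x)I + I(x)X).
    Xp = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
    Zp = np.diag([1.0, -1.0])                 # Pauli Z
    I2 = np.eye(2)
    ZZ = np.kron(Zp, Zp)
    Xsum = np.kron(Xp, I2) + np.kron(I2, Xp)

    def ground_energy(J: float, h: float) -> float:
        return np.linalg.eigvalsh(J * ZZ + h * Xsum).min()

    # Classical ML: learn parameters -> ground-state energy from examples.
    params = rng.uniform(-2, 2, size=(200, 2))
    energies = np.array([ground_energy(J, h) for J, h in params])
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)
    model.fit(params[:150], energies[:150])

    pred = model.predict(params[150:])
    rmse = np.sqrt(np.mean((pred - energies[150:]) ** 2))
    print(f"held-out RMSE: {rmse:.4f}")  # small: the model generalizes
    ```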
    The new study is the first mathematical demonstration that classical machine learning can be used to bridge the gap between us and the quantum world. Machine learning is a type of computer application that mimics the human brain to learn from data.

  • 'Twisty' photons could turbocharge next-gen quantum communication

    Quantum computers and communication devices work by encoding information into individual or entangled photons, enabling data to be transmitted with quantum security and manipulated exponentially faster than is possible with conventional electronics. Now, quantum researchers at Stevens Institute of Technology have demonstrated a method for encoding vastly more information into a single photon, opening the door to even faster and more powerful quantum communication tools.
    Typically, quantum communication systems “write” information onto a photon’s spin angular momentum. In this case, photons undergo either right or left circular rotation, or form a quantum superposition of the two, known as a two-dimensional qubit. It’s also possible to encode information onto a photon’s orbital angular momentum — the corkscrew path that light follows as it twists and torques forward, with each photon circling around the center of the beam. When the spin and orbital angular momentum interlock, they form a high-dimensional qudit — enabling any of a theoretically infinite range of values to be encoded into and propagated by a single photon.
    Qubits and qudits, also known as flying qubits and flying qudits, are used to propagate information stored in photons from one point to another. The main difference is that qudits can carry much more information over the same distance than qubits, providing the foundation for turbocharging next generation quantum communication.
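    The capacity claim is easy to quantify: a photon prepared in one of d distinguishable states carries log2(d) bits, versus exactly one bit for a qubit. A back-of-envelope check:
    ```python
    import math

    # Bits carried per photon by a d-level qudit: log2(d).
    for d in (2, 4, 8, 16, 32):
        print(f"d = {d:>2}: {math.log2(d):.0f} bits per photon")
    ```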
    In a cover story in the August 2022 issue of Optica, researchers led by Stefan Strauf, head of the NanoPhotonics Lab at Stevens, show that they can create and control individual flying qudits, or “twisty” photons, on demand — a breakthrough that could dramatically expand the capabilities of quantum communication tools. The work builds upon the team’s 2018 paper in Nature Nanotechnology.
    “Normally the spin angular momentum and the orbital angular momentum are independent properties of a photon. Our device is the first to demonstrate simultaneous control of both properties via the controlled coupling between the two,” explained Yichen Ma, a graduate student in Strauf’s NanoPhotonics Lab, who led the research in collaboration with Liang Feng at the University of Pennsylvania, and Jim Hone at Columbia University.
    “What makes it a big deal is that we’ve shown we can do this with single photons rather than classical light beams, which is the basic requirement for any kind of quantum communication application,” Ma said.