More stories

    Machine learning homes in on catalyst interactions to accelerate materials development

    A machine learning technique rapidly rediscovered rules governing catalysts that took humans years of difficult calculations to reveal — and even explained a deviation. The University of Michigan team that developed the technique believes other researchers will be able to use it to make faster progress in designing materials for a variety of purposes.
    “This opens a new door, not just in understanding catalysis, but also potentially for extracting knowledge about superconductors, enzymes, thermoelectrics, and photovoltaics,” said Bryan Goldsmith, an assistant professor of chemical engineering, who co-led the work with Suljo Linic, a professor of chemical engineering.
    The key to all of these materials is how their electrons behave. Researchers would like to use machine learning techniques to develop recipes for the material properties they want. For superconductors, the electrons must move through the material without resistance. Enzymes and catalysts need to broker exchanges of electrons, enabling new medicines or cutting chemical waste, for instance. Photovoltaics absorb light, and thermoelectrics harvest heat; both produce the energetic electrons that generate electricity.
    Machine learning algorithms are typically “black boxes”: they take in data and spit out a mathematical function that makes predictions based on those data, without revealing how those predictions were reached.
    “Many of these models are so complicated that it’s very difficult to extract insights from them,” said Jacques Esterhuizen, a doctoral student in chemical engineering and first author of the paper in the journal Chem. “That’s a problem because we’re not only interested in predicting material properties, we also want to understand how the atomic structure and composition map to the material properties.”
    But a new breed of machine learning algorithm lets researchers see the connections that the algorithm is making, identifying which variables are most important and why. This is critical information for researchers trying to use machine learning to improve material designs, including for catalysts.
    A good catalyst is like a chemical matchmaker. It needs to be able to grab onto the reactants, or the atoms and molecules that we want to react, so that they meet. Yet, it must do so loosely enough that the reactants would rather bind with one another than stick with the catalyst.
    In this particular case, they looked at metal catalysts that have a layer of a different metal just below the surface, known as a subsurface alloy. That subsurface layer changes how the atoms in the top layer are spaced and how available the electrons are for bonding. By tweaking the spacing, and hence the electron availability, chemical engineers can strengthen or weaken the binding between the catalyst and the reactants.
    Esterhuizen started by running quantum mechanical simulations at the National Energy Research Scientific Computing Center. These formed the data set, showing how common subsurface alloy catalysts, including metals such as gold, iridium and platinum, bond with common reactants such as oxygen, hydroxide and chlorine.
    The team used the algorithm to look at eight material properties and conditions that might be important to the binding strength of these reactants. It turned out that three mattered most. The first was whether the atoms on the catalyst surface were pulled apart from one another or compressed together by the different metal beneath. The second was how many electrons were in the electron orbital responsible for bonding, the d-orbital in this case. And the third was the size of that d-electron cloud.
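    As a concrete illustration of this kind of descriptor-importance analysis, here is a minimal sketch in Python. The descriptor names, the random stand-in data and the choice of a random-forest model are assumptions made for this example, not the team’s actual dataset or code.

        # Hypothetical sketch of ranking which material descriptors matter most
        # for a predicted property. Data and descriptor names are placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        descriptors = ["surface_strain", "d_filling", "d_band_width",
                       "d_band_center", "work_function", "coordination",
                       "electronegativity", "atomic_radius"]
        X = rng.normal(size=(500, len(descriptors)))
        # Stand-in binding energies dominated by three descriptors, echoing the study.
        y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.05 * rng.normal(size=500)

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
        for name, score in sorted(zip(descriptors, model.feature_importances_),
                                  key=lambda p: -p[1]):
            print(f"{name:18s} {score:.3f}")

    On data like these, the three engineered descriptors dominate the ranking, which is exactly the kind of readout an interpretable model provides and a pure black box does not.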
    The resulting predictions for how different alloys bind with different reactants mostly reflected the “d-band” model, which was developed over many years of quantum mechanical calculations and theoretical analysis. However, they also explained a deviation from that model due to strong repulsive interactions, which occurs when electron-rich reactants bind on metals with mostly filled electron orbitals.

    Story Source:
    Materials provided by University of Michigan. Original written by Kate McAlpine. Note: Content may be edited for style and length.

    Brain circuitry shaped by competition for space as well as genetics

    Complex brain circuits in rodents can organise themselves with genetics playing only a secondary role, according to a new computer modelling study published today in eLife.
    The findings help answer a key question about how the brain wires itself during development. They suggest that simple interactions between nerve cells contribute to the development of complex brain circuits, so that a precise genetic blueprint for brain circuitry is unnecessary. This discovery may help scientists better understand disorders that affect brain development and inform new ways to treat conditions that disrupt brain circuits.
    The circuits that help rodents process sensory information collected by their whiskers are a great example of the complexity of brain wiring. These circuits are organised into cylindrical clusters or ‘whisker barrels’ that closely match the pattern of whiskers on the animal’s face.
    “The brain cells within one whisker barrel become active when its corresponding whisker is touched,” explains lead author Sebastian James, Research Associate at the Department of Psychology, University of Sheffield, UK. “This precise mapping between the individual whisker and its brain representation makes the whisker-barrel system ideal for studying brain wiring.”
    James and his colleagues used computer modelling to determine if this pattern of brain wiring could emerge without a precise genetic blueprint. Their simulations showed that, in the cramped quarters of the developing rodent brain, strong competition for space between nerve fibers originating from different whiskers can cause them to concentrate into whisker-specific clusters. The arrangement of these clusters to form a map of the whiskers is assisted by simple patterns of gene expression in the brain tissue.
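    As a rough illustration of the competition idea, the toy model below lets several axon populations compete for shared space along a one-dimensional sheet, seeded only by a weak gene-expression-like bias. It is a deliberately minimal caricature written for this article, not the authors’ published model.

        # Axon populations compete for space; stronger between-population than
        # within-population competition drives each location toward one winner.
        import numpy as np

        n_types, n_x, steps = 5, 200, 20000
        D, r, a, dt = 0.02, 1.0, 1.5, 0.05   # diffusion, growth, competition, time step
        x = np.linspace(0.0, 1.0, n_x)

        # Weak "gene expression" bias: each population is seeded slightly more
        # strongly near its own position along the sheet.
        centers = np.linspace(0.1, 0.9, n_types)
        c = 0.05 + 0.01 * np.exp(-((x[None, :] - centers[:, None]) / 0.2) ** 2)

        for _ in range(steps):
            lap = np.roll(c, 1, axis=1) - 2 * c + np.roll(c, -1, axis=1)
            others = c.sum(axis=0, keepdims=True) - c
            c += dt * (D * lap + r * c * (1.0 - c - a * others))
            c = np.clip(c, 0.0, None)

        # Each location ends up dominated by one population: a crude barrel map.
        print("".join(str(k) for k in c.argmax(axis=0)))

    Because competition between populations (a = 1.5) is stronger than competition within them, coexistence is unstable and the sheet partitions into contiguous whisker-specific territories, even though the genetic bias is tiny.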
    The team also tested their model by seeing if it could recreate the results of experiments that track the effects of a rat losing a whisker on its brain development. “Our simulations demonstrated that the model can be used to accurately test how factors inside and outside of the brain can contribute to the development of cortical fields,” says co-author Leah Krubitzer, Professor of Psychology at the University of California, Davis, US.
    The authors suggest that this and similar computational models could be adapted to study the development of larger, more complex brains, including those of humans.
    “Many of the basic mechanisms of development in the rodent barrel cortex are thought to translate to development in the rest of cortex, and may help inform research into various neurodevelopmental disorders and recovery from brain injuries,” concludes senior author Stuart Wilson, Lecturer in Cognitive Neuroscience at the University of Sheffield. “As well as reducing the number of animal experiments needed to understand cortical development, exploring the parameters of computational models like ours can offer new insights into how development and evolution interact to shape the brains of mammals, including ourselves.”

    Story Source:
    Materials provided by eLife. Note: Content may be edited for style and length.

    Invasive jumping worms damage U.S. soil and threaten forests

    What could be more 2020 than an ongoing invasion of jumping worms?
    These earthworms are wriggling their way across the United States, voraciously devouring the protective leaf litter of forests and leaving the soil bare. They displace other earthworms, centipedes, salamanders and ground-nesting birds, and disrupt forest food chains. They can invade more than five hectares in a single year, changing soil chemistry and microbial communities as they go, new research shows. And they don’t even need mates to reproduce.
    Native to Japan and the Korean Peninsula, three invasive species of these worms — Amynthas agrestis, A. tokioensis and Metaphire hilgendorfi — have been in the United States for over a century. But just in the past 15 years, they’ve begun to spread widely (SN: 10/7/16). Collectively known as Asian jumping worms, crazy worms, snake worms or Alabama jumpers, they’ve become well established across the South and Mid-Atlantic and have reached parts of the Northeast, Upper Midwest and West.
    Jumping worms are often sold as compost worms or fishing bait. And that, says soil ecologist Nick Henshue of the University at Buffalo in New York, is partially how they’re spreading (SN: 11/5/17). Fishers like them because the worms wriggle and thrash like angry snakes, which lures fish, says Henshue. They’re also marketed as compost worms because they gobble up food scraps far faster than other earthworms, such as nightcrawlers and other Lumbricus species.

    But when it comes to ecology, the worms have more worrisome traits. Their egg cases, or cocoons, are so small that they can easily hitch a ride on a hiker’s or gardener’s shoe, or can be transported in mulch, compost or shared plants. Hundreds can exist within a square meter of ground.  
    Compared with Lumbricus worms, jumping worms grow faster and reproduce faster — and without a mate, so one worm can create a whole invasion. Jumping worms also consume more nutrients than other earthworms, turning soil into dry granular pellets that resemble coffee grounds or ground beef — Henshue calls it “taco meat.” This can make the soil inhospitable to native plants and tree seedlings and far more likely to erode.
    Asian jumping worm species thrash furiously, unlike the more placid movements of other earthworm species. The jumping worms can also slime and shed their tails as defense mechanisms.
    To date, scientists have worried most about the worms’ effects on ground cover. Prior to a jumping worm invasion, the soft layer of decomposing leaves, bark and sticks covering the forest floor might be more than a dozen centimeters thick. What’s left afterward is bare soil with a different structure and mineral content, says Sam Chan, an invasive species specialist with Oregon Sea Grant at Oregon State University in Corvallis. Worms can reduce leaf litter by 95 percent in a single season, he says.
    That in turn can reduce or remove the forest understory, providing fewer nutrients and less protection for the creatures that live there or for seedlings to grow. Eventually, different plants come in, usually invasive, nonnative species, says Bradley Herrick, an ecologist and research program manager at the University of Wisconsin–Madison Arboretum. And now, new research shows the worms are also changing the soil chemistry and the fungi, bacteria and microbes that live in the soils.
    Invasive jumping worms can clear a forest of leaf litter in just a couple of months, as these pictures taken in Jacobsburg State Park near Nazareth, Pa., in June 2016 (left) and August 2016 (right) show. Nick Henshue
    In a study in the October Soil Biology and Biochemistry, Herrick, soil scientist Gabriel Price-Christenson and colleagues tested samples from soils impacted by jumping worms. They were looking for changes in carbon and nitrogen levels and in soils’ release of carbon dioxide, which is produced by the metabolism of microbes and animals living in the soil. Results showed that the longer the worms had lived in the soils, the more the soils’ basal metabolic rate increased — meaning soils invaded by jumping worms could release more carbon dioxide into the atmosphere, says Price-Christenson, who is at the University of Illinois at Urbana-Champaign.
    Relative amounts of carbon and nitrogen in soils with jumping worms also shifted, the team found. That can affect plant communities, Herrick says. For example, although nitrogen is a necessary nutrient, if there’s too much, or it’s available at the wrong time of year, plants or other soil organisms won’t be able to use it. 
    The team also extracted DNA from worm poop and guts to examine differences in microbes among the jumping worm species, and tested the soils for bacterial and fungal changes. Each jumping worm species harbors a different collection of microbes in its gut, the results showed. That’s “a really important find,” Herrick says, “because for a long time, we were talking about jumping worms as a large group … but now we’re learning that [these different species] have different impacts on the soil, which will likely cascade down to having different effects on other worms, soil biota, pH and chemistry.”  
    The finding suggests each species might occupy a unique niche in the environment, with gut microbes breaking down particular food sources, allowing multiple species to invade and thrive together, Herrick says. That fits with observations of multiple species invading side by side, but it’s still a surprise that such similar worms would occupy different niches, he says.
    Scientists have been working hard to get a good handle on the biology of these worms, Henshue says, and the new work on soil chemistry and microbiology represents “thoughtful” and important lines of research. But there’s still a lot that’s unknown, making it hard to predict how much farther the worms might spread and into what kinds of environments. One important question is how weather conditions affect the worms. For example, a prolonged drought this year in Wisconsin seems to have killed off many of the worms, Herrick says. Soils teeming with wriggling worms just a few weeks ago now hold far fewer.
    Perhaps that’s a hopeful sign that even these hardy worms have their limits, but in the meantime, the onslaught of worms continues its march — with help from the humans who spread them.

    Understanding ghost particle interactions

    Scientists often refer to the neutrino as the “ghost particle.” Neutrinos were one of the most abundant particles at the origin of the universe and remain so today. Fusion reactions in the sun produce vast armies of them, which pour down on the Earth every day. Trillions pass through our bodies every second, then fly through the Earth as though it were not there.
    “While first postulated almost a century ago and first detected 65 years ago, neutrinos remain shrouded in mystery because of their reluctance to interact with matter,” said Alessandro Lovato, a nuclear physicist at the U.S. Department of Energy’s (DOE) Argonne National Laboratory.
    Lovato is a member of a research team from four national laboratories that has constructed a model to address one of the many mysteries about neutrinos — how they interact with atomic nuclei, complicated systems made of protons and neutrons (“nucleons”) bound together by the strong force. This knowledge is essential to unravel an even bigger mystery — why during their journey through space or matter neutrinos magically morph from one into another of three possible types or “flavors.”
    To study these oscillations, two sets of experiments have been undertaken at DOE’s Fermi National Accelerator Laboratory (MiniBooNE and NOvA). In these experiments, scientists generate an intense stream of neutrinos in a particle accelerator, then send them into particle detectors located close to the source (MiniBooNE) or five hundred miles away (NOvA).
    Knowing the original distribution of neutrino flavors, the experimentalists then gather data related to the interactions of the neutrinos with the atomic nuclei in the detectors. From that information, they can calculate any changes in the neutrino flavors over time or distance. In the case of the MiniBooNE and NOvA detectors, the nuclei are from the isotope carbon-12, which has six protons and six neutrons.
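    In the simplest two-flavor picture, the probability of such a flavor change follows a standard textbook formula, sketched below in Python. The oscillation parameters and the NOvA-like baseline are illustrative numbers only; the real experiments fit far richer three-flavor models.

        # Two-flavor oscillation probability, a textbook approximation:
        # P(a -> b) = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
        import math

        def oscillation_probability(L_km, E_GeV, dm2_eV2=2.5e-3, sin2_2theta=0.95):
            """Probability that a neutrino has changed flavor after L_km."""
            return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

        # NOvA-like numbers: roughly 810 km baseline, ~2 GeV beam energy.
        print(oscillation_probability(810, 2.0))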
    “Our team came into the picture because these experiments require a very accurate model of the interactions of neutrinos with the detector nuclei over a large energy range,” said Noemi Rocco, a postdoc in Argonne’s Physics division and Fermilab. Given the elusiveness of neutrinos, achieving a comprehensive description of these reactions is a formidable challenge.
    The team’s nuclear physics model of neutrino interactions with a single nucleon and a pair of them is the most accurate so far. “Ours is the first approach to model these interactions at such a microscopic level,” said Rocco. “Earlier approaches were not so fine grained.”
    One of the team’s important findings, based on calculations carried out on the now-retired Mira supercomputer at the Argonne Leadership Computing Facility (ALCF), was that the nucleon pair interaction is crucial to model neutrino interactions with nuclei accurately. The ALCF is a DOE Office of Science User Facility.
    “The larger the nuclei in the detector, the greater the likelihood the neutrinos will interact with them,” said Lovato. “In the future, we plan to extend our model to data from bigger nuclei, namely, those of oxygen and argon, in support of experiments planned in Japan and the U.S.”
    Rocco added, “For those calculations, we will rely on even more powerful ALCF computers: the existing Theta system and the upcoming exascale machine, Aurora.”
    Scientists hope that, eventually, a complete picture will emerge of flavor oscillations for both neutrinos and their antiparticles, called “antineutrinos.” That knowledge may shed light on why the universe is built from matter instead of antimatter — one of the fundamental questions about the universe.

    Story Source:
    Materials provided by DOE/Argonne National Laboratory. Original written by Joseph E. Harmon. Note: Content may be edited for style and length.

    New artificial intelligence platform uses deep learning to diagnose dystonia with high accuracy in less than one second

    Researchers at Mass Eye and Ear have developed a unique diagnostic tool that can detect dystonia from MRI scans, the first technology of its kind to provide an objective diagnosis of the disorder. Dystonia is a potentially disabling neurological condition that causes involuntary muscle contractions, leading to abnormal movements and postures. It is often misdiagnosed and can take people up to 10 years to get a correct diagnosis.
    In a new study published September 28 in Proceedings of the National Academy of Sciences, researchers developed an AI-based deep learning platform — called DystoniaNet — to compare brain MRIs of 612 people, including 392 patients with three different forms of isolated focal dystonia and 220 healthy individuals. The platform diagnosed dystonia with 98.8 percent accuracy. During the process, the researchers identified a new microstructural neural network biological marker of dystonia. With further testing and validation, they believe DystoniaNet can be easily integrated into clinical decision-making.
    “There is currently no biomarker of dystonia and no ‘gold standard’ test for its diagnosis. Because of this, many patients have to undergo unnecessary procedures and see different specialists until other diseases are ruled out and the diagnosis of dystonia is established,” said senior study author Kristina Simonyan, MD, PhD, Dr med, Director of Laryngology Research at Mass Eye and Ear, Associate Neuroscientist at Massachusetts General Hospital, and Associate Professor of Otolaryngology-Head and Neck Surgery at Harvard Medical School. “There is a critical need to develop, validate and incorporate objective testing tools for the diagnosis of this neurological condition, and our results show that DystoniaNet may fill this gap.”
    A disorder notoriously difficult to diagnose
    About 35 out of every 100,000 people have isolated or primary dystonia — a prevalence that is likely underestimated given the current challenges in diagnosing the disorder. In some cases, dystonia can result from a neurological event, such as Parkinson’s disease or a stroke. However, the majority of isolated dystonia cases have no known cause and affect a single muscle group in the body. These so-called focal dystonias can lead to disability and problems with physical and emotional quality of life.
    The study included three of the most common types of focal dystonia: laryngeal dystonia (also called spasmodic dysphonia), characterized by involuntary movements of the vocal cords that can cause difficulties with speech; cervical dystonia, which causes the neck muscles to spasm and the neck to tilt in an unusual manner; and blepharospasm, a focal dystonia of the eyelid that causes involuntary twitching and forceful eyelid closure.

    Traditionally, a dystonia diagnosis is made based on clinical observations, said Dr. Simonyan. Previous studies have found that the agreement on dystonia diagnosis between clinicians based on purely clinical assessments is as low as 34 percent and have reported that about 50 percent of the cases go misdiagnosed or underdiagnosed at a first patient visit.
    DystoniaNet could be integrated into medical decision-making
    DystoniaNet utilizes deep learning, a particular type of AI algorithm, to analyze data from individual MRIs and identify subtle differences in brain structure. The platform is able to detect clusters of abnormal structures in several regions of the brain that are known to control processing and motor commands. These small changes cannot be seen with the naked eye on an MRI; the patterns are only evident through the platform’s ability to take 3D brain images and zoom into their microstructural details.
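    As a concrete picture of the general approach, the sketch below shows a small 3D convolutional network that ingests a brain volume and returns per-class probabilities, which can be read as a diagnosis plus a confidence. The architecture, layer sizes and class count are guesses for illustration only; the actual DystoniaNet platform is proprietary.

        # Minimal 3D CNN classifier sketch (illustrative, not DystoniaNet itself).
        import torch
        import torch.nn as nn

        class TinyDystoniaNet(nn.Module):
            def __init__(self, n_classes=4):    # e.g. 3 focal dystonias + healthy
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                    nn.AdaptiveAvgPool3d(1))
                self.classifier = nn.Linear(16, n_classes)

            def forward(self, volume):           # volume: (batch, 1, D, H, W)
                logits = self.classifier(self.features(volume).flatten(1))
                return logits.softmax(dim=-1)    # per-class "confidence"

        model = TinyDystoniaNet()
        mri = torch.randn(1, 1, 64, 64, 64)      # stand-in MRI volume
        probs = model(mri)
        print(probs, "predicted class:", int(probs.argmax()))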
    “Our study suggests that the implementation of the DystoniaNet platform for dystonia diagnosis would be transformative for the clinical management of this disorder,” said the first study author Davide Valeriani, PhD, a postdoctoral research fellow in the Dystonia and Speech Motor Control Laboratory at Mass Eye and Ear and Harvard Medical School. “Importantly, our platform was designed to be efficient and interpretable for clinicians, by providing the patient’s diagnosis, the confidence of the AI in that diagnosis, and information about which brain structures are abnormal.”
    DystoniaNet is a patent-pending proprietary platform developed by Dr. Simonyan and Dr. Valeriani, in conjunction with Mass General Brigham Innovation. The technology interprets an MRI scan for the microstructural biomarker in 0.36 seconds, and it has been trained using the Amazon Web Services cloud computing platform. The researchers believe this technology can be readily translated into the clinical setting, for example by being integrated into an electronic medical record or directly into the MRI scanner software. If DystoniaNet finds a high probability of dystonia in the MRI, a physician can use this information to help confidently confirm the diagnosis, pursue next steps, and suggest a course of treatment without delay. Dystonia cannot be cured, but some treatments can help reduce the incidence of dystonia-related spasms.
    Future studies will look at more types of dystonia and will include trials at multiple hospitals to further validate the DystoniaNet platform in a larger number of patients.
    This research was funded and supported by the National Institutes of Health (R01DC011805, R01DC012545, R01NS088160), Amazon Web Services through the Machine Learning Research Award, and a charitable gift by Keith and Bobbi Richardson.

    3D biometric authentication based on finger veins almost impossible to fool

    Biometric authentication, which uses unique anatomical features such as fingerprints or facial features to verify a person’s identity, is increasingly replacing traditional passwords for accessing everything from smartphones to law enforcement systems. A newly developed approach that uses 3D images of finger veins could greatly increase the security of this type of authentication.
    “The 3D finger vein biometric authentication method we developed enables levels of specificity and anti-spoofing that were not possible before,” said research team leader Jun Xia of the University at Buffalo, The State University of New York. “Since no two people have exactly the same 3D vein pattern, faking a vein biometric authentication would require creating an exact 3D replica of a person’s finger veins, which is basically not possible.”
    In the Optical Society (OSA) journal Applied Optics, the researchers describe their new approach, which represents the first time that photoacoustic tomography has been used for 3D finger vein biometric authentication. Tests of the method on people showed that it can correctly accept or reject an identity 99 percent of the time.
    “Due to the COVID-19 pandemic, many jobs and services are now performed remotely,” said research team member Giovanni Milione, from NEC Laboratories America, Inc. “Because our technique detects invisible features in 3D, it could be used to enable better authentication techniques to protect personnel data and sensitive documents.”
    Adding depth information
    Although other biometric authentication approaches based on finger veins have been developed, they are all based on 2D images. The additional depth from a 3D image increases security by making it more difficult to fake an identity and less likely that the technique will accept the wrong person or reject the right one.

    To accomplish 3D biometric authentication using the veins in a person’s fingers, the researchers turned to photoacoustic tomography, an imaging technique that combines light and sound. First, light from a laser is used to illuminate the finger. If the light hits a vein, it creates a sound much in the same way that a grill creates a “poof” sound when it is first lit. The system then detects that sound with an ultrasound detector and uses it to reconstruct a 3D image of the veins.
    “It has been challenging to use photoacoustic tomography for 3D finger vein biometric authentication because of the bulky imaging system, small field of view and inconvenient positioning of the hand,” said Xia. “We addressed these issues in the new system design through a better combination of light and acoustic beams and custom-made transducers to improve the imaging field of view.”
    Designing a practical system
    To better integrate light illumination and acoustic detection, the researchers fabricated a new light- and acoustic-beam combiner. They also designed an imaging window that allows the hand to be naturally placed on the platform, similar to a full-size fingerprint scanner. Another critical development was a new matching algorithm, developed by Wenyao Xu of the Computer Science and Engineering department, that allows biometric identification by matching features in 3D space.
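    The team’s matching algorithm itself is not spelled out in the release, but the sketch below illustrates one plausible flavor of 3D feature matching: compare a probe finger’s 3D keypoints against an enrolled template using nearest-neighbor distances and accept only if they agree closely enough. The point counts, noise levels and threshold are hypothetical.

        # Illustrative 3D keypoint matching (not the published algorithm).
        import numpy as np

        def match_score(template, probe):
            """Mean distance from each probe point to its nearest template point."""
            diffs = probe[:, None, :] - template[None, :, :]   # shape (P, T, 3)
            return np.linalg.norm(diffs, axis=-1).min(axis=1).mean()

        rng = np.random.default_rng(1)
        enrolled = rng.uniform(size=(50, 3))                   # stand-in vein points
        same_finger = enrolled + rng.normal(scale=0.01, size=enrolled.shape)
        other_finger = rng.uniform(size=(50, 3))

        THRESHOLD = 0.05   # in practice tuned on real genuine/impostor scores
        for name, probe in [("genuine", same_finger), ("impostor", other_finger)]:
            s = match_score(enrolled, probe)
            print(f"{name}: score={s:.3f} ->", "accept" if s < THRESHOLD else "reject")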
    The researchers tested their new system with 36 people by imaging four fingers on each of their left and right hands. The tests showed that the approach was not only feasible but also accurate, especially when multiple fingers were used.
    “We envision this technique being used in critical facilities, such as banks and military bases, that require a high level of security,” said Milione. “With further miniaturization, 3D vein authentication could also be used in personal electronics or be combined with 2D fingerprints for two-factor authentication.”
    The researchers are now working to make the system even smaller and to reduce the imaging time to less than one second. They note that it should be possible to implement the photoacoustic system in smartphones since ultrasound systems have already been developed for use in smartphones. This could enable portable or wearable systems that perform biometric authentication in real time.

    Story Source:
    Materials provided by The Optical Society. Note: Content may be edited for style and length.

    Recording thousands of nerve cell impulses at high resolution

    For over 15 years, ETH Professor Andreas Hierlemann and his group have been developing microelectrode-array chips that can be used to precisely excite nerve cells in cell cultures and to measure electrical cell activity. These developments make it possible to grow nerve cells in cell-culture dishes and use chips located at the bottom of the dish to examine each individual cell in a connected nerve tissue in detail. Alternative methods for conducting such measurements have some clear limitations. They are either very time-consuming — because contact to each cell has to be individually established — or they require the use of fluorescent dyes, which influence the behaviour of the cells and hence the outcome of the experiments.
    Now, researchers from Hierlemann’s group at the Department of Biosystems Science and Engineering of ETH Zurich in Basel, together with Urs Frey and his colleagues from the ETH spin-off MaxWell Biosystems, developed a new generation of microelectrode-array chips. These chips enable detailed recordings of considerably more electrodes than previous systems, which opens up new applications.
    Stronger signal required
    As with previous chip generations, the new chips have around 20,000 microelectrodes in an area measuring 2 by 4 millimetres. To ensure that these electrodes pick up the relatively weak nerve impulses, the signals need to be amplified. Examples of weak signals that the scientists want to detect include those of nerve cells derived from human induced pluripotent stem cells (iPS cells), which are currently used in many cell-culture disease models. Another reason to significantly amplify the signals is if the researchers want to track nerve impulses in axons (fine, very thin fibrous extensions of a nerve cell).
    However, high-performance amplification electronics take up space, which is why the previous chip was able to simultaneously amplify and read out signals from only 1,000 of the 20,000 electrodes. Although the 1,000 electrodes could be arbitrarily selected, they had to be determined prior to every measurement. This meant that it was possible to make detailed recordings over only a fraction of the chip area during a measurement.
    Background noise reduced
    In the new chip, the amplifiers are smaller, permitting the signals of all 20,000 electrodes to be amplified and measured at the same time. However, the smaller amplifiers have higher noise levels. So, to make sure they capture even the weakest nerve impulses, the researchers included some of the larger, more powerful amplifiers in the new chips and employ a nifty trick: they use these powerful amplifiers to identify the time points at which nerve impulses occur in the cell-culture dish. At those time points, they can then search for signals on the other electrodes and, by taking the average of several successive signals, reduce the background noise. This procedure yields a clear image of the signal activity over the entire area being measured.
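    The sketch below reproduces the logic of that trick on synthetic data: spike times are detected on a low-noise reference channel, windows around those times are cut from a high-noise channel, and averaging the windows shrinks the uncorrelated noise roughly as one over the square root of the number of events. The waveforms and thresholds are invented; this is not the chip’s firmware.

        # Spike-triggered averaging across a clean and a noisy channel.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, win = 200_000, 40
        spike = -0.5 * np.hanning(win)                 # stylised spike waveform

        starts = rng.choice(np.arange(win, n_samples - 3 * win), 300, replace=False)
        noisy = rng.normal(scale=1.0, size=n_samples)  # small amplifier: high noise
        clean = rng.normal(scale=0.1, size=n_samples)  # large amplifier: low noise
        for t in starts:
            noisy[t:t + win] += spike
            clean[t:t + win] += spike

        # 1) find spike times on the clean reference channel
        crossings = np.flatnonzero(clean < -0.4)
        crossings = crossings[np.diff(crossings, prepend=-win) > win]  # one per spike

        # 2) average windows around those times on the noisy channel
        windows = np.stack([noisy[t - win // 2:t + win // 2 + win] for t in crossings])
        avg = windows.mean(axis=0)
        print(f"{len(windows)} windows averaged; spike trough {avg.min():.2f}, "
              f"residual noise ~{1 / np.sqrt(len(windows)):.2f}")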
    In initial experiments, which the researchers published in the journal Nature Communications, they demonstrated their method on human iPS-derived neuronal cells as well as on brain sections, retina pieces, cardiac cells and neuronal spheroids.
    Application in drug development
    With the new chip, the scientists can produce electrical images not only of the cells but also of the extensions of their axons, and they can determine how fast a nerve impulse is transmitted to the farthest reaches of the axons. “The previous generations of microelectrode array chips let us measure up to 50 nerve cells. With the new chip, we can perform detailed measurements of more than 1,000 cells in a culture all at once,” Hierlemann says.
    Such comprehensive measurements are suitable for testing the effects of drugs, meaning that scientists can now conduct research and experiments with human cell cultures instead of relying on lab animals. The technology thus also helps to reduce the number of animal experiments.
    The ETH spin-off MaxWell Biosystems is already marketing the existing microelectrode technology, which is now in use around the world by over a hundred research groups at universities and in industry. At present, the company is looking into a potential commercialisation of the new chip.

    Story Source:
    Materials provided by ETH Zurich. Original written by Fabio Bergamin. Note: Content may be edited for style and length.

    Avoiding environmental losses in quantum information systems

    New research published in EPJ D has revealed how robust initial states can be prepared in quantum information systems, minimising any unwanted transitions which lead to losses in quantum information.
    Through new techniques for generating ‘exceptional points’ in quantum information systems, researchers have minimised the transitions through which such systems lose information to their surrounding environments.
    Recently, researchers have begun to exploit the effects of quantum mechanics to process information in some fascinating new ways. One of the main challenges faced by these efforts is that systems can easily lose their quantum information as they interact with particles in their surrounding environments. To understand this behaviour, researchers in the past have used advanced models to observe how systems can spontaneously evolve into different states over time — losing their quantum information in the process. Through new research published in EPJ D, M. Reboiro and colleagues at the University of La Plata in Argentina have discovered how robust initial states can be prepared in quantum information systems, avoiding any unwanted transitions over extensive time periods.
    The team’s findings could provide valuable insights for the rapidly advancing field of quantum computing, potentially enabling more complex operations to be carried out using these cutting-edge devices. Their study considered a ‘hybrid’ quantum information system based around a specialised loop of superconducting metal, which interacted with an ensemble of imperfections within the atomic lattice of diamond. Within this system, the researchers aimed to generate sets of ‘exceptional points.’ When these are present, information states don’t decay in the usual way: instead, any gains and losses of quantum information can be perfectly balanced between states.
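    A textbook toy model, sketched below, shows what an exceptional point looks like; this is a generic gain-and-loss example, not the paper’s hybrid superconductor-diamond system. A two-level Hamiltonian with balanced gain and loss has eigenvalues ±sqrt(g² − γ²), which coalesce when the coupling g equals the gain/loss rate γ.

        # Eigenvalues of H = [[-i*gamma, g], [g, +i*gamma]] coalesce at g = gamma,
        # the exceptional point of this standard gain/loss toy model.
        import numpy as np

        gamma = 1.0
        for g in [0.5, 1.0, 1.5]:
            H = np.array([[-1j * gamma, g],
                          [g, 1j * gamma]])
            print(f"g = {g}: eigenvalues = {np.round(np.linalg.eigvals(H), 3)}")
        # Below g = gamma one mode grows and one decays (imaginary eigenvalues);
        # above it the eigenvalues are real and gain and loss stay balanced.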
    By accounting for quantum effects, Reboiro and colleagues modelled how the dynamics of ensembled imperfections were affected by their surrounding environments. From these results, they combined information states which displayed large transition probabilities over long time intervals — allowing them to generate exceptional points. Since this considerably increased the survival probability of a state, the team could finally prepare initial states which were robust against the effects of their environments. Their techniques could soon be used to build quantum information systems which retain their information for far longer than was previously possible.

    Story Source:
    Materials provided by Springer. Note: Content may be edited for style and length.