More stories

    New method to systematically find optimal quantum operation sequences for quantum computers developed

    The National Institute of Information and Communications Technology, Keio University, Tokyo University of Science, and The University of Tokyo have succeeded for the first time in developing a method for systematically finding the optimal quantum operation sequence for a quantum computer.
    In order for a quantum computer to perform a task, we need to write a sequence of quantum operations. Until now, computer operators have written their own quantum operation sequences based on existing methods (recipes). The newly developed method applies optimal control theory (the GRAPE algorithm) to systematically identify the theoretically optimal sequence from among all conceivable quantum operation sequences.
    This method is expected to become a useful tool for medium-scale quantum computers, and in the near future to contribute to improving the performance of quantum computers and to reducing environmental impact.
    This result was published in the American scientific journal Physical Review A on August 23, 2022.
    Quantum computers, which are currently under development, are expected to have a major impact on society. Their benefits include reducing the environmental burden by reducing energy consumption, finding new chemical substances for medical use, accelerating the search for materials for a cleaner environment, etc.
    One of the big problems for quantum computers is that the quantum state is very sensitive to noise, making it difficult to maintain a coherent quantum state stably for a long time. To obtain the best performance, all operations must be completed within the time that the coherent quantum state is maintained, so a method was needed to systematically identify the optimal sequences.
    The research team has developed a systematic method to identify the optimal quantum operation sequence.
    When a computer stores and processes information, all information is converted to a string of bits with values of 0 or 1. A quantum operation sequence is a computer program, written in a human-readable language, converted into a form that a quantum computer can process. The sequence consists of 1-qubit operations and 2-qubit operations. The best sequence is the one that achieves the best performance with the fewest operations.
    The new method analyzes all possible sequences of elementary quantum operations using GRAPE, a numerical algorithm from optimal control theory. Specifically, the researchers create a table of quantum operation sequences and the performance index (fidelity F) of each, with entries ranging from thousands to millions depending on the number of qubits and the number of operations under investigation. The optimal quantum operation sequence is then identified systematically from the accumulated data.
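    The tabulation idea can be sketched in a few lines. This is not the GRAPE algorithm itself (which optimizes continuous control pulses numerically), only a minimal illustration of exhaustively ranking gate sequences by fidelity F; the 1-qubit gate set and target below are hypothetical choices for the sketch.

    ```python
    import itertools
    import numpy as np

    # Hypothetical 1-qubit gate set (the actual elementary operations may differ)
    GATES = {
        "H": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
        "T": np.diag([1, np.exp(1j * np.pi / 4)]),
    }
    TARGET = np.diag([1, 1j])  # the S gate, reachable as T followed by T

    def fidelity(U, V):
        """Gate fidelity |Tr(V† U)| / d, insensitive to global phase."""
        d = U.shape[0]
        return abs(np.trace(V.conj().T @ U)) / d

    # Tabulate every sequence up to a maximum length together with its fidelity
    table = []
    for length in range(1, 4):
        for seq in itertools.product(GATES, repeat=length):
            U = np.eye(2, dtype=complex)
            for name in seq:
                U = GATES[name] @ U
            table.append((seq, fidelity(U, TARGET)))

    # Best sequence = highest fidelity, then fewest operations
    best_seq, best_f = max(table, key=lambda t: (round(t[1], 9), -len(t[0])))
    print(best_seq, best_f)  # ('T', 'T') with fidelity ~ 1
    ```

    For real problems the table grows exponentially with sequence length, which is why the accumulated-data approach is aimed at few-qubit, medium-scale settings.
    
    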
    It is also possible for the new method to analyze the complete list of all quantum operation sequences and evaluate conventional recipes. As such, it can provide a valuable tool for establishing benchmarks for past and future research on the performance of few-qubit quantum algorithms.
    The systematic method to find the optimal quantum operation sequence for quantum computers is expected to become a useful tool for medium-scale quantum computers. In the near future, it is expected to improve the performance of quantum computers and contribute to reducing the burden on the environment.
    We also found that there are many optimal quantum operation sequences with excellent performance. This means that a probabilistic approach could extend the applicability of this new method to larger tasks. Approaches based on analyzing large datasets also suggest the possibility of integrating machine learning with our new method to further enhance its predictive power. In the future, the research team will apply these results to the optimization of tasks drawn from actual quantum algorithms.

    Making stable molecules reactive with light

    Researchers at Linköping University have used computer simulations to show that stable aromatic molecules can become reactive after absorbing light. The results, published in the Journal of Organic Chemistry, may have long-term applications in such areas as the storage of solar energy, pharmacology, and molecular machines.
    “Everyone knows that petrol smells nice. This is because it contains the aromatic molecule benzene. And aromatic molecules don’t just smell nice: they have many useful chemical properties. Our discovery means that we can add more properties,” says Bo Durbeej, professor of computational physics at Linköping University.
    In normal organic chemistry, heat can be used to start reactions. However, an aromatic molecule is a stable hydrocarbon, and it is difficult to initiate reactions between such molecules and others simply by heating, because the molecule already sits in a low, stable energy state. In contrast, a reaction in which an aromatic molecule is formed takes place extremely readily.
    Researchers at Linköping University have now used computer simulations to show that it is possible to activate aromatic molecules using light. Reactions of this type are known as photochemical reactions.
    “It is possible to add more energy using light than using heat. In this case, light can help an aromatic molecule to become antiaromatic, and thus highly reactive. This is a new way to control photochemical reactions using the aromaticity of the molecules,” says Bo Durbeej.
    The result was important enough to be highlighted on the cover of the Journal of Organic Chemistry when it was published. In the long term, it has possible applications in many areas. Bo Durbeej’s research group focuses on applications in the storage of solar energy, but he sees potential also in molecular machines, molecular synthesis, and photopharmacology. In the latter application, it may be possible to use light to selectively activate drugs with aromatic groups at a location in the body where the pharmacological effect is wanted.
    “In some cases, it’s not possible to supply heat without harming surrounding structures, such as body tissue. It should, however, be possible to supply light,” says Bo Durbeej.
    The researchers tested the hypothesis that it was the loss of aromaticity that led to the increased reactivity by examining the opposite relationship in the simulations. In this case, they started with an antiaromatic unstable molecule and simulated it being subject to light irradiation. This led to the formation of an aromatic compound, and the researchers saw, as expected, that the reactivity was lost.
    “Our discovery extends the concept of ‘aromaticity’, and we have shown that we can use this concept in organic photochemistry,” says Bo Durbeej.
    The study has been funded by the Olle Engkvist Foundation, the Swedish Research Council, ÅForsk, and the Carl Trygger Foundation. The computations were carried out at the National Supercomputer Centre at Linköping University with support from the Swedish National Infrastructure for Computing (SNIC).
    Story Source:
    Materials provided by Linköping University. Original written by Anders Törneholm. Note: Content may be edited for style and length.

    How 'prediction markets' could improve climate risk policies and investment decisions

    A market-led approach could be key to guiding policy, research and business decisions about future climate risks, a new study outlines.
    Published in the journal Nature Climate Change, the paper from academics at the Universities of Lancaster and Exeter details how expert ‘prediction markets’ could improve the climate-risk forecasts that guide key business and regulatory decisions.
    Organisations now appreciate that they have to consider climate risks within their strategic plans — whether that relates to physical risks to buildings and sites, or risks associated with transitioning to achieve net zero.
    However, the forward-looking information needed to inform these strategic decisions is limited, the researchers say.
    Dr Kim Kaivanto, a co-author from Lancaster University’s Department of Economics, said: “The institutional arrangements under which climate-risk information is currently provided mirror the incentive problems and conflicts of interest that prevailed in the credit-rating industry prior to the 2007/8 financial crisis.
    “In order to make sense of emissions scenarios and to support planning and decision-making, organisations have a pressing need for this type of forward-looking expert risk information.”
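    The excerpt above does not specify which market mechanism the authors propose. As one standard design, a logarithmic market scoring rule (LMSR) lets experts buy shares in outcomes so that prices always form a probability distribution; the climate outcomes and the liquidity parameter `b` below are purely hypothetical.

    ```python
    import math

    def lmsr_cost(q, b=100.0):
        """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
        m = max(x / b for x in q)  # shift for numerical stability
        return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

    def lmsr_prices(q, b=100.0):
        """Market probabilities implied by the outstanding shares q."""
        m = max(x / b for x in q)
        expq = [math.exp(x / b - m) for x in q]
        s = sum(expq)
        return [e / s for e in expq]

    def buy(q, outcome, shares, b=100.0):
        """Amount an expert pays to buy `shares` of one outcome."""
        q2 = list(q)
        q2[outcome] += shares
        return q2, lmsr_cost(q2, b) - lmsr_cost(q, b)

    # Two hypothetical outcomes, e.g. a given physical climate risk
    # materialising by some date, versus not
    q = [0.0, 0.0]
    print(lmsr_prices(q))        # starts at [0.5, 0.5]
    q, paid = buy(q, 0, 50.0)    # an expert backs outcome 0
    print(lmsr_prices(q), paid)  # outcome 0's implied probability rises
    ```

    The aggregated price is the market's forecast, which is the sense in which expert trading could sharpen the climate-risk information available to decision-makers.
    
    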

    Robots can be used to assess children's mental wellbeing

    Robots can be better at detecting mental wellbeing issues in children than parent-reported or self-reported testing, a new study suggests.
    A team of roboticists, computer scientists and psychiatrists from the University of Cambridge carried out a study with 28 children between the ages of eight and 13, and had a child-sized humanoid robot administer a series of standard psychological questionnaires to assess the mental wellbeing of each participant.
    The children were willing to confide in the robot, in some cases sharing information with the robot that they had not yet shared via the standard assessment method of online or in-person questionnaires. This is the first time that robots have been used to assess mental wellbeing in children.
    The researchers say that robots could be a useful addition to traditional methods of mental health assessment, although they are not intended to be a substitute for professional mental health support. The results will be presented today (1 September) at the 31st IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) in Naples, Italy.
    During the COVID-19 pandemic, home schooling, financial pressures, and isolation from peers and friends affected the mental health of many children. Even before the pandemic, however, anxiety and depression among children in the UK had been on the rise, while the resources and support to address mental wellbeing remained severely limited.
    Professor Hatice Gunes, who leads the Affective Intelligence and Robotics Laboratory in Cambridge’s Department of Computer Science and Technology, has been studying how socially-assistive robots (SARs) can be used as mental wellbeing ‘coaches’ for adults, but in recent years has also been studying how they may be beneficial to children.

    COVID radar: Genetic sequencing can help predict severity of next variant

    As public health officials around the world contend with the latest surge of the COVID-19 pandemic, researchers at Drexel University have created a computer model that could help them be better prepared for the next one. Using machine learning algorithms, trained to identify correlations between changes in the genetic sequence of the COVID-19 virus and upticks in transmission, hospitalizations and deaths, the model can provide an early warning about the severity of new variants.
    More than two years into the pandemic, scientists and public health officials are doing their best to predict how mutations of the SARS-CoV-2 virus will make it more transmissible, better at evading the immune system, and more likely to cause severe infections. But collecting and analyzing the genetic data to identify new variants — and linking it to the specific patients who have been sickened by them — is still an arduous process.
    Because of this, most public health projections about new “variants of concern” — as the World Health Organization categorizes them — are based on surveillance testing and observation of the regions where they are already spreading.
    “The speed with which new variants, like Omicron, have made their way around the globe means that by the time public health officials have a good handle on how vulnerable their population might be, the virus has already arrived,” said Bahrad A. Sokhansanj, PhD, an assistant research professor in Drexel’s College of Engineering who led development of the computer model. “We’re trying to give them an early warning system — like advanced weather modeling for meteorologists — so they can quickly predict how dangerous a new variant is likely to be — and prepare accordingly.”
    The Drexel model, which was recently published in the journal Computers in Biology and Medicine, is driven by a targeted analysis of the genetic sequence of the virus’s spike protein — the part of the virus that allows it to evade the immune system and infect healthy cells, and also the part known to have mutated most frequently throughout the pandemic — combined with a mixed-effects machine learning analysis of factors such as age, sex and geographic location of COVID patients.
    Learning to Find Patterns
    The research team used a newly developed machine learning algorithm, called GPBoost, based on methods commonly used by large companies to analyze sales data. Via a textual analysis, the program can quickly home in on the areas of the genetic sequence that are most likely to be linked to changes in the severity of the variant.
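    As a rough illustration of the boosting idea, one can fit an additive model of one-variable decision stumps to mutation-presence features, each round fitting the residual error of the rounds before it. GPBoost itself additionally fits random effects for patient covariates such as age, sex and location, which this sketch omits; the mutation list and severity scores below are made up.

    ```python
    # Toy feature matrix: presence (1) or absence (0) of spike mutations
    MUTATIONS = ["D614G", "N501Y", "E484K", "L452R"]  # illustrative only
    X = [
        [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 0, 1],
        [1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1], [0, 0, 0, 0],
    ]
    y = [0.2, 0.6, 0.9, 0.4, 0.7, 0.5, 0.8, 0.1]  # made-up severity index

    def fit_stump(X, resid):
        """Best single-mutation split (depth-1 tree) on the residuals."""
        best_err, best = float("inf"), None
        for j in range(len(X[0])):
            absent = [r for x, r in zip(X, resid) if x[j] == 0]
            present = [r for x, r in zip(X, resid) if x[j] == 1]
            if not absent or not present:
                continue
            m0, m1 = sum(absent) / len(absent), sum(present) / len(present)
            err = (sum((r - m0) ** 2 for r in absent)
                   + sum((r - m1) ** 2 for r in present))
            if err < best_err:
                best_err, best = err, (j, m0, m1)
        return best

    def boost(X, y, rounds=20, lr=0.5):
        """Gradient boosting for squared error: fit stumps to residuals."""
        pred = [sum(y) / len(y)] * len(y)  # start from the mean severity
        for _ in range(rounds):
            resid = [yi - pi for yi, pi in zip(y, pred)]
            stump = fit_stump(X, resid)
            if stump is None:
                break
            j, m0, m1 = stump
            pred = [p + lr * (m1 if x[j] else m0) for x, p in zip(X, pred)]
        return pred

    mse0 = sum((yi - sum(y) / len(y)) ** 2 for yi in y) / len(y)
    pred = boost(X, y)
    mse = sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / len(y)
    print(mse0, mse)  # boosting drives training error below the mean baseline
    ```

    The stumps that the model selects first indicate which mutations carry the most signal about severity, which is the "home in" behavior described above.
    
    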

    Neural networks predict forces in jammed granular solids

    Granular matter is all around us. Examples include sand, rice, nuts, coffee and even snow. These materials are made of solid particles that are large enough not to experience thermal fluctuations. Instead, their state is determined by mechanical influences: shaking produces “granular gases,” while compression produces “granular solids.” An unusual feature of such solids is that forces within the material concentrate along essentially linear paths called force chains, whose shape resembles that of lightning. Apart from granular solids, other complex solids such as dense emulsions, foams and even groups of cells can exhibit these force chains. Researchers led by the University of Göttingen used machine learning and computer simulations to predict the position of force chains. The results were published in Nature Communications.
    The formation of force chains is highly sensitive to the way the individual grains interact. This makes it very difficult to predict where force chains will form. Combining computer simulations with tools from artificial intelligence, researchers at the Institute for Theoretical Physics, University of Göttingen, and at Ghent University tackled this challenge by developing a novel tool for predicting the formation of force chains in both frictionless and frictional granular matter. The approach uses a machine learning method known as a graph neural network (GNN). The researchers have demonstrated that GNNs can be trained in a supervised approach to predict the position of force chains that arise while deforming a granular system, given an undeformed static structure.
    “Understanding force chains is crucial in describing the mechanical and transport properties of granular solids and this applies in a wide range of circumstances — for example how sound propagates or how sand or a pack of coffee grains respond to mechanical deformation,” explains Dr Rituparno Mandal, Institute for Theoretical Physics, University of Göttingen. Mandal adds, “A recent study even suggests that living creatures such as ants exploit the effects of force chain networks when removing grains of soil for efficient tunnel excavation.”
    “We experimented with different machine learning-based tools and realised that a trained GNN can generalize remarkably well from training data, allowing it to predict force chains in new undeformed samples,” says Mandal. “We were fascinated by just how robust the method is: it works exceptionally well for many types of computer generated granular materials. We are currently planning to extend this to experimental systems in the lab,” added Corneel Casert, joint first author Ghent University. Senior author, Professor Peter Sollich, Institute for Theoretical Physics, University of Göttingen, explains: “The efficiency of this new method is surprisingly high for different scenarios with varying system size, particle density, and composition of different particles types. This means it will be useful in understanding force chains for many types of granular matter and systems.”
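    The paper's GNN architecture is not detailed in this summary, but the core operation of any graph neural network on granular data can be sketched: each grain is a node, each contact an edge, and every layer lets a grain aggregate messages from the grains it touches. The sketch below uses random weights in place of trained ones and a hypothetical five-grain contact graph.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy contact graph: 5 grains, edges = contacts (hypothetical)
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]
    n, d = 5, 3
    feats = rng.normal(size=(n, d))  # per-grain features, e.g. position, radius

    def message_pass(h, edges, W_self, W_nbr):
        """One GNN layer: each grain aggregates messages from its contacts."""
        agg = np.zeros_like(h)
        for i, j in edges:  # contacts are symmetric
            agg[i] += h[j]
            agg[j] += h[i]
        return np.tanh(h @ W_self + agg @ W_nbr)

    W_self = rng.normal(size=(d, d))
    W_nbr = rng.normal(size=(d, d))
    W_out = rng.normal(size=(d, 1))

    h = message_pass(feats, edges, W_self, W_nbr)  # layer 1
    h = message_pass(h, edges, W_self, W_nbr)      # layer 2
    scores = 1 / (1 + np.exp(-(h @ W_out)))        # per-grain force-chain score
    print(scores.ravel())  # values in (0, 1); training would fit the weights
    ```

    In the supervised setting described above, the weights would be trained so that these per-grain scores match the force chains observed after deforming the simulated packing.
    
    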
    Story Source:
    Materials provided by University of Göttingen. Note: Content may be edited for style and length.

    Why 'erasure' could be key to practical quantum computing

    Researchers have discovered a new method for correcting errors in the calculations of quantum computers, potentially clearing a major obstacle to a powerful new realm of computing.
    In conventional computers, fixing errors is a well-developed field. Every cellphone requires checks and fixes to send and receive data over messy airwaves. Quantum computers offer enormous potential to solve certain complex problems that are impossible for conventional computers, but this power depends on harnessing extremely fleeting behaviors of subatomic particles. These computing behaviors are so ephemeral that even looking in on them to check for errors can cause the whole system to collapse.
    In a theoretical paper published Aug. 9 in Nature Communications, an interdisciplinary team led by Jeff Thompson, an associate professor of electrical and computer engineering at Princeton, and collaborators Yue Wu and Shruti Puri at Yale University and Shimon Kolkowitz at the University of Wisconsin-Madison, showed that they could dramatically improve a quantum computer’s tolerance for faults, and reduce the amount of redundant information needed to isolate and fix errors. The new technique increases the acceptable error rate four-fold, from 1% to 4%, which is practical for quantum computers currently in development.
    “The fundamental challenge to quantum computers is that the operations you want to do are noisy,” said Thompson, meaning that calculations are prone to myriad modes of failure.
    In a conventional computer, an error can be as simple as a bit of memory accidentally flipping from a 1 to a 0, or as messy as one wireless router interfering with another. A common approach for handling such faults is to build in some redundancy, so that each piece of data is compared with duplicate copies. However, that approach increases the amount of data needed and creates more possibilities for errors. Therefore, it only works when the vast majority of information is already correct. Otherwise, checking wrong data against wrong data leads deeper into a pit of error.
    “If your baseline error rate is too high, redundancy is a bad strategy,” Thompson said. “Getting below that threshold is the main challenge.”
    Rather than focusing solely on reducing the number of errors, Thompson’s team essentially made errors more visible. The team delved deeply into the actual physical causes of error and engineered their system so that the most common source of error effectively eliminates the damaged data rather than simply corrupting it. Thompson said this behavior represents a particular kind of error known as an “erasure error,” which is fundamentally easier to weed out than data that is corrupted but still looks like all the other data.
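    A classical repetition-code analogy (not the quantum code in the paper) shows why known error locations help so much. With three copies of a bit, majority voting fails as soon as two copies are silently flipped; but if the two damaged copies announce themselves as erased, any surviving copy still recovers the bit.

    ```python
    def decode_majority(bits):
        """Unknown error locations: majority vote over the copies."""
        return 1 if sum(bits) > len(bits) / 2 else 0

    def decode_erasure(bits):
        """Known error locations (None = erased): any surviving copy wins."""
        survivors = [b for b in bits if b is not None]
        return survivors[0] if survivors else None

    sent = 0
    # Two copies silently flipped: majority vote decodes the wrong value
    print(decode_majority([1, 1, 0]))      # 1, not the 0 that was sent
    # Two copies erased at known positions: still decoded correctly
    print(decode_erasure([None, None, 0])) # 0, as sent
    ```

    This is the sense in which converting the dominant physical error into an erasure, as the team did, raises the tolerable error rate: the decoder no longer has to guess which data is damaged.
    
    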

    Push, pull or swirl: The many movements of cilia

    Cilia are tiny, hair-like structures on cells throughout our bodies that beat rhythmically to serve a variety of functions when they are working properly, including circulating cerebrospinal fluid in brains and transporting eggs in fallopian tubes.
    Defective cilia can lead to disorders including situs inversus — a condition where a person’s organs develop on the side opposite of where they usually are.
    Researchers know about many of cilia’s roles, but not exactly how they beat in the first place. This knowledge would be a step toward better understanding, and ultimately being able to treat, cilia-related diseases.
    A team of McKelvey School of Engineering researchers at Washington University in St. Louis, led by Louis Woodhams, senior lecturer, and Philip V. Bayly, the Lee Hunter Distinguished Professor and chair of the Department of Mechanical Engineering & Materials Science, have developed a mathematical model of the cilium in which beating arises from a mechanical instability due to steady forces generated by the cilium motor protein, dynein.
    Results of the research appeared on the cover of the August issue of Journal of the Royal Society Interface.
    Bayly’s lab has been working with cilia as a model to study vibration, wave motion and instability in mechanical and biomedical systems. As intricate nanomachines in their own right, cilia could inspire similarly propelled machines that can do useful tasks on the tiniest scales, maybe even for chemical sensing or drug delivery in the human body.
    The new model will allow the team to explore what happens when the motor protein exerts different forces, or when internal structures are more or less stiff, as a result of genetic or environmental factors.
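    One way to see how a steady force can produce beating is a minimal two-mode linear model in which a circulatory (follower-type) force f couples the modes: below a critical f the damped system is stable, but above it an eigenvalue pair crosses into the right half-plane and sustained oscillation (flutter) sets in. This toy model, with hypothetical stiffness k and damping c, only illustrates the instability mechanism; it is not the team's cilium model.

    ```python
    import numpy as np

    def growth_rate(f, k=1.0, c=0.2):
        """Largest real part of the eigenvalues of x'' + c x' + K(f) x = 0,
        where K(f) = [[k, f], [-f, k]] is a circulatory (follower-force)
        stiffness matrix: steady force f couples the two bending modes."""
        K = np.array([[k, f], [-f, k]])
        A = np.block([[np.zeros((2, 2)), np.eye(2)],
                      [-K, -c * np.eye(2)]])
        return np.linalg.eigvals(A).real.max()

    # For this model the flutter threshold is f = c * sqrt(k) = 0.2
    print(growth_rate(0.1))  # negative: damped, no beating
    print(growth_rate(0.4))  # positive: oscillations grow, beating onset
    ```

    Varying f, k and c in such a model is the kind of question the new cilium model is built to answer: how beating changes when the dynein forces or the internal stiffness change.
    
    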
    Story Source:
    Materials provided by Washington University in St. Louis. Original written by Beth Miller. Note: Content may be edited for style and length.