More stories

  • New qubit circuit enables quantum operations with higher accuracy

    In the future, quantum computers may be able to solve problems that are far too complex for today’s most powerful supercomputers. To realize this promise, quantum versions of error correction codes must be able to account for computational errors faster than they occur.
    However, today’s quantum computers are not yet robust enough to realize such error correction at commercially relevant scales.
    On the way to overcoming this roadblock, MIT researchers demonstrated a novel superconducting qubit architecture that can perform operations between qubits — the building blocks of a quantum computer — with much greater accuracy than scientists have previously been able to achieve.
    The researchers utilize a relatively new type of superconducting qubit, known as fluxonium, which can have a much longer lifespan than more commonly used superconducting qubits.
    Their architecture involves a special coupling element between two fluxonium qubits that enables them to perform logical operations, known as gates, in a highly accurate manner. It suppresses a type of unwanted background interaction that can introduce errors into quantum operations.
    This approach enabled two-qubit gates that exceeded 99.9 percent accuracy and single-qubit gates with 99.99 percent accuracy. In addition, the researchers implemented this architecture on a chip using an extensible fabrication process.
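    To get a feel for why such fidelity figures matter, the sketch below compounds the reported per-gate fidelities over hypothetical circuits. It uses a simple independent-error model and made-up gate counts; it is not a calculation from the paper.

    ```python
    # Illustrative only: compound the reported per-gate fidelities over
    # hypothetical circuits, assuming independent gate errors (total
    # fidelity = product of per-gate fidelities). Gate counts are made up.
    f_1q = 0.9999  # single-qubit gate fidelity reported in the article
    f_2q = 0.999   # two-qubit gate fidelity reported in the article

    for n_1q, n_2q in [(100, 50), (1_000, 500), (10_000, 5_000)]:
        total = (f_1q ** n_1q) * (f_2q ** n_2q)
        print(f"{n_1q} single-qubit and {n_2q} two-qubit gates: "
              f"estimated circuit fidelity ~ {total:.3f}")
    ```

    Even at these fidelities, errors compound over long circuits, which is why both better gates and error correction matter for scaling.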
    “Building a large-scale quantum computer starts with robust qubits and gates. We showed a highly promising two-qubit system and laid out its many advantages for scaling. Our next step is to increase the number of qubits,” says Leon Ding PhD ’23, who was a physics graduate student in the Engineering Quantum Systems (EQuS) group and is the lead author of a paper on this architecture.

  • Did life exist on Mars? Other planets? With AI’s help, we may know soon

    Scientists have discovered a simple and reliable test for signs of past or present life on other planets — “the holy grail of astrobiology.”
    In the journal Proceedings of the National Academy of Sciences, a seven-member team, funded by the John Templeton Foundation and led by Jim Cleaves and Robert Hazen of the Carnegie Institution for Science, reports that, with 90% accuracy, their artificial intelligence-based method distinguished modern and ancient biological samples from those of abiotic origin.
    “This routine analytical method has the potential to revolutionize the search for extraterrestrial life and deepen our understanding of both the origin and chemistry of the earliest life on Earth,” says Dr. Hazen. “It opens the way to using smart sensors on robotic spacecraft, landers and rovers to search for signs of life before the samples return to Earth.”
    Most immediately, the new test could reveal the history of mysterious, ancient rocks on Earth, and possibly that of samples already collected by the Mars Curiosity rover. The latter tests could be conducted using the rover’s onboard analytical instrument, nicknamed “SAM” (for Sample Analysis at Mars).
    “We’ll need to tweak our method to match SAM’s protocols, but it’s possible that we already have data in hand to determine if there are molecules on Mars from an organic Martian biosphere.”
    “The search for extraterrestrial life remains one of the most tantalizing endeavors in modern science,” says lead author Jim Cleaves of the Earth and Planets Laboratory, Carnegie Institution for Science, Washington, DC.
    “The implications of this new research are many, but there are three big takeaways: First, at some deep level, biochemistry differs from abiotic organic chemistry; second, we can look at Mars and ancient Earth samples to tell if they were once alive; and third, it is likely this new method could distinguish alternative biospheres from those of Earth, with significant implications for future astrobiology missions.”
    The innovative analytical method does not rely simply on identifying a specific molecule or group of compounds in a sample.
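    The article does not detail the computational pipeline, but the general idea of classifying whole measurement patterns rather than single marker molecules can be sketched as below. The feature matrix, labels and model choice are placeholders, not the team’s actual method or data, and no particular accuracy should be expected from them.

    ```python
    # Illustrative sketch only: classify samples as biotic vs. abiotic from
    # whole-measurement feature vectors rather than from any single marker
    # molecule. The data, labels and model choice are placeholders, not the
    # team's actual pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((120, 300))        # 120 samples x 300 measured features (placeholder)
    y = rng.integers(0, 2, size=120)  # 1 = biotic, 0 = abiotic (placeholder labels)

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(model, X, y, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```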

  • Machine learning unravels mysteries of atomic shapes

    New research has used machine learning to find the properties of atomic pieces of geometry, in pioneering work that could drive the development of new results in mathematics.
    Mathematicians from the University of Nottingham and Imperial College London have, for the first time, used machine learning to expand and accelerate work identifying ‘atomic shapes’ that form the basic pieces of geometry in higher dimensions. Their findings have been published in Nature Communications.
    The research group started their work to create a Periodic Table for shapes several years ago. The atomic pieces are called Fano varieties. The team associates a sequence of numbers, called quantum periods, to each shape, giving a ‘barcode’ or ‘fingerprint’ that describes the shape. Their recent breakthrough uses a new machine learning methodology to sift very quickly through these barcodes, identifying shapes and their properties, such as the dimension of each shape.
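    As a rough illustration of the approach described (not the authors’ code, data or model), one could train an off-the-shelf classifier to predict a shape’s dimension from its numeric barcode:

    ```python
    # Illustrative sketch only: predict a shape's dimension from its numeric
    # "barcode". The arrays below are random placeholders standing in for
    # quantum-period data; the model choice is an assumption.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    barcodes = rng.random((5_000, 100))           # placeholder barcode features
    dimensions = rng.integers(2, 11, size=5_000)  # placeholder dimensions

    X_train, X_test, y_train, y_test = train_test_split(
        barcodes, dimensions, test_size=0.2, random_state=1)

    clf = MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=200)
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    ```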
    Alexander Kasprzyk is Associate Professor in Geometry in the School of Mathematical Sciences at the University of Nottingham and was one of the authors on the paper. He explains: “For mathematicians, the key step is working out what the pattern is in a given problem. This can be very difficult, and some mathematical theories can take years to discover.”
    Professor Tom Coates from the Department of Mathematics at Imperial College London and co-author on the paper said, “This is where Artificial Intelligence could really revolutionise Mathematics as we have shown that machine learning is a powerful tool for spotting patterns in complex domains like algebra and geometry.”
    Sara Veneziale, co-author and a PhD student in the team, continues: “We’re really excited about the fact that machine learning can be used in Pure Mathematics. This will accelerate new insights across the field.”

  • Drug discovery on an unprecedented scale

    Boosting virtual screening with machine learning allowed for a 10-fold time reduction in the processing of 1.56 billion drug-like molecules. Researchers from the University of Eastern Finland teamed up with industry and supercomputers to carry out one of the world’s largest virtual drug screens.
    In their efforts to find novel drug molecules, researchers often rely on fast computer-aided screening of large compound libraries to identify agents that can block a drug target. Such a target can, for instance, be an enzyme that enables a bacterium to withstand antibiotics or a virus to infect its host. The size of these collections of small organic molecules has surged massively over the past years. With libraries growing faster than the speed of the computers needed to process them, the screening of a modern billion-scale compound library against only a single drug target can take several months or years — even when using state-of-the-art supercomputers. Faster approaches are therefore urgently needed.
    In a recent study published in the Journal of Chemical Information and Modeling, Dr Ina Pöhner and colleagues from the University of Eastern Finland’s School of Pharmacy teamed up with the host organisation of Finland’s powerful supercomputers, CSC — IT Center for Science Ltd. — and industrial collaborators from Orion Pharma to study the potential of machine learning to speed up giga-scale virtual screens.
    Before applying artificial intelligence to accelerate the screening, the researchers first established a baseline: In a virtual screening campaign of unprecedented size, 1.56 billion drug-like molecules were evaluated against two pharmacologically relevant targets over almost six months with the help of the supercomputers Mahti and Puhti, and molecular docking. Docking is a computational technique that fits the small molecules into a binding region of the target and computes a “docking score” to express how well they fit. This way, docking scores were first determined for all 1.56 billion molecules.
    Next, the results were compared to a machine learning-boosted screen using HASTEN, a tool developed by Dr Tuomo Kalliokoski from Orion Pharma, a co-author of the study. “HASTEN uses machine learning to learn the properties of molecules and how those properties affect how well the compounds score. When presented with enough examples drawn from conventional docking, the machine learning model can predict docking scores for other compounds in the library much faster than the brute-force docking approach,” Kalliokoski explains.
    Indeed, with only 1% of the whole library docked and used as training data, the tool correctly identified 90% of the best-scoring compounds within less than ten days.
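    The overall workflow can be sketched as follows. This is a generic illustration of machine learning-boosted docking, not the HASTEN implementation; the library size, fingerprint features, scores and model choice are placeholder assumptions.

    ```python
    # Generic illustration of machine learning-boosted virtual screening,
    # not the HASTEN tool itself:
    #   1) dock a ~1% sample of the library conventionally,
    #   2) train a regressor to predict docking scores from molecular features,
    #   3) send only the best-predicted compounds to full docking.
    # Library size, fingerprints, scores and the model are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    n_library = 50_000                       # stand-in for a billion-scale library
    features = rng.random((n_library, 128))  # placeholder molecular fingerprints

    # Step 1: conventional docking on a 1% sample (placeholder scores;
    # lower score = better fit in this sketch).
    sample = rng.choice(n_library, size=n_library // 100, replace=False)
    sample_scores = rng.normal(size=sample.size)

    # Step 2: learn to predict docking scores from molecular features.
    model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
    model.fit(features[sample], sample_scores)

    # Step 3: rank the rest of the library by predicted score.
    predicted = model.predict(features)
    top_candidates = np.argsort(predicted)[:1_000]
    print("compounds prioritized for full docking:", top_candidates[:10])
    ```

    The point of the last step is that only a small, promising slice of the library ever needs full docking, which is where the time savings come from.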
    The study represented the first rigorous comparison of a machine learning-boosted docking tool with a conventional docking baseline on the giga-scale. “We found the machine learning-boosted tool to reliably and repeatedly reproduce the majority of the top-scoring compounds identified by conventional docking in a significantly shortened time frame,” Pöhner says.
    “This project is an excellent example of collaboration between academia and industry, and how CSC can offer one of the best computational resources in the world. By combining our ideas, resources and technology, it was possible to reach our ambitious goals,” continues Professor Antti Poso, who leads the computational drug discovery group within the University of Eastern Finland’s DrugTech Research Community.
    Studies on a comparable scale remain elusive in most settings. The authors have therefore released the large datasets generated in the study into the public domain: a ready-to-use screening library for docking that lets others speed up their own screening efforts, and the complete docking results for all 1.56 billion compounds against the two targets as benchmarking data. These data should encourage the future development of time- and resource-saving tools and ultimately advance the field of computational drug discovery.

  • Efficient fuel-molecule sieving using graphene

    A research team has developed a new method that prevents the crossover of large fuel molecules and suppresses electrode degradation in advanced fuel cells that run on methanol or formic acid. The fuel molecules are sieved out via selective proton transfer through chemically functionalized holey graphene sheets that act as proton-exchange membranes, with steric hindrance at the holes blocking the larger fuel molecules.
    Demand for direct methanol and formic acid fuel cell technology has been growing as part of the push toward carbon neutrality. In this technology, methanol or formic acid is used as an e-fuel for generating electricity. The fuel cells generate electricity via proton transfer; however, conventional proton-exchange membranes suffer from the “crossover phenomenon,” in which fuel molecules are also transferred between the anode and the cathode. There, the fuel molecules are needlessly oxidized and the electrodes are deactivated.
    In this study, the researchers developed a new proton-exchange membrane comprising graphene sheets with 5-10 nm-diameter holes, chemically modified with sulfanilic functional groups that place sulfo groups around the holes. Owing to steric hindrance from these functional groups, the graphene membrane suppresses the crossover phenomenon by blocking the penetration of fuel molecules while maintaining high proton conductivity, which the researchers believe is a first.
    To date, conventional approaches to inhibiting fuel-molecule migration have involved increasing the membrane thickness or sandwiching in two-dimensional materials, which in turn reduces proton conductivity. In this study, the researchers investigated structures that use steric hindrance to inhibit the migration of fuel molecules, including migration by electro-osmotic drag. They found that the sulfanilic-functionalized graphene membrane markedly suppresses electrode degradation compared with commercially available Nafion membranes while maintaining the proton conductivity required for fuel cells.
    Furthermore, simply pasting the graphene membrane onto a conventional proton-exchange membrane can suppress the crossover phenomenon. This study thus contributes to the development of advanced fuel cells as a new alternative to hydrogen-based fuel cells.

  • AI increases precision in plant observation

    Artificial intelligence (AI) can help plant scientists collect and analyze unprecedented volumes of data, which would not be possible using conventional methods. Researchers at the University of Zurich (UZH) have now used big data, machine learning and field observations in the university’s experimental garden to show how plants respond to changes in the environment.
    Climate change is making it increasingly important to know how plants can survive and thrive in a changing environment. Conventional experiments in the lab have shown that plants accumulate pigments in response to environmental factors. To date, such measurements have been made by taking samples, which requires part of the plant to be removed and thus damages it. “This labor-intensive method isn’t viable when thousands or millions of samples are needed. Moreover, taking repeated samples damages the plants, which in turn affects observations of how plants respond to environmental factors. There hasn’t been a suitable method for the long-term observation of individual plants within an ecosystem,” says Reiko Akiyama, first author of the study.
    With the support of UZH’s University Research Priority Program (URPP) “Evolution in Action,” a team of researchers has now developed a method that enables scientists to observe plants in nature with great precision. PlantServation is a method that incorporates robust image-acquisition hardware and deep learning-based software to analyze field images, and it works in any kind of weather.
    Millions of images support evolutionary hypothesis of robustness
    Using PlantServation, the researchers collected (top-view) images of Arabidopsis plants on the experimental plots of UZH’s Irchel Campus across three field seasons (lasting five months from fall to spring) and then analyzed the more than four million images using machine learning. The data recorded the species-specific accumulation of a plant pigment called “anthocyanin” as a response to seasonal and annual fluctuations in temperature, light intensity and precipitation.
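    As a much-simplified illustration of the image-analysis step (PlantServation itself relies on deep learning-based segmentation), a crude per-image red-pigmentation index could be computed as below; the file name, pixel mask and index are placeholder assumptions, not the study’s method.

    ```python
    # Much-simplified illustration of extracting a pigment-related signal
    # from a top-view plant image. This is NOT the PlantServation pipeline
    # (which uses deep learning-based segmentation); the file name, the
    # crude pixel mask and the index below are placeholder assumptions.
    import numpy as np
    from PIL import Image

    img = np.asarray(Image.open("plot_topview.jpg").convert("RGB"), dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]

    # Crude mask for plant pixels: noticeably greener or redder than blue.
    plant_mask = (g > 1.1 * b) | (r > 1.1 * b)

    # Simple red-excess index over the plant pixels (higher ~ more red pigment).
    red_index = np.mean((r[plant_mask] - g[plant_mask]) /
                        (r[plant_mask] + g[plant_mask] + 1e-6))
    print(f"red-pigmentation index: {red_index:.3f}")
    ```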
    PlantServation also enabled the scientists to experimentally replicate what happens after the natural speciation of a hybrid polyploid species. These species develop from a duplication of the entire genome of their ancestors, a common type of species diversification in plants. Many wild and cultivated plants such as wheat and coffee originated in this way.
    In the current study, the anthocyanin content of the hybrid polyploid species A. kamchatica resembled that of its two ancestors: from fall to winter its anthocyanin content was similar to that of the ancestor species originating from a warm region, and from winter to spring it resembled the other species from a colder region. “The results of the study thus confirm that these hybrid polyploids combine the environmental responses of their progenitors, which supports a long-standing hypothesis about the evolution of polyploids,” says Rie Shimizu-Inatsugi, one of the study’s two corresponding authors.

  • Efficient training for artificial intelligence

    Artificial intelligence not only delivers impressive performance, but also creates significant demand for energy. The more demanding the tasks for which it is trained, the more energy it consumes. Víctor López-Pastor and Florian Marquardt, two scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, present a method by which artificial intelligence could be trained much more efficiently. Their approach relies on physical processes instead of the digital artificial neural networks currently used.
    The amount of energy required to train GPT-3, which makes ChatGPT an eloquent and apparently well-informed chatbot, has not been revealed by OpenAI, the company behind that artificial intelligence (AI). According to the German statistics company Statista, this would require 1000 megawatt hours — about as much as 200 German households with three or more people consume annually. While this energy expenditure has allowed GPT-3 to learn whether the word ‘deep’ is more likely to be followed by the word ‘sea’ or ‘learning’ in its data sets, by all accounts it has not understood the underlying meaning of such phrases.
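    As a toy illustration of that kind of next-word statistic (and nothing like GPT-3’s actual architecture or training), simply counting word pairs in a small, made-up text already yields probabilities for what follows ‘deep’:

    ```python
    # Toy illustration: estimate which word is likely to follow "deep" by
    # counting word pairs (bigrams) in a tiny, made-up text. This is a
    # deliberate simplification, not how GPT-3 is trained.
    from collections import Counter

    corpus = ("the deep sea is dark and the deep learning model is large "
              "and deep learning needs data while the deep sea needs light").split()

    followers = Counter(b for a, b in zip(corpus, corpus[1:]) if a == "deep")
    total = sum(followers.values())
    for word, count in followers.most_common():
        print(f"P({word!r} | 'deep') = {count / total:.2f}")
    ```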
    Neural networks on neuromorphic computers
    In order to reduce the energy consumption of computers, and particularly of AI applications, several research institutions have in the past few years been investigating an entirely new concept of how computers could process data in the future. The concept is known as neuromorphic computing. Although this sounds similar to artificial neural networks, it in fact has little to do with them, as artificial neural networks run on conventional digital computers. There, the software, or more precisely the algorithm, is modelled on the brain’s way of working, but digital computers serve as the hardware. They perform the calculation steps of the neural network in sequence, one after the other, with processor and memory kept separate.
    “The data transfer between these two components alone devours large quantities of energy when a neural network trains hundreds of billions of parameters, i.e. synapses, with up to one terabyte of data,” says Florian Marquardt, director of the Max Planck Institute for the Science of Light and professor at the University of Erlangen. The human brain is entirely different and would probably never have been evolutionarily competitive, had it worked with an energy efficiency similar to that of computers with silicon transistors. It would most likely have failed due to overheating.
    The brain is characterized by undertaking the numerous steps of a thought process in parallel and not sequentially. The nerve cells, or more precisely the synapses, are both processor and memory combined. Various systems around the world are being treated as possible candidates for the neuromorphic counterparts to our nerve cells, including photonic circuits utilizing light instead of electrons to perform calculations. Their components serve simultaneously as switches and memory cells.
    A self-learning physical machine optimizes its synapses independently
    Together with Víctor López-Pastor, a doctoral student at the Max Planck Institute for the Science of Light, Florian Marquardt has now devised an efficient training method for neuromorphic computers. “We have developed the concept of a self-learning physical machine,” explains Florian Marquardt. “The core idea is to carry out the training in the form of a physical process, in which the parameters of the machine are optimized by the process itself.”