More stories

  • Scientists deliver quantum algorithm to develop new materials and chemistry

U.S. Naval Research Laboratory (NRL) scientists published the Cascaded Variational Quantum Eigensolver (CVQE) algorithm in a recent Physical Review Research article; it is expected to become a powerful tool for investigating the physical properties of electronic systems.
The CVQE algorithm is a variant of the Variational Quantum Eigensolver (VQE) algorithm that requires each quantum circuit in a set to be executed only once, rather than at every iteration of the parameter optimization process, thereby increasing computational throughput.
    “Both algorithms produce a quantum state close to the ground state of a system, which is used to determine many of the system’s physical properties,” said John Stenger, Ph.D., a Theoretical Chemistry Section research physicist. “Calculations that previously took months can now be performed in hours.”
    The CVQE algorithm uses a quantum computer to probe the needed probability mass functions and a classical computer to perform the remaining calculations, including the energy minimization.
    “Finding the minimum energy is computationally hard as the size of the state space grows exponentially with the system size,” said Steve Hellberg, Ph.D., a Theory of Advanced Functional Materials Section research physicist. “Except for very small systems, even the world’s most powerful supercomputers are unable to find the exact ground state.”
To address this challenge, scientists use a quantum computer with a qubit register whose state space also grows exponentially, in this case with the number of qubits. By representing the states of a physical system on the state space of the register, a quantum computer can simulate states in the exponentially large representation space of the system.
    Data can subsequently be extracted by quantum measurements. As quantum measurements are not deterministic, the quantum circuit executions must be repeated multiple times to estimate probability distributions describing the states, a process known as sampling. Variational quantum algorithms, including the CVQE algorithm, identify trial states by a set of parameters that are optimized to minimize the energy.
    “The key difference between the original VQE method and the new CVQE method is that the sampling and optimization processes have been decoupled in the latter such that the sampling can be performed exclusively on the quantum computer and the parameters processed exclusively on a classical computer,” said Dan Gunlycke, D.Phil., Theoretical Chemistry Section Head, who also leads the NRL quantum computing effort. “The new approach also has other benefits. The form of the solution space does not have to comport with the symmetry requirements of the qubit register, and therefore, it is much easier to shape the solution space and implement symmetries of the system and other physically motivated constraints, which will ultimately lead to more accurate predictions of electronic system properties.”
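To illustrate the decoupling Gunlycke describes, here is a minimal, hypothetical Python sketch (not NRL’s implementation; every name, distribution, and Hamiltonian in it is invented for illustration). The quantum step, sampling probability mass functions from a fixed set of circuits, happens once; the optimizer then iterates purely on a classical computer over the cached samples.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_qubits, n_shots, n_circuits = 4, 10_000, 3

# --- Quantum step (executed once): sample probability mass functions ---
def sample_pmf(circuit_id):
    """Stand-in for running one fixed quantum circuit and histogramming
    the measured bitstrings into an estimated PMF over basis states."""
    probs = rng.dirichlet(np.ones(2**n_qubits))   # fake measurement statistics
    counts = rng.multinomial(n_shots, probs)
    return counts / n_shots

pmfs = [sample_pmf(c) for c in range(n_circuits)]  # cached; no further quantum calls

# --- Classical step (iterated): minimize energy over the parameters ---
H_diag = rng.normal(size=2**n_qubits)              # toy diagonal Hamiltonian

def energy(theta):
    """Post-process the cached PMFs classically; an illustrative
    parameterization mixes the sampled distributions."""
    weights = np.cos(theta) ** 2
    weights /= weights.sum()
    mixed_pmf = sum(w * p for w, p in zip(weights, pmfs))
    return float(mixed_pmf @ H_diag)

result = minimize(energy, x0=np.full(n_circuits, 0.3), method="Nelder-Mead")
print("estimated ground-state energy:", result.fun)
```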
    Quantum computing is a component of quantum science, which has been designated as a Critical Technology Area within the USD(R&E) Technology Vision for an Era of Competition by the Under Secretary of Defense for Research and Engineering Heidi Shyu.
“Understanding the properties of quantum-mechanical systems is essential in the development of new materials and chemistry for the Navy and Marine Corps,” Gunlycke said. “Corrosion, for instance, is an omnipresent challenge costing the Department of Defense billions every year. The CVQE algorithm can be used to study the chemical reactions causing corrosion and provide critical information to our existing anticorrosion teams in their quest to develop better coatings and additives.”

  • The world is one step closer to secure quantum communication on a global scale

    Researchers at the University of Waterloo’s Institute for Quantum Computing (IQC) have brought together two Nobel prize-winning research concepts to advance the field of quantum communication.
    Scientists can now efficiently produce nearly perfect entangled photon pairs from quantum dot sources.
    Entangled photons are particles of light that remain connected, even across large distances, and the 2022 Nobel Prize in Physics recognized experiments on this topic. Combining entanglement with quantum dots, a technology recognized with the Nobel Prize in Chemistry in 2023, the IQC research team aimed to optimize the process for creating entangled photons, which have a wide variety of applications, including secure communications.
    “The combination of a high degree of entanglement and high efficiency is needed for exciting applications such as quantum key distribution or quantum repeaters, which are envisioned to extend the distance of secure quantum communication to a global scale or link remote quantum computers,” said Dr. Michael Reimer, professor at IQC and Waterloo’s Department of Electrical and Computer Engineering. “Previous experiments only measured either near-perfect entanglement or high efficiency, but we’re the first to achieve both requirements with a quantum dot.”
By embedding semiconductor quantum dots into a nanowire, the researchers created a source that produces near-perfect entangled photons 65 times more efficiently than previous work. This new source, developed in collaboration with the National Research Council of Canada in Ottawa, can be excited with lasers to generate entangled pairs on command. The researchers then used high-resolution single-photon detectors provided by Single Quantum in the Netherlands to boost the degree of entanglement.
    “Historically, quantum dot systems were plagued with a problem called fine structure splitting, which causes an entangled state to oscillate over time. This meant that measurements taken with a slow detection system would prevent the entanglement from being measured,” said Matteo Pennacchietti, a PhD student at IQC and Waterloo’s Department of Electrical and Computer Engineering. “We overcame this by combining our quantum dots with a very fast and precise detection system. We can basically take a timestamp of what the entangled state looks like at each point during the oscillations, and that’s where we have the perfect entanglement.”
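A toy numerical sketch of the effect Pennacchietti describes, with illustrative numbers rather than the team’s data: fine structure splitting makes the two-photon state’s relative phase oscillate in time, so a slow detector averages the polarization correlation toward zero, while fast, precise timestamping recovers it within each narrow time bin.

```python
import numpy as np

# Fine structure splitting S drives the emitted state between the Bell
# states (|HH> + |VV>)/sqrt(2) and (|HH> - |VV>)/sqrt(2); in a fixed
# measurement basis the correlation oscillates as cos(phi), phi = S*t/hbar.
hbar = 1.0
S = 2 * np.pi * 0.5                      # illustrative splitting
t = np.linspace(0.0, 10.0, 2000)         # emission-time axis
correlation = np.cos(S * t / hbar)

slow_detector = correlation.mean()       # time-averaged: entanglement washed out
fast_bins = correlation[::200]           # "timestamped" values, one per time bin
print(f"slow detector sees:  {slow_detector:+.3f}")          # near zero
print(f"fast detector sees: {np.round(fast_bins, 2)}")       # magnitudes near 1
```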
To showcase future communications applications, Reimer and Pennacchietti worked with Dr. Norbert Lütkenhaus and Dr. Thomas Jennewein, both IQC faculty members and professors in Waterloo’s Department of Physics and Astronomy, and their teams. Using their new quantum dot entanglement source, the researchers simulated a secure communications method known as quantum key distribution, proving that the quantum dot source holds significant promise in the future of secure quantum communications.

  • Rectifying AI’s usage in the quest for thermoelectric materials

Using AI, a team of researchers has identified a candidate thermoelectric material with potentially favorable properties. The group navigated AI’s conventional pitfalls, providing a prime example of how AI can revolutionize materials science.
    Details of their finding were published in the journal Science China Materials on March 8, 2024.
“Traditional methods of finding suitable materials involve trial and error, which is time-consuming and often expensive,” says Hao Li, associate professor at Tohoku University’s Advanced Institute for Materials Research (WPI-AIMR) and corresponding author of the paper. “AI transforms this by combing through databases to identify potential materials that can then be experimentally verified.”
Still, challenges remain. Large-scale materials datasets sometimes contain errors, and overfitting of predicted temperature-dependent properties is another common pitfall. Overfitting occurs when a model learns to capture noise or random fluctuations in the training data rather than the underlying pattern or relationship. As a result, the model performs well on the training data but fails to generalize to new, unseen data. When predicting temperature-dependent properties, overfitting can therefore produce inaccurate predictions whenever the model encounters conditions outside the range of the training data.
Li and his colleagues sought to overcome these issues while developing a thermoelectric material. Such materials convert heat into electrical energy, or vice versa, so capturing the temperature dependence accurately is critical.
    “First, we performed a series of rational actions to identify and discard questionable data, obtaining 92,291 data points comprising 7,295 compositions and different temperatures from the Starrydata2 database — an online database that collects digital data from published papers,” states Li.
Following this, the researchers implemented a composition-based cross-validation method. Crucially, they emphasized that data points with the same composition but different temperatures must not be split across training and test sets, as such leakage leads to overfitting.
Then the researchers built machine learning models using the Gradient Boosting Decision Tree method. The model achieved remarkable R² values of 0.89, ~0.90, and ~0.89 on the training dataset, the test dataset, and new out-of-sample experimental data released in 2023, respectively, demonstrating the model’s accuracy in predicting newly available materials.
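A minimal sketch of that recipe, assuming scikit-learn and a synthetic dataset (the features, target, and sizes below are invented for illustration and are not the paper’s pipeline): GroupKFold keeps every temperature point of a given composition in the same fold, and a gradient-boosted tree model is scored by R².

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import GroupKFold

# Toy data: each composition is measured at several temperatures,
# mimicking (composition, temperature) rows from a thermoelectric database.
rng = np.random.default_rng(42)
n_comps, temps = 200, np.linspace(300, 800, 6)
comp_features = rng.normal(size=(n_comps, 5))
coef = rng.normal(size=5)                      # hidden "true" relationship

X, y, groups = [], [], []
for i, f in enumerate(comp_features):
    for T in temps:
        X.append(np.append(f, T))
        y.append((f @ coef) * 1e-3 * T + rng.normal(scale=0.01))
        groups.append(i)                       # group label = composition
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Composition-based cross-validation: no composition appears in both the
# training and the test fold, preventing temperature-interpolation leakage.
scores = []
for train, test in GroupKFold(n_splits=5).split(X, y, groups):
    model = GradientBoostingRegressor().fit(X[train], y[train])
    scores.append(r2_score(y[test], model.predict(X[test])))
print("per-fold R²:", np.round(scores, 3))
```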
    “We could use this model to carry out a large-scale evaluation of the stable materials from the Materials Project database, predicting the potential thermoelectric performance of new materials and providing guidance for experiments,” states Xue Jia, Assistant Professor at WPI-AIMR, and co-author of the paper.
Ultimately, the study illustrates the importance of following rigorous guidelines for data preprocessing and data splitting when applying machine learning to pressing issues in materials science. The researchers are optimistic that their strategy can also be applied to other materials, such as electrocatalysts and batteries.

  • Quantum interference could lead to smaller, faster, and more energy-efficient transistors

An international team of researchers from Queen Mary University of London, the University of Oxford, Lancaster University, and the University of Waterloo has developed a new single-molecule transistor that uses quantum interference to control the flow of electrons. The transistor, described in a paper published in Nature Nanotechnology, opens new possibilities for using quantum effects in electronic devices.
    Transistors are the basic building blocks of modern electronics. They are used to amplify and switch electrical signals, and they are essential for everything from smartphones to spaceships. However, the traditional method of making transistors, which involves etching silicon into tiny channels, is reaching its limits. As transistors get smaller, they become increasingly inefficient and susceptible to errors, as electrons can leak through the device even when it is supposed to be switched off, by a process known as quantum tunnelling. Researchers are exploring new types of switching mechanisms that can be used with different materials to remove this effect.
    In the nanoscale structures that Professor Jan Mol, Dr James Thomas, and their group study at Queen Mary’s School of Physical and Chemical Sciences, quantum mechanical effects dominate, and electrons behave as waves rather than particles. Taking advantage of these quantum effects, the researchers built a new transistor. The transistor’s conductive channel is a single zinc porphyrin, a molecule that can conduct electricity. The porphyrin is sandwiched between two graphene electrodes, and when a voltage is applied to the electrodes, electron flow through the molecule can be controlled using quantum interference.
    Interference is a phenomenon that occurs when two waves interact with each other and either cancel each other out (destructive interference) or reinforce each other (constructive interference). In the new transistor’s case, researchers switched the transistor on and off by controlling whether the electrons interfere constructively (on) or destructively (off) as they flow through the zinc porphyrin molecule.
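A back-of-the-envelope model of that switching, with illustrative amplitudes rather than the paper’s transport calculation: for two interfering paths with amplitudes t1 and t2 and a gate-tuned relative phase phi, the transmission is |t1 + t2·exp(i·phi)|², maximal at phi = 0 and vanishing at phi = pi.

```python
import numpy as np

# Two-path interference toy model of the molecular junction.
t1, t2 = 0.5, 0.5                             # illustrative path amplitudes
phi = np.array([0.0, np.pi])                  # gate settings: "on" and "off"
T = np.abs(t1 + t2 * np.exp(1j * phi)) ** 2   # transmission probability
print(f"on  (constructive, phi=0):  T = {T[0]:.3f}")   # -> 1.000
print(f"off (destructive, phi=pi):  T = {T[1]:.3f}")   # -> 0.000
```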
The researchers found that the new transistor has a very high on/off ratio, meaning that it can be turned on and off very precisely. Destructive quantum interference plays a crucial role in this by eliminating the leaky electron flow from quantum tunneling through the transistor when it is supposed to be switched off. They also found that the transistor is very stable: previous single-molecule transistors have demonstrated only a handful of switching cycles, whereas this device can be operated for hundreds of thousands of cycles without breaking down.
    “Quantum interference is a powerful phenomenon that has the potential to be used in a wide variety of electronics applications,” said lead author Dr James Thomas, Lecturer in Quantum Technologies at Queen Mary. “We believe that our work is a significant step towards realizing this potential.”
    “Our results show that quantum interference can be used to control the flow of electrons in transistors, and that this can be done in a way that is both efficient and reliable,” said co-author Professor Jan Mol. “This could lead to the development of new types of transistors that are smaller, faster, and more energy-efficient than current devices.”
    The researchers also found that the quantum interference effects could be used to improve the transistor’s subthreshold swing, which is a measure of how sensitive the transistor is to changes in the gate voltage. The lower the subthreshold swing, the more efficient the transistor is. The researchers’ transistors had a subthreshold swing of 140 mV/dec, which is better than subthreshold swings reported for other single-molecule transistors, and comparable to larger devices made from materials such as carbon nanotubes.
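For reference, subthreshold swing is defined as SS = dVg/d(log₁₀ Id), the gate-voltage change needed for a tenfold change in drain current. A toy computation with made-up currents chosen to land at the reported 140 mV/dec:

```python
import numpy as np

Vg = np.array([0.00, 0.14, 0.28])      # gate voltage (V), illustrative sweep
Id = np.array([1e-12, 1e-11, 1e-10])   # drain current (A): one decade per step

SS = np.gradient(Vg) / np.gradient(np.log10(Id))   # volts per decade
print(f"subthreshold swing = {SS.mean() * 1e3:.0f} mV/dec")   # -> 140
```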
The research is still in its initial stages, but the researchers are optimistic that the new transistor could be used to create a new generation of electronic devices, with applications ranging from computers and smartphones to medical devices.

  • Novel quantum algorithm for high-quality solutions to combinatorial optimization problems

Conventional quantum algorithms cannot solve combinatorial optimization problems (COPs) with constraints within the practical operation time of quantum computers. To address this issue, researchers have developed a novel algorithm called the post-processing variationally scheduled quantum algorithm. Its novelty lies in combining a post-processing technique with variational scheduling to achieve high-quality solutions to COPs in a short time.
Combinatorial optimization problems (COPs), which seek the optimal solution to a complex problem from a finite set of possibilities, have applications in many different fields, such as logistics, supply chain management, machine learning, material design, and drug discovery. These problems are usually very computationally intensive on classical computers, so solving COPs with quantum computers has attracted significant attention from both academia and industry.
Quantum computers take advantage of the quantum property of superposition, using qubits that can exist not only in the states 0 or 1 but in any combination of the two, to quickly explore large solution spaces. However, when COPs involve constraints, conventional quantum algorithms like adiabatic quantum annealing struggle to obtain a near-optimal solution within the operation time of quantum computers. Recent advances in quantum technology have led to devices such as quantum annealers and gate-type quantum devices that provide suitable platforms for solving COPs. Unfortunately, they are susceptible to noise, which limits their applicability to quantum algorithms with low computational costs.
    To address this challenge, Assistant Professor Tatsuhiko Shirai and Professor Nozomu Togawa from the Department of Computer Science and Communications Engineering at Waseda University in Japan have recently developed a groundbreaking post-processing variationally scheduled quantum algorithm (pVSQA). “The two main methods for solving COPs with quantum devices are variational scheduling and post-processing. Our algorithm combines variational scheduling with a post-processing method that transforms infeasible solutions into feasible ones, allowing us to achieve near-optimal solutions for constrained COPs on both quantum annealers and gate-based quantum computers,” explains Dr. Shirai. Their study was published in the journal IEEE Transactions on Quantum Engineering on 13 March 2024.
    The innovative pVSQA algorithm uses a quantum device to first generate a variational quantum state via quantum computation. This is then used to generate a probability distribution function which consists of all the feasible and infeasible solutions that are within the constraints of the COP. Next, the post-processing method transforms the infeasible solutions into feasible ones, leaving the probability distribution with only feasible solutions. A classical computer is then used to calculate an energy expectation value of the cost function using this new probability distribution. Repeating this calculation results in a near-optimal solution.
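A conceptual sketch of that loop in Python, with an invented one-hot constraint and repair rule standing in for the paper’s constraint handling: bitstrings sampled from the variational state are repaired into feasible solutions before the classical energy average is taken.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
cost = rng.normal(size=n)              # toy cost of selecting item i
# Constraint (illustrative): exactly one item may be selected (one-hot).

def repair(bits):
    """Post-processing: map an infeasible bitstring to a feasible one."""
    if bits.sum() == 1:
        return bits                    # already feasible
    fixed = np.zeros_like(bits)
    on = np.flatnonzero(bits)
    # keep the cheapest selected item, or pick the cheapest overall
    fixed[on[np.argmin(cost[on])] if on.size else np.argmin(cost)] = 1
    return fixed

# Stand-in for sampling the variational quantum state on the device.
samples = rng.integers(0, 2, size=(1000, n))
feasible = np.array([repair(s) for s in samples])

energies = feasible @ cost             # classical evaluation of the cost
print("energy expectation:", energies.mean())
print("best feasible sample:", feasible[np.argmin(energies)])
```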
    The researchers analyzed the performance of this algorithm using both a simulator and real quantum devices such as a quantum annealer and a gate-type quantum device. The experiments revealed that pVSQA achieves a near-optimal performance within a predetermined time on the simulator and outperforms conventional quantum algorithms without post-processing on real quantum devices.
    Dr. Shirai highlights the potential applications of the algorithm, stating: “Drastic social transformations are urgently needed to address various social issues. Examples include the realization of a carbon-neutral society to solve climate change issues and the realization of sustainable development goals to address issues such as increased energy demand and food shortage. Efficiently solving combinatorial optimization problems is at the heart of achieving these transformations. Our new method will play a significant role in realizing these long-term social transformations.”
In conclusion, this study marks a significant step toward using quantum computers to solve COPs, holding promise for addressing complex real-world problems across various domains.

    Reference
    DOI: https://doi.org/10.1109/TQE.2024.3376721
Authors: Tatsuhiko Shirai and Nozomu Togawa
    Affiliations: Department of Computer Science and Communications Engineering, Waseda University
    About Waseda University
    Located in the heart of Tokyo, Waseda University is a leading private research university that has long been dedicated to academic excellence, innovative research, and civic engagement at both the local and global levels since 1882. The University has produced many changemakers in its history, including nine prime ministers and many leaders in business, science and technology, literature, sports, and film. Waseda has strong collaborations with overseas research institutions and is committed to advancing cutting-edge research and developing leaders who can contribute to the resolution of complex, global social issues. The University has set a target of achieving a zero-carbon campus by 2032, in line with the Sustainable Development Goals (SDGs) adopted by the United Nations in 2015.

    To learn more about Waseda University, visit https://www.waseda.jp/top/en
    About Assistant Professor Tatsuhiko Shirai
Tatsuhiko Shirai is currently an Assistant Professor at the Department of Computer Science and Communications Engineering at Waseda University in Japan. He obtained his master’s and Ph.D. in Physics from the University of Tokyo in 2013 and 2016, respectively. In 2022, he received the SLDM Research Group Excellent Paper Award. His research interests include quantum algorithms, open quantum systems, and quantum computing. He is a member of The Physical Society of Japan.

  • Semiconductors at scale: New processor achieves remarkable speed-up in problem solving

Annealing processors are designed specifically for combinatorial optimization problems, where the task is to find the best solution from a finite set of possibilities. This has implications for practical applications in logistics, resource allocation, and the discovery of drugs and materials. When implemented in CMOS (a type of semiconductor technology), the spin components of annealing processors must be fully “coupled,” and the complexity of this coupling directly limits how far the processors can scale.
In a new IEEE Access study published on 30 January 2024, researchers led by Professor Takayuki Kawahara from Tokyo University of Science developed and successfully tested a scalable processor that divides the calculation across multiple LSI chips. The innovation was also presented at the IEEE 22nd World Symposium on Applied Machine Intelligence and Informatics (SAMI 2024) on 25 January 2024.
According to Prof. Kawahara, “We want to achieve advanced information processing directly at the edge, rather than in the cloud, or to perform preprocessing at the edge for the cloud. Using the unique processing architecture announced by the Tokyo University of Science in 2020, we have realized a fully coupled LSI (Large Scale Integration) on one chip using 28nm CMOS technology. Furthermore, we devised a scalable method with parallel-operating chips and demonstrated its feasibility using FPGAs (Field-Programmable Gate Arrays) in 2022.”
In the study, which was partially supported by the JSPS KAKENHI Grant Number 22H01559, the Tokyo University of Science Entrepreneurship Grant (PoC Support Grant), and the Tokyo Metropolitan Government, the team created a scalable annealing processor using 36 calculation LSI chips fabricated in 22nm CMOS and one control FPGA. This technology enables the construction of large-scale, fully coupled semiconductor systems implementing the Ising model (a mathematical model of magnetic systems) with 4096 spins.
The processor incorporates two distinct technologies developed at the Tokyo University of Science: a “spin thread method” that enables eight parallel solution searches, and a technique that reduces chip requirements by about half compared to conventional methods. Its power needs are also modest, operating at 10 MHz with a power consumption of 2.9 W (1.3 W for the core part). This was confirmed in practice using a vertex cover problem with 4096 vertices.
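To give a sense of the problem the hardware searches, here is a hypothetical brute-force sketch of vertex cover written as an Ising/QUBO-style energy, using the standard penalty construction rather than anything from the TUS implementation: uncovered edges are penalized, and smaller covers cost less.

```python
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy 4-vertex graph
n, A, B = 4, 2.0, 1.0                              # A > B: edges must be covered

def energy(x):
    """Penalty energy: A per uncovered edge, plus B per selected vertex."""
    uncovered = sum((1 - x[u]) * (1 - x[v]) for u, v in edges)
    return A * uncovered + B * x.sum()

# Brute force over all 2^n configurations; an annealing processor explores
# this same energy landscape massively in parallel in hardware.
best = min((np.array(b) for b in itertools.product([0, 1], repeat=n)), key=energy)
print("minimum vertex cover:", np.flatnonzero(best))   # -> vertices {0, 2}
```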
    In terms of power performance ratio, the processor outperformed simulating a fully coupled Ising system on a PC (i7, 3.6GHz) using annealing emulation by 2,306 times. Additionally, it surpassed the core CPU and arithmetic chip by 2,186 times.
    The successful machine verification of this processor suggests the possibility of enhanced capacity. According to Prof. Kawahara, who holds a vision for the social implementation of this technology (such as initiating a business, joint research, and technology transfer), “In the future, we will develop this technology for a joint research effort targeting an LSI system with the computing power of a 2050-level quantum computer for solving combinatorial optimization problems. The goal is to achieve this without the need for air conditioning, large equipment, or cloud infrastructure, using current semiconductor processes. Specifically, we would like to achieve 2M (million) spins by 2030 and explore the creation of new digital industries using this.”
In summary, researchers have developed a scalable, fully coupled annealing processor incorporating 4096 spins on a single board with 36 CMOS chips. Key innovations, including the reduction in chip count and parallel operations for simultaneous solution searches, played a crucial role in this development.

  • Downscaling storage devices: Magnetic memory based on the chirality of spiral magnets

    A team of researchers has proposed a new concept for magnet-based memory devices, which might revolutionize information storage devices owing to their potential for large-scale integration, non-volatility, and high durability.
    Details of their findings were published in the journal Nature Communications on March 7, 2024.
    Spintronic devices, represented by magnetic random access memory (MRAM), utilize the magnetization direction of ferromagnetic materials to memorize information. Because of their non-volatility and low energy consumption, spintronic devices will likely play a pivotal role in future information storage components.
However, ferromagnet-based spintronic devices have a potential pitfall: ferromagnets generate magnetic fields around them, which affect nearby ferromagnets. In an integrated magnetic device, this results in crosstalk between magnetic bits, which limits the achievable memory density.
    The research team, which comprised Hidetoshi Masuda, Takeshi Seki, Yoshinori Onose and others from Tohoku University’s Institute for Materials Research, and Jun-ichiro Ohe from Toho University, demonstrated that magnetic materials called helical magnets can be utilized for a magnetic memory device, which should resolve the magnetic field problem.
    In helical magnets, the directions of the atomic magnetic moments are ordered in a spiral. The right- or left-handedness of the spiral, called chirality, could be utilized to memorize the information. The magnetic fields induced by each atomic magnetic moment cancel each other out, so the helical magnets do not generate any macroscopic magnetic field. “The memory devices based on the handedness of the helimagnets, free from the crosstalk among bits, could pave a new pathway for improving the memory density,” says Masuda.
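A tiny numerical illustration of that cancellation, with a toy spin count and geometry: moments arranged through one full spiral period sum to zero whichever way the spiral winds, and the winding sense (chirality) is the stored bit.

```python
import numpy as np

n_sites = 12
angles = 2 * np.pi * np.arange(n_sites) / n_sites   # one full spiral period

for chirality in (+1, -1):                          # right- vs left-handed
    moments = np.column_stack([np.cos(chirality * angles),
                               np.sin(chirality * angles)])
    net = moments.sum(axis=0)                       # macroscopic moment
    print(f"chirality {chirality:+d}: net moment = {np.round(net, 10)}")  # -> [0 0]
```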
The research team demonstrated that the chirality memory can be written and read out at room temperature. They fabricated epitaxial thin films of MnAu2, a room-temperature helimagnet, and demonstrated switching of the chirality (the right- or left-handedness of the spiral) by electric current pulses under magnetic fields. Furthermore, they fabricated a bilayer device composed of MnAu2 and Pt (platinum) and showed that the chirality memory can be read out as a resistance change, even without magnetic fields.
“We have uncovered the potential capability of chirality memory in helical magnets for next-generation memory devices; it may offer high-density, non-volatile, and highly stable memory bits,” adds Masuda. “This will hopefully lead to future storage devices with ultrahigh information density and high reliability.”

  • Physicists develop modeling software to diagnose serious diseases

Researchers at the Niels Bohr Institute, University of Copenhagen, and the University of Southern Denmark have recently published FreeDTS, a shared software package designed to model and study biological membranes at the mesoscale: the scale “in between” the larger macro level and the smaller micro level.
This software fills an important gap among the available biomolecular modeling tools and enables the modeling and understanding of many different biological processes involving cellular membranes, e.g. cell division.
Membrane shape contains information about the physiological state of the cell and the overall health of an organism, so this new tool, with its wide array of applications, will enhance our understanding of cell behavior and open routes for the diagnosis of infections and of diseases like Parkinson’s.
    The publication of FreeDTS is now reported in Nature Communications.
    Sharing a powerful tool that could have provided NBI with an advantage. Why?
The software package that Weria Pezeshkian from the Niels Bohr Institute has been working on for the last five years, after an initial idea conceived with John Ipsen from the University of Southern Denmark, is shared: laid open for every researcher in this field to use.
Normally, competition for scientific results is high, and advances are kept secret until publication, so this seems like a very generous attitude indeed. So generous it might seem a bit naive.

“It is a strange mix of respect for the ‘pioneers’ of the biomolecular modeling field and the fact that the field offers so many unanswered questions that it would seem almost disrespectful towards the scientific community to keep the tool to ourselves,” Weria Pezeshkian explains.
“There are so many questions and bottlenecks to tackle to reach the end goals that it is unlikely we would be working on exactly the same problems. However, occasional overlap occurs, and that is a worthwhile cost we pay for advancing the field.
“But there is another aspect as well: one of the reasons our community, the biomolecular simulation and modeling community, has had this surge in popularity and fast growth is that we have always strived to get more people into the game and to share ideas, results, methods, and often direct assistance, without expecting immediate personal gains.”
    Acknowledging Herman Berendsen
Herman Berendsen (1934-2019) was a professor of physical chemistry at the University of Groningen (RUG). He was especially known for his contributions to the field of molecular modeling and his dedication to translating models into accessible applications.
Berendsen was especially praised for his non-hierarchical and open approach, not only locally at his institute, where he was known for enabling the young researchers in his group, but also in the wider scientific community. He contributed computer simulation methods that are still widely used to study the dynamics of biomolecules, such as his SPC (simple point charge) model of liquid water, and the Berendsen thermostat and barostat, which keep the temperature and pressure constant during simulations.
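As an aside, the Berendsen thermostat’s velocity-rescaling rule is compact enough to sketch. The toy loop below (reduced units with k_B = m = 1, no actual force field) rescales velocities each step by lambda = sqrt(1 + (dt/tau)(T0/T - 1)), relaxing an overheated system toward its 300 K target.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, tau, T0 = 1000, 0.002, 0.1, 300.0
v = rng.normal(scale=np.sqrt(500.0), size=(n, 3))   # start too hot (~500 K)

def temperature(v):
    """Instantaneous temperature in reduced units (k_B = m = 1)."""
    return np.mean(v ** 2)

for step in range(500):
    lam = np.sqrt(1.0 + (dt / tau) * (T0 / temperature(v) - 1.0))
    v *= lam                                        # Berendsen velocity rescaling

print(f"temperature after coupling: {temperature(v):.1f}")   # -> ~300.0
```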

    Also, he organized a series of workshops where pioneers in the field met to discuss and share their newest findings.
Berendsen remains one of RUG’s most cited scholars. The applicability of his work ranges far beyond the field of physical chemistry; it is also used by mathematicians, computer scientists, and molecular life scientists, and in the development of medical applications.
    Biological membranes — what are they really?
When you consider a cell, you can imagine a whole lot of small “factories” inside, called organelles, each doing its own thing and each surrounded by a membrane.
The cell itself is also surrounded by a membrane, called the plasma membrane. But membranes are not just boundary surfaces. They actively participate in many processes. They are made from a myriad of different molecules, and they are dynamic, in motion all the time.
    Many diseases are associated with irregular membrane shape and abnormal biomolecular organization, so the study of membranes can help us understand the state of a cell and overall health of an organism. For instance, when a neuron has increased spiking activity, indicating a higher energy demand, the structure of mitochondria, an organelle responsible for generating cellular energy parcels from food (often referred to as the powerhouse of the cell), undergoes changes.
Moreover, certain diseases, e.g. Alzheimer’s, have been associated with changes in the shapes of mitochondrial membranes.
    Computer models will improve our abilities within diagnostics
“For now, we are not able to see exactly what the causes of changes in membrane shape are, or how exactly they relate to the diagnosis of a certain disease. But at some point in the future, the trial-and-error work in the lab will become minimal, because modeling will guide experiments with unimaginable accuracy as our models become more precise and computational power increases.
“We will need a lot of adjustments, and there is still a long way to go, so it is really nice to work within this sharing community, because we all work on different aspects of it,” Weria Pezeshkian explains.
Weria continues with a word of caution: “This is probably stretching it a bit far, but possibly, in the future, by imaging our mitochondria, for example, and leveraging physics-based computer simulations, we may be able to say: this person has this disease with this specific genetic deficiency. So, the perspective for computational modeling is rather great. We are not there yet, but we can see it on the horizon.”