More stories

  • Computer models determine drug candidate's ability to bind to proteins

    Combining computational physics with experimental data, University of Arkansas researchers have developed computer models for determining a drug candidate’s ability to target and bind to proteins within cells.
    If accurate, such an estimator could computationally demonstrate binding affinity and thus prevent experimental researchers from needing to investigate millions of chemical compounds. The work could substantially reduce the cost and time associated with developing new drugs.
    “We developed a theoretical framework for estimating ligand-protein binding,” said Mahmoud Moradi, associate professor of chemistry and biochemistry in the Fulbright College of Arts and Sciences. “The proposed method assigns an effective energy to the ligand at every grid point in a coordinate system, which has its origin at the most likely location of the ligand when it is in its bound state.”
    A ligand is a substance — an ion or molecule — such as a drug that binds to another molecule, such as a protein, to form a complex system that may cause or prevent a biological function.
    Moradi’s research focuses on computational simulations of diseases, including coronavirus. For this project, he collaborated with Suresh Thallapuranam, professor of biochemistry and the Cooper Chair of Bioinformatics Research.
    Moradi and Thallapuranam used biased simulations — as well as non-parametric re-weighting techniques to account for the bias — to create a binding estimator that was computationally efficient and accurate. They then used a mathematically robust technique called orientation quaternion formalism to further describe the ligand’s conformational changes as it bound to targeted proteins.
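    The general flavour of such a reweighting scheme can be sketched in a few lines. The snippet below is an illustrative toy (a one-dimensional grid, a single harmonic bias and synthetic samples), not the authors’ implementation, which works on a three-dimensional grid together with the ligand’s orientational degrees of freedom:

    ```python
    import numpy as np

    # Toy sketch of non-parametric reweighting of a biased simulation.
    # Everything here is a placeholder: a 1D coordinate, a single harmonic bias
    # and synthetic samples; the published method uses a 3D grid plus the
    # ligand's orientation (quaternion) degrees of freedom.

    kT = 0.593                                     # kcal/mol at ~298 K
    samples = np.random.normal(0.0, 1.0, 100_000)  # ligand positions from a biased run
    u_bias = 0.5 * 10.0 * samples**2               # bias energy of each sample

    # Undo the bias: weight each sample by exp(+U_bias / kT), then normalise.
    weights = np.exp(u_bias / kT)
    weights /= weights.sum()

    # Effective energy W(x) on a grid from the reweighted histogram.
    edges = np.linspace(-3.0, 3.0, 61)
    hist, _ = np.histogram(samples, bins=edges, weights=weights, density=True)
    W = -kT * np.log(np.clip(hist, 1e-12, None))   # defined up to an additive constant

    # Binding constant as a Boltzmann-weighted sum over the bound region of the grid.
    dx = edges[1] - edges[0]
    K_bind = np.sum(np.exp(-W / kT)) * dx
    print(f"illustrative binding constant: {K_bind:.3e}")
    ```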
    The researchers tested this approach by estimating the binding affinity between human fibroblast growth factor 1 — a specific signaling protein — and heparin hexasaccharide 5, a popular medication.
    The project was conceived because Moradi and Thallapuranam were studying human fibroblast growth factor 1 protein and its mutants in the absence and presence of heparin. They found strong qualitative agreement between simulations and experimental results.
    “When it came to binding affinity, we knew that the typical methods we had at our disposal would not work for such a difficult problem,” Moradi said. “This is why we decided to develop a new method. We had a joyous moment when the experimental and computational data were compared with each other, and the two numbers matched almost perfectly.”
    The researchers’ work was published in Nature Computational Science.
    Moradi previously received attention for developing computational simulations of the behavior of SARS-CoV-2 spike proteins prior to fusion with human cell receptors. SARS-CoV-2 is the virus that causes COVID-19.

  • Now on the molecular scale: Electric motors

    Electric vehicles, powered by macroscopic electric motors, are increasingly prevalent on our streets and highways. These quiet and eco-friendly machines got their start nearly 200 years ago when physicists took the first tiny steps to bring electric motors into the world.
    Now a multidisciplinary team led by Northwestern University has made an electric motor you can’t see with the naked eye: an electric motor on the molecular scale.
    This early work — a motor that can convert electrical energy into unidirectional motion at the molecular level — has implications for materials science and particularly medicine, where the electric molecular motor could team up with biomolecular motors in the human body.
    “We have taken molecular nanotechnology to another level,” said Northwestern’s Sir Fraser Stoddart, who received the 2016 Nobel Prize in Chemistry for his work in the design and synthesis of molecular machines. “This elegant chemistry uses electrons to effectively drive a molecular motor, much like a macroscopic motor. While this area of chemistry is in its infancy, I predict one day these tiny motors will make a huge difference in medicine.”
    Stoddart, Board of Trustees Professor of Chemistry at the Weinberg College of Arts and Sciences, is a co-corresponding author of the study. The research was done in close collaboration with Dean Astumian, a molecular machine theorist and professor at the University of Maine, and William Goddard, a computational chemist and professor at the California Institute of Technology. Long Zhang, a postdoctoral fellow in Stoddart’s lab, is the paper’s first author and a co-corresponding author.
    “We have taken molecular nanotechnology to another level.” — Sir Fraser Stoddart, chemist
    Only 2 nanometers wide, the molecular motor is the first of its kind to be produced in abundance. The motor is easy to make, operates quickly and does not produce any waste products.

    The study and a corresponding news brief were published today (Jan. 11) by the journal Nature.
    The research team focused on a certain type of molecule with interlocking rings known as catenanes held together by powerful mechanical bonds, so the components could move freely relative to each other without falling apart. (Stoddart decades ago played a key role in the creation of the mechanical bond, a new type of chemical bond that has led to the development of molecular machines.)
    The electric molecular motor specifically is based on a [3]catenane whose components ― a loop interlocked with two identical rings ― are redox active, i.e. they undergo unidirectional motion in response to changes in voltage potential. The researchers discovered that two rings are needed to achieve this unidirectional motion. Experiments showed that a [2]catenane, which has one loop interlocked with one ring, does not run as a motor.
    The synthesis and operation of molecules that perform the function of a motor ― converting external energy into directional motion ― has challenged scientists in the fields of chemistry, physics and molecular nanotechnology for some time.
    To achieve their breakthrough, Stoddart, Zhang and their Northwestern team spent more than four years on the design and synthesis of their electric molecular motor. This included a year working with UMaine’s Astumian and Caltech’s Goddard to complete the quantum mechanical calculations to explain the working mechanism behind the motor.

    “Controlling the relative movement of components on a molecular scale is a formidable challenge, so collaboration was crucial,” Zhang said. “Working with experts in synthesis, measurements, computational chemistry and theory enabled us to develop an electric molecular motor that works in solution.”
    A few examples of single-molecule electric motors have been reported, but they require harsh operating conditions, such as the use of an ultrahigh vacuum, and also produce waste.
    The next step for their electric molecular motor, the researchers said, is to attach many of the motors to an electrode surface so they can influence the surface and ultimately do some useful work.
    “The achievement we report today is a testament to the creativity and productivity of our young scientists as well as their willingness to take risks,” Stoddart said. “This work gives me and the team enormous satisfaction.”
    Stoddart is a member of the International Institute for Nanotechnology and the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.

  • Project aims to expand language technologies

    Only a fraction of the 7,000 to 8,000 languages spoken around the world benefit from modern language technologies like voice-to-text transcription, automatic captioning, instantaneous translation and voice recognition. Carnegie Mellon University researchers want to expand the number of languages with automatic speech recognition tools available to them from around 200 to potentially 2,000.
    “A lot of people in this world speak diverse languages, but language technology tools aren’t being developed for all of them,” said Xinjian Li, a Ph.D. student in the School of Computer Science’s Language Technologies Institute (LTI). “Developing technology and a good language model for all people is one of the goals of this research.”
    Li is part of a research team aiming to simplify the data requirements languages need to create a speech recognition model. The team — which also includes LTI faculty members Shinji Watanabe, Florian Metze, David Mortensen and Alan Black — presented their most recent work, “ASR2K: Speech Recognition for Around 2,000 Languages Without Audio,” at Interspeech 2022 in South Korea.
    Most speech recognition models require two data sets: text and audio. Text data exists for thousands of languages. Audio data does not. The team hopes to eliminate the need for audio data by focusing on linguistic elements common across many languages.
    Historically, speech recognition technologies have focused on a language’s phonemes. These distinct sounds that distinguish one word from another — like the “d” that differentiates “dog” from “log” and “cog” — are unique to each language. But languages also have phones, which describe how a word sounds physically. Multiple phones might correspond to a single phoneme. So even though separate languages may have different phonemes, their underlying phones could be the same.
    The LTI team is developing a speech recognition model that moves away from phonemes and instead relies on information about how phones are shared between languages, thereby reducing the effort to build separate models for each language. Specifically, it pairs the model with a phylogenetic tree — a diagram that maps the relationships between languages — to help with pronunciation rules. Through their model and the tree structure, the team can approximate the speech model for thousands of languages without audio data.
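    The substitution of shared phones for language-specific phonemes can be illustrated with a toy mapping. The phone inventory, the table and the recogniser output below are invented for illustration and are not part of the ASR2K pipeline:

    ```python
    # Illustrative sketch (not the ASR2K pipeline): approximate a phoneme-level
    # recogniser for a new language by passing the output of a universal phone
    # recogniser through a language-specific phone-to-phoneme table.
    # The table is a made-up fragment; real inventories come from linguistic databases.

    phone_to_phoneme = {
        "d":  "d",    # several phones can collapse onto one phoneme
        "d̪":  "d",
        "oʊ": "o",
        "o":  "o",
        "ɡ":  "g",
    }

    def phones_to_phonemes(phone_sequence, table):
        """Map recognised phones onto the target language's phonemes,
        dropping any phone the language does not use."""
        return [table[p] for p in phone_sequence if p in table]

    # Output of a (hypothetical) universal phone recogniser for the word "dog":
    recognised_phones = ["d̪", "oʊ", "ɡ"]
    print(phones_to_phonemes(recognised_phones, phone_to_phoneme))  # ['d', 'o', 'g']
    ```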
    “We are trying to remove this audio data requirement, which helps us move from 100 or 200 languages to 2,000,” Li said. “This is the first research to target such a large number of languages, and we’re the first team aiming to expand language tools to this scope.”
    Still in an early stage, the research has improved existing language approximation tools by a modest 5%, but the team hopes it will serve as inspiration not only for their future work but also for that of other researchers.
    For Li, the work means more than making language technologies available to all. It’s about cultural preservation.
    “Each language is a very important factor in its culture. Each language has its own story, and if you don’t try to preserve languages, those stories might be lost,” Li said. “Developing this kind of speech recognition system and this tool is a step to try to preserve those languages.”
    Story Source:
    Materials provided by Carnegie Mellon University. Original written by Aaron Aupperlee.

  • The optical fiber that keeps data safe even after being twisted or bent

    Optical fibres are the backbone of our modern information networks. From long-range communication over the internet to high-speed information transfer within data centres and stock exchanges, optical fibre remains critical in our globalised world.
    Fibre networks are not, however, structurally perfect, and information transfer can be compromised when things go wrong. To address this problem, physicists at the University of Bath in the UK have developed a new kind of fibre designed to enhance the robustness of networks. This robustness could prove to be especially important in the coming age of quantum networks.
    The team has fabricated optical fibres (the flexible glass channels through which information is sent) that can protect light (the medium through which data is transmitted) using the mathematics of topology. Best of all, these modified fibres are easily scalable, meaning the structure of each fibre can be preserved over thousands of kilometres.
    The Bath study is published in the latest issue of Science Advances.
    Protecting light against disorder
    At its simplest, optical fibre, which typically has a diameter of 125 µm (similar to a thick strand of hair), comprises a core of solid glass surrounded by cladding. Light travels through the core, where it bounces along as though reflecting off a mirror.

    However, the pathway taken by an optical fibre as it crisscrosses the landscape is rarely straight and undisturbed: turns, loops, and bends are the norm. Distortions in the fibre can cause information to degrade as it moves between sender and receiver. “The challenge was to build a network that takes robustness into account,” said Physics PhD student Nathan Roberts, who led the research.
    “Whenever you fabricate a fibre-optic cable, small variations in the physical structure of the fibre are inevitably present. When deployed in a network, the fibre can also get twisted and bent. One way to counter these variations and defects is to ensure the fibre design process includes a real focus on robustness. This is where we found the ideas of topology useful.”
    To design this new fibre, the Bath team used topology, which is the mathematical study of quantities that remain unchanged despite continuous distortions to the geometry. Its principles are already applied to many areas of physics research. By connecting physical phenomena to unchanging numbers, the destructive effects of a disordered environment can be avoided.
    The fibre designed by the Bath team deploys topological ideas by including several light-guiding cores in a fibre, linked together in a spiral. Light can hop between these cores but becomes trapped within the edge thanks to the topological design. These edge states are protected against disorder in the structure.
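    The spiral multi-core geometry is more intricate than any textbook example, but the underlying principle, a mode pinned to the edge of a lattice of coupled cores that survives disorder in the couplings, can be illustrated with a minimal Su-Schrieffer-Heeger-style coupled-mode model. This is a toy sketch of the general idea, not the Bath team’s actual fibre design:

    ```python
    import numpy as np

    # Toy coupled-mode model: a chain of cores with alternating weak/strong
    # couplings hosts a mode pinned near zero energy and localised at the ends,
    # even when the couplings are randomly perturbed.

    n_cores = 20
    t_weak, t_strong = 0.4, 1.0

    H = np.zeros((n_cores, n_cores))
    for i in range(n_cores - 1):
        # Weak bonds at the chain ends put the lattice in its topological phase.
        H[i, i + 1] = H[i + 1, i] = t_weak if i % 2 == 0 else t_strong

    # Random disorder on the couplings: the edge mode survives.
    rng = np.random.default_rng(0)
    for i in range(n_cores - 1):
        H[i, i + 1] += 0.1 * rng.standard_normal()
        H[i + 1, i] = H[i, i + 1]

    evals, evecs = np.linalg.eigh(H)
    idx = np.argmin(np.abs(evals))               # mode closest to zero energy
    edge_weight = evecs[0, idx] ** 2 + evecs[-1, idx] ** 2
    print(f"edge-mode energy: {evals[idx]:+.3f}")
    print(f"weight on the two outermost cores: {edge_weight:.2f}")
    ```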
    Bath physicist Dr Anton Souslov, who co-authored the study as theory lead, said: “Using our fibre, light is less influenced by environmental disorder than it would be in an equivalent system lacking topological design.

    “By adopting optical fibres with topological design, researchers will have the tools to pre-empt and forestall signal-degrading effects by building inherently robust photonic systems.”
    Theory meets practical expertise
    Bath physicist Dr Peter Mosley, who co-authored the study as experimental lead, said: “Previously, scientists have applied the complex mathematics of topology to light, but here at the University of Bath we have lots of experience physically making optical fibres, so we put the mathematics together with our expertise to create topological fibre.”
    The team, which also includes PhD student Guido Baardink and Dr Josh Nunn from the Department of Physics, are now looking for industry partners to develop their concept further.
    “We are really keen to help people build robust communication networks and we are ready for the next phase of this work,” said Dr Souslov.
    Mr Roberts added: “We have shown that you can make kilometres of topological fibre wound around a spool. We envision a quantum internet where information will be transmitted robustly across continents using topological principles.”
    He also pointed out that this research has implications that go beyond communications networks. He said: “Fibre development is not only a technological challenge, but also an exciting scientific field in its own right.
    “Understanding how to engineer optical fibre has led to light sources from bright ‘supercontinuum’ that spans the entire visible spectrum right down to quantum light sources that produce individual photons — single particles of light.”
    The future is quantum
    Quantum networks are widely expected to play an important technological role in years to come. Quantum technologies have the capacity to store and process information in more powerful ways than ‘classical’ computers can today, and to send messages securely across global networks without any chance of eavesdropping.
    But the quantum states of light that transmit information are easily impacted by their environment and finding a way to protect them is a major challenge. This work may be a step towards maintaining quantum information in fibre optics using topological design.

  • Scientists use machine learning to fast-track drug formulation development

    Scientists at the University of Toronto have successfully tested the use of machine learning models to guide the design of long-acting injectable drug formulations. The potential for machine learning algorithms to accelerate drug formulation could reduce the time and cost associated with drug development, making promising new medicines available faster.
    The study was published today in Nature Communications and is one of the first to apply machine learning techniques to the design of polymeric long-acting injectable drug formulations.
    The multidisciplinary research is led by Christine Allen from the University of Toronto’s department of pharmaceutical sciences and Alán Aspuru-Guzik from the departments of chemistry and computer science. Both researchers are also members of the Acceleration Consortium, a global initiative that uses artificial intelligence and automation to accelerate the discovery of materials and molecules needed for a sustainable future.
    “This study takes a critical step towards data-driven drug formulation development with an emphasis on long-acting injectables,” said Christine Allen, professor in pharmaceutical sciences at the Leslie Dan Faculty of Pharmacy, University of Toronto. “We’ve seen how machine learning has enabled incredible leap-step advances in the discovery of new molecules that have the potential to become medicines. We are now working to apply the same techniques to help us design better drug formulations and, ultimately, better medicines.”
    Considered one of the most promising therapeutic strategies for the treatment of chronic diseases, long-acting injectables (LAI) are a class of advanced drug delivery systems that are designed to release their cargo over extended periods of time to achieve a prolonged therapeutic effect. This approach can help patients better adhere to their medication regimen, reduce side effects, and increase efficacy when injected close to the site of action in the body. However, achieving the optimal amount of drug release over the desired period of time requires the development and characterization of a wide array of formulation candidates through extensive and time-consuming experiments. This trial-and-error approach has created a significant bottleneck in LAI development compared to more conventional types of drug formulation.
    “AI is transforming the way we do science. It helps accelerate discovery and optimization. This is a perfect example of a ‘Before AI’ and an ‘After AI’ moment and shows how drug delivery can be impacted by this multidisciplinary research,” said Alán Aspuru-Guzik, professor in chemistry and computer science at the University of Toronto, who also holds the CIFAR Artificial Intelligence Research Chair at the Vector Institute in Toronto.
    To investigate whether machine learning tools could accurately predict the rate of drug release, the research team trained and evaluated a series of eleven different models, including multiple linear regression (MLR), random forest (RF), light gradient boosting machine (lightGBM), and neural networks (NN). The data set used to train the selected panel of machine learning models was constructed from previously published studies by the authors and other research groups.
    “Once we had the data set, we split it into two subsets: one used for training the models and one for testing. We then asked the models to predict the results of the test set and directly compared with previous experimental data. We found that the tree-based models, and specifically lightGBM, delivered the most accurate predictions,” said Pauric Bannigan, research associate with the Allen research group at the Leslie Dan Faculty of Pharmacy, University of Toronto.
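    A minimal sketch of that workflow appears below. The file name and formulation descriptors are hypothetical placeholders rather than the study’s actual data set, and only two of the eleven models are shown:

    ```python
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error
    from sklearn.ensemble import RandomForestRegressor
    from lightgbm import LGBMRegressor

    # Hypothetical data set of published LAI release experiments:
    # each row is one formulation at one time point.
    df = pd.read_csv("lai_release_data.csv")
    features = ["polymer_mw", "lactide_glycolide_ratio", "drug_loading_pct",
                "particle_size_um", "time_days"]
    X, y = df[features], df["fraction_released"]

    # Hold out part of the data, train on the rest, and compare predictions
    # on the held-out set with the experimental values.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    models = {
        "random_forest": RandomForestRegressor(n_estimators=300, random_state=42),
        "lightgbm": LGBMRegressor(n_estimators=500, learning_rate=0.05, random_state=42),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        mae = mean_absolute_error(y_test, model.predict(X_test))
        print(f"{name}: test MAE = {mae:.3f}")
    ```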
    As a next step, to illustrate how machine learning models might be used to inform the design of new LAIs, the team used advanced analytical techniques to extract design criteria from the lightGBM model. This allowed the design of a new LAI formulation for a drug currently used to treat ovarian cancer. “Once you have a trained model, you can then work to interpret what the machine has learned and use that to develop design criteria for new systems,” said Bannigan. Once the formulation was prepared, its drug release rate was tested, further validating the predictions made by the lightGBM model. “Sure enough, the formulation had the slow-release rate that we were looking for. This was significant because in the past it might have taken us several iterations to get to a release profile that looked like this; with machine learning, we got there in one,” he said.
    The results of the current study are encouraging and signal the potential for machine learning to reduce the reliance on trial-and-error testing that slows the pace of development for long-acting injectables. However, the study’s authors identify the lack of available open-source data sets in pharmaceutical sciences as a significant challenge to future progress. “When we began this project, we were surprised by the lack of data reported across numerous studies using polymeric microparticles,” said Allen. “This meant the studies and the work that went into them couldn’t be leveraged to develop the machine learning models we need to propel advances in this space. There is a real need to create robust databases in pharmaceutical sciences that are open access and available for all so that we can work together to advance the field.”
    To promote the move toward the accessible databases needed to support the integration of machine learning into pharmaceutical sciences more broadly, Allen and the research team have made their data sets and code available on the open-source platform Zenodo.
    “For this study our goal was to lower the barrier of entry to applying machine learning in pharmaceutical sciences,” said Bannigan. “We’ve made our data sets fully available so others can hopefully build on this work. We want this to be the start of something and not the end of the story for machine learning in drug formulation.”

  • The thermodynamics of quantum computing

    Heat and computers do not mix well. If computers overheat, they do not work well or may even crash. But what about the quantum computers of the future? These high-performance devices are even more sensitive to heat. This is because their basic computational units — quantum bits or “qubits” — are built from highly sensitive components, some of them individual atoms, and heat can be a crucial source of interference.
    The basic dilemma: In order to retrieve the information of a qubit, its quantum state must be destroyed. The heat released in the process can interfere with the sensitive quantum system. The quantum computer’s own heat generation could consequently become a problem, suspect physicists Wolfgang Belzig (University of Konstanz), Clemens Winkelmann (Néel Institute, Grenoble) and Jukka Pekola (Aalto University, Helsinki). In experiments, the researchers have now documented the heat generated by superconducting quantum systems. To do so, they developed a method that can measure and display the temperature curve to one millionth of a second in accuracy throughout the process of reading one qubit. “This means we are monitoring the process as it takes place,” says Wolfgang Belzig. The method was recently published in the journal Nature Physics.
    Superconducting quantum systems produce heat
    Until now, research on quantum computing has focused on the basics of getting these high-performance computers to work: much of it involves the coupling of quantum bits and identifying which material systems are optimal for qubits. Little consideration has been given to heat generation. Especially in the case of superconducting qubits, which are constructed from a supposedly ideal conducting material, researchers have often assumed that no heat is generated or that the amount is negligible. “That is simply not true,” Wolfgang Belzig says and adds: “People often think of quantum computers as idealized systems. However, even the circuitry of a superconducting quantum system produces heat.” Exactly how much heat is what the researchers can now measure precisely.
    A thermometer for the quantum bit
    The measurement method was developed for superconducting quantum systems. These systems are based on superconducting circuits that use “Josephson junctions” as a central electronic element. “We measure the electron temperature based on the conductivity of such contacts. This is nothing special in and of itself: Many electronic thermometers are based in some way on measuring conductivity using a resistor. The only problem is: How quickly can you take the measurements?” Clemens Winkelmann explains. Changes to a quantum state take only a millionth of a second.
    “Our trick is to have the resistor measuring the temperature inside a resonator — an oscillating circuit — that produces a strong response at a certain frequency. This resonator oscillates at 600 megahertz and can be read out very quickly,” Winkelmann explains.
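    In outline, such a fast thermometer converts a rapidly sampled resonator response into a temperature through a calibration curve. The numbers below are invented placeholders that only illustrate the conversion step, not the published measurement:

    ```python
    import numpy as np

    # Calibration (hypothetical values): resonator response recorded beforehand
    # at known temperatures; the response decreases monotonically with temperature.
    calib_temperature_mK = np.array([20, 50, 100, 150, 200, 300])
    calib_response = np.array([0.95, 0.90, 0.78, 0.65, 0.52, 0.35])

    def response_to_temperature(response):
        # np.interp needs increasing x-values, so the calibration arrays are reversed.
        return np.interp(response, calib_response[::-1], calib_temperature_mK[::-1])

    # Synthetic readout trace sampled every microsecond around the qubit readout;
    # the dip stands in for a burst of heating.
    t_us = np.arange(0, 50, 1.0)
    trace = 0.90 - 0.25 * np.exp(-((t_us - 20.0) / 5.0) ** 2)

    temperature_trace_mK = response_to_temperature(trace)
    print("peak electron temperature (mK):", round(temperature_trace_mK.max(), 1))
    ```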
    Heat is always generated
    With their experimental evidence, the researchers want to draw attention to the thermodynamic processes of a quantum system. “Our message to the quantum computing world is: Be careful, and watch out for heat generation. We can even measure the exact amount,” Winkelmann adds.
    This heat generation could become particularly relevant for scaling up quantum systems. Wolfgang Belzig explains: “One of the greatest advantages of superconducting qubits is that they are so large, because this size makes them easy to build and control. On the other hand, this can be a disadvantage if you want to put many qubits on a chip. Developers need to take into account that more heat will be produced as a result and that the system needs to be cooled adequately.”
    This research was conducted in the context of the Collaborative Research Centre SFB 1432 “Fluctuations and Nonlinearities in Classical and Quantum Matter beyond Equilibrium” at the University of Konstanz.

  • AI developed to monitor changes to the globally important Thwaites Glacier

    Scientists have developed artificial intelligence techniques to track the development of crevasses — or fractures — on the Thwaites Glacier Ice Tongue in west Antarctica.
    A team of scientists from the University of Leeds and University of Bristol have adapted an AI algorithm originally developed to identify cells in microscope images to spot crevasses forming in the ice from satellite images. Crevasses are indicators of stresses building up in the glacier.
    Thwaites is a particularly important part of the Antarctic Ice Sheet because it holds enough ice to raise global sea levels by around 60 centimetres and is considered by many to be at risk of rapid retreat, threatening coastal communities around the world.
    Use of AI will allow scientists to more accurately monitor and model changes to this important glacier.
    Published today (Monday, Jan 9) in the journal Nature Geoscience, the research focussed on a part of the glacier system where the ice flows into the sea and begins to float. Where this happens is known as the grounding line and it forms the start of the Thwaites Eastern Ice Shelf and the Thwaites Glacier Ice Tongue, which is also an ice shelf.
    Despite being small in comparison to the size of the entire glacier, changes to these ice shelves could have wide-ranging implications for the whole glacier system and future sea-level rise.

    The scientists wanted to know if crevassing or fracture formation in the glacier was more likely to occur with changes to the speed of the ice flow.
    Development of the algorithm
    Using machine learning, the researchers taught a computer to look at radar satellite images and identify changes over the last decade. The images were taken by the European Space Agency’s Sentinel-1 satellites, which can “see” through the top layer of snow and onto the glacier, revealing the fractured surface of the ice normally hidden from sight.
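    In spirit, the approach treats crevasse mapping as an image-segmentation problem of the kind originally solved for cells in micrographs. The toy network below illustrates that framing; it is not the authors’ model, and the input tile is a random placeholder:

    ```python
    import torch
    import torch.nn as nn

    # Toy encoder-decoder that maps a single-band radar backscatter tile to a
    # per-pixel crevasse probability. Purely illustrative; not the published model.
    class TinySegNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 1),
            )

        def forward(self, x):
            return torch.sigmoid(self.decoder(self.encoder(x)))

    model = TinySegNet()
    tile = torch.rand(1, 1, 256, 256)   # placeholder Sentinel-1 backscatter tile
    crevasse_prob = model(tile)         # per-pixel probability map
    print(crevasse_prob.shape)          # torch.Size([1, 1, 256, 256])
    ```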
    The analysis revealed that over the last six years, the Thwaites Glacier ice tongue has sped up and slowed down twice, by around 40% each time — from four km/year to six km/year before slowing. This is a substantial increase in the magnitude and frequency of speed change compared with past records.
    The study found a complex interplay between crevasse formation and speed of the ice flow. When the ice flow quickens or slows, more crevasses are likely to form. In turn, the increase in crevasses causes the ice to change speed as the level of friction between the ice and underlying rock alters.

    Dr Anna Hogg, a glaciologist in the Satellite Ice Dynamics group at Leeds and an author on the study, said: “Dynamic changes on ice shelves are traditionally thought to occur on timescales of decades to centuries, so it was surprising to see this huge glacier speed up and slow down so quickly.”
    “The study also demonstrates the key role that fractures play in un-corking the flow of ice — a process known as ‘unbuttressing’.
    “Ice sheet models must be evolved to account for the fact that ice can fracture, which will allow us to measure future sea level contributions more accurately.”
    Trystan Surawy-Stepney, lead author of the paper and a doctoral researcher at Leeds, added: “The nice thing about this study is the precision with which the crevasses were mapped.
    “It has been known for a while that crevassing is an important component of ice shelf dynamics and this study demonstrates that this link can be studied on a large scale with beautiful resolution, using computer vision techniques applied to the deluge of satellite images acquired each week.”
    Satellites orbiting the Earth provide scientists with new data over the most remote and inaccessible regions of Antarctica. The radar on board Sentinel-1 allows places like Thwaites Glacier to be imaged day or night, every week, all year round.
    Dr Mark Drinkwater of the European Space Agency commented: “Studies like this would not be possible without the large volume of high-resolution data provided by Sentinel-1. By continuing to plan future missions, we can carry on supporting work like this and broaden the scope of scientific research on vital areas of the Earth’s climate system.”
    As for Thwaites Glacier Ice Tongue, it remains to be seen whether such short-term changes have any impact on the long-term dynamics of the glacier, or whether they are simply isolated symptoms of an ice shelf close to its end.
    The paper — “Episodic dynamic change linked to damage on the Thwaites Glacier Ice Tongue” — was authored by Trystan Surawy-Stepney, Anna E. Hogg and Benjamin J. Davison, from the University of Leeds; and Stephen L. Cornford, from the University of Bristol.

  • New quantum computing architecture could be used to connect large-scale devices

    Quantum computers hold the promise of performing certain tasks that are intractable even on the world’s most powerful supercomputers. In the future, scientists anticipate using quantum computing to emulate materials systems, simulate quantum chemistry, and optimize hard tasks, with impacts potentially spanning finance to pharmaceuticals.
    However, realizing this promise requires resilient and extensible hardware. One challenge in building a large-scale quantum computer is that researchers must find an effective way to interconnect quantum information nodes — smaller-scale processing nodes separated across a computer chip. Because quantum computers are fundamentally different from classical computers, conventional techniques used to communicate electronic information do not directly translate to quantum devices. However, one requirement is certain: Whether via a classical or a quantum interconnect, the carried information must be transmitted and received.
    To this end, MIT researchers have developed a quantum computing architecture that will enable extensible, high-fidelity communication between superconducting quantum processors. In work published in Nature Physics, MIT researchers demonstrate step one, the deterministic emission of single photons — information carriers — in a user-specified direction. Their method ensures quantum information flows in the correct direction more than 96 percent of the time.
    Linking several of these modules enables a larger network of quantum processors that are interconnected with one another, no matter their physical separation on a computer chip.
    “Quantum interconnects are a crucial step toward modular implementations of larger-scale machines built from smaller individual components,” says Bharath Kannan PhD ’22, co-lead author of a research paper describing this technique.
    “The ability to communicate between smaller subsystems will enable a modular architecture for quantum processors, and this may be a simpler way of scaling to larger system sizes compared to the brute-force approach of using a single large and complicated chip,” Kannan adds.

    Kannan wrote the paper with co-lead author Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) at MIT. The senior author is William D. Oliver, a professor of electrical engineering and computer science and of physics, an MIT Lincoln Laboratory Fellow, director of the Center for Quantum Engineering, and associate director of RLE.
    Moving quantum information
    In a conventional classical computer, various components perform different functions, such as memory, computation, etc. Electronic information, encoded and stored as bits (which take the value of 1s or 0s), is shuttled between these components using interconnects, which are wires that move electrons around on a computer processor.
    But quantum information is more complex. Instead of only holding a value of 0 or 1, quantum information can also be both 0 and 1 simultaneously (a phenomenon known as superposition). Also, quantum information can be carried by particles of light, called photons. These added complexities make quantum information fragile, and it can’t be transported simply using conventional protocols.
    A quantum network links processing nodes using photons that travel through special interconnects known as waveguides. A waveguide can either be unidirectional, and move a photon only to the left or to the right, or it can be bidirectional.

    Most existing architectures use unidirectional waveguides, which are easier to implement since the direction in which photons travel is easily established. But since each waveguide only moves photons in one direction, more waveguides become necessary as the quantum network expands, which makes this approach difficult to scale. In addition, unidirectional waveguides usually incorporate additional components to enforce the directionality, which introduces communication errors.
    “We can get rid of these lossy components if we have a waveguide that can support propagation in both the left and right directions, and a means to choose the direction at will. This ‘directional transmission’ is what we demonstrated, and it is the first step toward bidirectional communication with much higher fidelities,” says Kannan.
    Using their architecture, multiple processing modules can be strung along one waveguide. A remarkable feature of the architecture design is that the same module can be used as both a transmitter and a receiver, he says. And photons can be sent and captured by any two modules along a common waveguide.
    “We have just one physical connection that can have any number of modules along the way. This is what makes it scalable. Having demonstrated directional photon emission from one module, we are now working on capturing that photon downstream at a second module,” Almanakly adds.
    Leveraging quantum properties
    To accomplish this, the researchers built a module comprising four qubits.
    Qubits are the building blocks of quantum computers, and are used to store and process quantum information. But qubits can also be used as photon emitters. Adding energy to a qubit causes the qubit to become excited, and then when it de-excites, the qubit will emit the energy in the form of a photon.
    However, simply connecting one qubit to a waveguide does not ensure directionality. A single qubit emits a photon, but whether it travels to the left or to the right is completely random. To circumvent this problem, the researchers utilize two qubits and a property known as quantum interference to ensure the emitted photon travels in the correct direction.
    The technique involves preparing the two qubits in an entangled state of single excitation called a Bell state. This quantum-mechanical state comprises two aspects: the left qubit being excited and the right qubit being excited. Both aspects exist simultaneously, but which qubit is excited at a given time is unknown.
    When the qubits are in this entangled Bell state, the photon is effectively emitted to the waveguide at the two qubit locations simultaneously, and these two “emission paths” interfere with each other. Depending on the relative phase within the Bell state, the resulting photon emission must travel to the left or to the right. By preparing the Bell state with the correct phase, the researchers choose the direction in which the photon travels through the waveguide.
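    In a simplified waveguide-QED picture (an illustrative sketch, not the paper’s full treatment), the interference works as follows for two qubits separated by a distance d along the waveguide:

    ```latex
    % Single excitation shared between the two qubits (a Bell state):
    \[
      |\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|eg\rangle + e^{i\varphi}\,|ge\rangle\bigr)
    \]
    % The two emission paths add with phases set by the qubit positions, giving
    % right- and left-travelling amplitudes for a guided photon of wavenumber k:
    \[
      A_{\mathrm{R}} \propto 1 + e^{i(\varphi - kd)}, \qquad
      A_{\mathrm{L}} \propto 1 + e^{i(\varphi + kd)}
    \]
    % With quarter-wavelength spacing (kd = pi/2), choosing phi = +pi/2 gives
    % A_R proportional to 2 and A_L = 0: the photon goes right. Choosing
    % phi = -pi/2 reverses the direction.
    ```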
    They can use this same technique, but in reverse, to receive the photon at another module.
    “The photon has a certain frequency, a certain energy, and you can prepare a module to receive it by tuning it to the same frequency. If they are not at the same frequency, then the photon will just pass by. It’s analogous to tuning a radio to a particular station. If we choose the right radio frequency, we’ll pick up the music transmitted at that frequency,” Almanakly says.
    The researchers found that their technique achieved more than 96 percent fidelity — this means that if they intended to emit a photon to the right, 96 percent of the time it went to the right.
    Now that they have used this technique to effectively emit photons in a specific direction, the researchers want to connect multiple modules and use the process to emit and absorb photons. This would be a major step toward the development of a modular architecture that combines many smaller-scale processors into one larger-scale, and more powerful, quantum processor.
    The research is funded, in part, by the AWS Center for Quantum Computing, the U.S. Army Research Office, the Department of Energy Office of Science National Quantum Information Science Research Centers, the Co-design Center for Quantum Advantage, and the Department of Defense.