More stories

  •

    Researchers demonstrate secure information transfer using spatial correlations in quantum entangled beams of light

    Researchers at the University of Oklahoma led a study recently published in Science Advances that proves the principle of using spatial correlations in quantum entangled beams of light to encode information and enable its secure transmission.
    Light can be used to encode information for high-data rate transmission, long-distance communication and more. But for secure communication, encoding large amounts of information in light has additional challenges to ensure the privacy and integrity of the data being transferred.
    Alberto Marino, the Ted S. Webb Presidential Professor in the Homer L. Dodge College of Arts, led the research with OU doctoral student and the study’s first author Gaurav Nirala and co-authors Siva T. Pradyumna and Ashok Kumar. Marino also holds positions with OU’s Center for Quantum Research and Technology and with the Quantum Science Center, Oak Ridge National Laboratory.
    “The idea behind the project is to be able to use the spatial properties of the light to encode large amounts of information, just like how an image contains information. However, to be able to do so in a way that is compatible with quantum networks for secure information transfer. When you consider an image, it can be constructed by combining basic spatial patterns known as modes, and depending on how you combine these modes, you can change the image or encoded information,” Marino said.
    “What we’re doing here that is new and different is that we’re not just using those modes to encode information; we’re using the correlations between them,” he added. “We’re using the additional information on how those modes are linked to encode the information.”
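    Marino’s picture of an image as a weighted combination of basic spatial modes can be sketched numerically. The snippet below is only an illustration, not the paper’s code: it superposes Hermite-Gaussian modes, a standard spatial basis for laser light, with chosen coefficients to build a spatial pattern.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hg_mode(m, n, x, y, w=1.0):
    """Hermite-Gaussian spatial mode HG_mn on a grid (unnormalized)."""
    X, Y = np.meshgrid(x, y)
    cm = np.zeros(m + 1); cm[m] = 1   # select the m-th Hermite polynomial
    cn = np.zeros(n + 1); cn[n] = 1
    return (hermval(np.sqrt(2) * X / w, cm)
            * hermval(np.sqrt(2) * Y / w, cn)
            * np.exp(-(X**2 + Y**2) / w**2))

x = np.linspace(-3, 3, 64)
# An "image" is a coefficient-weighted superposition of basis modes;
# changing the coefficients changes the encoded pattern.
coeffs = {(0, 0): 1.0, (1, 0): 0.6, (0, 2): -0.4}
image = sum(c * hg_mode(m, n, x, x) for (m, n), c in coeffs.items())
print(image.shape)  # (64, 64)
```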
    The researchers used two entangled beams of light, meaning that the light waves are interconnected by correlations stronger than any achievable with classical light, and they remain interconnected regardless of the distance between them.
    “The advantage of the approach we introduce is that you’re not able to recover the encoded information unless you perform joint measurements of the two entangled beams,” Marino said. “This has applications such as secure communication, given that if you were to measure each beam by itself, you would not be able to extract any information. You have to obtain the shared information between both of the beams and combine it in the right way to extract the encoded information.”
    Through a series of images and correlation measurements, the researchers demonstrated that they could successfully encode information in these quantum-entangled beams of light. Only when the two beams were combined in the intended way did the encoded information resolve into recognizable images.
    “The experimental result describes how one can transfer spatial patterns from one optical field to two new optical fields generated using a quantum mechanical process called four-wave mixing,” said Nirala. “The encoded spatial pattern can be retrieved solely by joint measurements of generated fields. One interesting aspect of this experiment is that it offers a novel method of encoding information in light by modifying the correlation between various spatial modes without impacting time-correlations.”
    “What this could enable, in principle, is the ability to securely encode and transmit a lot of information using the spatial properties of the light, just like how an image contains a lot more information than just turning the light on and off,” Marino said. “Using the spatial correlations is a new approach to encode information.”
    “Information encoding in the spatial correlations of entangled twin beams” was published in Science Advances on June 2, 2023.

  •

    Quantum computers are better at guessing, new study demonstrates

    Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and first author Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, achieved this quantum speedup advantage in the context of a “bitstring guessing game.” They managed strings up to 26 bits long, significantly larger than previously possible, by effectively suppressing errors typically seen at this scale. (A bit is a binary digit that is either zero or one.)
    Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’” This noise-prone condition of current quantum computing is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speed advantage necessitates noise reduction.
    The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.
    In their study, the researchers replaced words with bitstrings. A classical computer would, on average, require approximately 33 million guesses to correctly identify a 26-bit string. In contrast, a perfectly functioning quantum computer, presenting guesses in quantum superposition, could identify the correct answer in just one guess. This efficiency comes from running a quantum algorithm developed more than 25 years ago by computer scientists Ethan Bernstein and Umesh Vazirani. However, noise can significantly hamper this exponential quantum advantage.
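    The algorithm in question, Bernstein-Vazirani, is easy to simulate classically for small bit counts. The sketch below is a plain statevector simulation, not the hardware experiment: a single query to a phase oracle reveals the entire secret string.

```python
import numpy as np

def bernstein_vazirani(secret: str) -> str:
    """Statevector simulation of the Bernstein-Vazirani circuit.

    The oracle computes f(x) = s.x mod 2 as a phase; one quantum
    query recovers the whole n-bit secret string s.
    """
    n = len(secret)
    s = int(secret, 2)
    # Start in the uniform superposition (H on |0...0>).
    state = np.full(2**n, 1 / np.sqrt(2**n))
    # Phase oracle: flip the sign of amplitudes where s.x is odd.
    for x in range(2**n):
        if bin(s & x).count("1") % 2:
            state[x] *= -1
    # Apply H to every qubit again; the state collapses onto |s>.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for q in range(n):
        state = state.reshape(2**q, 2, -1)
        state = np.einsum("ij,ajb->aib", H, state).reshape(-1)
    return format(int(np.argmax(np.abs(state))), f"0{n}b")

print(bernstein_vazirani("1011001"))  # prints 1011001: one oracle call suffices
```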
    Lidar and Pokharel achieved their quantum speedup by adapting a noise suppression technique called dynamical decoupling. They spent a year experimenting, with Pokharel working as a doctoral candidate under Lidar at USC. Initially, applying dynamical decoupling seemed to degrade performance. However, after numerous refinements, the quantum algorithm functioned as intended. The time to solve problems then grew more slowly than with any classical computer, with the quantum advantage becoming increasingly evident as the problems became more complex.
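    The idea behind dynamical decoupling can be seen in a toy model (an illustration, not the authors’ actual protocol). The simplest sequence is a spin echo: a π pulse halfway through the evolution reverses the accumulated phase, so slow, quasi-static dephasing noise cancels exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
# Quasi-static dephasing noise: a random, shot-to-shot frequency detuning.
detunings = rng.normal(0.0, 3.0, 10_000)
T = 1.0

# Free evolution: each shot accumulates phase delta * T, and averaging
# over shots destroys the coherence.
free_coherence = np.mean(np.cos(detunings * T))

# Spin echo: a pi pulse at T/2 flips the sign of the accumulated phase,
# so the second half of the evolution undoes the first half exactly.
echo_phase = detunings * (T / 2) - detunings * (T / 2)
echo_coherence = np.mean(np.cos(echo_phase))

print(f"free: {free_coherence:.3f}, echo: {echo_coherence:.3f}")
```

Real sequences such as XY4 generalize this idea to noise that drifts during the evolution, which is closer to what the experiment had to suppress.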
    Lidar notes that “currently, classical computers can still solve the problem faster in absolute terms.” In other words, the reported advantage is measured in terms of the time-scaling it takes to find the solution, not the absolute time. This means that for sufficiently long bitstrings, the quantum solution will eventually be quicker.
    The study conclusively demonstrates that with proper error control, quantum computers can execute complete algorithms with better scaling of the time it takes to find the solution than conventional computers, even in the NISQ era.

  •

    Unveiling the nanoscale frontier: innovating with nanoporous model electrodes

    Researchers at Tohoku University and Tsinghua University have introduced a next-generation model membrane electrode that promises to revolutionize fundamental electrochemical research. This innovative electrode, fabricated through a meticulous process, showcases an ordered array of hollow giant carbon nanotubes (gCNTs) within a nanoporous membrane, unlocking new possibilities for energy storage and electrochemical studies.
    The key breakthrough lies in the construction of this novel electrode. The researchers developed a uniform carbon coating technique on anodic aluminum oxide (AAO) formed on an aluminum substrate, with the barrier layer eliminated. The resulting conformally carbon-coated layer exhibits vertically aligned gCNTs with nanopores ranging from 10 to 200 nm in diameter and 2 μm to 90 μm in length, a range that accommodates everything from small electrolyte molecules to large biological matter such as enzymes and exosomes. Unlike traditional composite electrodes, this self-standing model electrode eliminates inter-particle contact, ensuring minimal contact resistance — something essential for interpreting the corresponding electrochemical behaviors.
    “The potential of this model electrode is immense,” stated Dr. Zheng-Ze Pan, one of the corresponding authors of the study. “By employing the model membrane electrode with its extensive range of nanopore dimensions, we can attain profound insights into the intricate electrochemical processes transpiring within porous carbon electrodes, along with their inherent correlations to the nanopore dimensions.”
    Moreover, the gCNTs are composed of low-crystalline stacked graphene sheets, offering unparalleled access to the electrical conductivity within low-crystalline carbon walls. Through experimental measurements and the utilization of an in-house temperature-programmed desorption system, the researchers constructed an atomic-scale structural model of the low-crystalline carbon walls, enabling detailed theoretical simulations. Dr. Alex Aziz, who carried out the simulation part for this research, points out, “Our advanced simulations provide a unique lens to estimate electron transitions within amorphous carbons, shedding light on the intricate mechanisms governing their electrical behavior.”
    This project was led by Prof. Dr. Hirotomo Nishihara, the Principal Investigator of the Device/System Group at the Advanced Institute for Materials Research (WPI-AIMR). The findings are detailed in one of materials science’s top journals, Advanced Functional Materials.
    Ultimately, the study represents a significant step forward in our understanding of amorphous-based porous carbon materials and their applications in probing various electrochemical systems.

  •

    Finally solved! The great mystery of quantized vortex motion

    Liquid helium-4, which is in a superfluid state at cryogenic temperatures close to absolute zero (-273°C), has a special vortex called a quantized vortex that originates from quantum mechanical effects. When the temperature is relatively high, a normal fluid coexists with the superfluid helium, and when the quantized vortex is in motion, mutual friction occurs between it and the normal fluid. However, it is difficult to explain precisely how a quantized vortex interacts with a normal fluid in motion. Although several theoretical models have been proposed, it has not been clear which model is correct.

    A research group led by Professor Makoto Tsubota and Specially Appointed Assistant Professor Satoshi Yui, from the Graduate School of Science and the Nambu Yoichiro Institute of Theoretical and Experimental Physics at Osaka Metropolitan University, in cooperation with colleagues from Florida State University and Keio University, numerically investigated the interaction between a quantized vortex and a normal fluid. Comparing against the experimental results, the researchers determined which of several theoretical models is most consistent: a model that accounts for changes in the normal fluid and incorporates a more theoretically accurate mutual friction matched the experiments best.
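    A classic starting point for this interaction is Schwarz’s mutual-friction equation for the velocity of a vortex-line element; the models compared in the study refine this picture. The sketch below shows that baseline with illustrative coefficient values, not the study’s refined model.

```python
import numpy as np

def vortex_line_velocity(v_s, v_n, tangent, alpha=0.1, alpha_prime=0.01):
    """Velocity of a quantized-vortex element under Schwarz's
    mutual-friction model (a standard baseline, not the paper's
    refined, fully coupled model).

    v_s, v_n : superfluid / normal-fluid velocities at the element (3-vectors)
    tangent  : unit tangent vector s' of the vortex line
    alpha, alpha_prime : temperature-dependent mutual-friction coefficients
    """
    rel = v_n - v_s
    drag = alpha * np.cross(tangent, rel)                       # dissipative term
    transverse = alpha_prime * np.cross(tangent, np.cross(tangent, rel))
    return v_s + drag - transverse

# A vortex line along z in a normal fluid flowing along x:
v = vortex_line_velocity(np.zeros(3), np.array([1.0, 0, 0]),
                         np.array([0, 0, 1.0]))
print(v)  # drag pushes the line along y; alpha' adds a small x-component
```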
    “The subject of this study, the interaction between a quantized vortex and a normal fluid, has been a great mystery since I began my research in this field 40 years ago,” stated Professor Tsubota. “Computational advances have made it possible to handle this problem, and the brilliant visualization experiment by our collaborators at Florida State University has led to a breakthrough. As is often the case in science, subsequent developments in technology make it possible to elucidate long-standing mysteries, and this study is a good example of that.”
    Their findings were published in Nature Communications.

  •

    Tiny video capsule shows promise as an alternative to endoscopy

    While ingestible video capsule endoscopes have been around for many years, the capsules have been limited by the fact that they could not be controlled by physicians. They moved passively, driven only by gravity and the natural movement of the body. Now, according to a first-of-its-kind research study at George Washington University, physicians can remotely drive a miniature video capsule to all regions of the stomach to visualize and photograph potential problem areas. The new technology uses an external magnet and hand-held video-game-style joysticks to move the capsule in three dimensions in the stomach. This new technology comes closer to the capabilities of a traditional tube-based endoscopy.
    “A traditional endoscopy is an invasive procedure for patients, not to mention it is costly due to the need for anesthesia and time off work,” Andrew Meltzer, a professor of Emergency Medicine at the GW School of Medicine & Health Sciences, said. “If larger studies can prove this method is sufficiently sensitive to detect high-risk lesions, magnetically controlled capsules could be used as a quick and easy way to screen for health problems in the upper GI tract such as ulcers or stomach cancer.”
    More than 7 million traditional endoscopies of the stomach and upper part of the intestine are performed every year in the United States to help doctors investigate and treat stomach pain, nausea, bleeding and other symptoms of disease, including cancer. Despite the benefits of traditional endoscopies, studies suggest some patients have trouble accessing the procedure.
    In fact, Meltzer got interested in the magnetically controlled capsule endoscopy after seeing patients in the emergency room with stomach pain or suspected upper GI bleeding who faced barriers to getting a traditional endoscopy as an outpatient.
    “I would have patients who came to the ER with concerns for a bleeding ulcer and, even if they were clinically stable, I would have no way to evaluate them without admitting them to the hospital for an endoscopy. We could not do an endoscopy in the ER and many patients faced unacceptable barriers to getting an outpatient endoscopy, a crucial diagnostic tool to preventing life-threatening hemorrhage,” Meltzer said. “To help address this problem, I started looking for less invasive ways to visualize the upper gastrointestinal tract for patients with suspected internal bleeding.”
    The study is the first to test magnetically controlled capsule endoscopy in the United States. For patients who come to the ER or a doctor’s office with severe stomach pain, the ability to swallow a capsule and get a diagnosis on the spot — without a second appointment for a traditional endoscopy — is a real plus, not to mention potentially life-saving, says Meltzer. An external magnet allows the capsule to be painlessly driven to visualize all anatomic areas of the stomach and record video and photograph any possible bleeding, inflammatory or malignant lesions.
    While using the joystick requires additional time and training, software is being developed that will use artificial intelligence to self-drive the capsule to all parts of the stomach at the push of a button and record any potentially risky abnormalities. That would make it easier to use the system as a diagnostic tool or screening test. In addition, the videos can be easily transmitted for off-site review if a gastroenterologist is not on-site to over-read the images.
    Meltzer and colleagues conducted a study of 40 patients at a physician office building using the magnetically controlled capsule endoscopy. They found that the doctor could direct the capsule to all major parts of the stomach with a 95 percent rate of visualization. Capsules were driven by the ER physician and then the study reports were reviewed by an attending gastroenterologist who was physically off-site.
    To see how the new method compared with a traditional endoscopy, participants in the study also received a follow up endoscopy. No high-risk lesions were missed with the new method and 80 percent of the patients preferred the capsule method to the traditional endoscopy. The team found no safety problems associated with the new method.
    Yet, Meltzer cautions that the study is a pilot and a much bigger trial with more patients must be conducted to make sure the method does not miss important lesions and can be used in place of an endoscopy. A major limitation of the capsule includes the inability to perform biopsies of lesions that are detected.
    The study, “Magnetically Controlled Capsule for Assessment of the Gastric Mucosa in Symptomatic Patients (MAGNET): A Prospective, Single-Arm, Single-Center, Comparative Study,” was published in iGIE, the open-access, online journal of the American Society for Gastrointestinal Endoscopy.
    The medical technology company AnX Robotica funded the research and is the creator of the capsule endoscopy system used in the study, called NaviCam®.

  •

    New method improves efficiency of ‘vision transformer’ AI systems

    Vision transformers (ViTs) are powerful artificial intelligence (AI) technologies that can identify or categorize objects in images — however, there are significant challenges related to both computing power requirements and decision-making transparency. Researchers have now developed a new methodology that addresses both challenges, while also improving the ViT’s ability to identify, classify and segment objects in images.
    Transformers are among the most powerful existing AI models. For example, ChatGPT is an AI that uses transformer architecture, but the inputs used to train it are language. ViTs are transformer-based AI that are trained using visual inputs. For example, ViTs could be used to detect and categorize objects in an image, such as identifying all of the cars or all of the pedestrians in an image.
    However, ViTs face two challenges.
    First, transformer models are very complex. Relative to the amount of data being plugged into the AI, transformer models require a significant amount of computational power and use a large amount of memory. This is particularly problematic for ViTs, because images contain so much data.
    Second, it is difficult for users to understand exactly how ViTs make decisions. For example, you might have trained a ViT to identify dogs in an image. But it’s not entirely clear how the ViT is determining what is a dog and what is not. Depending on the application, understanding the ViT’s decision-making process, also known as its model interpretability, can be very important.
    The new ViT methodology, called “Patch-to-Cluster attention” (PaCa), addresses both challenges.

    “We address the challenge related to computational and memory demands by using clustering techniques, which allow the transformer architecture to better identify and focus on objects in an image,” says Tianfu Wu, corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University. “Clustering is when the AI lumps sections of the image together, based on similarities it finds in the image data. This significantly reduces computational demands on the system. Before clustering, computational demands for a ViT are quadratic. For example, if the system breaks an image down into 100 smaller units, it would need to compare all 100 units to each other — which would be 10,000 complex functions.
    “By clustering, we’re able to make this a linear process, where each smaller unit only needs to be compared to a predetermined number of clusters. Let’s say you tell the system to establish 10 clusters; that would only be 1,000 complex functions,” Wu says.
    “Clustering also allows us to address model interpretability, because we can look at how it created the clusters in the first place. What features did it decide were important when lumping these sections of data together? And because the AI is only creating a small number of clusters, we can look at those pretty easily.”
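    Wu’s counting argument can be sketched directly: with N patches attending to M cluster summaries, the attention score matrix is N×M rather than N×N. The toy implementation below uses random projections and random cluster assignments purely for illustration; it is not the published PaCa code.

```python
import numpy as np

def patch_to_cluster_attention(patches, n_clusters=10, d=64, seed=0):
    """Toy sketch of patch-to-cluster attention.

    Standard self-attention compares all N patches to each other
    (N*N scores); here each patch attends only to M cluster summaries,
    so the score matrix is N*M — linear in N for fixed M.
    """
    rng = np.random.default_rng(seed)
    N = patches.shape[0]
    # Hypothetical "learned" projections, random here for illustration.
    Wq = rng.standard_normal((patches.shape[1], d))
    Wc = rng.standard_normal((patches.shape[1], d))
    # Soft cluster assignment: patches are pooled into M summaries.
    assign = rng.random((N, n_clusters))
    assign /= assign.sum(axis=0, keepdims=True)
    clusters = assign.T @ patches              # (M, feat) cluster summaries
    q = patches @ Wq                           # (N, d) patch queries
    k = clusters @ Wc                          # (M, d) cluster keys
    scores = q @ k.T / np.sqrt(d)              # (N, M) scores, not (N, N)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ (clusters @ Wc)           # attended output, (N, d)

out = patch_to_cluster_attention(
    np.random.default_rng(1).standard_normal((100, 32)))
print(out.shape)  # 100 patches in, but only 100 x 10 attention scores computed
```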
    The researchers did comprehensive testing of PaCa, comparing it to two state-of-the-art ViTs called SWin and PVT.
    “We found that PaCa outperformed SWin and PVT in every way,” Wu says. “PaCa was better at classifying objects in images, better at identifying objects in images, and better at segmentation — essentially outlining the boundaries of objects in images. It was also more efficient, meaning that it was able to perform those tasks more quickly than the other ViTs.
    “The next step for us is to scale up PaCa by training on larger, foundational data sets.”
    The paper, “PaCa-ViT: Learning Patch-to-Cluster Attention in Vision Transformers,” will be presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, being held June 18-22 in Vancouver, Canada. First author of the paper is Ryan Grainger, a Ph.D. student at NC State. The paper was co-authored by Thomas Paniagua, a Ph.D. student at NC State; Xi Song, an independent researcher; and Naresh Cuntoor and Mun Wai Lee of BlueHalo.
    The work was done with support from the Office of the Director of National Intelligence, under contract number 2021-21040700003; the U.S. Army Research Office, under grants W911NF1810295 and W911NF2210010; and the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451.

  •

    Reading between the cracks: Artificial intelligence can identify patterns in surface cracking to assess damage in reinforced concrete structures

    Recent structural collapses, including tragedies in Surfside, Florida, Pittsburgh, and New York City, have underscored the need for more frequent and thorough inspections of aging buildings and infrastructure across the country. But inspections are time-consuming and often inconsistent processes, heavily dependent on the judgment of inspectors. Researchers at Drexel University and the State University of New York at Buffalo are trying to make the process more efficient and definitive by using artificial intelligence, combined with a classic mathematical method for quantifying web-like networks, to determine how damaged a concrete structure is, based solely on its pattern of cracking.
    In the paper “A graph-based method for quantifying crack patterns on reinforced concrete shear walls,” which was recently published in the journal Computer-Aided Civil and Infrastructure Engineering, the researchers, led by Arvin Ebrahimkhanlou, PhD, an assistant professor in Drexel’s College of Engineering, and Pedram Bazrafshan, a doctoral student in the College, present a process that could help the country better understand how many of its hundreds of thousands of aging bridges, levees, roadways and buildings are in urgent need of repair.
    “Without an autonomous and objective process for assessing damage to the many reinforced concrete structures that make up our built environment, these tragic structural failures are sure to continue,” Ebrahimkhanlou said. “Our aging infrastructures are being used beyond their design lifespan, and because manual inspections are time-consuming and subjective, indications of structural damage may be missed or underestimated.”
    The current process for inspecting a concrete structure, such as a bridge or a parking deck, involves an inspector visually examining it for cracking, chipping, or water penetration, taking measurements of the cracks, and noting whether or not they have changed in the time between inspections — which may be years. If enough of these conditions are present and appear to be in an advanced state — according to a set of guidelines on a damage index — then the structure could be rated “unsafe.”
    In addition to the time it takes to go through this process for each inspection, there is widespread concern that the process leaves too much room for subjectivity to skew the final assessment.
    “The same crack in a reinforced concrete structure can appear menacing or mundane — depending on who is looking at it,” Bazrafshan said. “A crack can be an innocuous part of a building’s settling process or a telltale sign of structural damage; unfortunately, there is little agreement on precisely when one has progressed from the former to the latter.”
    The first step for Bazrafshan and Ebrahimkhanlou’s group was to eliminate this uncertainty by creating a method to precisely quantify the extent of cracking. To do it, they employed a mathematical method called graph theory, which is used to measure and study networks — most recently, social networks — by pinpointing features of the resulting graph, such as the average number of times cracks intersect.
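    As an illustration of the kind of features involved, the snippet below computes two of them (intersection count and average degree) from a small, hypothetical crack map, where nodes are crack endpoints or intersections and edges are the crack segments traced between them:

```python
from collections import defaultdict

# Hypothetical crack map: one crack branching at node b, rejoining at e.
edges = [("a", "b"), ("b", "c"), ("b", "d"), ("c", "e"), ("d", "e")]

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Two graph features of the kind used in a cracking "fingerprint":
# nodes of degree >= 3 are crack intersections.
n_intersections = sum(1 for d in degree.values() if d >= 3)
avg_degree = sum(degree.values()) / len(degree)
print(n_intersections, avg_degree)  # 1 2.0
```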

    Ebrahimkhanlou originally developed the process for using graph features to create a kind of “fingerprint” for each set of cracks in a reinforced concrete structure and — by comparing the prints of newly inspected structures to those of structures with known safety ratings — produce a quick and accurate damage assessment.
    “Creating a mathematical representation of cracking patterns is a novel idea and the key contribution of our recent paper,” Ebrahimkhanlou said. “We find this to be a highly effective way to quantify changes in the patterns of cracking, which enables us to connect the visual appearance of a crack to the level of structural damage in a way that is quantifiable and can be consistently repeated regardless of who is doing the inspection.”
    The team used AI pixel-tracking algorithms to convert images of cracks to their corresponding mathematical representation: a graph.
    “The crack-to-graph conversion and feature-extraction processes take just a minute or so per image, which is a significant improvement by comparison to the inspection process which could take hours or days to make all of the required measurements,” Bazrafshan said. “This is also a promising development for the possibility of automating the entire analysis process in the future.”
    To develop a feature framework for comparison, they had a machine learning program extract graph features from a set of images of reinforced concrete shear wall structures with different height-to-length ratios that were created to test different behaviors the walls could exhibit in an earthquake.

    Focusing specifically on the group of images that showed moderate cracking — the kind that shows that the safety of the structure is under question — the team trained a second algorithm to correlate the extracted graph features with a tangible scale showing the amount of damage imposed on the structure. For example, the more cracks intersect one another — which corresponds with a higher “average degree” of their graph feature — the more serious the damage to the structure.
    The program assigned a weighted value to each of these features, depending on how closely they correlated with mechanical indicators of damage, to produce a quantitative profile against which the algorithm could measure new samples to determine the extent of their structural damage.
    To test the assessment algorithm, the team used images of three large-scale walls that had been mechanically tested in a lab at the University at Buffalo to determine their conditions. The team used images of one side of each wall as a training set and then tested the model with images of the opposite side to test its ability to predict each sample’s level of damage.
    In each case, the AI program was able to correctly assess the damage with greater than 90% accuracy, indicating that the program would be a highly effective means of rapid damage assessment.
    “This is just the first step in creating a very powerful assessment tool that leverages volumes of research and human knowledge to make faster and more accurate assessments of structures in the built environment,” Ebrahimkhanlou said. “Imposing order on a seemingly chaotic set of features is the essence of scientific discovery. We believe this innovation could go a long way toward identifying problems before they happen and making our infrastructures safer.”
    The group plans to continue its work by training and testing the program against larger and more diverse datasets, including other types of structures. The team is also working toward automating the process so that it could be integrated into structural monitoring systems, as well as into the process of collecting photos and video of damaged structures following earthquakes and other natural disasters.

  •

    The ‘breath’ between atoms — a new building block for quantum technology

    University of Washington researchers have discovered they can detect atomic “breathing,” or the mechanical vibration between two layers of atoms, by observing the type of light those atoms emit when stimulated by a laser. The sound of this atomic “breath” could help researchers encode and transmit quantum information.
    The researchers also developed a device that could serve as a new type of building block for quantum technologies, which are widely anticipated to have many future applications in fields such as computing, communications and sensor development.
    The researchers published these findings June 1 in Nature Nanotechnology.
    “This is a new, atomic-scale platform, using what the scientific community calls ‘optomechanics,’ in which light and mechanical motions are intrinsically coupled together,” said senior author Mo Li, a UW professor of both electrical and computer engineering and physics. “It provides a new type of involved quantum effect that can be utilized to control single photons running through integrated optical circuits for many applications.”
    Previously, the team had studied a quantum-level quasiparticle called an “exciton.” Information can be encoded into an exciton and then released in the form of a photon — a tiny particle of energy considered to be the quantum unit of light. Quantum properties of each photon emitted — such as the photon’s polarization, wavelength and/or emission timing — can function as a quantum bit of information, or “qubit,” for quantum computing and communication. And because this qubit is carried by a photon, it travels at the speed of light.
    “The bird’s-eye view of this research is that to feasibly have a quantum network, we need to have ways of reliably creating, operating on, storing and transmitting qubits,” said lead author Adina Ripin, a UW doctoral student of physics. “Photons are a natural choice for transmitting this quantum information because optical fibers enable us to transport photons long distances at high speeds, with low losses of energy or information.”
    The researchers were working with excitons in order to create a single photon emitter, or “quantum emitter,” which is a critical component for quantum technologies based on light and optics. To do this, the team placed two thin layers of tungsten and selenium atoms, known as tungsten diselenide, on top of each other.

    When the researchers applied a precise pulse of laser light, they knocked a tungsten diselenide atom’s electron away from the nucleus, which generated an exciton quasiparticle. Each exciton consisted of a negatively charged electron on one layer of the tungsten diselenide and a positively charged hole where the electron used to be on the other layer. And because opposite charges attract each other, the electron and the hole in each exciton were tightly bonded to each other. After a short moment, as the electron dropped back into the hole it previously occupied, the exciton emitted a single photon encoded with quantum information — producing the quantum emitter the team sought to create.
    But the team discovered that the tungsten diselenide atoms were emitting another type of quasiparticle, known as a phonon. Phonons are a product of atomic vibration, which is similar to breathing. Here, the two atomic layers of the tungsten diselenide acted like tiny drumheads vibrating relative to each other, which generated phonons. This is the first time phonons have ever been observed in a single photon emitter in this type of two-dimensional atomic system.
    When the researchers measured the spectrum of the emitted light, they noticed several equally spaced peaks. Every single photon emitted by an exciton was coupled with one or more phonons. This is somewhat akin to climbing a quantum energy ladder one rung at a time, and on the spectrum, these energy spikes were represented visually by the equally spaced peaks.
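    The equally spaced peaks follow from simple energy bookkeeping: a photon emitted together with n phonons is lower in energy by n phonon quanta. The numbers below are purely illustrative placeholders, not the paper’s measured values:

```python
# Hypothetical energies, chosen only to illustrate the spacing.
exciton_energy_eV = 1.70   # zero-phonon emission line (placeholder)
phonon_energy_eV = 0.003   # one phonon quantum, ~3 meV (placeholder)

# Peak n corresponds to emitting the photon plus n phonons,
# so successive peaks sit exactly one phonon quantum apart.
sidebands = [exciton_energy_eV - n * phonon_energy_eV for n in range(4)]
spacings = [a - b for a, b in zip(sidebands, sidebands[1:])]
print(sidebands)
```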
    “A phonon is the natural quantum vibration of the tungsten diselenide material, and it has the effect of vertically stretching the exciton electron-hole pair sitting in the two layers,” said Li, who is also a member of the steering committee for the UW’s QuantumX, and is a faculty member of the Institute for Nano-Engineered Systems. “This has a remarkably strong effect on the optical properties of the photon emitted by the exciton that has never been reported before.”
    The researchers were curious if they could harness the phonons for quantum technology. They applied electrical voltage and saw that they could vary the interaction energy of the associated phonons and emitted photons. These variations were measurable and controllable in ways relevant to encoding quantum information into a single photon emission. And this was all accomplished in one integrated system — a device that involved only a small number of atoms.
    Next the team plans to build a waveguide — fibers on a chip that catch single photon emissions and direct them where they need to go — and then scale up the system. Instead of controlling only one quantum emitter at a time, the team wants to be able to control multiple emitters and their associated phonon states. This will enable the quantum emitters to “talk” to each other, a step toward building a solid base for quantum circuitry.
    “Our overarching goal is to create an integrated system with quantum emitters that can use single photons running through optical circuits and the newly discovered phonons to do quantum computing and quantum sensing,” Li said. “This advance certainly will contribute to that effort, and it helps to further develop quantum computing which, in the future, will have many applications.”