More stories

  • Remote control for quantum emitters

    In order to exploit the properties of quantum physics technologically, quantum objects and their interaction must be precisely controlled. In many cases, this is done using light. Researchers at the University of Innsbruck and the Institute of Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences have now developed a method to individually address quantum emitters using tailored light pulses. “Not only is it important to individually control and read the state of the emitters,” says Oriol Romero-Isart, “but also to do so while leaving the system as undisturbed as possible.” Together with Juan Jose Garcia-Ripoll (IQOQI visiting fellow) from the Instituto de Fisica Fundamental in Madrid, Romero-Isart’s research group has now investigated how specifically engineered pulses can be used to focus light on a single quantum emitter.
    Self-compressing light pulse
    “Our proposal is based on chirped light pulses,” explains Silvia Casulleras, first author of the research paper. “The frequency of these light pulses is time-dependent.” So, similar to the chirping of birds, the frequency of the signal changes over time. In structures with certain electromagnetic properties — such as waveguides — the frequencies propagate at different speeds. “If you set the initial conditions of the light pulse correctly, the pulse compresses itself at a certain distance,” explains Patrick Maurer from the Innsbruck team. “Another important part of our work was to show that the pulse enables the control of individual quantum emitters.” This approach can be used as a kind of remote control to address, for example, individual superconducting quantum bits in a waveguide or atoms near a photonic crystal.
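    To make the idea concrete, here is a minimal numerical sketch (not the authors' model) of how a linearly chirped Gaussian pulse compresses after propagating some distance through a dispersive waveguide; the pulse duration, chirp rate, and dispersion value are illustrative assumptions in arbitrary units.
```python
import numpy as np

# Illustrative parameters in arbitrary units; not values from the paper.
T0 = 1.0        # initial pulse duration
chirp = -2.0    # linear chirp rate (sign chosen so the pulse compresses under this dispersion)
beta2 = 0.5     # group-velocity dispersion of the hypothetical waveguide

t = np.linspace(-40, 40, 4096)
dt = t[1] - t[0]
field = np.exp(-t**2 / (2 * T0**2)) * np.exp(1j * chirp * t**2)  # chirped Gaussian pulse
omega = 2 * np.pi * np.fft.fftfreq(t.size, d=dt)

def rms_width(z):
    """Propagate a distance z under dispersion beta2 and return the RMS pulse duration."""
    spectrum = np.fft.fft(field) * np.exp(-0.5j * beta2 * omega**2 * z)
    intensity = np.abs(np.fft.ifft(spectrum))**2
    intensity /= intensity.sum()
    mean = np.sum(t * intensity)
    return np.sqrt(np.sum((t - mean)**2 * intensity))

distances = np.linspace(0, 3, 150)
widths = [rms_width(z) for z in distances]
z_focus = distances[int(np.argmin(widths))]
print(f"shortest pulse near z = {z_focus:.2f}: width {min(widths):.2f} vs {widths[0]:.2f} at z = 0")
```
    Because each frequency component travels at a different speed in the waveguide, the initially stretched pulse reshapes itself and reaches its minimum duration at a finite distance, which is the self-compression the proposal exploits.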
    Wide range of applications
    In their work, now published in Physical Review Letters, the scientists show that this method works not only with light or electromagnetic pulses, but also with other waves such as lattice oscillations (phonons) or magnetic excitations (magnons). The research group led by the Innsbruck experimental physicist Gerhard Kirchmair wants to implement the concept for superconducting qubits in the laboratory, in close collaboration with the team of theorists.
    The research was financially supported by the European Union.

    Story Source:
    Materials provided by University of Innsbruck. Note: Content may be edited for style and length.

  • Experts recreate a mechanical Cosmos for the world's first computer

    Researchers at UCL have solved a major piece of the puzzle that makes up the ancient Greek astronomical calculator known as the Antikythera Mechanism, a hand-powered mechanical device that was used to predict astronomical events.
    Known to many as the world’s first analogue computer, the Antikythera Mechanism is the most complex piece of engineering to have survived from the ancient world. The 2,000-year-old device was used to predict the positions of the Sun, Moon and the planets as well as lunar and solar eclipses.
    Published in Scientific Reports, the paper from the multidisciplinary UCL Antikythera Research Team reveals a new display of the ancient Greek order of the Universe (Cosmos), within a complex gearing system at the front of the Mechanism.
    Lead author Professor Tony Freeth (UCL Mechanical Engineering) explained: “Ours is the first model that conforms to all the physical evidence and matches the descriptions in the scientific inscriptions engraved on the Mechanism itself.
    “The Sun, Moon and planets are displayed in an impressive tour de force of ancient Greek brilliance.”
    The Antikythera Mechanism has generated both fascination and intense controversy since its discovery in a Roman-era shipwreck in 1901 by Greek sponge divers near the small Mediterranean island of Antikythera.

    The astronomical calculator is a bronze device that consists of a complex combination of 30 surviving bronze gears used to predict astronomical events, including eclipses, phases of the moon, positions of the planets and even dates of the Olympics.
    Whilst great progress has been made over the last century to understand how it worked, studies in 2005 using 3D X-rays and surface imaging enabled researchers to show how the Mechanism predicted eclipses and calculated the variable motion of the Moon.
    However, until now, a full understanding of the gearing system at the front of the device has eluded the best efforts of researchers. Only about a third of the Mechanism has survived, and is split into 82 fragments — creating a daunting challenge for the UCL team.
    The biggest surviving fragment, known as Fragment A, displays features of bearings, pillars and a block. Another, known as Fragment D, features an unexplained disk, 63-tooth gear and plate.
    Previous research had used X-ray data from 2005 to reveal thousands of text characters hidden inside the fragments, unread for nearly 2,000 years. Inscriptions on the back cover include a description of the cosmos display, with the planets moving on rings and indicated by marker beads. It was this display that the team worked to reconstruct.

    Two critical numbers in the X-rays of the front cover, 462 years and 442 years, accurately represent cycles of Venus and Saturn respectively. When observed from Earth, the planets’ cycles sometimes reverse their motions against the stars. Experts must track these variable cycles over long time-periods in order to predict their positions.
    “The classic astronomy of the first millennium BC originated in Babylon, but nothing in this astronomy suggested how the ancient Greeks found the highly accurate 462-year cycle for Venus and 442-year cycle for Saturn,” explained PhD candidate and UCL Antikythera Research Team member Aris Dacanalis.
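    As a rough plausibility check (not part of the UCL analysis), the two numbers correspond closely to whole numbers of synodic cycles when compared against modern mean synodic periods; the period values below are modern estimates used purely for illustration.
```python
# Compare the Mechanism's 462-year (Venus) and 442-year (Saturn) cycles with
# modern mean synodic periods. These modern values are assumptions for this
# sanity check, not data from the inscriptions.
YEAR_DAYS = 365.25
SYNODIC_DAYS = {"Venus": 583.92, "Saturn": 378.09}
CYCLE_YEARS = {"Venus": 462, "Saturn": 442}

for planet, years in CYCLE_YEARS.items():
    synodic_years = SYNODIC_DAYS[planet] / YEAR_DAYS
    cycles = years / synodic_years
    print(f"{planet}: {years} years ≈ {cycles:.2f} synodic cycles (≈ {round(cycles)} whole cycles)")
```
    The near-integer results (roughly 289 and 427 synodic cycles respectively) are what make such long periods attractive for gearwork: a ratio of two whole numbers can be realized with whole numbers of gear teeth.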
    Using an ancient Greek mathematical method described by the philosopher Parmenides, the UCL team not only explained how the cycles for Venus and Saturn were derived but also managed to recover the cycles of all the other planets, where the evidence was missing.
    PhD candidate and team member David Higgon explained: “After considerable struggle, we managed to match the evidence in Fragments A and D to a mechanism for Venus, which exactly models its 462-year planetary period relation, with the 63-tooth gear playing a crucial role.”
    Professor Freeth added: “The team then created innovative mechanisms for all of the planets that would calculate the new advanced astronomical cycles and minimize the number of gears in the whole system, so that they would fit into the tight spaces available.”
    “This is a key theoretical advance on how the Cosmos was constructed in the Mechanism,” added co-author Dr Adam Wojcik (UCL Mechanical Engineering). “Now we must prove its feasibility by making it with ancient techniques. A particular challenge will be the system of nested tubes that carried the astronomical outputs.”

  • Unique Ag-hydrogel composite for soft bioelectronics created

    In the field of robotics, metals offer advantages like strength, durability, and electrical conductivity. But, they are heavy and rigid — properties that are undesirable in soft and flexible systems for wearable computing and human-machine interfaces.
    Hydrogels, on the other hand, are lightweight, stretchable, and biocompatible, making them excellent materials for contact lenses and tissue engineering scaffolding. They are, however, poor at conducting electricity, which is needed for digital circuits and bioelectronics applications.
    Researchers in Carnegie Mellon University’s Soft Machines Lab have developed a unique silver-hydrogel composite that has high electrical conductivity and is capable of delivering direct current while maintaining soft compliance and deformability. The findings were published in Nature Electronics.
    The team suspended micrometer-sized silver flakes in a polyacrylamide-alginate hydrogel matrix. After going through a partial dehydration process, the flakes formed percolating networks that were electrically conductive and robust to mechanical deformations. By manipulating this dehydration and hydration process, the flakes can be made to stick together or break apart, forming reversible electrical connections.
    Previous attempts to combine metals and hydrogels revealed a trade-off between improved electrical conductivity and lowered compliance and deformability. Majidi and his team sought to tackle this challenge, building on their expertise in developing stretchable, conductive elastomers with liquid metal.
    “With its high electrical conductivity and high compliance or ‘squishiness,’ this new composite can have many applications in bioelectronics and beyond,” explained Carmel Majidi, professor of mechanical engineering. “Examples include a sticker for the brain that has sensors for signal processing, a wearable energy generation device to power electronics, and stretchable displays.”
    The silver-hydrogel composite can be printed by standard methods like stencil lithography, similar to screen printing. The researchers used this technique to develop skin-mounted electrodes for neuromuscular electrical stimulation. According to Majidi, the composite could cover a large area of the human body, “like a second layer of nervous tissue over your skin.”
    Future applications could include treating muscular disorders and motor disabilities, such as assisting someone with tremors from Parkinson’s disease or difficulty grasping something with their fingers after a stroke.

    Story Source:
    Materials provided by College of Engineering, Carnegie Mellon University. Original written by Lisa Kulick. Note: Content may be edited for style and length.

  • Standard vital signs could help estimate people's pain levels

    A new study demonstrates that machine-learning strategies can be applied to routinely collected physiological data, such as heart rate and blood pressure, to provide clues about pain levels in people with sickle cell disease. Mark Panaggio of Johns Hopkins University Applied Physics Laboratory and colleagues present these findings in the open-access journal PLOS Computational Biology.
    Pain is subjective, and monitoring pain can be intrusive and time-consuming. Pain medication can help, but accurate knowledge of a patient’s pain is necessary to balance relief against risk of addiction or other unwanted effects. Machine-learning strategies have shown promise in predicting pain from objective physiological measurements, such as muscle activity or facial expressions, but few studies have applied machine learning to routinely collected data.
    Now, Panaggio and colleagues have developed and applied machine-learning models to data from people with sickle cell disease who were hospitalized due to debilitating pain. These statistical models classify whether a patient’s pain was low, moderate, or high at each point during their stay based on routinely collected measurements of their blood pressure, heart rate, temperature, respiratory rate, and oxygen levels.
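    The paper's actual models are not reproduced here, but the general setup, mapping routinely collected vital signs to a three-level pain label, can be sketched with a standard off-the-shelf classifier; the feature list, synthetic data, and choice of a random forest below are illustrative assumptions rather than the authors' pipeline.
```python
# Toy sketch: classify pain as low / moderate / high from vital signs.
# All data here are synthetic placeholders standing in for routinely
# collected hospital measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 600
# Columns: systolic BP, diastolic BP, heart rate, temperature, respiratory rate, SpO2
X = np.column_stack([
    rng.normal(120, 15, n), rng.normal(75, 10, n), rng.normal(85, 15, n),
    rng.normal(37.0, 0.5, n), rng.normal(18, 3, n), rng.normal(97, 2, n),
])
# Synthetic labels loosely tied to heart and respiratory rate, purely for illustration.
score = 0.05 * (X[:, 2] - 85) + 0.1 * (X[:, 4] - 18) + rng.normal(0, 1, n)
y = np.digitize(score, [-0.5, 0.5])  # 0 = low, 1 = moderate, 2 = high

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), target_names=["low", "moderate", "high"]))
```
    The study additionally found that modeling how the vital signs change over time improved accuracy, something a static sketch like this does not capture.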
    The researchers found that these vital signs indeed gave clues into the patients’ reported pain levels. By taking physiological data into account, their models outperformed baseline models in estimating subjective pain levels, detecting changes in pain, and identifying atypical pain levels. Pain predictions were most accurate when they accounted for changes in patients’ vital signs over time.
    “Studies like ours show the potential that data-driven models based on machine learning have to enhance our ability to monitor patients in less invasive ways and ultimately, be able to provide more timely and targeted treatments,” Panaggio says.
    Looking ahead, the researchers hope to leverage more comprehensive data sources and real-time monitoring tools, such as fitness trackers, to build better models for inferring and forecasting pain.

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • After cracking the 'sum of cubes' puzzle for 42, mathematicians discover a new solution for 3

    What do you do after solving the answer to life, the universe, and everything? If you’re mathematicians Drew Sutherland and Andy Booker, you go for the harder problem.
    In 2019, Booker, at the University of Bristol, and Sutherland, principal research scientist at MIT, were the first to find the answer to 42. The number has pop culture significance as the fictional answer to “the ultimate question of life, the universe, and everything,” as Douglas Adams famously penned in his novel “The Hitchhiker’s Guide to the Galaxy.” The question that begets 42, at least in the novel, is frustratingly, hilariously unknown.
    In mathematics, entirely by coincidence, there exists a polynomial equation for which the answer, 42, had similarly eluded mathematicians for decades. The equation x³ + y³ + z³ = k is known as the sum of cubes problem. While seemingly straightforward, the equation becomes exponentially difficult to solve when framed as a “Diophantine equation” — a problem that stipulates that, for any value of k, the values for x, y, and z must each be whole numbers.
    When the sum of cubes equation is framed in this way, for certain values of k, the integer solutions for x, y, and z can grow to enormous numbers. The number space that mathematicians must search across for these numbers is larger still, requiring intricate and massive computations.
    Over the years, mathematicians had managed through various means to solve the equation, either finding a solution or determining that a solution must not exist, for every value of k between 1 and 100 — except for 42.
    In September 2019, Booker and Sutherland, harnessing the combined power of half a million home computers around the world, for the first time found a solution to 42. The widely reported breakthrough spurred the team to tackle an even harder, and in some ways more universal problem: finding the next solution for 3.

    Booker and Sutherland have now published the solutions for 42 and 3, along with several other numbers greater than 100, this week in the Proceedings of the National Academy of Sciences.
    Picking up the gauntlet
    The first two solutions for the equation x³ + y³ + z³ = 3 might be obvious to any high school algebra student, where x, y, and z can be either 1, 1, and 1, or 4, 4, and -5. Finding a third solution, however, has stumped expert number theorists for decades, and in 1953 the puzzle prompted pioneering mathematician Louis Mordell to ask the question: Is it even possible to know whether other solutions for 3 exist?
    “This was sort of like Mordell throwing down the gauntlet,” says Sutherland. “The interest in solving this question is not so much for the particular solution, but to better understand how hard these equations are to solve. It’s a benchmark against which we can measure ourselves.”
    As decades went by with no new solutions for 3, many began to believe there were none to be found. But soon after finding the answer to 42, Booker and Sutherland’s method, in a surprisingly short time, turned up the next solution for 3: 569936821221962380720³ + (−569936821113563493509)³ + (−472715493453327032)³ = 3
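    Because Python integers have arbitrary precision, the published identities are easy to check directly; the short script below verifies the two classical solutions for 3 and the new 21-digit one quoted above.
```python
# Check x^3 + y^3 + z^3 = 3 for the known solutions, using exact integer arithmetic.
solutions_for_3 = [
    (1, 1, 1),
    (4, 4, -5),
    (569936821221962380720, -569936821113563493509, -472715493453327032),
]
for x, y, z in solutions_for_3:
    assert x**3 + y**3 + z**3 == 3
    print(f"{x}^3 + {y}^3 + {z}^3 = 3")
```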
    The discovery was a direct answer to Mordell’s question: Yes, it is possible to find the next solution to 3, and what’s more, here is that solution. And perhaps more universally, the solution, involving gigantic, 21-digit numbers that were not possible to sift out until now, suggests that there are more solutions out there, for 3, and other values of k.

    “There had been some serious doubt in the mathematical and computational communities, because [Mordell’s question] is very hard to test,” Sutherland says. “The numbers get so big so fast. You’re never going to find more than the first few solutions. But what I can say is, having found this one solution, I’m convinced there are infinitely many more out there.”
    A solution’s twist
    To find the solutions for both 42 and 3, the team started with an existing algorithm that twists the sum of cubes equation into a form they believed would be more manageable to solve:
    k − z³ = x³ + y³ = (x + y)(x² − xy + y²)
    This approach was first proposed by mathematician Roger Heath-Brown, who conjectured that there should be infinitely many solutions for every suitable k. The team further modified the algorithm by representing x+y as a single parameter, d. They then reduced the equation by dividing both sides by d and keeping only the remainder — an operation in mathematics termed “modulo d” — leaving a simplified representation of the problem.
    “You can now think of z as a cube root of k, modulo d,” Sutherland explains. “So imagine working in a system of arithmetic where you only care about the remainder modulo d, and we’re trying to compute a cube root of k.”
    With this sleeker version of the equation, the researchers would only need to look for values of d and z that would guarantee finding the ultimate solutions to x, y, and z, for k=3. But still, the space of numbers that they would have to search through would be infinitely large.
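    As a toy illustration of the search over d and z (a drastically simplified brute force, not Booker and Sutherland's sieved algorithm), note that once d = x + y and m = (k − z³)/d = x² − xy + y² are fixed, x and y are recovered as the roots of t² − dt + (d² − m)/3, so each candidate pair (d, z) can be tested with a little integer arithmetic.
```python
from math import isqrt

def small_sum_of_cubes(k, d_max=500, z_max=500):
    """Toy brute-force search for x^3 + y^3 + z^3 = k via the d = x + y reduction.
    The real search replaces the z loop with cube roots of k modulo d and adds
    heavy sieving over d; this version only ever finds very small solutions."""
    for d in range(1, d_max + 1):
        for z in range(-z_max, z_max + 1):
            rhs = k - z**3              # must equal d * (x^2 - x*y + y^2)
            if rhs == 0 or rhs % d:
                continue
            m = rhs // d                # m = x^2 - x*y + y^2
            if (d * d - m) % 3:
                continue
            xy = (d * d - m) // 3       # x*y, since (x + y)^2 - 3*x*y = m
            disc = d * d - 4 * xy       # (x - y)^2, must be a perfect square
            if disc < 0:
                continue
            r = isqrt(disc)
            if r * r != disc or (d + r) % 2:
                continue
            x, y = (d + r) // 2, (d - r) // 2
            assert x**3 + y**3 + z**3 == k
            return x, y, z
    return None

print(small_sum_of_cubes(3))   # returns the easy solution (1, 1, 1)
print(small_sum_of_cubes(42))  # returns None: nothing this small exists, hence the global search
```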
    So, the researchers optimized the algorithm by using mathematical “sieving” techniques to dramatically cut down the space of possible solutions for d.
    “This involves some fairly advanced number theory, using the structure of what we know about number fields to avoid looking in places we don’t need to look,” Sutherland says.
    A global task
    The team also developed ways to efficiently split the algorithm’s search into hundreds of thousands of parallel processing streams. If the algorithm were run on just one computer, it would have taken hundreds of years to find a solution to k=3. By dividing the job into millions of smaller tasks, each independently run on a separate computer, the team could further speed up their search.
    In September 2019, the researchers put their plan in play through Charity Engine, a project that can be downloaded as a free app by any personal computer, and which is designed to harness any spare home computing power to collectively solve hard mathematical problems. At the time, Charity Engine’s grid comprised over 400,000 computers around the world, and Booker and Sutherland were able to run their algorithm on the network as a test of Charity Engine’s new software platform.
    “For each computer in the network, they are told, ‘your job is to look for d’s whose prime factor falls within this range, subject to some other conditions,'” Sutherland says. “And we had to figure out how to divide the job up into roughly 4 million tasks that would each take about three hours for a computer to complete.”
    Very quickly, the global grid returned the very first solution to k=42, and just two weeks later, the researchers confirmed they had found the third solution for k=3 — a milestone that they marked, in part, by printing the equation on t-shirts.
    The fact that a third solution to k=3 exists suggests that Heath-Brown’s original conjecture was right and that there are infinitely more solutions beyond this newest one. Heath-Brown also predicts the space between solutions will grow exponentially, along with their searches. For instance, rather than the third solution’s 21-digit values, the fourth solution for x, y, and z will likely involve numbers with a mind-boggling 28 digits.
    “The amount of work you have to do for each new solution grows by a factor of more than 10 million, so the next solution for 3 will need 10 million times 400,000 computers to find, and there’s no guarantee that’s even enough,” Sutherland says. “I don’t know if we’ll ever know the fourth solution. But I do believe it’s out there.”

  • How to make all headphones intelligent

    How do you turn “dumb” headphones into smart ones? Rutgers engineers have invented a cheap and easy way by transforming headphones into sensors that can be plugged into smartphones, identify their users, monitor their heart rates and perform other services.
    Their invention, called HeadFi, is based on a small plug-in headphone adapter that turns a regular headphone into a sensing device. Unlike smart headphones, regular headphones lack sensors. HeadFi would allow users to avoid having to buy a new pair of smart headphones with embedded sensors to enjoy sensing features.
    “HeadFi could turn hundreds of millions of existing, regular headphones worldwide into intelligent ones with a simple upgrade,” said Xiaoran Fan, a HeadFi primary inventor. He is a recent Rutgers doctoral graduate who completed the research during his final year at the university and now works at Samsung Artificial Intelligence Center.
    A peer-reviewed Rutgers-led paper on the invention, which results in “earable intelligence,” will be formally published in October at MobiCom 2021, the top international conference on mobile computing and mobile and wireless networking.
    Headphones are among the most popular wearable devices worldwide and they continue to become more intelligent as new functions appear, such as touch-based gesture control, the paper notes. Such functions usually rely on auxiliary sensors, such as accelerometers, gyroscopes and microphones that are available on many smart headphones.
    HeadFi turns the two drivers already inside all headphones into a versatile sensor, and it works by connecting headphones to a pairing device, such as a smartphone. It does not require adding auxiliary sensors and avoids changes to headphone hardware or the need to customize headphones, both of which may increase their weight and bulk. By plugging into HeadFi, a converted headphone can perform sensing tasks and play music at the same time.
    The engineers conducted experiments with 53 volunteers using 54 pairs of headphones with estimated prices ranging from $2.99 to $15,000. HeadFi can achieve 97.2 percent to 99.5 percent accuracy on user identification, 96.8 percent to 99.2 percent on heart rate monitoring and 97.7 percent to 99.3 percent on gesture recognition.

    Story Source:
    Materials provided by Rutgers University. Note: Content may be edited for style and length.

  • Read to succeed — in math; study shows how reading skill shapes more than just reading

    A University at Buffalo researcher’s recent work on dyslexia has unexpectedly produced a startling discovery which clearly demonstrates how the cooperative areas of the brain responsible for reading skill are also at work during apparently unrelated activities, such as multiplication.
    Though the division between literacy and math is commonly reflected in the division between the arts and sciences, the findings suggest that reading, writing and arithmetic, the foundational skills informally identified as the three Rs, might actually overlap in ways not previously imagined, let alone experimentally validated.
    “These findings floored me,” said Christopher McNorgan, PhD, the paper’s author and an assistant professor in UB’s Department of Psychology. “They elevate the value and importance of literacy by showing how reading proficiency reaches across domains, guiding how we approach other tasks and solve other problems.
    “Reading is everything, and saying so is more than an inspirational slogan. It’s now a definitive research conclusion.”
    And it’s a conclusion that was not originally part of McNorgan’s design. He had planned only to explore whether it was possible to identify children with dyslexia on the basis of how the brain is wired for reading.
    “It seemed plausible given the work I had recently finished, which identified a biomarker for ADHD,” said McNorgan, an expert in neuroimaging and computational modeling.

    Like that previous study, a novel deep learning approach that makes multiple simultaneous classifications is at the core of McNorgan’s current paper, which appears in the journal Frontiers in Computational Neuroscience.
    Deep learning networks are ideal for uncovering conditional, non-linear relationships.
    Where linear relationships involve one variable directly influencing another, a non-linear relationship can be slippery because changes in one variable do not necessarily influence another proportionally. But what’s challenging for traditional methods is easily handled through deep learning.
    McNorgan identified dyslexia with 94% accuracy when he finished with his first data set, consisting of functional connectivity from 14 good readers and 14 poor readers engaged in a language task.
    But he needed another data set to determine if his findings could be generalized. So McNorgan chose a math study, which relied on a mental multiplication task, and measured functional connectivity from the fMRI information in that second data set.

    Functional connectivity, unlike what the name might imply, is a dynamic description of how the brain is virtually wired from moment to moment. Don’t think in terms of the physical wires used in a network, but instead of how those wires are used throughout the day. When you’re working, your laptop is sending a document to your printer. Later in the day, your laptop might be streaming a movie to your television. How those wires are used depends on whether you’re working or relaxing. Functional connectivity changes according to the immediate task.
    The brain dynamically rewires itself according to the task all the time. Imagine reading a list of restaurant specials while standing only a few steps away from the menu board nailed to the wall. The visual cortex is working whenever you’re looking at something, but because you’re reading, the visual cortex works with, or is wired to, at least for the moment, the auditory cortex.
    Pointing to one of the items on the board, you accidentally knock it from the wall. When you reach out to catch it, your brain wiring changes. You’re no longer reading, but trying to catch a falling object, and your visual cortex now works with the pre-motor cortex to guide your hand.
    Different tasks, different wiring; or, as McNorgan explains, different functional networks.
    In the two data sets McNorgan used, participants were engaged in different tasks: language and math. Yet in each case, the connectivity fingerprint was the same, and he was able to identify dyslexia with 94% accuracy whether testing against the reading group or the math group.
    It was a whim, he said, to see how well his model distinguished good readers from poor readers — or from participants who weren’t reading at all. Seeing the accuracy, and the similarity, changed the direction of the paper McNorgan intended.
    Yes, he could identify dyslexia. But it became obvious that the brain’s wiring for reading was also present for math.
    Different task. Same functional networks.
    “The brain should be dynamically wiring itself in a way that’s specifically relevant to doing math because of the multiplication problem in the second data set, but there’s clear evidence of the dynamic configuration of the reading network showing up in the math task,” McNorgan says.
    He says it’s the sort of finding that strengthens the already strong case for supporting literacy.
    “These results show that the way our brain is wired for reading is actually influencing how the brain functions for math,” he said. “That says your reading skill is going to affect how you tackle problems in other domains, and helps us better understand children with learning difficulties in both reading and math.”
    As the line between cognitive domains becomes more blurred, McNorgan wonders what other domains the reading network is actually guiding.
    “I’ve looked at two domains which couldn’t be farther afield,” he said. “If the brain is showing that its wiring for reading is showing up in mental multiplication, what else might it be contributing toward?”
    That’s an open question, for now, according to McNorgan.
    “What I do know because of this research is that an educational emphasis on reading means much more than improving reading skill,” he said. “These findings suggest that learning how to read shapes so much more.”

  • Breakthrough lays groundwork for future quantum networks

    New Army-funded research could help lay the groundwork for future quantum communication networks and large-scale quantum computers.
    Researchers sent entangled qubit states through a communication cable linking one quantum network node to a second node.
    Scientists at the Pritzker School of Molecular Engineering at the University of Chicago, funded and managed by the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory’s Center for Distributed Quantum Information, also amplified an entangled state via the same cable: they first used the cable to entangle two qubits in each of two nodes, and then entangled these qubits further with other qubits in their nodes. The research was published in a peer-reviewed journal on Feb. 24, 2021.
    “The entanglement distribution results the team achieved brought together years of their research related to approaches for transferring quantum states and related to advanced fabrication procedures to realize the experiments,” said Dr. Sara Gamble, program manager at the Army Research Office, an element of the Army’s corporate research laboratory, and co-manager of the CDQI, which funded the work. “This is an exciting achievement and one that paves the way for increasingly complex experiments with additional quantum nodes that we’ll need for the large-scale quantum networks and computers of ultimate interest to the Army.”
    Qubits, or quantum bits, are the basic units of quantum information. By exploiting their quantum properties, like superposition, and their ability to be entangled together, scientists and engineers are creating next-generation quantum computers that will be able to solve previously unsolvable problems.
    The research team uses superconducting qubits, tiny cryogenic circuits that can be manipulated electrically.

    “Developing methods that allow us to transfer entangled states will be essential to scaling quantum computing,” said Prof. Andrew Cleland, the John A. MacLean senior professor of Molecular Engineering Innovation and Enterprise at University of Chicago, who led the research.
    Entanglement is a correlation that can be created between quantum entities such as qubits. When two qubits are entangled and a measurement is made on one, it will affect the outcome of a measurement made on the other, even if that second qubit is physically far away.
    To send the entangled states through the communication cable — a one-meter-long superconducting cable — the researchers created an experimental set-up with three superconducting qubits in each of two nodes. They connected one qubit in each node to the cable and then sent quantum states, in the form of microwave photons, through the cable with minimal loss of information. The fragile nature of quantum states makes this process quite challenging.
    The researchers developed a system in which the whole transfer process — node to cable to node — takes only a few tens of nanoseconds (a nanosecond is one billionth of a second). That allowed them to send entangled quantum states with very little information loss.

    The system also allowed them to amplify the entanglement of qubits. The researchers used one qubit in each node and entangled them together by essentially sending a half-photon through the cable. They then extended this entanglement to the other qubits in each node. When they were finished, all six qubits in two nodes were entangled in a single globally entangled state.
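    The press release does not spell out the exact form of the final six-qubit state, but a canonical example of such a globally entangled state is a GHZ state; the short sketch below builds one and shows that any single qubit, viewed on its own, looks maximally mixed, which is one signature of the entanglement.
```python
import numpy as np

# Build a six-qubit GHZ state, (|000000> + |111111>)/sqrt(2), as an illustrative
# example of a globally entangled state (the experiment's exact state may differ).
n_qubits = 6
ghz = np.zeros(2 ** n_qubits, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

# Reduced density matrix of qubit 0: trace out qubits 1-5.
rho = np.outer(ghz, ghz.conj()).reshape([2] * (2 * n_qubits))
reduced = np.einsum('abcdefgbcdef->ag', rho)
print(np.round(reduced, 3))  # 0.5 * identity: a single qubit alone carries no information
```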
    “We want to show that superconducting qubits have a viable role going forward,” Cleland said.
    A quantum communication network could potentially take advantage of this advance. The group plans to extend their system to three nodes to build three-way entanglement.
    “The team was able to identify a primary limiting factor in this current experiment related to loss in some of the components,” said Dr. Fredrik Fatemi, branch chief for quantum sciences, DEVCOM ARL, and co-manager of CDQI. “They have a clear path forward for increasingly complex experiments which will enable us to explore new regimes in distributed entanglement.”