More stories

  • Researchers 3D print sensors for satellites

    MIT scientists have created the first completely digitally manufactured plasma sensors for orbiting spacecraft. These plasma sensors, also known as retarding potential analyzers (RPAs), are used by satellites to determine the chemical composition and ion energy distribution of the atmosphere.
    The 3D-printed and laser-cut hardware performed as well as state-of-the-art semiconductor plasma sensors that are manufactured in a cleanroom, which makes them expensive and requires weeks of intricate fabrication. By contrast, the 3D-printed sensors can be produced for tens of dollars in a matter of days.
    Due to their low cost and speedy production, the sensors are ideal for CubeSats. These inexpensive, low-power, and lightweight satellites are often used for communication and environmental monitoring in Earth’s upper atmosphere.
    The researchers developed RPAs using a glass-ceramic material that is more durable than traditional sensor materials like silicon and thin-film coatings. By using the glass-ceramic in a fabrication process that was developed for 3D printing with plastics, they were able to create sensors with complex shapes that can withstand the wide temperature swings a spacecraft would encounter in low Earth orbit.
    “Additive manufacturing can make a big difference in the future of space hardware. Some people think that when you 3D-print something, you have to concede less performance. But we’ve shown that is not always the case. Sometimes there is nothing to trade off,” says Luis Fernando Velásquez-García, a principal scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper presenting the plasma sensors.
    Joining Velásquez-García on the paper are lead author and MTL postdoc Javier Izquierdo-Reyes; graduate student Zoey Bigelow; and postdoc Nicholas K. Lubinsky. The research is published in Additive Manufacturing.

  • A key role for quantum entanglement

    A method known as quantum key distribution has long held the promise of communication security unattainable in conventional cryptography. An international team of scientists has now demonstrated experimentally, for the first time, an approach to quantum key distribution that is based on high-quality quantum entanglement — offering much broader security guarantees than previous schemes.
    The art of cryptography is to skillfully transform messages so that they become meaningless to everyone but the intended recipients. Modern cryptographic schemes, such as those underpinning digital commerce, prevent adversaries from illegitimately deciphering messages — say, credit-card information — by requiring them to perform mathematical operations that consume a prohibitively large amount of computational power. Starting from the 1980s, however, ingenious theoretical concepts have been introduced in which security does not depend on the eavesdropper’s finite number-crunching capabilities. Instead, basic laws of quantum physics limit how much information, if any, an adversary can ultimately intercept. In one such concept, security can be guaranteed with only a few general assumptions about the physical apparatus used. Implementations of such ‘device-independent’ schemes have long been sought after, but remained out of reach. Until now, that is. Writing in Nature, an international team of researchers from the University of Oxford, EPFL, ETH Zurich, the University of Geneva and CEA report the first demonstration of this sort of protocol — taking a decisive step towards practical devices offering such exquisite security.
    The key is a secret
    Secure communication is all about keeping information private. It might be surprising, therefore, that in real-world applications large parts of the transactions between legitimate users are played out in public. The key is that sender and receiver do not have to keep their entire communication hidden. In essence, they only have to share one ‘secret’; in practice, this secret is a string of bits, known as a cryptographic key, that enables everyone in its possession to turn coded messages into meaningful information. Once the legitimate parties have ensured for a given round of communication that they, and only they, share such a key, pretty much all the other communication can happen in plain view, for everyone to see. The question, then, is how to ensure that only the legitimate parties share a secret key. The process of accomplishing this is known as ‘key distribution’.
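    To make the role of the shared key concrete, the following minimal Python sketch (a textbook one-time-pad illustration, not part of the reported work) shows how a message encrypted with a shared random key can travel in plain view while only key holders can undo the transformation.

        import secrets

        def xor_bytes(data: bytes, key: bytes) -> bytes:
            # One-time-pad style XOR; the key must be at least as long as the data.
            return bytes(d ^ k for d, k in zip(data, key))

        message = b"credit-card: 4111 1111 1111 1111"
        key = secrets.token_bytes(len(message))   # the shared secret

        ciphertext = xor_bytes(message, key)      # safe to send in public
        recovered = xor_bytes(ciphertext, key)    # only a key holder can do this
        assert recovered == message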
    In the cryptographic algorithms underlying, for instance, RSA — one of the most widely used cryptographic systems — key distribution is based on the (unproven) conjecture that certain mathematical functions are easy to compute but hard to invert. More specifically, RSA relies on the fact that for today’s computers it is hard to find the prime factors of a large number, whereas it is easy for them to multiply known prime factors to obtain that number. Secrecy is therefore ensured by mathematical difficulty. But what is impossibly difficult today might be easy tomorrow. Famously, quantum computers can find prime factors significantly more efficiently than classical computers. Once quantum computers with a sufficiently large number of qubits become available, RSA encoding is destined to become penetrable.
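    A rough illustration of that asymmetry (a sketch using the sympy library, with primes far smaller than real RSA moduli): multiplying two primes is essentially instantaneous, while recovering them from the product takes noticeably longer, and the gap widens dramatically as the primes grow.

        import time
        from sympy import randprime, factorint

        # Primes of modest size; real RSA uses primes of roughly 1024 bits or more.
        p = randprime(10**9, 10**10)
        q = randprime(10**9, 10**10)

        t0 = time.time()
        n = p * q                       # the "easy" direction
        t_multiply = time.time() - t0

        t0 = time.time()
        factors = factorint(n)          # the "hard" direction: recover p and q from n
        t_factor = time.time() - t0

        print(f"multiply: {t_multiply:.6f} s, factor: {t_factor:.4f} s, factors: {factors}")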
    But quantum theory provides the basis not only for cracking the cryptosystems at the heart of digital commerce, but also for a potential solution to the problem: a way entirely different from RSA for distributing cryptographic keys — one that has nothing to do with the hardness of performing mathematical operations, but with fundamental physical laws. Enter quantum key distribution, or QKD for short.
    Quantum-certified security
    In 1991, the Polish-British physicist Artur Ekert showed in a seminal paper that the security of the key-distribution process can be guaranteed by directly exploiting a property that is unique to quantum systems, with no equivalent in classical physics: quantum entanglement. Quantum entanglement refers to certain types of correlations in the outcomes of measurements performed on separate quantum systems. Importantly, quantum entanglement between two systems is exclusive, in that nothing else can be correlated to these systems. In the context of cryptography this means that sender and receiver can produce between them shared outcomes through entangled quantum systems, without a third party being able to secretly gain knowledge about these outcomes. Any eavesdropping leaves traces that clearly flag the intrusion. In short: the legitimate parties can interact with one another in ways that are — thanks to quantum theory — fundamentally beyond any adversary’s control. In classical cryptography, an equivalent security guarantee is provably impossible.
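    The kind of certificate this provides can be illustrated with a short numpy calculation (a sketch of the textbook CHSH argument, not of the experiment itself): for a maximally entangled pair, a particular combination of measurement correlations reaches 2√2, above the value of 2 that any classical, non-entangled description can attain.

        import numpy as np

        def meas(theta):
            # Spin measurement along angle theta in the X-Z plane.
            Z = np.array([[1, 0], [0, -1]])
            X = np.array([[0, 1], [1, 0]])
            return np.cos(theta) * Z + np.sin(theta) * X

        # Maximally entangled two-qubit state (|00> + |11>) / sqrt(2).
        phi = np.array([1, 0, 0, 1]) / np.sqrt(2)

        def E(a, b):
            # Correlation <A(a) x B(b)> in the entangled state.
            return phi @ np.kron(meas(a), meas(b)) @ phi

        a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, -np.pi / 4
        S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
        print(S)   # ~2.828 = 2*sqrt(2), beyond the classical bound of 2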
    Over the years, it was realized that QKD schemes based on the ideas introduced by Ekert can have a further remarkable benefit: users have to make only very general assumptions regarding the devices employed in the process. By contrast, earlier forms of QKD based on other basic principles require detailed knowledge about the inner workings of the devices used. The novel form of QKD is now generally known as ‘device-independent QKD’ (DIQKD), and an experimental implementation thereof became a major goal in the field. Hence the excitement as such a breakthrough experiment has now finally been achieved.
    Culmination of years of work
    The scale of the challenge is reflected in the breadth of the team, which combines leading experts in theory and experiment. The experiment involved two single ions — one for the sender and one for the receiver — confined in separate traps that were connected with an optical-fibre link. In this basic quantum network, entanglement between the ions was generated with record-high fidelity over millions of runs. Without such a sustained source of high-quality entanglement, the protocol could not have been run in a practically meaningful manner. Equally important was to certify that the entanglement is suitably exploited, which is done by showing that conditions known as Bell inequalities are violated. Moreover, for the analysis of the data and an efficient extraction of the cryptographic key, significant advances in the theory were needed.
    In the experiment, the ‘legitimate parties’ — the ions — were located in one and the same laboratory. But there is a clear route to extending the distance between them to kilometres and beyond. With that perspective, together with further recent progress made in related experiments in Germany and China, there is now a real prospect of turning the theoretical concept of Ekert into practical technology.

  • Quantum cryptography: Hacking is futile

    The Internet is teeming with highly sensitive information. Sophisticated encryption techniques generally ensure that such content cannot be intercepted and read. But in the future high-performance quantum computers could crack these keys in a matter of seconds. It is just as well, then, that quantum mechanical techniques not only enable new, much faster algorithms, but also exceedingly effective cryptography.
    Quantum key distribution (QKD) — as the jargon has it — is secure against attacks on the communication channel, but not against attacks on or manipulations of the devices themselves. The devices could therefore output a key which the manufacturer had previously saved and might conceivably have forwarded to a hacker. With device-independent QKD (abbreviated to DIQKD), it is a different story. Here, the cryptographic protocol is independent of the device used. Theoretically known since the 1990s, this method has now been experimentally realized for the first time, by an international research group led by LMU physicist Harald Weinfurter and Charles Lim from the National University of Singapore (NUS).
    For exchanging quantum mechanical keys, there are different approaches available. Either light signals are sent by the transmitter to the receiver, or entangled quantum systems are used. In the present experiment, the physicists used two quantum mechanically entangled rubidium atoms, situated in two laboratories located 400 meters from each other on the LMU campus. The two locations are connected via a fiber optic cable 700 meters in length, which runs beneath Geschwister Scholl Square in front of the main building.
    To create an entanglement, first the scientists excite each of the atoms with a laser pulse. After this, the atoms spontaneously fall back into their ground state, each thereby emitting a photon. Due to the conservation of angular momentum, the spin of the atom is entangled with the polarization of its emitted photon. The two light particles travel along the fiber optic cable to a receiver station, where a joint measurement of the photons indicates an entanglement of the atomic quantum memories.
    To exchange a key, Alice and Bob — as the two parties are usually dubbed by cryptographers — measure the quantum states of their respective atom. In each case, this is done randomly in two or four directions. If the directions correspond, the measurement results are identical on account of entanglement and can be used to generate a secret key. With the other measurement results, a so-called Bell inequality can be evaluated. Physicist John Stewart Bell originally developed these inequalities to test whether nature can be described with hidden variables. “It turned out that it cannot,” says Weinfurter. In DIQKD, the test is used “specifically to ensure that there are no manipulations at the devices — that is to say, for example, that hidden measurement results have not been saved in the devices beforehand,” explains Weinfurter.
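    The sifting step can be mimicked with an idealized, noise-free simulation (generic CHSH-type settings, not the exact configuration or analysis of the LMU-NUS experiment): rounds in which the chosen directions match yield identical outcomes and become key bits, while the remaining rounds are used to estimate the Bell quantity.

        import numpy as np

        rng = np.random.default_rng(0)
        A_SETTINGS = [0.0, np.pi / 2]               # Alice: two directions
        B_SETTINGS = [0.0, np.pi / 4, -np.pi / 4]   # Bob: one key setting, two Bell settings

        key_a, key_b, corr = [], [], {}
        for _ in range(200_000):
            a, b = rng.choice(A_SETTINGS), rng.choice(B_SETTINGS)
            x = rng.choice([-1, 1])                 # Alice's outcome
            # For an ideal entangled pair, Bob matches Alice with probability (1 + cos(a - b)) / 2.
            y = x if rng.random() < (1 + np.cos(a - b)) / 2 else -x
            if a == 0.0 and b == 0.0:               # matching directions -> key bits
                key_a.append(x); key_b.append(y)
            elif b != 0.0:                          # remaining rounds -> Bell-test statistics
                corr.setdefault((a, b), []).append(x * y)

        E = {k: np.mean(v) for k, v in corr.items()}
        S = (E[(0.0, np.pi / 4)] + E[(0.0, -np.pi / 4)]
             + E[(np.pi / 2, np.pi / 4)] - E[(np.pi / 2, -np.pi / 4)])
        print("key bits agree:", key_a == key_b, " CHSH S =", round(S, 2))   # S ~ 2.83 > 2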
    In contrast to earlier approaches, the implemented protocol, which was developed by researchers at NUS, uses two measurement settings for key generation instead of one: “By introducing the additional setting for key generation, it becomes more difficult to intercept information, and therefore the protocol can tolerate more noise and generate secret keys even for lower-quality entangled states,” says Charles Lim.
    With conventional QKD methods, by contrast, security is guaranteed only when the quantum devices used have been characterized sufficiently well. “And so, users of such protocols have to rely on the specifications furnished by the QKD providers and trust that the device will not switch into another operating mode during the key distribution,” explains Tim van Leent, one of the four lead authors of the paper alongside Wei Zhang and Kai Redeker. It has been known for at least a decade that older QKD devices could easily be hacked from outside, continues van Leent.
    “With our method, we can now generate secret keys with uncharacterized and potentially untrustworthy devices,” explains Weinfurter. In fact, he initially had his doubts about whether the experiment would work. But his team proved his misgivings were unfounded and significantly improved the quality of the experiment, as he happily admits. Alongside the cooperation project between LMU and NUS, another research group from the University of Oxford also demonstrated device-independent key distribution. To do this, the researchers used a system comprising two entangled ions in the same laboratory. “These two projects lay the foundation for future quantum networks, in which absolutely secure communication is possible between far distant locations,” says Charles Lim.
    One of the next goals is to expand the system to incorporate several entangled atom pairs. “This would allow many more entanglement states to be generated, which increases the data rate and ultimately the key security,” says van Leent. In addition, the researchers would like to increase the range. In the present set-up, it was limited by the loss of around half the photons in the fiber between the laboratories. In other experiments, the researchers were able to transform the wavelength of the photons into a low-loss region suitable for telecommunications. In this way, for just a little extra noise, they managed to increase the range of the quantum network connection to 33 kilometers.

  • Breakthrough quantum algorithm

    City College of New York physicist Pouyan Ghaemi and his research team are claiming significant progress in using quantum computers to study and predict how the state of a large number of interacting quantum particles evolves over time. This was done by developing a quantum algorithm that they ran on an IBM quantum computer. “To the best of our knowledge, such particular quantum algorithm which can simulate how interacting quantum particles evolve over time has not been implemented before,” said Ghaemi, associate professor in CCNY’s Division of Science.
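    The team's specific algorithm targets fractional quantum Hall physics and is detailed in the paper; as a generic illustration of what "simulating how interacting quantum particles evolve over time" means, the sketch below (a classical numpy/scipy calculation, not the authors' quantum circuit) Trotterizes the time evolution of a tiny interacting spin chain and checks it against the exact result.

        import numpy as np
        from scipy.linalg import expm

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)
        I = np.eye(2, dtype=complex)

        def op(paulis):                      # tensor product over three sites
            out = paulis[0]
            for p in paulis[1:]:
                out = np.kron(out, p)
            return out

        H_int = op([Z, Z, I]) + op([I, Z, Z])                     # interactions
        H_field = op([X, I, I]) + op([I, X, I]) + op([I, I, X])   # transverse field
        H = -H_int - 0.5 * H_field

        t, steps = 1.0, 50
        dt = t / steps
        step = expm(1j * H_int * dt) @ expm(1j * 0.5 * H_field * dt)   # one Trotter step of exp(-iH dt)

        psi0 = np.zeros(8, dtype=complex); psi0[0] = 1.0               # |000>
        exact = expm(-1j * H * t) @ psi0
        trotter = np.linalg.matrix_power(step, steps) @ psi0
        print("overlap with exact evolution:", abs(exact.conj() @ trotter) ** 2)   # close to 1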
    Entitled “Probing geometric excitations of fractional quantum Hall states on quantum computers,” the study appears in the journal Physical Review Letters.
    “Quantum mechanics is known to be the underlying mechanism governing the properties of elementary particles such as electrons,” said Ghaemi. “But unfortunately there is no easy way to use equations of quantum mechanics when we want to study the properties of a large number of electrons that are also exerting force on each other due to their electric charge.
    His team’s discovery, however, changes this and raises other exciting possibilities.
    “On the other front, recently, there have been extensive technological developments in building the so-called quantum computers. This new class of computers utilizes the laws of quantum mechanics to perform calculations which are not possible with classical computers.”
    “We know that when electrons in a material interact with each other strongly, interesting properties such as high-temperature superconductivity could emerge,” Ghaemi noted. “Our quantum computing algorithm opens a new avenue to study the properties of materials resulting from strong electron-electron interactions. As a result, it can potentially guide the search for useful materials such as high-temperature superconductors.”
    He added that based on their results, they can now potentially look at using quantum computers to study many other phenomena that result from strong interaction between electrons in solids. “There are many experimentally observed phenomena that could be potentially understood using the development of quantum algorithms similar to the one we developed.”
    The research was done at CCNY — and involved an interdisciplinary team from the physics and electrical engineering departments — in collaboration with experts from Western Washington University, Leeds University in the UK, and the Schlumberger-Doll Research Center in Cambridge, Massachusetts. The research was funded by the National Science Foundation and Britain’s Engineering and Physical Sciences Research Council.
    Story Source:
    Materials provided by City College of New York.

  • Anti-butterfly effect enables new benchmarking of quantum-computer performance

    Research drawing on the quantum “anti-butterfly effect” solves a longstanding experimental problem in physics and establishes a method for benchmarking the performance of quantum computers.
    “Using the simple, robust protocol we developed, we can determine the degree to which quantum computers can effectively process information, and it applies to information loss in other complex quantum systems, too,” said Bin Yan, a quantum theorist at Los Alamos National Laboratory.
    Yan is corresponding author of the paper on benchmarking information scrambling published today in Physical Review Letters. “Our protocol quantifies information scrambling in a quantum system and unambiguously distinguishes it from fake positive signals in the noisy background caused by quantum decoherence,” he said.
    Noise in the form of decoherence erases all the quantum information in a complex system such as a quantum computer as it couples with the surrounding environment. Information scrambling through quantum chaos, on the other hand, spreads information across the system, protecting it and allowing it to be retrieved.
    Coherence is a quantum state that enables quantum computing, and decoherence refers to the loss of that state as information leaks to the surrounding environment.
    “Our method, which draws on the quantum anti-butterfly effect we discovered two years ago, evolves a system forward and backward through time in a single loop, so we can apply it to any system with time-reversing dynamics, including quantum computers and quantum simulators using cold atoms,” Yan said.
    The Los Alamos team demonstrated the protocol with simulations on IBM cloud-based quantum computers.
    The inability to distinguish decoherence from information scrambling has stymied experimental research into the phenomenon. First studied in black-hole physics, information scrambling has proved relevant across a wide range of research areas, including quantum chaos in many-body systems, phase transition, quantum machine learning and quantum computing. Experimental platforms for studying information scrambling include superconductors, trapped ions and cloud-based quantum computers.
    Practical application of the quantum anti-butterfly effect
    Yan and co-author Nikolai Sinitsyn published a paper in 2020 proving that evolving quantum processes backwards on a quantum computer to damage information in the simulated past causes little change when returned to the present. In contrast, a classical-physics system smears the information irrecoverably during the back-and-forth time loop.
    Building on this discovery, Yan, Sinitsyn and co-author Joseph Harris, a University of Edinburgh graduate student who worked on the current paper as a participant in the Los Alamos Quantum Computing Summer School, developed the protocol. It prepares a quantum system and subsystem, evolves the full system forward in time, causes a change in a different subsystem, then evolves the system backward for the same amount of time. Measuring the overlap of information between the two subsystems shows how much information has been preserved by scrambling and how much lost to decoherence.
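    A minimal numerical caricature of that loop is sketched below (it contrasts chaotic with non-chaotic unitary dynamics and leaves out decoherence entirely, so it illustrates only the forward-perturb-backward structure, not the Los Alamos protocol itself): under scrambling dynamics, a local change applied elsewhere shows up in the probed subsystem after the time loop, while under trivial dynamics it does not.

        import numpy as np
        from scipy.stats import unitary_group

        n, dim = 4, 2 ** 4                       # 4 qubits; qubit 0 is the probed subsystem
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        I2 = np.eye(2, dtype=complex)

        def on_site(op, site):                   # op on one qubit, identity on the others
            out = np.array([[1.0 + 0j]])
            for q in range(n):
                out = np.kron(out, op if q == site else I2)
            return out

        def probe_state(psi):                    # reduced density matrix of qubit 0
            m = psi.reshape(2, dim // 2)
            return m @ m.conj().T

        psi0 = np.zeros(dim, dtype=complex); psi0[0] = 1.0
        butterfly = on_site(X, 2)                # local change on a different subsystem

        U_chaotic = unitary_group.rvs(dim, random_state=1)                           # scrambling
        U_trivial = np.kron(unitary_group.rvs(2, random_state=2), np.eye(dim // 2))  # non-scrambling

        for label, U in [("scrambling dynamics", U_chaotic), ("trivial dynamics", U_trivial)]:
            psi = U.conj().T @ (butterfly @ (U @ psi0))           # forward, perturb, backward
            overlap = np.real(np.trace(probe_state(psi0) @ probe_state(psi)))
            print(label, "-> probe overlap:", round(overlap, 3))  # 1.0 only for trivial dynamics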
    Story Source:
    Materials provided by DOE/Los Alamos National Laboratory.

  • Engineering roboticists discover alternative physics

    A precursor step to understanding physics is identifying relevant variables. Columbia Engineers developed an AI program to tackle a longstanding problem: whether it is possible to identify state variables from only high-dimensional observational data. Using video recordings of a variety of physical dynamical systems, the algorithm discovered the intrinsic dimension of the observed dynamics and identified candidate sets of state variables — without prior knowledge of the underlying physics.
    Energy, Mass, Velocity. These three variables make up Einstein’s iconic equation E = mc². But how did Einstein know about these concepts in the first place? A precursor step to understanding physics is identifying relevant variables. Without the concept of energy, mass, and velocity, not even Einstein could discover relativity. But can such variables be discovered automatically? Doing so could greatly accelerate scientific discovery.
    This is the question that researchers at Columbia Engineering posed to a new AI program. The program was designed to observe physical phenomena through a video camera, then try to search for the minimal set of fundamental variables that fully describe the observed dynamics. The study was published on July 25 in Nature Computational Science.
    The researchers began by feeding the system raw video footage of phenomena for which they already knew the answer. For example, they fed a video of a swinging double-pendulum known to have exactly four “state variables” — the angle and angular velocity of each of the two arms. After a few hours of analysis, the AI outputted the answer: 4.7.
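    For reference, the ground truth being tested here can be written down directly: two angles and two angular velocities fully determine the motion. A small simulation of an equal-mass, equal-length double pendulum (an illustration of those four state variables, not of the authors' video-based pipeline) is sketched below.

        import numpy as np
        from scipy.integrate import solve_ivp

        g = 9.81   # gravitational acceleration; unit masses and arm lengths assumed

        def rhs(t, s):
            th1, th2, w1, w2 = s                 # the four state variables
            d = th1 - th2
            den = 3 - np.cos(2 * d)
            a1 = (-3 * g * np.sin(th1) - g * np.sin(th1 - 2 * th2)
                  - 2 * np.sin(d) * (w2**2 + w1**2 * np.cos(d))) / den
            a2 = (2 * np.sin(d) * (2 * w1**2 + 2 * g * np.cos(th1)
                  + w2**2 * np.cos(d))) / den
            return [w1, w2, a1, a2]

        state0 = [np.pi / 2, np.pi / 2, 0.0, 0.0]        # angles and angular velocities at t = 0
        sol = solve_ivp(rhs, (0, 10), state0, max_step=0.01)
        print(sol.y[:, -1])                              # the same four numbers, 10 seconds later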
    “We thought this answer was close enough,” said Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering, where the work was primarily done. “Especially since all the AI had access to was raw video footage, without any knowledge of physics or geometry. But we wanted to know what the variables actually were, not just their number.”
    The researchers then proceeded to visualize the actual variables that the program identified. Extracting the variables themselves was not easy, since the program cannot describe them in any intuitive way that would be understandable to humans. After some probing, it appeared that two of the variables the program chose loosely corresponded to the angles of the arms, but the other two remain a mystery. “We tried correlating the other variables with anything and everything we could think of: angular and linear velocities, kinetic and potential energy, and various combinations of known quantities,” explained Boyuan Chen PhD ’22, now an assistant professor at Duke University, who led the work. “But nothing seemed to match perfectly.” The team was confident that the AI had found a valid set of four variables, since it was making good predictions, “but we don’t yet understand the mathematical language it is speaking,” he explained.

  • Seeing the light: Researchers develop new AI system using light to learn associatively

    Researchers at Oxford University’s Department of Materials, working in collaboration with colleagues from Exeter and Münster, have developed an on-chip optical processor capable of detecting similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.
    The new research published in Optica took its inspiration from Nobel Prize laureate Ivan Pavlov’s discovery of classical conditioning. In his experiments, Pavlov found that by providing another stimulus during feeding, such as the sound of a bell or metronome, his dogs began to link the two experiences and would salivate at the sound alone. The repeated associations of two unrelated events paired together could produce a learned response — a conditional reflex.
    Co-first author Dr James Tan You Sian, who did this work as part of his DPhil in the Department of Materials, University of Oxford said: ‘Pavlovian associative learning is regarded as a basic form of learning that shapes the behaviour of humans and animals — but adoption in AI systems is largely unheard of. Our research on Pavlovian learning in tandem with optical parallel processing demonstrates the exciting potential for a variety of AI tasks.’
    The neural networks used in most AI systems often require a substantial number of data examples during a learning process — training a model to reliably recognise a cat could use up to 10,000 cat/non-cat images — at a computational and processing cost.
    Rather than relying on backpropagation favoured by neural networks to ‘fine-tune’ results, the Associative Monadic Learning Element (AMLE) uses a memory material that learns patterns to associate together similar features in datasets — mimicking the conditional reflex observed by Pavlov in the case of a ‘match’.
    The AMLE inputs are paired with the correct outputs to supervise the learning process, and the memory material can be reset using light signals. In testing, the AMLE could correctly identify cat/non-cat images after being trained with just five pairs of images.
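    A toy software analogy of that pairing-and-matching behaviour is sketched below (the class name and threshold are hypothetical, and this mimics only the associative idea in ordinary Python, not the optical memory hardware): a handful of paired examples are stored, and a new input triggers the stored response it most closely resembles.

        import numpy as np

        class ToyAssociativeLearner:
            def __init__(self, threshold=0.8):
                self.prototypes, self.responses = [], []
                self.threshold = threshold

            def associate(self, features, response):
                # Supervised pairing: store the stimulus together with its response.
                self.prototypes.append(features / np.linalg.norm(features))
                self.responses.append(response)

            def recall(self, features):
                x = features / np.linalg.norm(features)
                sims = [float(x @ p) for p in self.prototypes]   # cosine similarity
                best = int(np.argmax(sims))
                return self.responses[best] if sims[best] >= self.threshold else "no match"

        rng = np.random.default_rng(0)
        learner = ToyAssociativeLearner()
        for v in rng.normal(1.0, 0.1, size=(5, 16)):      # five paired "cat" examples
            learner.associate(v, "cat")

        print(learner.recall(rng.normal(1.0, 0.1, 16)))   # similar input -> "cat"
        print(learner.recall(rng.normal(-1.0, 0.1, 16)))  # dissimilar input -> "no match"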
    The considerable performance capabilities of the new optical chip over a conventional electronic chip are down to two key differences in design: a unique network architecture incorporating associative learning as a building block rather than using neurons and a neural network, and the use of ‘wavelength-division multiplexing’ to send multiple optical signals on different wavelengths on a single channel to increase computational speed. The chip hardware uses light to send and retrieve data to maximise information density — several signals on different wavelengths are sent simultaneously for parallel processing, which increases the detection speed of recognition tasks. Each wavelength increases the computational speed.
    Professor Wolfram Pernice, co-author from Münster University explained: ‘The device naturally captures similarities in datasets while doing so in parallel using light to increase the overall computation speed — which can far exceed the capabilities of conventional electronic chips.’
    An associative learning approach could complement neural networks rather than replace them, clarified co-first author Professor Zengguang Cheng, now at Fudan University.
    ‘It is more efficient for problems that don’t need substantial analysis of highly complex features in the datasets,’ said Professor Cheng. ‘Many learning tasks are volume-based and don’t have that level of complexity — in these cases, associative learning can complete the tasks more quickly and at a lower computational cost.’
    ‘It is increasingly evident that AI will be at the centre of many innovations we will witness in the coming phase of human history. This work paves the way towards realising fast optical processors that capture data associations for particular types of AI computations, although there are still many exciting challenges ahead,’ said Professor Harish Bhaskaran, who led the study.
    Story Source:
    Materials provided by University of Oxford.

  • Improving image sensors for machine vision

    Image sensors measure light intensity, but angle, spectrum, and other aspects of light must also be extracted to significantly advance machine vision.
    In Applied Physics Letters, published by AIP Publishing, researchers at the University of Wisconsin-Madison, Washington University in St. Louis, and OmniVision Technologies highlight the latest nanostructured components integrated on image sensor chips that are most likely to make the biggest impact in multimodal imaging.
    The developments could enable autonomous vehicles to see around corners instead of just a straight line, biomedical imaging to detect abnormalities at different tissue depths, and telescopes to see through interstellar dust.
    “Image sensors will gradually undergo a transition to become the ideal artificial eyes of machines,” co-author Yurui Qu, from the University of Wisconsin-Madison, said. “An evolution leveraging the remarkable achievement of existing imaging sensors is likely to generate more immediate impacts.”
    Image sensors, which convert light into electrical signals, are composed of millions of pixels on a single chip. The challenge is how to combine and miniaturize multifunctional components as part of the sensor.
    In their own work, the researchers detailed a promising approach to detect multiple-band spectra by fabricating an on-chip spectrometer. They deposited photonic crystal filters made up of silicon directly on top of the pixels to create complex interactions between incident light and the sensor.
    The pixels beneath the films record the distribution of light energy, from which light spectral information can be inferred. The device — less than a hundredth of a square inch in size — is programmable to meet various dynamic ranges, resolution levels, and almost any spectral regime from visible to infrared.
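    That inference step can be pictured as solving a linear system: each filtered pixel integrates the incoming spectrum through its own known transmission curve, and the spectrum is recovered from the vector of pixel readings. A hedged sketch with made-up filter responses (not the device's actual calibration data):

        import numpy as np

        rng = np.random.default_rng(0)
        n_bands, n_pixels = 40, 60

        T = rng.uniform(0.0, 1.0, size=(n_pixels, n_bands))    # known filter transmission curves
        true_spectrum = np.exp(-0.5 * ((np.arange(n_bands) - 22) / 4.0) ** 2)   # a test spectral line

        y = T @ true_spectrum + rng.normal(0, 0.01, n_pixels)   # noisy pixel readings
        recovered, *_ = np.linalg.lstsq(T, y, rcond=None)       # least-squares spectral inference

        print("reconstruction error:", round(float(np.linalg.norm(recovered - true_spectrum)), 4))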
    The researchers built a component that detects angular information to measure depth and construct 3D shapes at subcellular scales. Their work was inspired by directional hearing sensors found in animals, like geckos, whose heads are too small to determine where sound is coming from in the same way humans and other animals can. Instead, they use coupled eardrums to measure the direction of sound within a size that is orders of magnitude smaller than the corresponding acoustic wavelength.
    Similarly, pairs of silicon nanowires were constructed as resonators to support optical resonance. The optical energy stored in two resonators is sensitive to the incident angle. The wire closest to the light sends the strongest current. By comparing the strongest and weakest currents from both wires, the angle of the incoming light waves can be determined.
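    The comparison step can be sketched with a deliberately hypothetical response model (the constant K and the sinusoidal form below are assumptions for illustration; the real angular response would come from device calibration, which the article does not give): the imbalance between the two photocurrents is inverted to recover the angle of incidence.

        import numpy as np

        K = 0.8   # assumed contrast factor between the two coupled nanowire resonators

        def currents(theta_deg):
            s = np.sin(np.radians(theta_deg))
            return 1.0 + K * s, 1.0 - K * s          # wire nearer the light reads higher

        def estimate_angle(i1, i2):
            # Invert the assumed model: current imbalance -> angle of the incoming light.
            return np.degrees(np.arcsin((i1 - i2) / (K * (i1 + i2))))

        i1, i2 = currents(25.0)
        print("estimated angle:", round(estimate_angle(i1, i2), 2), "degrees")   # ~25.0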
    Millions of these nanowires can be placed on a 1-square-millimeter chip. The research could support advances in lensless cameras, augmented reality, and robotic vision.
    Story Source:
    Materials provided by American Institute of Physics.