More stories

  •

    'Beautiful marriage' of quantum enemies

    Cornell University scientists have identified a new contender when it comes to quantum materials for computing and low-temperature electronics.
    Using nitride-based materials, the researchers created a material structure that simultaneously exhibits superconductivity — in which electrical resistance vanishes completely — and the quantum Hall effect, which produces resistance with extreme precision when a magnetic field is applied.
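    For context on that precision: in the quantum Hall regime the transverse (Hall) resistance is locked to values set only by fundamental constants, which is why the effect underpins the resistance standard. A minimal statement of the integer quantization (a textbook relation, not a figure from the new paper):

    ```latex
    R_{xy} = \frac{h}{\nu e^{2}} \approx \frac{25.8\ \mathrm{k\Omega}}{\nu}, \qquad \nu = 1, 2, 3, \ldots
    ```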
    “This is a beautiful marriage of the two things we know, at the microscale, that give electrons the most startling quantum properties,” said Debdeep Jena, the David E. Burr Professor of Engineering in the School of Electrical and Computer Engineering and Department of Materials Science and Engineering. Jena led the research, published Feb. 19 in Science Advances, with doctoral student Phillip Dang and research associate Guru Khalsa, the paper’s senior authors.
    The two physical properties are rarely seen simultaneously because magnetism is like kryptonite for superconducting materials, according to Jena.
    “Magnetic fields destroy superconductivity, but the quantum Hall effect only shows up in semiconductors at large magnetic fields, so you’re having to play with these two extremes,” Jena said. “Researchers in the past few years have been trying to identify materials which show both properties with mixed success.”
    The research is the latest validation from the Jena-Xing Lab that nitride materials may have more to offer science than previously thought. Nitrides have traditionally been used for manufacturing LEDs and transistors for products like smartphones and home lighting, giving them a reputation as an industrial class of materials that has been overlooked for quantum computation and cryogenic electronics.
    “The material itself is not as perfect as silicon, meaning it has a lot more defects,” said co-author Huili Grace Xing, the William L. Quackenbush Professor of Electrical and Computer Engineering and of Materials Science and Engineering. “But because of its robustness, this material has thrown pleasant surprises to the research community more than once despite its extremely large irregularities in structure. There may be a path forward for us to truly integrate different modalities of quantum computing — computation, memory, communication.”
    Such integration could help to condense the size of quantum computers and other next-generation electronics, just as classical computers have shrunk from warehouse to pocket size.
    “We’re wondering what this sort of material platform can enable because we see that it’s checking off a lot of boxes,” said Jena, who added that new physical phenomena and technological applications could emerge with further research. “It has a superconductor, a semiconductor, a filter material — it has all kinds of other components, but we haven’t put them all together. We’ve just discovered they can coexist.”
    For this research, the Cornell team engineered epitaxial nitride heterostructures — atomically thin layers of gallium nitride and niobium nitride — and searched for the magnetic fields and temperatures at which the layers would retain their respective quantum Hall and superconducting properties.
    They eventually discovered a small window in which the properties were observed simultaneously, thanks to advances in the quality of the materials and structures produced in close collaboration with colleagues at the Naval Research Laboratory.
    “The quality of the niobium-nitride superconductor was improved enough that it can survive higher magnetic fields, and simultaneously we had to improve the quality of the gallium-nitride semiconductor enough that it could exhibit the quantum Hall effect at lower magnetic fields,” Dang said. “And that’s what will really allow for potential new physics to be seen at low temperature.”
    Potential applications for the material structure include more efficient electronics, such as data centers cooled to extremely low temperatures to eliminate heat waste. And the structure is the first to lay the groundwork for the use of nitride semiconductors and superconductors in topological quantum computing, in which the movement of electrons must be resilient to the material defects typically seen in nitrides.
    “What we’ve shown is that the ingredients you need to make this topological phase can be in the same structure,” Khalsa said, “and I think the flexibility of the nitrides really opens up new possibilities and ways to explore topological states of matter.”
    The research was funded by the Office of Naval Research and the National Science Foundation.

    Story Source:
    Materials provided by Cornell University. Original written by Syl Kacapyr. Note: Content may be edited for style and length.

  •

    Lack of symmetry in qubits can't fix errors in quantum computing, but might explain matter/antimatter imbalance

    A team of quantum theorists seeking to cure a basic problem with quantum annealing computers — they have to run at a relatively slow pace to operate properly — found something intriguing instead. While probing how quantum annealers perform when operated faster than desired, the team unexpectedly discovered a new effect that may account for the imbalanced distribution of matter and antimatter in the universe and a novel approach to separating isotopes.
    “Although our discovery did not cure the annealing time restriction, it brought a class of new physics problems that can now be studied with quantum annealers without requiring that they be too slow,” said Nikolai Sinitsyn, a theoretical physicist at Los Alamos National Laboratory. Sinitsyn is an author of the paper published Feb. 19 in Physical Review Letters, with coauthors Bin Yan and Wojciech Zurek, both also of Los Alamos, and Vladimir Chernyak of Wayne State University.
    Significantly, this finding hints at how at least two famous scientific problems may be resolved in the future. The first one is the apparent asymmetry between matter and antimatter in the universe.
    “We believe that small modifications to recent experiments with quantum annealing of interacting qubits made of ultracold atoms across phase transitions will be sufficient to demonstrate our effect,” Sinitsyn said.
    Explaining the Matter/Antimatter Discrepancy
    Both matter and antimatter resulted from the energy excitations that were produced at the birth of the universe. The symmetry between how matter and antimatter interact was broken but very weakly. It is still not completely clear how this subtle difference could lead to the large observed domination of matter compared to antimatter at the cosmological scale.
    The newly discovered effect demonstrates that such an asymmetry is physically possible. It happens when a large quantum system passes through a phase transition, that is, a very sharp rearrangement of quantum state. In such circumstances, strong but symmetric interactions roughly compensate each other. Then subtle, lingering differences can play the decisive role.
    Making Quantum Annealers Slow Enough
    Quantum annealing computers are built to solve complex optimization problems by associating variables with quantum states or qubits. Unlike a classical computer’s binary bits, which can only be in a state, or value, of 0 or 1, qubits can be in a quantum superposition of in-between values. That’s where all quantum computers derive their awesome, if still largely unexploited, powers.
    In a quantum annealing computer, the qubits are initially prepared in a simple lowest energy state by applying a strong external magnetic field. This field is then slowly switched off, while the interactions between the qubits are slowly switched on.
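    Schematically, this protocol interpolates between a driver term (the strong external field) and a problem term (the qubit interactions). A minimal sketch of the standard transverse-field annealing schedule, with generic symbols that are not taken from the paper:

    ```latex
    H(s) = -(1 - s) \sum_{i} \sigma^{x}_{i} \; + \; s \, H_{\mathrm{problem}}, \qquad s = t/T \in [0, 1]
    ```

    Here T is the total anneal time; the adiabatic theorem quoted below guarantees that the qubits track the instantaneous ground state only if T is long compared with, roughly, the inverse square of the smallest energy gap encountered along the way.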
    “Ideally an annealer runs slowly enough to operate with minimal errors, but because of decoherence, one has to run the annealer faster,” Yan explained. The team studied the effects that emerge when annealers are operated faster, which limits them to a finite operation time.

    “According to the adiabatic theorem in quantum mechanics, if all changes are very slow, so-called adiabatically slow, then the qubits must always remain in their lowest energy state,” Sinitsyn said. “Hence, when we finally measure them, we find the desired configuration of 0s and 1s that minimizes the function of interest, which would be impossible to get with a modern classical computer.”
    Hobbled by Decoherence
    However, currently available quantum annealers, like all quantum computers so far, are hobbled by their qubits’ interactions with the surrounding environment, which causes decoherence. Those interactions restrict the purely quantum behavior of qubits to about one millionth of a second. In that timeframe, computations have to be fast — nonadiabatic — and unwanted energy excitations alter the quantum state, introducing inevitable computational mistakes.
    The Kibble-Zurek theory, co-developed by Wojciech Zurek, predicts that most errors occur when the qubits encounter a phase transition, that is, a very sharp rearrangement of their collective quantum state.
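    For reference, the Kibble-Zurek mechanism makes this quantitative: the density of excitations (computational errors) scales as a power of the quench time τ_Q over which the system is driven through the transition. In the textbook form (general symbols, not quantities quoted from the paper), with d the dimension and ν, z the critical exponents:

    ```latex
    n_{\mathrm{exc}} \;\propto\; \tau_{Q}^{-\,d\nu/(1 + z\nu)}, \qquad \text{e.g.}\quad n_{\mathrm{exc}} \propto \tau_{Q}^{-1/2} \ \text{for the 1D transverse-field Ising chain}
    ```

    The slower the passage (the larger τ_Q), the fewer the excitations, which is exactly the trade-off against decoherence described above.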
    For this paper, the team studied a known solvable model where identical qubits interact only with their neighbors along a chain; the model verifies the Kibble-Zurek theory analytically. In the theorists’ quest to cure limited operation time in quantum annealing computers, they increased the complexity of that model by assuming that the qubits could be partitioned into two groups with identical interactions within each group but slightly different interactions for qubits from the different groups.
    In such a mixture, they discovered an unusual effect: One group still produced a large amount of energy excitations during the passage through a phase transition, but the other group remained in the energy minimum as if the system did not experience a phase transition at all.
    “The model we used is highly symmetric in order to be solvable, and we found a way to extend the model, breaking this symmetry and still solving it,” Sinitsyn explained. “Then we found that the Kibble-Zurek theory survived but with a twist — half of the qubits did not dissipate energy and behaved ‘nicely.’ In other words, they maintained their ground states.”
    Unfortunately, the other half of the qubits did produce many computational errors — thus, no cure so far for a passage through a phase transition in quantum annealing computers.
    A New Way to Separate Isotopes
    Another long-standing problem that can benefit from this effect is isotope separation. For instance, natural uranium often must be separated into the enriched and depleted isotopes, so the enriched uranium can be used for nuclear power or national security purposes. The current separation process is costly and energy intensive. The discovered effect means that by making a mixture of interacting ultra-cold atoms pass dynamically through a quantum phase transition, different isotopes can be selectively excited or left unexcited, and then separated using existing magnetic deflection techniques.
    Funding: This work was carried out under the support of the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, Condensed Matter Theory Program. Bin Yan also acknowledges support from the Center for Nonlinear Studies at LANL.

  •

    Lonely adolescents are susceptible to internet addiction

    Loneliness is a risk factor associated with adolescents being drawn into compulsive internet use. The risk of compulsive use has grown in the coronavirus pandemic: loneliness has become increasingly prevalent among adolescents, who spend longer and longer periods of time online.
    A study investigating detrimental internet use by adolescents followed a total of 1,750 Finnish participants at three points in time: at 16, 17 and 18 years of age. The results have been published in the journal Child Development.
    Adolescents’ net use is a two-edged sword: while the consequences of moderate use are positive, the effects of compulsive use can be detrimental. Compulsive use denotes, among other things, gaming addiction or the constant monitoring of likes on social media and comparisons to others.
    “In the coronavirus period, loneliness has increased markedly among adolescents. They look for a sense of belonging from the internet. Lonely adolescents head to the internet and are at risk of becoming addicted. Internet addiction can further aggravate their malaise, such as depression,” says Professor of Education and study lead Katariina Salmela-Aro from the University of Helsinki.
    Highest risk for 16-year-old boys
    The risk of being drawn into problematic internet use was at its highest among 16-year-old adolescents, with the phenomenon being more common among boys. For some, the problem persists into adulthood, but for others it eases up as they grow older. The reduction in problematic internet use is often associated with adolescent development, in which self-regulation and control improve, the brain adapts, and education-related tasks direct attention elsewhere.
    “It’s comforting to know that problematic internet use is adaptive and often changes in late adolescence and during the transition to adulthood. Consequently, attention should be paid to the matter both in school and at home. Addressing loneliness too serves as a significant channel for preventing excessive internet use,” Salmela-Aro notes.
    It was found in the study that the household climate and parenting also matter: the children of distant parents have a higher risk of drifting into detrimental internet use. If parents are not very interested in the lives of their adolescents, the latter may have difficulty drawing the lines for their actions.
    Problematic net use and depression form a cycle
    In the study participants, compulsive internet use had a link to depression. Depression predicted problematic internet use, while problematic use further increased depressive symptoms.
    Additionally, problematic use was predictive of poorer academic success, which may be associated with the fact that internet use consumes a great deal of time and can disrupt adolescents’ sleep rhythm and recovery, consequently eating up the time available for academic effort and performance.

    Story Source:
    Materials provided by University of Helsinki. Original written by Katariina Salmela-Aro, Suvi Uotinen. Note: Content may be edited for style and length.

  •

    Positive vibes only: Forego negative texts or risk being labelled a downer

    A new study from researchers at the University of Ottawa’s School of Psychology has found that using negative emojis in text messages produces a negative perception of the sender regardless of their true intent.
    Isabelle Boutet, a Full Professor in Psychology in the Faculty of Social Sciences, and her team’s findings are included in the study ‘Emojis influence emotional communication, social attributions, and information processing’ which was published in Computers in Human Behavior.
    Study background: The eye movements of 38 University of Ottawa undergraduate student volunteers were tracked while they were shown sentence-emoji pairings under 12 different conditions: sentences could be negative, positive, or neutral, accompanied by a negative emoji, positive emoji, neutral emoji, or no emoji. The participants, whose average age was 18, were asked to rate each message in terms of the emotional state of the sender and how warm they found the sender to be.
    Dr. Boutet, whose research aims at understanding how humans analyze social cues conveyed by faces, discusses the findings.
    She said, “Emojis are consequential and have an impact on the receiver’s interpretation of the sender, and if you display any form of negativity — even pairing a positive emoji with a negative message — it is going to be interpreted negatively. You are going to be perceived as a person who is cold, and you will come across as being in a negative mood when using negative emojis, regardless of the tone.
    “Even if you have a positive message with a negative emoji, the receiver will interpret the sender as being in a negative mood. Any reference to negativity will drive how people interpret your emotional state when you write a text message.
    “We also found certain types of messages were more difficult to convey; people have a lot of problems interpreting messages that are meant to convey irony or sarcasm.”
    What does this tell us about texting vs. face-to-face interactions?
    “People often try to control the emotion they convey with their faces to avoid social conflict. Yet people use emojis for fun without giving it much thought when, in fact, they have a strong impact on interpersonal interactions.
    “The big question is do emojis act as proxies, do they engage the same mechanism as facial expressions of emotions that play a large role in face-to-face (FTF) interaction? With FTF interactions, we have — through evolution — developed very evolved mechanisms that process these facial expressions of emotions. Kids use a lot of these digital interactions and they risk losing the ability to interact FTF.”
    How can the use of emojis and their meaning be improved?
    “There are a lot of emojis whose meaning we don't even know, and people can easily misinterpret them. We are looking at developing new emojis that convey emotions in a more consistent and accurate manner, that better mimic facial expressions of emotions, and that reduce the lexicon of emojis, which could be especially helpful to less tech-savvy older adults. Our goal is to develop new emojis and/or memojis that convey clear signals and are not as confusing.”
    “You should not think that emojis are a cute little thing that you add to a text message with no consequence for your interaction. Emojis have large consequences and a strong impact on how your text message will be interpreted and how you will be perceived.”

    Story Source:
    Materials provided by University of Ottawa. Note: Content may be edited for style and length.

  •

    New metalens shifts focus without tilting or moving

    Polished glass lenses have been at the center of imaging systems for centuries. Their precise curvature enables them to focus light and produce sharp images, whether the object in view is a single cell, the page of a book, or a far-off galaxy.
    Changing focus to see clearly at all these scales typically requires physically moving a lens by tilting, sliding, or otherwise shifting it, usually with the help of mechanical parts that add to the bulk of microscopes and telescopes.
    Now MIT engineers have fabricated a tunable “metalens” that can focus on objects at multiple depths, without changes to its physical position or shape. The lens is made not of solid glass but of a transparent “phase-changing” material that, after heating, can rearrange its atomic structure and thereby change the way the material interacts with light.
    The researchers etched the material’s surface with tiny, precisely patterned structures that work together as a “metasurface” to refract or reflect light in unique ways. As the material’s property changes, the optical function of the metasurface varies accordingly. In this case, when the material is at room temperature, the metasurface focuses light to generate a sharp image of an object at a certain distance away. After the material is heated, its atomic structure changes, and in response, the metasurface redirects light to focus on a more distant object.
    In this way, the new active “metalens” can tune its focus without the need for bulky mechanical elements. The novel design, which currently images within the infrared band, may enable more nimble optical devices, such as miniature heat scopes for drones, ultracompact thermal cameras for cellphones, and low-profile night-vision goggles.
    “Our result shows that our ultrathin tunable lens, without moving parts, can achieve aberration-free imaging of overlapping objects positioned at different depths, rivaling traditional, bulky optical systems,” says Tian Gu, a research scientist in MIT’s Materials Research Laboratory.
    Gu and his colleagues have published their results today in the journal Nature Communications. His co-authors include Juejun Hu, Mikhail Shalaginov, Yifei Zhang, Fan Yang, Peter Su, Carlos Rios, Qingyang Du, and Anuradha Agarwal at MIT; Vladimir Liberman, Jeffrey Chou, and Christopher Roberts of MIT Lincoln Laboratory; and collaborators at the University of Massachusetts at Lowell, the University of Central Florida, and Lockheed Martin Corporation.
    A material tweak
    The new lens is made of a phase-changing material that the team fabricated by tweaking a material commonly used in rewritable CDs and DVDs. Called GST, it comprises germanium, antimony, and tellurium, and its internal structure changes when heated with laser pulses. This allows the material to switch between transparent and opaque states — the mechanism that enables data stored in CDs to be written, wiped away, and rewritten.
    Earlier this year, the researchers reported adding another element, selenium, to GST to make a new phase-changing material: GSST. When they heated the new material, its atomic structure shifted from an amorphous, random tangle of atoms to a more ordered, crystalline structure. This phase shift also changed the way infrared light traveled through the material, affecting refracting power but with minimal impact on transparency.
    The team wondered whether GSST’s switching ability could be tailored to direct and focus light at specific points depending on its phase. The material then could serve as an active lens, without the need for mechanical parts to shift its focus.
    “In general when one makes an optical device, it’s very challenging to tune its characteristics postfabrication,” Shalaginov says. “That’s why having this kind of platform is like a holy grail for optical engineers, that allows [the metalens] to switch focus efficiently and over a large range.”
    In the hot seat
    In a conventional lens, glass is precisely curved so that an incoming light beam refracts through it at various angles, converging at a point a certain distance away, known as the lens’s focal length. The lens can then produce a sharp image of any object at that particular distance. To image objects at a different depth, the lens must physically be moved.
    Rather than relying on a material’s fixed curvature to direct light, the researchers set out to design a GSST-based metalens whose focal length changes with the material’s phase.
    In their new study, they fabricated a 1-micron-thick layer of GSST and created a “metasurface” by etching the GSST layer into microscopic structures of various shapes that refract light in different ways.
    “It’s a sophisticated process to build the metasurface that switches between different functionalities, and requires judicious engineering of what kind of shapes and patterns to use,” Gu says. “By knowing how the material will behave, we can design a specific pattern which will focus at one point in the amorphous state, and change to another point in the crystalline phase.”
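    As a rough picture of the design problem (the relation below is the generic metalens phase profile, not a formula from the paper): every point on the metasurface must impart a position-dependent phase so that light converges at a distance f behind the lens, and the nanostructure pattern can be chosen so that this condition is met for one focal length in the amorphous phase and another in the crystalline phase, where the refractive index differs.

    ```latex
    \varphi(r) \;=\; \frac{2\pi}{\lambda}\left(f - \sqrt{r^{2} + f^{2}}\right) \pmod{2\pi}
    ```

    Here r is the radial distance from the lens center and λ is the design wavelength.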
    They tested the new metalens by placing it on a stage and illuminating it with a laser beam tuned to the infrared band of light. At certain distances in front of the lens, they placed transparent objects composed of double-sided patterns of horizontal and vertical bars, known as resolution charts, that are typically used to test optical systems.
    The lens, in its initial, amorphous state, produced a sharp image of the first pattern. The team then heated the lens to transform the material to a crystalline phase. After the transition, and with the heating source removed, the lens produced an equally sharp image, this time of the second, farther set of bars.
    “We demonstrate imaging at two different depths, without any mechanical movement,” Shalaginov says.
    The experiments show that a metalens can actively change focus without any mechanical motion. The researchers say a metalens could potentially be fabricated with integrated microheaters that quickly heat the material with short, millisecond pulses. By varying the heating conditions, they could also tune the material to intermediate states, enabling continuous focal tuning.
    “It’s like cooking a steak — one starts from a raw steak, and can go up to well done, or could do medium rare, and anything else in between,” Shalaginov says. “In the future this unique platform will allow us to arbitrarily control the focal length of the metalens.”

  •

    Silver and gold nanowires open the way to better electrochromic devices

    The team of Professor Dongling Ma of the Institut national de la recherche scientifique (INRS) developed a new approach for foldable and solid devices.
    Solid and flexible electrochromic (EC) devices, such as smart windows, wearable electronics, foldable displays, and smartphones, are of great interest in research. This importance is due to their unique property: the colour or opacity of the material changes when a voltage is applied.
    Traditionally, electrochromic devices use indium tin oxide (ITO) electrodes. However, the inflexibility of metal oxide and the leakage issue of liquid electrolyte affect the performance and lifetime of EC devices. ITO is also brittle, which is incompatible with flexible substrates.
    Furthermore, there are concerns about the scarcity and cost of indium, a rare element, which raises questions about its long-term sustainability. The fabrication process for the highest-quality ITO electrodes is also expensive. “With all these limitations, the need for ITO-free optoelectronic devices is considerably high. We were able to achieve such a goal,” says Dongling Ma, who led the study recently published in the journal Advanced Functional Materials.
    A new approach
    Indeed, the team has developed a new, cost-effective and easy electrode fabrication approach that is completely ITO-free. “We reached high stability and flexibility of transparent conductive electrodes (TCEs), even in a harsh environment, such as an oxidizing solution of H2O2,” she adds. They are the first to apply stable nanowire-based TCEs in flexible EC devices, using silver nanowires coated with a compact gold shell.
    Now that they have a proof of concept, the researchers want to scale up the synthesis of the TCEs and make the nanowire fabrication process even more cost-effective, while maintaining high device performance.

    Story Source:
    Materials provided by Institut national de la recherche scientifique – INRS. Original written by Audrey-Maude Vézina. Note: Content may be edited for style and length.

  •

    Speedier network analysis for a range of computer hardware developed

    Graphs — data structures that show the relationship among objects — are highly versatile. It’s easy to imagine a graph depicting a social media network’s web of connections. But graphs are also used in programs as diverse as content recommendation (what to watch next on Netflix?) and navigation (what’s the quickest route to the beach?). As Ajay Brahmakshatriya summarizes: “graphs are basically everywhere.”
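    For readers who haven't met the term, a graph is just a set of vertices plus the edges connecting them. Below is a minimal Python sketch (the toy "road network" and names are purely illustrative, not taken from GraphIt) of a graph stored as an adjacency list, with the breadth-first search a "quickest route" query relies on:

    ```python
    from collections import deque

    # A tiny road network as an adjacency list: place -> directly connected places.
    roads = {
        "home":  ["cafe", "park"],
        "cafe":  ["home", "beach"],
        "park":  ["home", "beach"],
        "beach": ["cafe", "park"],
    }

    def fewest_hops(graph, start, goal):
        """Breadth-first search: return the path with the fewest edges from start to goal."""
        frontier = deque([(start, [start])])
        visited = {start}
        while frontier:
            node, path = frontier.popleft()
            if node == goal:
                return path
            for nxt in graph[node]:
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None  # no route exists

    print(fewest_hops(roads, "home", "beach"))  # ['home', 'cafe', 'beach']
    ```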
    Brahmakshatriya has developed software to more efficiently run graph applications on a wider range of computer hardware. The software extends GraphIt, a state-of-the-art graph programming language, to run on graphics processing units (GPUs), hardware that processes many data streams in parallel. The advance could accelerate graph analysis, especially for applications that benefit from a GPU’s parallelism, such as recommendation algorithms.
    Brahmakshatriya, a PhD student in MIT’s Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, will present the work at this month’s International Symposium on Code Generation and Optimization. Co-authors include Brahmakshatriya’s advisor, Professor Saman Amarasinghe, as well as Douglas T. Ross Career Development Assistant Professor of Software Technology Julian Shun, postdoc Changwan Hong, recent MIT PhD student Yunming Zhang PhD ’20 (now with Google), and Adobe Research’s Shoaib Kamil.
    When programmers write code, they don’t talk directly to the computer hardware. The hardware itself operates in binary — 1s and 0s — while the coder writes in a structured, “high-level” language made up of words and symbols. Translating that high-level language into hardware-readable binary requires programs called compilers. “A compiler converts the code to a format that can run on the hardware,” says Brahmakshatriya. One such compiler, specially designed for graph analysis, is GraphIt.
    The researchers developed GraphIt in 2018 to optimize the performance of graph-based algorithms regardless of the size and shape of the graph. GraphIt allows the user not only to input an algorithm, but also to schedule how that algorithm runs on the hardware. “The user can provide different options for the scheduling, until they figure out what works best for them,” says Brahmakshatriya. “GraphIt generates very specialized code tailored for each application to run as efficiently as possible.”
    Startups and established tech firms alike have adopted GraphIt to aid their development of graph applications. But Brahmakshatriya says the first iteration of GraphIt had a shortcoming: it only ran on central processing units, or CPUs, the type of processor in a typical laptop.
    “Some algorithms are massively parallel,” says Brahmakshatriya, “meaning they can better utilize hardware like a GPU that has 10,000 cores for execution.” He notes that some types of graph analysis, including recommendation algorithms, require a high degree of parallelism. So Brahmakshatriya extended GraphIt to enable graph analysis to flourish on GPUs.
    Brahmakshatriya’s team preserved the way GraphIt users input algorithms, but adapted the scheduling component for a wider array of hardware. “Our main design decision in extending GraphIt to GPUs was to keep the algorithm representation exactly the same,” says Brahmakshatriya. “Instead, we added a new scheduling language. So, the user can keep the same algorithms that they had written before [for CPUs], and just change the scheduling input to get the GPU code.”
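    That division of labor can be illustrated with a language-agnostic sketch: the algorithm is written once, and a separate schedule decides how it runs. The Python below is purely conceptual; it is not GraphIt syntax and does not use GraphIt's actual API.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def out_degrees(graph):
        """The 'algorithm': count outgoing edges per vertex (written once)."""
        return {v: len(nbrs) for v, nbrs in graph.items()}

    def run(algorithm, graph, schedule):
        """The 'schedule' decides how to execute, not what to compute."""
        if schedule["backend"] == "serial":
            return algorithm(graph)
        if schedule["backend"] == "threads":  # stand-in for a parallel (CPU/GPU) backend
            n = schedule["workers"]
            items = list(graph.items())
            parts = [dict(items[i::n]) for i in range(n)]  # split vertices across workers
            with ThreadPoolExecutor(max_workers=n) as pool:
                pieces = pool.map(algorithm, parts)
            merged = {}
            for piece in pieces:
                merged.update(piece)
            return merged
        raise ValueError("unknown backend")

    g = {"a": ["b", "c"], "b": ["c"], "c": []}
    print(run(out_degrees, g, {"backend": "serial"}))                 # {'a': 2, 'b': 1, 'c': 0}
    print(run(out_degrees, g, {"backend": "threads", "workers": 2}))  # same result, different execution
    ```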
    This new, optimized scheduling for GPUs gives a boost to graph algorithms that require high parallelism — including recommendation algorithms or internet search functions that sift through millions of websites simultaneously. To confirm the efficacy of GraphIt’s new extension, the team ran 90 experiments pitting GraphIt’s runtime against other state-of-the-art graph compilers on GPUs. The experiments included a range of algorithms and graph types, from road networks to social networks. GraphIt ran fastest in 65 of the 90 cases and was close behind the leading algorithm in the rest of the trials, demonstrating both its speed and versatility.
    Brahmakshatriya says the new GraphIt extension provides a meaningful advance in graph analysis, enabling users to move between CPUs and GPUs with ease while keeping state-of-the-art performance. “The field these days is tooth-and-nail competition. There are new frameworks coming out every day,” he says. But he emphasizes that the payoff for even slight optimization is worth it. “Companies are spending millions of dollars each day to run graph algorithms. Even if you make it run just 5 percent faster, you’re saving many thousands of dollars.”
    This research was funded, in part, by the National Science Foundation, U.S. Department of Energy, the Applications Driving Architectures Center, and the Defense Advanced Research Projects Agency.

  •

    A speed limit also applies in the quantum world

    Even in the world of the smallest particles with their own special rules, things cannot proceed infinitely fast. Physicists at the University of Bonn have now shown what the speed limit is for complex quantum operations. The study also involved scientists from MIT, the universities of Hamburg, Cologne and Padua, and the Jülich Research Center. The results are important for the realization of quantum computers, among other things. 
    Suppose you observe a waiter (the lockdown is already history) who on New Year’s Eve has to serve an entire tray of champagne glasses just a few minutes before midnight. He rushes from guest to guest at top speed. Thanks to his technique, perfected over many years of work, he nevertheless manages not to spill even a single drop of the precious liquid.
    A little trick helps him to do this: While the waiter accelerates his steps, he tilts the tray a bit so that the champagne does not spill out of the glasses. Halfway to the table, he tilts it in the opposite direction and slows down. Only when he has come to a complete stop does he hold it upright again.
    Atoms are in some ways similar to champagne. They can be described as waves of matter, which behave not like a billiard ball but more like a liquid. Anyone who wants to transport atoms from one place to another as quickly as possible must therefore be as skillful as the waiter on New Year’s Eve. “And even then, there is a speed limit that this transport cannot exceed,” explains Dr. Andrea Alberti, who led this study at the Institute of Applied Physics of the University of Bonn.
    Cesium atom as a champagne substitute
    In their study, the researchers experimentally investigated exactly where this limit lies. They used a cesium atom as a champagne substitute and two laser beams perfectly superimposed but directed against each other as a tray. This superposition, called interference by physicists, creates a standing wave of light: a sequence of mountains and valleys that initially do not move. “We loaded the atom into one of these valleys, and then set the standing wave in motion — this displaced the position of the valley itself,” says Alberti. “Our goal was to get the atom to the target location in the shortest possible time without it spilling out of the valley, so to speak.”
    The fact that there is a speed limit in the microcosm was already theoretically demonstrated by two Soviet physicists, Leonid Mandelstam and Igor Tamm, more than 60 years ago. They showed that the maximum speed of a quantum process depends on the energy uncertainty, i.e., how “free” the manipulated particle is with respect to its possible energy states: the more energetic freedom it has, the faster it is. In the case of the transport of an atom, for example, the deeper the valley in which the cesium atom is trapped, the more spread out the energies of the quantum states in the valley are, and ultimately the faster the atom can be transported. Something similar can be seen in the example of the waiter: If he only fills the glasses half full (to the chagrin of the guests), he runs less risk of the champagne spilling over as he accelerates and decelerates. However, the energetic freedom of a particle cannot be increased arbitrarily. “We can’t make our valley infinitely deep — it would cost us too much energy,” stresses Alberti.
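    The limit referred to here is usually written as a minimum time for a quantum state to evolve into a distinguishable (orthogonal) one, governed by the energy uncertainty. A textbook statement of the Mandelstam-Tamm bound (symbols are generic, not taken from the Bonn paper):

    ```latex
    \tau \;\geq\; \frac{\pi \hbar}{2\, \Delta E}, \qquad \Delta E = \sqrt{\langle \hat{H}^{2} \rangle - \langle \hat{H} \rangle^{2}}
    ```

    The larger the energy spread ΔE, the shorter this minimum time, which is the formal version of the "deeper valley, faster transport" picture above.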
    Beam me up, Scotty!
    The speed limit of Mandelstam and Tamm is a fundamental limit. However, one can only reach it under certain circumstances, namely in systems with only two quantum states. “In our case, for example, this happens when the point of origin and destination are very close to each other,” the physicist explains. “Then the matter waves of the atom at both locations overlap, and the atom could be transported directly to its destination in one go, that is, without any stops in between — almost like the teleportation in the Starship Enterprise of Star Trek.”
    However, the situation is different when the distance grows to several dozens of matter wave widths as in the Bonn experiment. For these distances, direct teleportation is impossible. Instead, the particle must go through several intermediate states to reach its final destination: The two-level system becomes a multi-level system. The study shows that a lower speed limit applies to such processes than that predicted by the two Soviet physicists: It is determined not only by the energy uncertainty, but also by the number of intermediate states. In this way, the work improves the theoretical understanding of complex quantum processes and their constraints.
    The physicists’ findings are important not least for quantum computing. The computations that are possible with quantum computers are mostly based on the manipulation of multi-level systems. Quantum states are very fragile, though. They last only a short lapse of time, which physicists call coherence time. It is therefore important to pack as many computational operations as possible into this time. “Our study reveals the maximum number of operations we can perform in the coherence time,” Alberti explains. “This makes it possible to make optimal use of it.”
    The study was funded by the German Research Foundation (DFG) as part of the Collaborative Research Center SFB/TR 185 OSCAR. Funding was also provided by the Reinhard Frank Foundation in collaboration with the German Technion Society, and by the German Academic Exchange Service.