More stories

  • Quantum leap for speed limit bounds

    Nature’s speed limits aren’t posted on road signs, but Rice University physicists have discovered a new way to deduce them that is better — infinitely better, in some cases — than previous methods.
    “The big question is, ‘How fast can anything — information, mass, energy — move in nature?'” said Kaden Hazzard, a theoretical quantum physicist at Rice. “It turns out that if somebody hands you a material, it is incredibly difficult, in general, to answer the question.”
    In a study published today in the American Physical Society journal PRX Quantum, Hazzard and Rice graduate student Zhiyuan Wang describe a new method for calculating the upper bound of speed limits in quantum matter.
    “At a fundamental level, these bounds are much better than what was previously available,” said Hazzard, an assistant professor of physics and astronomy and member of the Rice Center for Quantum Materials. “This method frequently produces bounds that are 10 times more accurate, and it’s not unusual for them to be 100 times more accurate. In some cases, the improvement is so dramatic that we find finite speed limits where previous approaches predicted infinite ones.”
    Nature’s ultimate speed limit is the speed of light, but in nearly all matter around us, the speed of energy and information is much slower. Frequently, it is impossible to describe this speed without accounting for the large role of quantum effects.
    In the 1970s, physicists proved that information must move much slower than the speed of light in quantum materials, and though they could not compute an exact solution for the speeds, physicists Elliott Lieb and Derek Robinson pioneered mathematical methods for calculating the upper bounds of those speeds.

    “The idea is that even if I can’t tell you the exact top speed, can I tell you that the top speed must be less than a particular value,” Hazzard said. “If I can give a 100% guarantee that the real value is less than that upper bound, that can be extremely useful.”
    Hazzard said physicists have long known that some of the bounds produced by the Lieb-Robinson method are “ridiculously imprecise.”
    “It might say that information must move less than 100 miles per hour in a material when the real speed was measured at 0.01 miles per hour,” he said. “It’s not wrong, but it’s not very helpful.”
    The more accurate bounds described in the PRX Quantum paper were calculated by a method Wang created.
    “We invented a new graphical tool that lets us account for the microscopic interactions in the material instead of relying only on cruder properties such as its lattice structure,” Wang said.

    Hazzard said Wang, a third-year graduate student, has an incredible talent for synthesizing mathematical relationships and recasting them in new terms.
    “When I check his calculations, I can go step by step, churn through the calculations and see that they’re valid,” Hazzard said. “But to actually figure out how to get from point A to point B, what set of steps to take when there’s an infinite variety of things you could try at each step, the creativity is just amazing to me.”
    The Wang-Hazzard method can be applied to any material made of particles moving in a discrete lattice. That includes oft-studied quantum materials like high-temperature superconductors, topological materials, heavy fermions and others. In each of these, the behavior of the materials arises from interactions of billions upon billions of particles, whose complexity is beyond direct calculation.
    Hazzard said he expects the new method to be used in several ways.
    “Besides the fundamental nature of this, it could be useful for understanding the performance of quantum computers, in particular in understanding how long they take to solve important problems in materials and chemistry,” he said.
    Hazzard said he is certain the method will also be used to develop numerical algorithms because Wang has shown it can put rigorous bounds on the errors produced by oft-used numerical techniques that approximate the behavior of large systems.
    A popular technique physicists have used for more than 60 years is to approximate a large system by a small one that can be simulated by a computer.
    “We draw a small box around a finite chunk, simulate that and hope that’s enough to approximate the gigantic system,” Hazzard said. “But there has not been a rigorous way of bounding the errors in these approximations.”
    The Wang-Hazzard method of calculating bounds could lead to just that.
    “There is an intrinsic relationship between the error of a numerical algorithm and the speed of information propagation,” Wang explained, using the sound of his voice and the walls in his room to illustrate the link.
    “The finite chunk has edges, just as my room has walls. When I speak, the sound will get reflected by the wall and echo back to me. In an infinite system, there is no edge, so there is no echo.”
    In numerical algorithms, errors are the mathematical equivalent of echoes. They reverberate from the edges of the finite box, and the reflection undermines the algorithms’ ability to simulate the infinite case. The faster information moves through the finite system, the shorter the time the algorithm faithfully represents the infinite. Hazzard said he, Wang and others in his research group are using their method to craft numerical algorithms with guaranteed error bars.
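The echo picture can be made concrete with a toy calculation (a minimal sketch, assuming a single particle hopping on a one-dimensional chain; this is not the Wang-Hazzard method itself). We run the same initial state in a small finite "box" and in a much larger chain that stands in for the infinite system, then compare the amplitude back at the starting site: the two agree until the wavefront has had time to reach the small box's edge and echo back.

```python
import numpy as np

def hopping_hamiltonian(n, j=1.0):
    """Nearest-neighbor hopping on an open chain of n sites."""
    h = np.zeros((n, n))
    for i in range(n - 1):
        h[i, i + 1] = h[i + 1, i] = -j
    return h

def evolve(h, psi0, t):
    """psi(t) = exp(-i H t) psi0, computed via eigendecomposition."""
    w, v = np.linalg.eigh(h)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

# A particle starts at site 0. The 20-site chain is the "finite box";
# the 200-site chain stands in for the infinite system.
small, large = 20, 200
psi_small = np.zeros(small); psi_small[0] = 1.0
psi_large = np.zeros(large); psi_large[0] = 1.0

for t in (2.0, 30.0):
    a_small = evolve(hopping_hamiltonian(small), psi_small, t)[0]
    a_large = evolve(hopping_hamiltonian(large), psi_large, t)[0]
    # Before the wavefront can reach the box's far edge and echo back, the two
    # simulations agree; afterwards the reflection contaminates the small box.
    print(f"t={t}: |difference at site 0| = {abs(a_small - a_large):.2e}")
```

At early times the difference is negligible; at late times, after the echo returns, it is not. The time at which the finite simulation stops being trustworthy is set by the information speed, which is exactly why a tight speed bound translates into a rigorous error bar.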
    “We don’t even have to change the existing algorithms to put strict, guaranteed error bars on the calculations,” he said. “But you can also flip it around and use this to make better numerical algorithms. We’re exploring that, and other people are interested in using these as well.”

  • Battery-free Game Boy runs forever

    A hand-held video game console allowing indefinite gameplay might be a parent’s worst nightmare.
    But this Game Boy is not just a toy. It’s a powerful proof-of-concept, developed by researchers at Northwestern University and the Delft University of Technology (TU Delft) in the Netherlands, that pushes the boundaries of battery-free intermittent computing into the realm of fun and interaction.
    Instead of batteries, which are costly, environmentally hazardous and ultimately end up in landfills, this device harvests energy from the sun — and the user. These advances enable gameplay to continue indefinitely, with no battery to stop and recharge.
    “It’s the first battery-free interactive device that harvests energy from user actions,” said Northwestern’s Josiah Hester, who co-led the research. “When you press a button, the device converts that energy into something that powers your gaming.”
    “Sustainable gaming will become a reality, and we made a major step in that direction — by getting rid of the battery completely,” said TU Delft’s Przemyslaw Pawelczak, who co-led the research with Hester. “With our platform, we want to make a statement that it is possible to make a sustainable gaming system that brings fun and joy to the user.”
    The teams will present the research virtually at UbiComp 2020, a major conference within the field of interactive systems, on Sept. 15.
    Hester is an assistant professor of electrical and computer engineering and computer science in Northwestern’s McCormick School of Engineering. Pawelczak is an assistant professor in the Embedded Software Lab at TU Delft. Their team includes Jasper de Winkel and Vito Kortbeek, both Ph.D. candidates at TU Delft.
    The researchers’ energy-aware gaming platform (ENGAGE) has the size and form factor of the original Game Boy and is equipped with a set of solar panels around the screen. Button presses by the user are a second source of energy. Most importantly, it emulates the Game Boy processor. Although this solution requires a lot of computational power, and therefore energy, it allows any popular retro game to be played straight from its original cartridge.
    As the device switches between power sources, it does experience short losses in power. To ensure an acceptable duration of gameplay between power failures, the researchers designed the system hardware and software from the ground up to be energy aware as well as very energy efficient. They also developed a new technique for storing the system state in non-volatile memory, minimizing overhead and allowing quick restoration when power returns. This eliminates the need to press “save” as seen in traditional platforms, as the player can now continue gameplay from the exact point of the device fully losing power — even if Mario is in mid-jump.
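The checkpoint-and-restore idea can be sketched in ordinary Python (a loose, hypothetical analogy: ENGAGE checkpoints processor state to non-volatile memory at the firmware level, whereas this sketch just writes a made-up state dictionary to a file, atomically, so that a power failure mid-write never corrupts the last good checkpoint).

```python
import json
import os
import tempfile

CHECKPOINT = "engage_state.json"  # hypothetical file standing in for non-volatile memory

def save_state(state, path=CHECKPOINT):
    """Persist the game state atomically: write to a temp file, then swap it in,
    so a power failure mid-write leaves the previous checkpoint intact."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX

def load_state(path=CHECKPOINT):
    """Resume from the last checkpoint if one exists, else start a new game."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"frame": 0, "mario_y": 0.0}

state = load_state()
for _ in range(3):                               # a few frames of a toy game loop
    state["frame"] += 1
    state["mario_y"] = float(state["frame"] % 5)  # pretend Mario is mid-jump
    save_state(state)        # after any power loss, load_state() resumes from here
print(state)
```

The atomic-rename trick matters because an intermittently powered device can die at any instant; the real system applies the same principle to processor state rather than a JSON file.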
    On a not-too-cloudy day, and for games that require at least moderate amounts of clicking, gameplay interruptions typically last less than one second for every 10 seconds of gameplay. The researchers find this to be a playable scenario for some games — including Chess, Solitaire and Tetris — but certainly not yet for all (action) games.
    Although there is still a long way to go before state-of-the-art 21st-century hand-held game consoles become fully battery-free, the researchers hope their devices raise awareness of the environmental impact of the small devices that make up the Internet of Things. Batteries are costly and environmentally hazardous, and they must eventually be replaced if the entire device is not to end up in a landfill.
    “Our work is the antithesis of the Internet of Things, which has many devices with batteries in them,” Hester said. “Those batteries eventually end up in the garbage. If they aren’t fully discharged, they can become hazardous. They are hard to recycle. We want to build devices that are more sustainable and can last for decades.”

  • New mathematical method shows how climate change led to fall of ancient civilization

    A Rochester Institute of Technology researcher developed a mathematical method that shows climate change likely caused the rise and fall of an ancient civilization. In an article recently featured in the journal Chaos: An Interdisciplinary Journal of Nonlinear Science, Nishant Malik, assistant professor in RIT’s School of Mathematical Sciences, outlined the new technique he developed and showed how shifting monsoon patterns led to the demise of the Indus Valley Civilization, a Bronze Age civilization contemporary to Mesopotamia and ancient Egypt.
    Malik developed a method to study paleoclimate time series, sets of data that tell us about past climates using indirect observations. For example, by measuring the presence of a particular isotope in stalagmites from a cave in South Asia, scientists were able to develop a record of monsoon rainfall in the region for the past 5,700 years. But as Malik notes, studying paleoclimate time series poses several problems that make it challenging to analyze them with mathematical tools typically used to understand climate.
    “Usually the data we get when analyzing paleoclimate is a short time series with noise and uncertainty in it,” said Malik. “As far as mathematics and climate is concerned, the tool we use very often in understanding climate and weather is dynamical systems. But dynamical systems theory is harder to apply to paleoclimate data. This new method can find transitions in the most challenging time series, including paleoclimate, which are short, have some amount of uncertainty and have noise in them.”
    There are several theories about why the Indus Valley Civilization declined — including invasion by nomadic Indo-Aryans and earthquakes — but climate change appears to be the most likely scenario. Until Malik applied his hybrid approach — rooted in dynamical systems but also drawing on methods from machine learning and information theory — there was no mathematical proof. His analysis showed there was a major shift in monsoon patterns just before the dawn of this civilization and that the pattern reversed course right before it declined, indicating it was in fact climate change that caused the fall.
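A heavily simplified stand-in for transition detection (an illustrative sketch only; Malik's actual hybrid method combines dynamical systems theory, machine learning, and information theory, none of which appear here) is to scan a short, noisy series with adjacent windows and look for the largest jump in the windowed mean:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "proxy record": an abrupt mean shift at index 200, buried in noise,
# loosely mimicking a sudden change in monsoon strength.
series = np.concatenate([np.zeros(200), np.ones(200)]) + 0.4 * rng.standard_normal(400)

def shift_score(x, w=50):
    """Difference of means between adjacent windows; a peak marks a regime change."""
    return np.array([x[i:i + w].mean() - x[i - w:i].mean() for i in range(w, len(x) - w)])

score = shift_score(series)
transition = int(np.argmax(np.abs(score))) + 50  # add back the window margin
print(transition)  # lands near the true change point at index 200
```

Even this naive detector recovers the change point despite the noise; the difficulty Malik addresses is doing so reliably on series that are far shorter, noisier, and less clean than this synthetic one.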
    Malik said he hopes the method will allow scientists to develop more automated methods of finding transitions in paleoclimate data and lead to additional important historical discoveries. The full text of the study is published in Chaos: An Interdisciplinary Journal of Nonlinear Science.

    Story Source:
    Materials provided by Rochester Institute of Technology. Original written by Luke Auburn. Note: Content may be edited for style and length.

  • Autonomous robot plays with NanoLEGO

    Molecules are the building blocks of everyday life. Many materials are composed of them, a little like a LEGO model consists of a multitude of different bricks. But while individual LEGO bricks can be simply shifted or removed, this is not so easy in the nanoworld. Atoms and molecules behave in a completely different way to macroscopic objects and each brick requires its own “instruction manual.” Scientists from Jülich and Berlin have now developed an artificial intelligence system that autonomously learns how to grip and move individual molecules using a scanning tunnelling microscope. The method, which has been published in Science Advances, is not only relevant for research but also for novel production technologies such as molecular 3D printing.
    Rapid prototyping, the fast and cost-effective production of prototypes or models — better known as 3D printing — has long since established itself as an important tool for industry. “If this concept could be transferred to the nanoscale to allow individual molecules to be specifically put together or separated again just like LEGO bricks, the possibilities would be almost endless, given that there are around 10^60 conceivable types of molecule,” explains Dr. Christian Wagner, head of the ERC working group on molecular manipulation at Forschungszentrum Jülich.
    There is one problem, however. Although the scanning tunnelling microscope is a useful tool for shifting individual molecules back and forth, a special custom “recipe” is always required in order to guide the tip of the microscope to arrange molecules spatially in a targeted manner. This recipe can neither be calculated, nor deduced by intuition — the mechanics on the nanoscale are simply too variable and complex. After all, the tip of the microscope is ultimately not a flexible gripper, but rather a rigid cone. The molecules merely adhere lightly to the microscope tip and can only be put in the right place through sophisticated movement patterns.
    “To date, such targeted movement of molecules has only been possible by hand, through trial and error. But with the help of a self-learning, autonomous software control system, we have now succeeded for the first time in finding a solution for this diversity and variability on the nanoscale, and in automating this process,” says a delighted Prof. Dr. Stefan Tautz, head of Jülich’s Quantum Nanoscience institute.
    The key to this development lies in so-called reinforcement learning, a special variant of machine learning. “We do not prescribe a solution pathway for the software agent, but rather reward success and penalize failure,” explains Prof. Dr. Klaus-Robert Müller, head of the Machine Learning department at TU Berlin. The algorithm repeatedly tries to solve the task at hand and learns from its experiences. The general public first became aware of reinforcement learning a few years ago through AlphaGo Zero. This artificial intelligence system autonomously developed strategies for winning the highly complex game of Go without studying human players — and after just a few days, it was able to beat professional Go players.
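The reward-and-penalty loop can be sketched with tabular Q-learning on a toy one-dimensional task (purely illustrative; the Jülich-Berlin agent is far more sophisticated and also trains against a learned model of the microscope). Here the agent is rewarded for reaching the right end of a short track and penalized for falling off the left end, and it gradually learns to move right:

```python
import random

random.seed(0)
# States are positions 0..5 on a track; reaching 5 is "success" (+1 reward),
# falling off at 0 is "failure" (-1). Actions move one step left or right.
N, ACTIONS = 6, (-1, +1)
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for _ in range(500):                         # training episodes
    s = 2
    while 0 < s < N - 1:
        if random.random() < eps:            # explore occasionally
            a = random.choice(ACTIONS)
        else:                                # otherwise act greedily
            a = max(ACTIONS, key=lambda b: q[(s, b)])
        s2 = s + a
        reward = 1.0 if s2 == N - 1 else (-1.0 if s2 == 0 else 0.0)
        best_next = 0.0 if s2 in (0, N - 1) else max(q[(s2, b)] for b in ACTIONS)
        # Standard Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy should move right (+1) from every interior state.
policy = {s: max(ACTIONS, key=lambda b: q[(s, b)]) for s in range(1, N - 1)}
print(policy)
```

No solution path is prescribed anywhere in the code; the right-moving policy emerges purely from rewarded successes and penalized failures, which is the essence of the approach described above.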
    “In our case, the agent was given the task of removing individual molecules from a layer in which they are held by a complex network of chemical bonds. To be precise, these were perylene molecules, such as those used in dyes and organic light-emitting diodes,” explains Dr. Christian Wagner. The special challenge here is that the force required to move them must never exceed the strength of the bond with which the tip of the scanning tunnelling microscope attracts the molecule, since this bond would otherwise break. “The microscope tip therefore has to execute a special movement pattern, which we previously had to discover by hand, quite literally,” Wagner adds. While the software agent initially performs completely random movement actions that break the bond between the tip of the microscope and the molecule, over time it develops rules as to which movement is the most promising for success in which situation and therefore gets better with each cycle.
    However, the use of reinforcement learning in the nanoscopic range brings with it additional challenges. The metal atoms that make up the tip of the scanning tunnelling microscope can end up shifting slightly, which alters the bond strength to the molecule each time. “Every new attempt makes the risk of a change and thus the breakage of the bond between tip and molecule greater. The software agent is therefore forced to learn particularly quickly, since its experiences can become obsolete at any time,” Prof. Dr. Stefan Tautz explains. “It’s a little as if the road network, traffic laws, bodywork, and rules for operating the vehicle are constantly changing while driving autonomously.” The researchers have overcome this challenge by making the software learn a simple model of the environment in which the manipulation takes place in parallel with the initial cycles. The agent then simultaneously trains both in reality and in its own model, which has the effect of significantly accelerating the learning process.
    “This is the first time ever that we have succeeded in bringing together artificial intelligence and nanotechnology,” emphasizes Klaus-Robert Müller. “Up until now, this has only been a ‘proof of principle’,” Tautz adds. “However, we are confident that our work will pave the way for the robot-assisted automated construction of functional supramolecular structures, such as molecular transistors, memory cells, or qubits — with a speed, precision, and reliability far in excess of what is currently possible.”

  • Heavy electronic media use in late childhood linked to lower academic performance

    A new study of 8- to 11-year-olds reveals an association between heavy television use and poorer reading performance, as well as between heavy computer use and poorer numeracy — the ability to work with numbers. Lisa Mundy of the Murdoch Children’s Research Institute in Melbourne, Australia, and colleagues present these findings in the open-access journal PLOS ONE on September 2, 2020.
    Previous studies of children and adolescents have found links between use of electronic media — such as television, computers, and videogames — and obesity, poor sleep, and other physical health risks. Electronic media use is also associated with better access to information, tech skills, and social connection. However, comparatively less is known about links with academic performance.
    To help clarify these links, Mundy and colleagues studied 1,239 8- to 9-year-olds in Melbourne, Australia. They used national achievement test data to measure the children’s academic performance at baseline and again after two years. They also asked the children’s parents to report on their kids’ use of electronic media.
    The researchers found that watching two or more hours of television per day at the age of 8 or 9 was associated with lower reading performance compared to peers two years later; the difference was equivalent to losing four months of learning. Using a computer for more than one hour per day was linked to a similar degree of lost numeracy. The analysis showed no links between use of videogames and academic performance.
    By accounting for baseline academic performance, controlling for prior media use, and adjusting for potentially influential factors such as mental health difficulties and body mass index (BMI), the researchers were able to pinpoint cumulative television and computer use, as well as short-term use, as associated with poorer academic performance.
    These findings could help parents, teachers, and clinicians refine plans and recommendations for electronic media use in late childhood. Future research could build on these results by examining continued associations in later secondary school.
    The authors add: “The debate about the effects of modern media on children’s learning has never been more important given the effects of today’s pandemic on children’s use of time. This is the first large, longitudinal study of electronic media use and learning in primary school children, and results showed heavier users of television and computers had significant declines in reading and numeracy two years later compared with light users.”

    Story Source:
    Materials provided by PLOS. Note: Content may be edited for style and length.

  • Revolutionary quantum breakthrough paves way for safer online communication

    The world is one step closer to having a totally secure internet and an answer to the growing threat of cyber-attacks, thanks to a team of international scientists who have created a unique prototype which could transform how we communicate online.
    The invention, led by the University of Bristol and revealed today in the journal Science Advances, has the potential to serve millions of users, is understood to be the largest-ever quantum network of its kind, and could be used to secure people’s online communication, particularly in these internet-led times accelerated by the COVID-19 pandemic.
    By deploying a new technique that harnesses the simple laws of physics, it can make messages completely safe from interception while also overcoming major challenges that have previously limited advances in this little-used but much-hyped technology.
    Lead author Dr Siddarth Joshi, who headed the project at the university’s Quantum Engineering Technology (QET) Labs, said: “This represents a massive breakthrough and makes the quantum internet a much more realistic proposition. Until now, building a quantum network has entailed huge cost, time, and resource, as well as often compromising on its security which defeats the whole purpose.”
    “Our solution is scalable, relatively cheap and, most important of all, impregnable. That means it’s an exciting game changer and paves the way for much more rapid development and widespread rollout of this technology.”
    The current internet relies on complex codes to protect information, but hackers are increasingly adept at outsmarting such systems, leading to cyber-attacks across the world that cause major privacy breaches and fraud running into trillions of pounds annually. With such costs projected to rise dramatically, the case for finding an alternative is even more compelling, and quantum technology has for decades been hailed as the revolutionary replacement for standard encryption techniques.

    So far physicists have developed a form of secure encryption, known as quantum key distribution, in which particles of light, called photons, are transmitted. The process allows two parties to share, without risk of interception, a secret key used to encrypt and decrypt information. But to date this technique has only been effective between two users.
    “Until now efforts to expand the network have involved vast infrastructure and a system which requires the creation of another transmitter and receiver for every additional user. Sharing messages in this way, known as trusted nodes, is just not good enough because it uses so much extra hardware which could leak and would no longer be totally secure,” Dr Joshi said.
    The team’s quantum technique applies a seemingly magical principle, called entanglement, which Albert Einstein described as ‘spooky action at a distance.’ It exploits the power of two different particles placed in separate locations, potentially thousands of miles apart, to simultaneously mimic each other. This process presents far greater opportunities for quantum computers, sensors, and information processing.
    “Instead of having to replicate the whole communication system, this latest methodology, called multiplexing, splits the light particles, emitted by a single system, so they can be received by multiple users efficiently,” Dr Joshi said.
    The team created a network for eight users using just eight receiver boxes, whereas the former method would need the number of users multiplied many times — in this case, amounting to 56 boxes. As the user numbers grow, the logistics become increasingly unviable — for instance 100 users would take 9,900 receiver boxes.
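The quoted box counts fit a simple pattern: the old point-to-point approach needs on the order of one box per ordered pair of users, n*(n-1), while multiplexing needs one receiver box per user. (The formula is inferred here from the article's two data points, 56 boxes for 8 users and 9,900 for 100.) A quick check:

```python
def pairwise_boxes(n):
    """Point-to-point QKD: every user needs dedicated hardware for every other user."""
    return n * (n - 1)

def multiplexed_boxes(n):
    """Multiplexed entanglement distribution: one receiver box per user."""
    return n

for n in (8, 100):
    print(n, pairwise_boxes(n), multiplexed_boxes(n))
```

The quadratic-versus-linear gap is why the pairwise approach "becomes increasingly unviable" as the network grows, while multiplexing scales gracefully.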

    To demonstrate its functionality across distance, the receiver boxes were connected via optical fibres to different locations across Bristol, and the ability to transmit messages via quantum communication was tested using the city’s existing optical fibre network.
    “Besides being completely secure, the beauty of this new technique is its streamlined agility, which requires minimal hardware because it integrates with existing technology,” Dr Joshi said.
    The team’s unique system also features traffic management, delivering better network control which allows, for instance, certain users to be prioritised with a faster connection.
    Whereas previous quantum systems have taken years to build, at a cost of millions or even billions of pounds, this network was created within months for less than £300,000. The financial advantages grow as the network expands, so while 100 users on previous quantum systems might cost in the region of £5 billion, Dr Joshi believes multiplexing technology could slash that to around £4.5 million, less than 1 per cent.
    In recent years quantum cryptography has been successfully used to protect transactions between banking centres in China and secure votes at a Swiss election. Yet its wider application has been held back by the sheer scale of resources and costs involved.
    “With these economies of scale, the prospect of a quantum internet for universal usage is much less far-fetched. We have proved the concept and by further refining our multiplexing methods to optimise and share resources in the network, we could be looking at serving not just hundreds or thousands, but potentially millions of users in the not too distant future,” Dr Joshi said.
    “The ramifications of the COVID-19 pandemic have not only shown importance and potential of the internet, and our growing dependence on it, but also how its absolute security is paramount. Multiplexing entanglement could hold the vital key to making this security a much-needed reality.”

  • Predictive placentas: Using artificial intelligence to protect mothers' future pregnancies

    After a baby is born, doctors sometimes examine the placenta — the organ that links the mother to the baby — for features that indicate health risks in any future pregnancies. Unfortunately, this is a time-consuming process that must be performed by a specialist, so most placentas go unexamined after the birth. In the American Journal of Pathology, published by Elsevier, a team of researchers from Carnegie Mellon University (CMU) and the University of Pittsburgh Medical Center (UPMC) reports the development of a machine learning approach to examining placenta slides so that more women can be informed of their health risks.
    One reason placentas are examined is to look for a type of blood vessel lesion called decidual vasculopathy (DV). These lesions indicate the mother is at risk for preeclampsia — a complication that can be fatal to the mother and baby — in any future pregnancies. Once detected, preeclampsia can be treated, so there is considerable benefit in identifying at-risk mothers before symptoms appear. However, although there are hundreds of blood vessels in a single slide, only one diseased vessel is needed to indicate risk.
    “Pathologists train for years to be able to find disease in these images, but there are so many pregnancies going through the hospital system that they don’t have time to inspect every placenta,” said Daniel Clymer, PhD, alumnus, Department of Mechanical Engineering, CMU, Pittsburgh, PA, USA. “Our algorithm helps pathologists know which images they should focus on by scanning an image, locating blood vessels, and finding patterns of the blood vessels that identify DV.”
    Machine learning works by “training” the computer to recognize certain features in data files. In this case, the data file is an image of a thin slice of a placenta sample. Researchers show the computer various images and indicate whether the placenta is diseased or healthy. After sufficient training, the computer is able to identify diseased lesions on its own.
    It is quite difficult for a computer to simply look at a large picture and classify it, so the team introduced a novel approach through which the computer follows a series of steps to make the task more manageable. First, the computer detects all blood vessels in an image. Each blood vessel can then be considered individually, creating smaller data packets for analysis. The computer will then assess each blood vessel and determine if it should be deemed diseased or healthy. At this stage, the algorithm also considers features of the pregnancy, such as gestational age, birth weight, and any conditions the mother might have. If there are any diseased blood vessels, then the picture — and therefore the placenta — is marked as diseased. The UPMC team provided the de-identified placenta images for training the algorithm.
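The decision logic described above (detect every vessel, classify each one, flag the slide if any vessel is diseased) can be sketched as follows; the detector, the classifier, and all names and thresholds here are hypothetical stand-ins for the paper's trained models:

```python
def classify_placenta(image, detect_vessels, classify_vessel, clinical_features):
    """A slide is flagged as diseased iff any single vessel is classified as DV."""
    vessels = detect_vessels(image)                   # stage 1: find every vessel
    return any(classify_vessel(v, clinical_features)  # stage 2: judge each one
               for v in vessels)

# Toy stand-ins: "vessels" are numbers, and "diseased" means a score above 0.8.
slide = [0.2, 0.1, 0.9, 0.3]
flagged = classify_placenta(
    slide,
    detect_vessels=lambda img: img,
    classify_vessel=lambda v, feats: v > 0.8,
    clinical_features={"gestational_age_weeks": 38},  # hypothetical feature
)
print(flagged)  # a single high-scoring vessel flags the whole slide
```

The `any(...)` structure captures the key clinical asymmetry: hundreds of healthy vessels cannot outvote one diseased vessel.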
    “This algorithm isn’t going to replace a pathologist anytime soon,” Dr. Clymer explained. “The goal here is that this type of algorithm might be able to help speed up the process by flagging regions of the image where the pathologist should take a closer look.”
    “This is a beautiful collaboration between engineering and medicine as each brings expertise to the table that, when combined, creates novel findings that can help so many individuals,” added lead investigators Jonathan Cagan, PhD, and Philip LeDuc, PhD, professors of mechanical engineering at CMU, Pittsburgh, PA, USA.
    “As healthcare increasingly embraces the role of artificial intelligence, it is important that doctors partner early on with computer scientists and engineers so that we can design and develop the right tools for the job to positively impact patient outcomes,” noted co-author Liron Pantanowitz, MBBCh, formerly vice chair for pathology informatics at UPMC, Pittsburgh, PA, USA. “This partnership between CMU and UPMC is a perfect example of what can be accomplished when this happens.”

    Story Source:
    Materials provided by Elsevier. Note: Content may be edited for style and length.

  • A molecular approach to quantum computing

    The technology behind the quantum computers of the future is fast developing, with several different approaches in progress. Many of the strategies, or “blueprints,” for quantum computers rely on atoms or artificial atom-like electrical circuits. In a new theoretical study in the journal Physical Review X, a group of physicists at Caltech demonstrates the benefits of a lesser-studied approach that relies not on atoms but molecules.
    “In the quantum world, we have several blueprints on the table and we are simultaneously improving all of them,” says lead author Victor Albert, the Lee A. DuBridge Postdoctoral Scholar in Theoretical Physics. “People have been thinking about using molecules to encode information since 2001, but now we are showing how molecules, which are more complex than atoms, could lead to fewer errors in quantum computing.”
    At the heart of quantum computers are what are known as qubits. These are similar to the bits in classical computers, but unlike classical bits they can experience a bizarre phenomenon known as superposition, in which they exist in two or more states at once. Like the famous Schrödinger’s cat thought experiment, which describes a cat that is both dead and alive at the same time, particles can exist in multiple states at once. The phenomenon of superposition is at the heart of quantum computing: the fact that qubits can take on many forms simultaneously means that they have exponentially more computing power than classical bits.
    But the state of superposition is a delicate one, as qubits are prone to collapsing out of their desired states, and this leads to computing errors.
    “In classical computing, you have to worry about the bits flipping, in which a ‘1’ bit goes to a ‘0’ or vice versa, which causes errors,” says Albert. “This is like flipping a coin, and it is hard to do. But in quantum computing, the information is stored in fragile superpositions, and even the quantum equivalent of a gust of wind can lead to errors.”
    However, if a quantum computer platform uses qubits made of molecules, the researchers say, these types of errors can be prevented more readily than on other quantum platforms. One concept behind the new research comes from work performed nearly 20 years ago by Caltech researchers John Preskill, Richard P. Feynman Professor of Theoretical Physics and director of the Institute of Quantum Information and Matter (IQIM), and Alexei Kitaev, the Ronald and Maxine Linde Professor of Theoretical Physics and Mathematics at Caltech, along with their colleague Daniel Gottesman (PhD ’97) of the Perimeter Institute in Ontario, Canada. Back then, the scientists proposed a loophole that would provide a way around a phenomenon called Heisenberg’s uncertainty principle, introduced in 1927 by German physicist Werner Heisenberg. The principle states that one cannot simultaneously know with very high precision both where a particle is and where it is going.

    advertisement

    “There is a joke where Heisenberg gets pulled over by a police officer who says he knows Heisenberg’s speed was 90 miles per hour, and Heisenberg replies, ‘Now I have no idea where I am,'” says Albert.
    The uncertainty principle is a challenge for quantum computers because it implies that the quantum states of the qubits cannot be known well enough to determine whether or not errors have occurred. However, Gottesman, Kitaev, and Preskill figured out that while the exact position and momentum of a particle could not be measured, it was possible to detect very tiny shifts to its position and momentum. These shifts could reveal that an error has occurred, making it possible to push the system back to the correct state. This error-correcting scheme, known as GKP after its discoverers, has recently been implemented in superconducting circuit devices.
    “Errors are okay but only if we know they happen,” says Preskill, a co-author on the Physical Review X paper and also the scientific coordinator for a new Department of Energy-funded science center called the Quantum Systems Accelerator. “The whole point of error correction is to maximize the amount of knowledge we have about potential errors.”
    In the new paper, this concept is applied to rotating molecules in superposition. If the orientation or angular momentum of the molecule shifts by a small amount, both shifts can be detected and corrected simultaneously.
    “We want to track the quantum information as it’s evolving under the noise,” says Albert. “The noise is kicking us around a little bit. But if we have a carefully chosen superposition of the molecules’ states, we can measure both orientation and angular momentum as long as they are small enough. And then we can kick the system back to compensate.”
    Jacob Covey, a co-author on the paper and former Caltech postdoctoral scholar who recently joined the faculty at the University of Illinois, says that it might be possible to eventually individually control molecules for use in quantum information systems such as these. He and his team have made strides in using optical laser beams, or “tweezers,” to control single neutral atoms (neutral atoms are another promising platform for quantum-information systems).
    “The appeal of molecules is that they are very complex structures that can be very densely packed,” says Covey. “If we can figure out how to utilize molecules in quantum computing, we can robustly encode information and improve the efficiency with which qubits are packed.”
    Albert says that he, Preskill, and Covey together provided the perfect combination of theoretical and experimental expertise to achieve the latest results. He and Preskill are both theorists, while Covey is an experimentalist. “It was really nice to have somebody like John to help me with the framework for all this theory of error-correcting codes, and Jake gave us crucial guidance on what is happening in labs.”
    Says Preskill, “This is a paper that no one of the three of us could have written on our own. What’s really fun about the field of quantum information is that it’s encouraging us to interact across some of these divides, and Caltech, with its small size, is the perfect place to get this done.”