More stories

  • Engineers teach AI to navigate ocean with minimal energy

    Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
    “When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research that was published by Nature Communications on December 8.
    The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a palm-sized robot that runs the algorithm on a tiny computer chip that could power seaborne drones both on Earth and on other planets. The goal would be to create an autonomous system to monitor the condition of the planet’s oceans, for example by using the algorithm in combination with prosthetics the team previously developed to help jellyfish swim faster and on command. Fully mechanical robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.
    In either scenario, drones would need to be able to make decisions on their own about where to go and the most efficient way to get there. To do so, they will likely only have data that they can gather themselves — information about the water currents they are currently experiencing.
    To tackle this challenge, the researchers turned to reinforcement learning (RL) networks. Unlike conventional neural networks, reinforcement learning networks do not train on a static data set; instead, they train as quickly as they can collect experience. This scheme allows them to run on much smaller computers. For this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.
    Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location with minimal power used. To aid its navigation, the simulated swimmer only had access to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly only have access to information that could be gathered from an onboard gyroscope and accelerometer, which are both relatively small and low-cost sensors for a robotic platform.
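The setup described above can be sketched in miniature. The toy below is purely illustrative (the grid, the "current lane," the costs, and tabular Q-learning stand in for the study's actual simulated vortex wake and trained RL network): a swimmer that only senses its local situation learns to coast on a free current rather than thrust at every step.

```python
import numpy as np

# Hypothetical toy version of the navigation task: a swimmer on a small
# grid is pushed by a fixed "current" field and learns, via tabular
# Q-learning, to reach a target cell using as few thruster actions as
# possible. All names and parameters here are invented for illustration.

rng = np.random.default_rng(0)
N = 5                       # grid size
target = (4, 4)

def current(pos):
    # Deterministic current: pushes the swimmer one cell in +x on row 2,
    # mimicking a low-effort "coasting" lane in the wake of an obstacle.
    x, y = pos
    return (1, 0) if y == 2 else (0, 0)

actions = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # last = drift (free)
Q = np.zeros((N, N, len(actions)))

def step(pos, a):
    dx, dy = actions[a]
    cx, cy = current(pos)
    x = min(max(pos[0] + dx + cx, 0), N - 1)
    y = min(max(pos[1] + dy + cy, 0), N - 1)
    nxt = (x, y)
    cost = 0.0 if a == 4 else 1.0          # thrusting costs energy
    reward = 10.0 if nxt == target else -cost
    return nxt, reward, nxt == target

alpha, gamma, eps = 0.5, 0.95, 0.2
for episode in range(2000):
    pos = (0, 2)                           # start inside the current lane
    for t in range(50):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[pos]))
        nxt, r, done = step(pos, a)
        Q[pos][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[pos][a])
        pos = nxt
        if done:
            break

# After training, the greedy policy should ride the free current along
# row 2 for most of the trip, thrusting only near the end.
```

The energy penalty on thrusting is what pushes the learned policy toward exploiting the flow, mirroring the "coast with minimal power" behavior reported in the article.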
    This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.
    “We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.
    The technology is still in its infancy: currently, the team would like to test the AI on each different type of flow disturbance it would possibly encounter on a mission in the ocean — for example, swirling vortices versus streaming tidal currents — to assess its effectiveness in the wild. However, by incorporating their knowledge of ocean-flow physics within the reinforcement learning strategy, the researchers aim to overcome this limitation. The current research proves the potential effectiveness of RL networks in addressing this challenge — particularly because they can operate on such small devices. To try this in the field, the team is placing the Teensy on a custom-built drone dubbed the “CARL-Bot” (Caltech Autonomous Reinforcement Learning Robot). The CARL-Bot will be dropped into a newly constructed two-story-tall water tank on Caltech’s campus and taught to navigate the ocean’s currents.
    “Not only will the robot be learning, but we’ll be learning about ocean currents and how to navigate through them,” says Peter Gunnarson, graduate student at Caltech and lead author of the Nature Communications paper.
    Story Source:
    Materials provided by California Institute of Technology. Original written by Robert Perkins. Note: Content may be edited for style and length.

  • These tiny liquid robots never run out of juice as long as they have food

    When you think of a robot, images of R2-D2 or C-3PO might come to mind. But robots can serve up more than just entertainment on the big screen. In a lab, for example, robotic systems can improve safety and efficiency by performing repetitive tasks and handling harsh chemicals.
    But before a robot can get to work, it needs energy — typically from electricity or a battery. Yet even the most sophisticated robot can run out of juice. For many years, scientists have wanted to make a robot that can work autonomously and continuously, without electrical input.
    Now, as reported last week in the journal Nature Chemistry, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of Massachusetts Amherst have demonstrated just that — through “water-walking” liquid robots that, like tiny submarines, dive below water to retrieve precious chemicals, and then surface to deliver chemicals “ashore” again and again.
    The technology is the first self-powered, aqueous robot that runs continuously without electricity. It has potential as an automated chemical synthesis or drug delivery system for pharmaceuticals.
    “We have broken a barrier in designing a liquid robotic system that can operate autonomously by using chemistry to control an object’s buoyancy,” said senior author Tom Russell, a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts Amherst who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab’s Materials Sciences Division.
    Russell said that the technology significantly advances a family of robotic devices called “liquibots.” In previous studies, other researchers demonstrated liquibots that autonomously perform a task, but just once; and some liquibots can perform a task continuously, but need electricity to keep on running. In contrast, “we don’t have to provide electrical energy because our liquibots get their power or ‘food’ chemically from the surrounding media,” Russell explained.

  • AI-powered computer model predicts disease progression during aging

    Using artificial intelligence, a team of University at Buffalo researchers has developed a novel system that models the progression of chronic diseases as patients age.
    Published in October in the Journal of Pharmacokinetics and Pharmacodynamics, the model assesses metabolic and cardiovascular biomarkers — measurable biological processes such as cholesterol levels, body mass index, glucose and blood pressure — to calculate health status and disease risks across a patient’s lifespan.
    The findings are critical due to the increased risk of developing metabolic and cardiovascular diseases with aging, a process that has adverse effects on cellular, psychological and behavioral processes.
    “There is an unmet need for scalable approaches that can provide guidance for pharmaceutical care across the lifespan in the presence of aging and chronic co-morbidities,” says lead author Murali Ramanathan, PhD, professor of pharmaceutical sciences in the UB School of Pharmacy and Pharmaceutical Sciences. “This knowledge gap may be potentially bridged by innovative disease progression modeling.”
    The model could facilitate the assessment of long-term chronic drug therapies, and help clinicians monitor treatment responses for conditions such as diabetes, high cholesterol and high blood pressure, which become more frequent with age, says Ramanathan.
    Additional investigators include first author and UB School of Pharmacy and Pharmaceutical Sciences alumnus Mason McComb, PhD; Rachael Hageman Blair, PhD, associate professor of biostatistics in the UB School of Public Health and Health Professions; and Martin Lysy, PhD, associate professor of statistics and actuarial science at the University of Waterloo.
    The research examined data from three case studies within the third National Health and Nutrition Examination Survey (NHANES) that assessed the metabolic and cardiovascular biomarkers of nearly 40,000 people in the United States.
    Biomarkers, which also include measurements such as temperature, body weight and height, are used to diagnose, treat and monitor overall health and numerous diseases.
    The researchers examined seven metabolic biomarkers: body mass index, waist-to-hip ratio, total cholesterol, high-density lipoprotein cholesterol, triglycerides, glucose and glycohemoglobin. The cardiovascular biomarkers examined include systolic and diastolic blood pressure, pulse rate and homocysteine.
    By analyzing changes in metabolic and cardiovascular biomarkers, the model “learns” how aging affects these measurements. With machine learning, the system uses a memory of previous biomarker levels to predict future measurements, which ultimately reveal how metabolic and cardiovascular diseases progress over time.
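The core idea of predicting a future measurement from a "memory" of previous biomarker levels can be illustrated with a minimal autoregressive fit. Everything below is a hypothetical sketch: the synthetic blood-pressure trajectory, the lag order, and the linear model are invented for demonstration and are far simpler than the UB team's actual model.

```python
import numpy as np

# Sketch: predict a patient's next biomarker value from the previous
# few values using an ordinary least-squares autoregressive fit.
# The data below are synthetic and for illustration only.

rng = np.random.default_rng(1)
ages = np.arange(40, 80)
# Synthetic systolic blood pressure: slow upward drift with age plus noise.
sbp = 110 + 0.6 * (ages - 40) + rng.normal(0, 2, ages.size)

lag = 3
# Build training pairs: (previous `lag` values) -> (next value).
X = np.column_stack([sbp[i:len(sbp) - lag + i] for i in range(lag)])
X = np.column_stack([X, np.ones(X.shape[0])])   # intercept term
y = sbp[lag:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead prediction from the last `lag` measurements.
pred = np.dot(np.append(sbp[-lag:], 1.0), coef)
```

A real disease-progression model would couple many biomarkers and use far richer machine learning, but the "memory of previous levels predicts the next measurement" structure is the same.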
    Story Source:
    Materials provided by University at Buffalo. Original written by Marcene Robinson. Note: Content may be edited for style and length.

  • Liquid crystals for fast switching devices

    Liquid crystals are not solid, but some of their physical properties are directional — like in a crystal. This is because their molecules can arrange themselves into certain patterns. The best-known applications include flat screens and digital displays. They are based on pixels of liquid crystals whose optical properties can be switched by electric fields.
    Some liquid crystals form the so-called cholesteric phases: the molecules self-assemble into helical structures, which are characterised by pitch and rotate either to the right or to the left. “The pitch of the cholesteric spirals determines how quickly they react to an applied electric field,” explains Dr. Alevtina Smekhova, physicist at HZB and first author of the study, which has now been published in Soft Matter.
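As background for the pitch-speed connection quoted above, a commonly cited textbook estimate (not taken from the article, and stated here only as an order-of-magnitude guide) relates the relaxation time of a cholesteric helix to the square of its pitch, so a shorter pitch implies faster switching:

```latex
% Textbook-style scaling estimate (assumed background, not from the article):
% gamma_1 : rotational viscosity,  K_22 : twist elastic constant,  p : pitch
\[
  \tau \;\sim\; \frac{\gamma_1\, p^{2}}{4\pi^{2} K_{22}}
\]
```

Under this scaling, halving the pitch would cut the characteristic response time roughly fourfold, which is why the record-short pitch reported below is promising for fast-switching devices.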
    Simple molecular chain
    In this work, she and partners from the Academies of Sciences in Prague, Moscow and Chernogolovka investigated a liquid crystalline cholesteric compound called EZL10/10, developed in Prague. “Such cholesteric phases are usually formed by molecules with several chiral centres, but here the molecule has only one chiral centre,” explains Dr. Smekhova. It is a simple molecular chain with one lactate unit.
    Ultrashort pitch
    At BESSY II, the team has now examined this compound with soft X-ray light and determined the pitch and spatial ordering of the spirals. At only 104 nanometres, this is the shortest pitch reported to date, half the shortest previously known pitch of spiral structures in liquid crystals. Further analysis showed that in this material the cholesteric spirals form domains with characteristic lengths of about five pitches.
    Outlook
    “This very short pitch makes the material unique and promising for optoelectronic devices with very fast switching times,” Dr. Smekhova points out. In addition, the EZL10/10 compound is thermally and chemically stable and can easily be further varied to obtain structures with customised pitch lengths.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie. Note: Content may be edited for style and length.

  • How statistics can aid in the fight against misinformation

    An American University math professor and his team created a statistical model that can be used to detect misinformation in social posts. The model also avoids the problem of black boxes that occur in machine learning.
    With the use of algorithms and computer models, machine learning is increasingly playing a role in helping to stop the spread of misinformation, but a main challenge for scientists is the black box of unknowability, where researchers don’t understand how the machine arrives at the same decision as human trainers.
    Using a Twitter dataset with misinformation tweets about COVID-19, Zois Boukouvalas, assistant professor in AU’s Department of Mathematics and Statistics, College of Arts and Sciences, shows how statistical models can detect misinformation in social media during events like a pandemic or a natural disaster. In newly published research, Boukouvalas and his colleagues, including AU student Caitlin Moroney and Computer Science Prof. Nathalie Japkowicz, also show how the model’s decisions align with those made by humans.
    “We would like to know what a machine is thinking when it makes decisions, and how and why it agrees with the humans that trained it,” Boukouvalas said. “We don’t want to block someone’s social media account because the model makes a biased decision.”
    Boukouvalas’ method is a type of machine learning using statistics. It’s not as popular a field of study as deep learning, the complex, multi-layered type of machine learning and artificial intelligence. Statistical models are effective and provide another, somewhat untapped, way to fight misinformation, Boukouvalas said.
    For a testing set of 112 real and misinformation tweets, the model achieved high prediction performance, classifying the tweets correctly with an accuracy of nearly 90 percent. (Using such a compact dataset was an efficient way to verify how the method detected the misinformation tweets.)
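The transparency claim can be made concrete with a toy interpretable classifier. The sketch below is hypothetical: it uses a plain logistic-regression fit over word counts on invented mini-"tweets," not the study's actual statistical model or Twitter dataset. The point it illustrates is that a linear model's learned weights can be read off directly, word by word, unlike a deep network's internals.

```python
import numpy as np

# Toy interpretable classifier: logistic regression over bag-of-words
# counts. Vocabulary, texts, and labels are invented for illustration.

docs = [
    ("miracle cure stops covid overnight", 1),
    ("cdc reports updated covid vaccine guidance", 0),
    ("secret cure they dont want you to know", 1),
    ("hospital shares covid safety guidance", 0),
    ("cure covid with this one secret trick", 1),
    ("researchers publish vaccine trial results", 0),
]
vocab = sorted({w for text, _ in docs for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    x = np.zeros(len(vocab))
    for w in text.split():
        if w in idx:
            x[idx[w]] += 1.0
    return x

X = np.array([featurize(t) for t, _ in docs])
y = np.array([label for _, label in docs], dtype=float)

# Plain gradient descent on the logistic loss.
w = np.zeros(len(vocab))
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

# Transparency: the largest positive weights name the words that most
# drive a "misinformation" prediction.
top = [vocab[i] for i in np.argsort(w)[-3:]]
```

Inspecting `top` shows exactly which words the model relies on, which is the kind of decision-level transparency the researchers contrast with deep-learning black boxes.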
    “What’s significant about this finding is that our model achieved accuracy while offering transparency about how it detected the tweets that were misinformation,” Boukouvalas added. “Deep learning methods cannot achieve this kind of accuracy with transparency.”

  • Twisting elusive quantum particles with a quantum computer

    While the number of qubits and the stability of quantum states still limit current quantum computing devices, there are questions for which these processors are already able to leverage their enormous computing power. In collaboration with the Google Quantum AI team, scientists from the Technical University of Munich (TUM) and the University of Nottingham used a quantum processor to simulate the ground state of a so-called toric code Hamiltonian — an archetypical model system in modern condensed matter physics that was originally proposed in the context of quantum error correction.
    What would it be like if we lived in a flat two-dimensional world? Physicists predict that quantum mechanics would be even stranger in that case resulting in exotic particles — so-called “anyons” — that cannot exist in the three-dimensional world we live in. This unfamiliar world is not just a curiosity but may be key to unlocking quantum materials and technologies of the future.
    In collaboration with the Google Quantum AI team, scientists from the Technical University of Munich and the University of Nottingham used a highly controllable quantum processor to simulate such states of quantum matter. Their results appear in the current issue of the scientific journal Science.
    Emergent quantum particles in two-dimensional systems
    All particles in our universe come in two flavors, bosons or fermions. In the three-dimensional world we live in, this observation stands firm. However, it was theoretically predicted almost 50 years ago that other types of particles, dubbed anyons, could exist when matter is confined to two dimensions.
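The distinction can be stated compactly. Exchanging two identical particles multiplies the system's wavefunction by a phase factor, and only in two dimensions can that phase take values other than the two familiar ones:

```latex
% Exchange statistics in brief: swapping two identical particles
% multiplies the wavefunction by a phase factor e^{i\theta}.
\[
  \psi(x_2, x_1) = e^{i\theta}\, \psi(x_1, x_2),
  \qquad
  \theta =
  \begin{cases}
    0 & \text{bosons} \\
    \pi & \text{fermions} \\
    \text{any value} & \text{anyons (two dimensions only)}
  \end{cases}
\]
```

The name "anyon" comes precisely from this freedom: the exchange phase can be *any* value, and measuring it is what the braiding experiments described below accomplish.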
    While these anyons do not appear as elementary particles in our universe, it turns out that anyonic particles can emerge as collective excitations in so-called topological phases of matter, for which the Nobel prize was awarded in 2016.
    “Twisting pairs of these anyons by moving them around one another in the simulation unveils their exotic properties — physicists call it braiding statistics,” says Dr. Adam Smith from the University of Nottingham.
    A simple picture for these collective excitations is “the wave” in a stadium crowd — it has a well-defined position, but it cannot exist without the thousands of people that make up the crowd. However, realizing and simulating such topologically ordered states experimentally has proven to be extremely challenging.
    Quantum processors as a platform for controlled quantum simulations
    In landmark experiments, the teams from TUM, Google Quantum AI, and the University of Nottingham programmed Google’s quantum processor to simulate these two-dimensional states of quantum matter. “Google’s quantum processor named ‘Sycamore’ can be precisely controlled and is a well-isolated quantum system, which are key requirements for performing quantum computations,” says Kevin Satzinger, a scientist from the Google team.
    The researchers came up with a quantum algorithm to realize a state with topological order, which was confirmed by simulating the creation of anyon excitations and twisting them around one another. Fingerprints from long-range quantum entanglement could be confirmed in their study. As a possible application, such topologically ordered states can be used to improve quantum computers by realizing new ways of error correction. First steps toward this goal have already been achieved in their work.
    “Near-term quantum processors will represent an ideal platform to explore the physics of exotic quantum phases of matter,” says Prof. Frank Pollmann from TUM. “In the near future, quantum processors promise to solve problems that are beyond the reach of current classical supercomputers.”
    Story Source:
    Materials provided by Technical University of Munich (TUM). Note: Content may be edited for style and length.

  • Researchers develop an algorithm to increase the efficiency of quantum computers

    Quantum computing is taking a new leap forward thanks to research done in collaboration between the University of Helsinki, Aalto University, the University of Turku, and IBM Research Europe-Zurich. The team of researchers has proposed a scheme to reduce the number of calculations needed to read out data stored in the state of a quantum processor. This, in turn, will make quantum computers more efficient, faster, and ultimately more sustainable.
    Quantum computers have the potential to solve important problems that are beyond reach even for the most powerful supercomputers, but they require an entirely new way of programming and creating algorithms.
    Universities and major tech companies are spearheading research on how to develop these new algorithms. In a recent collaboration between the University of Helsinki, Aalto University, the University of Turku, and IBM Research Europe-Zurich, a team of researchers has developed a new method to speed up calculations on quantum computers. The results are published in PRX Quantum, a journal of the American Physical Society.
    – Unlike classical computers, which use bits to store ones and zeros, a quantum processor stores information in its qubits in the form of a quantum state, or a wavefunction, says postdoctoral researcher Guillermo García-Pérez from the Department of Physics at the University of Helsinki, first author of the paper.
    Special procedures are thus required to read out data from quantum computers. Quantum algorithms also require a set of inputs, provided for example as real numbers, and a list of operations to be performed on some reference initial state.
    – The quantum state used is, in fact, generally impossible to reconstruct on conventional computers, so useful insights must be extracted by performing specific observations (which quantum physicists refer to as measurements), says García-Pérez.
    The problem with this is the large number of measurements required for many popular applications of quantum computers (like the so-called Variational Quantum Eigensolver, which can be used to overcome important limitations in the study of chemistry, for instance in drug discovery). The number of calculations required is known to grow very quickly with the size of the system one wants to simulate, even if only partial information is needed. This makes the process hard to scale up, slowing down the computation and consuming a lot of computational resources.
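The measurement-count problem has a simple statistical root: estimating even a single expectation value from projective measurements to precision ε takes on the order of 1/ε² shots. The toy simulation below illustrates only this baseline shot-noise scaling (the true expectation value and shot counts are invented); it is not the paper's adaptive scheme, which attacks the cost by reusing and re-weighting measurement data.

```python
import numpy as np

# Shot-noise illustration: estimate an expectation value <Z> from
# +1/-1 measurement outcomes and watch the error shrink like 1/sqrt(N).
# The value 0.3 and the shot counts are arbitrary choices for the demo.

rng = np.random.default_rng(42)
true_z = 0.3                      # assumed true expectation value
p_plus = (1 + true_z) / 2         # probability of measuring +1

def estimate(shots):
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1 - p_plus])
    return outcomes.mean()

errors = {}
for shots in (100, 10_000):
    # Average the estimation error over many repeated "experiments".
    errs = [abs(estimate(shots) - true_z) for _ in range(200)]
    errors[shots] = float(np.mean(errs))

# 100x more shots shrinks the typical error by about 10x (1/sqrt(N)).
```

Since a molecular energy is a sum of many such expectation values, this per-term cost multiplies quickly, which is why schemes that reuse measurement data across terms pay off so dramatically.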
    The method proposed by García-Pérez and co-authors uses a generalized class of quantum measurements that are adapted throughout the calculation in order to extract the information stored in the quantum state efficiently. This drastically reduces the number of iterations, and therefore the time and computational cost, needed to obtain high-precision simulations.
    The method can reuse previous measurement outcomes and adjust its own settings. Subsequent runs are increasingly accurate, and the collected data can be reused again and again to calculate other properties of the system without additional costs.
    – We make the most out of every sample by combining all data produced. At the same time, we fine-tune the measurement to produce highly accurate estimates of the quantity under study, such as the energy of a molecule of interest. Putting these ingredients together, we can decrease the expected runtime by several orders of magnitude, says García-Pérez.
    Story Source:
    Materials provided by University of Helsinki. Original written by Paavo Ihalainen. Note: Content may be edited for style and length.

  • Artificial material protects light states on smallest length scales

    Light not only plays a key role as an information carrier for optical computer chips, but also in particular for the next generation of quantum computers. Its lossless guidance around sharp corners on tiny chips and the precise control of its interaction with other light are the focus of research worldwide. Scientists at Paderborn University have now demonstrated, for the very first time, the spatial confinement of a light wave to a point smaller than the wavelength in a ‘topological photonic crystal’. These are artificial electromagnetic materials that facilitate robust manipulation of light. The state is protected by special properties and is important for use in quantum chips, for example. The findings have now been published in the renowned journal Science Advances.
    Topological crystals function on the basis of specific structures, the properties of which remain largely unaffected by disturbances and deviations. While in normal photonic crystals the effects needed for light manipulation are fragile and can be affected by defects in the material structure, for example, in topological photonic crystals, they are protected from this. The topological structures allow properties such as unidirectional light propagation and increased robustness for guiding photons, small particles of light — features that are crucial for future light-based technologies.
    Photonic crystals influence the propagation of electromagnetic waves with the help of an optical band gap for photons, which blocks the movement of light in certain directions. Scattering usually occurs — some photons are reflected back, while others are reflected away. “With topological light states that span an extended range of photonic crystals, you can prevent this. In normal optical waveguides and fibers, back reflection poses a major problem because it leads to unwanted feedback. Loss during propagation hinders large-scale integration in optical chips, in which photons are responsible for transmitting information. With the help of topological photonic crystals, novel unidirectional waveguides can be achieved that transmit light without any back reflection, even in the presence of arbitrarily large disorder,” explains Professor Thomas Zentgraf, head of the Ultrafast Nanophotonics research group at Paderborn University. The concept, which has its origins in solid-state physics, has already led to numerous applications, including robust light transmission, topological delay lines, topological lasers and quantum interference.
    “It was also recently proven that topological photonic crystals based on a weak topology with a crystal dislocation in the periodic structure also exhibit these special properties and also support what are known as topologically-protected strongly spatially localised light states. When something is topologically protected, any changes in the parameters do not affect the protected properties. Localised light states are extremely useful for non-linear amplification, miniaturisation of photonic components and integration of photonic quantum chips,” adds Zentgraf. In this context, weak topological states are special states for the light that result not only from the topological band structure, but also from the formation of the crystal structure.
    In a joint experiment, researchers from Paderborn University and RWTH Aachen University used a special near-field optical microscope to demonstrate the existence of such strongly localised light states in topological structures. “We showed that the versatility of weak topology can produce a strongly spatially localised optical field in an intentionally induced structural dislocation,” explains Jinlong Lu, a PhD student in Zentgraf’s group and lead author of the paper. “Our study demonstrates a viable strategy for achieving a topologically-protected, localised zero-dimensional state for light,” adds Zentgraf. With their work, the researchers have proven that near-field microscopy is a valuable tool for characterising topological structures with nanoscale resolution at optical frequencies.
    The findings provide a basis for the use of strongly localised optical light states based on weak topology. Phase-change materials with a tunable refractive index could therefore also be used for the nanostructures used in the experiment to produce robust and active topological photonic elements. “We’re now working on concepts to equip the dislocation centres in the crystal structure with special quantum emitters for single photon generation,” says Zentgraf, adding: “These could then be used in future optical quantum computers, for which single photon generation plays an important role.”
    Story Source:
    Materials provided by Universität Paderborn. Note: Content may be edited for style and length.