More stories

  •

    A new super-cooled microwave source boosts the scale-up of quantum computers

    Researchers in Finland have developed a circuit that produces the high-quality microwave signals required to control quantum computers while operating at temperatures near absolute zero. This is a key step towards moving the control system closer to the quantum processor, which may make it possible to greatly increase the number of qubits in the processor.
    One of the factors limiting the size of quantum computers is the mechanism used to control the qubits in quantum processors. This is normally accomplished using a series of microwave pulses, and because quantum processors operate at temperatures near absolute zero, the control pulses are normally brought into the cooled environment via broadband cables from room temperature.
    As the number of qubits grows, so does the number of cables needed. This limits the potential size of a quantum processor, because the refrigerators cooling the qubits would have to become larger to accommodate more and more cables while also working harder to cool them down — ultimately a losing proposition.
    A research consortium led by Aalto University and VTT Technical Research Centre of Finland has now developed a key component of the solution to this conundrum. ‘We have built a precise microwave source that works at the same extremely low temperature as the quantum processors, approximately −273 degrees Celsius,’ says Mikko Möttönen, Professor at Aalto University and VTT Technical Research Centre of Finland, who led the team.
    The new microwave source is an on-chip device that can be integrated with a quantum processor. Less than a millimetre in size, it potentially removes the need for high-frequency control cables connecting different temperatures. With this low-power, low-temperature microwave source, it may be possible to use smaller cryostats while still increasing the number of qubits in a processor.
    ‘Our device produces one hundred times more power than previous versions, which is enough to control qubits and carry out quantum logic operations,’ says Möttönen. ‘It produces a very precise sine wave, oscillating over a billion times per second. As a result, errors in qubits from the microwave source are very infrequent, which is important when implementing precise quantum logic operations.’
    However, a continuous-wave microwave source, such as the one produced by this device, cannot be used as is to control qubits. First, the microwaves must be shaped into pulses. The team is currently developing methods to quickly switch the microwave source on and off.
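    The gating step can be sketched numerically: multiply a continuous sine wave by an on/off envelope and a pulse comes out. The sketch below illustrates that idea only — the frequency, time step and gate window are arbitrary, not the device’s actual parameters.

```python
import math

# Illustrative only: a continuous carrier gated by an on/off envelope.
# Real qubit-control pulses run at gigahertz frequencies with shaped
# (not square) envelopes; the numbers here are arbitrary.
f = 5.0            # carrier frequency, arbitrary units
dt = 0.001         # time step
signal = []
for n in range(2000):
    t = n * dt
    gate = 1.0 if 0.5 <= t <= 1.0 else 0.0   # switch "on" for one window
    signal.append(gate * math.sin(2 * math.pi * f * t))

# Outside the gate window the output is zero; inside, the carrier passes through.
print(max(abs(s) for s in signal[:400]))
```

    A practical switching scheme would also shape the pulse edges, since abrupt on/off transitions spread power into unwanted frequencies.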
    Even without a switching solution to create pulses, an efficient, low-noise, low-temperature microwave source could be useful in a range of quantum technologies, such as quantum sensors.
    ‘In addition to quantum computers and sensors, the microwave source can act as a clock for other electronic devices. It can keep different devices in the same rhythm, allowing them to induce operations for several different qubits at the desired instant of time,’ explains Möttönen.
    The theoretical analysis and the initial design were carried out by Juha Hassel and others at VTT. Hassel, who started this work at VTT, is currently the head of engineering and development at IQM, a Finnish quantum-computing hardware company. The device was then built at VTT and operated by postdoctoral researcher Chengyu Yan and his colleagues at Aalto University using the OtaNano research infrastructure. Yan is currently an associate professor at Huazhong University of Science and Technology, China. The teams involved in this research are part of the Academy of Finland Centre of Excellence in Quantum Technology (QTF) and the Finnish Quantum Institute (InstituteQ).
    Story Source:
    Materials provided by Aalto University. Note: Content may be edited for style and length.

  •

    A tool to speed development of new solar cells

    In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.
    Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but to provide information about which changes will provide the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.
    The new system, called a differentiable solar cell simulator, is described in a paper published in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.
    Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.
    In short, he says, “we didn’t discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it’s very complex to run those simulations.”
    While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
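    The advantage of a differentiable simulator can be illustrated with a toy example. The one-parameter “efficiency” function below is a made-up stand-in for the real device physics; the point is that once the simulator reports a derivative, the optimizer can simply follow it uphill instead of searching at random:

```python
# Toy gradient-guided design search (illustrative; not the paper's physics).
# Hypothetical efficiency that peaks at a layer thickness of 2.0 (arbitrary units).

def efficiency(t):
    return 0.25 - 0.01 * (t - 2.0) ** 2

def d_efficiency(t):
    # The sensitivity a differentiable simulator would report directly
    return -0.02 * (t - 2.0)

t = 0.5                       # initial design guess
for _ in range(200):          # gradient ascent: step in the uphill direction
    t += 25.0 * d_efficiency(t)

print(round(t, 3))            # converges to the optimal thickness, 2.0
```

    With many parameters, the same loop follows the full gradient vector — which is what lets such a tool traverse a large design space in far fewer simulator calls than trial-and-error.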

  •

    Stretchy, washable battery brings wearable devices closer to reality

    UBC researchers have created what could be the first battery that is both flexible and washable. It works even when twisted or stretched to twice its normal length, or after being tossed in the laundry.
    “Wearable electronics are a big market and stretchable batteries are essential to their development,” says Dr. Ngoc Tan Nguyen, a postdoctoral fellow at UBC’s faculty of applied science. “However, up until now, stretchable batteries have not been washable. This is an essential addition if they are to withstand the demands of everyday use.”
    The battery developed by Dr. Nguyen and his colleagues offers a number of engineering advances. In normal batteries, the internal layers are hard materials encased in a rigid exterior. The UBC team made the key compounds — in this case, zinc and manganese dioxide — stretchable by grinding them into small pieces and then embedding them in a rubbery plastic, or polymer. The battery comprises several ultra-thin layers of these polymers wrapped inside a casing of the same polymer. This construction creates an airtight, waterproof seal that ensures the integrity of the battery through repeated use.
    It was team member Bahar Iranpour, a PhD student, who suggested throwing the battery in the wash to test its seal. So far, the battery has withstood 39 wash cycles and the team expects to further improve its durability as they continue to develop the technology.
    “We put our prototypes through an actual laundry cycle in both home and commercial-grade washing machines. They came out intact and functional and that’s how we know this battery is truly resilient,” says Iranpour.
    The choice of zinc and manganese dioxide chemistry also confers another important advantage. “We went with zinc-manganese because for devices worn next to the skin, it’s a safer chemistry than lithium-ion batteries, which can produce toxic compounds when they break,” says Nguyen.
    An affordable option
    Ongoing work is underway to increase the battery’s power output and cycle life, but already the innovation has attracted commercial interest. The researchers believe that when the new battery is ready for consumers, it could cost the same as an ordinary rechargeable battery.
    “The materials used are incredibly low-cost, so if this is made in large numbers, it will be cheap,” says electrical and computer engineering professor Dr. John Madden, director of UBC’s Advanced Materials and Process Engineering Lab who supervised the work. In addition to watches and patches for measuring vital signs, the battery might also be integrated with clothing that can actively change colour or temperature.
    “Wearable devices need power. By creating a cell that is soft, stretchable and washable, we are making wearable power comfortable and convenient.”
    Story Source:
    Materials provided by University of British Columbia.

  •

    Analog computers now just one step from digital

    The future of computing may be analog.
    The digital design of our everyday computers is good for reading email and gaming, but today’s problem-solving computers work with vast amounts of data. The need to both store and process this information creates performance bottlenecks due to the way computers are built.
    The next computer revolution might be a new kind of hardware, called processing-in-memory (PIM), an emerging computing paradigm that merges the memory and processing unit and does its computations using the physical properties of the machine — no 1s or 0s needed to do the processing digitally.
    At Washington University in St. Louis, researchers from the lab of Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering, have designed a new PIM circuit, which brings the flexibility of neural networks to bear on PIM computing. The circuit has the potential to increase PIM computing’s performance by orders of magnitude beyond its current theoretical capabilities.
    Their research was published online Oct. 27 in the journal IEEE Transactions on Computers. The work was a collaboration with Li Jiang at Shanghai Jiao Tong University in China.
    Traditionally designed computers are built using a von Neumann architecture. This design separates the memory — where data is stored — from the processor — where the actual computing is performed.
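    The “computing with physics” idea can be sketched in a few lines. In many PIM designs — resistive crossbars, for example — weights are stored as conductances: Ohm’s law performs each multiplication and summing the currents on a shared wire performs the addition, so the dot product happens where the data lives. The numbers below are illustrative, not taken from the Washington University circuit.

```python
# Illustrative analog multiply-accumulate, the core operation of many
# processing-in-memory designs (values are arbitrary, not from the paper).
voltages = [0.2, 0.5, 0.1]         # inputs encoded as voltages (V)
conductances = [1e-3, 2e-3, 4e-3]  # weights stored as conductances (S)

# Ohm's law (I = G * V) does each multiply; Kirchhoff's current law
# sums the currents on the shared column wire -- no memory fetch needed.
column_current = sum(g * v for g, v in zip(conductances, voltages))
print(column_current)              # the dot product, read out as a current
```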

  •

    Engineers teach AI to navigate ocean with minimal energy

    Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
    “When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research that was published by Nature Communications on December 8.
    The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a small palm-sized robot that runs the algorithm on a tiny computer chip that could power seaborne drones both on Earth and other planets. The goal would be to create an autonomous system to monitor the condition of the planet’s oceans, for example using the algorithm in combination with prosthetics they previously developed to help jellyfish swim faster and on command. Fully mechanical robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.
    In either scenario, drones would need to be able to make decisions on their own about where to go and the most efficient way to get there. To do so, they will likely only have data that they can gather themselves — information about the water currents they are currently experiencing.
    To tackle this challenge, the researchers turned to reinforcement learning (RL) networks. Unlike conventional neural networks, reinforcement learning networks do not train on a static data set; instead, they learn from experience as they collect it. This scheme allows them to run on much smaller computers — for the purposes of this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.
    Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location with minimal power used. To aid its navigation, the simulated swimmer only had access to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly only have access to information that could be gathered from an onboard gyroscope and accelerometer, which are both relatively small and low-cost sensors for a robotic platform.
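    As a loose illustration of the approach — a tabular toy, not the deep RL and continuous flow fields of the actual study — a simulated swimmer on a line of cells can learn when to coast on a favorable current and when to spend energy swimming:

```python
import random

# Toy Q-learning sketch of current-aware navigation (illustrative only).
CURRENT = [0, 1, 1, 0, 1, 0]   # hypothetical push: cells 1, 2 and 4 carry the swimmer forward
GOAL = 5
ACTIONS = ["swim", "coast"]     # swim: always move +1, cost 1.0; coast: ride the current, cost 0.1

def step(s, a):
    if a == "swim":
        s2, cost = min(s + 1, GOAL), 1.0
    else:
        s2, cost = min(s + CURRENT[s], GOAL), 0.1
    reward = -cost + (10.0 if s2 == GOAL else 0.0)
    return s2, reward

random.seed(0)
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
for _ in range(2000):                       # learn from repeated trials
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit, occasionally explore
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += 0.5 * (r + best_next - Q[(s, a)])   # tabular Q-update, gamma = 1
        s = s2

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

    After training, the learned policy coasts through the cells where the hypothetical current pushes the swimmer forward and swims only where it must, mirroring the energy-saving behaviour described above.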
    This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.
    “We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.
    The technology is still in its infancy: ideally, the AI would be tested on every type of flow disturbance it could encounter on a mission in the ocean — for example, swirling vortices versus streaming tidal currents — to assess its effectiveness in the wild. Testing every possible flow is impractical, however, so the researchers aim to overcome this limitation by incorporating their knowledge of ocean-flow physics into the reinforcement learning strategy. The current research demonstrates the potential effectiveness of RL networks in addressing this challenge — particularly because they can operate on such small devices. To try this in the field, the team is placing the Teensy on a custom-built drone dubbed the “CARL-Bot” (Caltech Autonomous Reinforcement Learning Robot). The CARL-Bot will be dropped into a newly constructed two-story-tall water tank on Caltech’s campus and taught to navigate the ocean’s currents.
    “Not only will the robot be learning, but we’ll be learning about ocean currents and how to navigate through them,” says Peter Gunnarson, graduate student at Caltech and lead author of the Nature Communications paper.
    Story Source:
    Materials provided by California Institute of Technology. Original written by Robert Perkins.

  •

    Wildfire smoke may ramp up toxic ozone production in cities

    Wildfire smoke and urban air pollution bring out the worst in each other.

    As wildfires rage, they transform their burned fuel into a complex chemical cocktail of smoke. Many of these airborne compounds, including ozone, cause air quality to plummet as wind carries the smoldering haze over cities. But exactly how — and to what extent — wildfire emissions contribute to ozone levels downwind of the fires has been a matter of debate for years, says Joel Thornton, an atmospheric scientist at the University of Washington in Seattle.

    A new study has now revealed the elusive chemistry behind ozone production in wildfire plumes. The findings suggest that mixing wildfire smoke with nitrogen oxides — toxic gases found in car exhaust — could pump up ozone levels in urban areas, researchers report December 8 in Science Advances.

    Atmospheric ozone is a major component of smog that can trigger respiratory problems in humans and wildlife (SN: 1/4/21). Many ingredients for making ozone — such as volatile organic compounds and nitrogen oxides — can be found in wildfire smoke, says Lu Xu, an atmospheric chemist currently at the National Oceanic and Atmospheric Administration Chemical Sciences Laboratory in Boulder, Colo. But a list of ingredients isn’t enough to replicate a wildfire’s ozone recipe. So Xu and colleagues took to the sky to observe the chemistry in action.

    Through a joint project with NASA and NOAA, the researchers worked with the Fire Influence on Regional to Global Environments and Air Quality flight campaign to transform a jetliner into a flying laboratory. In July and August 2019, the flight team collected air samples from smoldering landscapes across the western United States. As the plane passed headlong through the plumes, instruments onboard recorded the kinds and amounts of each molecule detected in the haze. By weaving in and out of the smoke as it drifted downwind from the flames, the team also analyzed how the plume’s chemical composition changed over time.

    Using these measurements along with the wind patterns and fuel from each wildfire sampled, the researchers created a straightforward equation to calculate ozone production from wildfire emissions. “We took a complex question and gave it a simple answer,” says Xu, who did the work while at Caltech.

    As expected, the researchers found that wildfire emissions contain a dizzying array of organic compounds and nitrogen oxide species among other molecules that contribute to ozone formation. Yet their analysis showed that the concentration of nitrogen oxides decreases in the hours after the plume is swept downwind. Without this key ingredient, ozone production slows substantially.  

    Air pollution from cities and other urban areas is chock-full of noxious gases. So when wildfire smoke wafts over cityscapes, a boost of nitrogen oxides could jump-start ozone production again, Xu says.

    In a typical fire season, mixes like these could increase ozone levels by as much as 3 parts per billion in the western United States, the researchers estimate. This concentration is far below the U.S. Environmental Protection Agency’s health safety standard of 70 parts per billion, but the incremental increase could still pose a health risk to people who are regularly exposed to smoke, Xu says.

    With climate change increasing the frequency and intensity of wildfires, this new ozone production mechanism has important implications for urban air quality, says Qi Zhang, an atmospheric chemist at the University of California, Davis who was not involved in the study (SN: 9/18/20). She says the work provides an “important missing link” between wildfire emissions and ozone chemistry.

    The findings may also pose a challenge for environmental policy makers, says Thornton, who was not involved in the research. Though state and local authorities set strict regulations to limit atmospheric ozone, wildfire smoke may undermine those strategies, he says. This could make it more difficult for cities, especially in the western United States, to meet EPA ozone standards despite air quality regulations.

  •

    These tiny liquid robots never run out of juice as long as they have food

    When you think of a robot, images of R2-D2 or C-3PO might come to mind. But robots can serve up more than just entertainment on the big screen. In a lab, for example, robotic systems can improve safety and efficiency by performing repetitive tasks and handling harsh chemicals.
    But before a robot can get to work, it needs energy — typically from electricity or a battery. Yet even the most sophisticated robot can run out of juice. For many years, scientists have wanted to make a robot that can work autonomously and continuously, without electrical input.
    Now, as reported last week in the journal Nature Chemistry, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of Massachusetts Amherst have demonstrated just that — through “water-walking” liquid robots that, like tiny submarines, dive below water to retrieve precious chemicals, and then surface to deliver chemicals “ashore” again and again.
    The technology is the first self-powered, aqueous robot that runs continuously without electricity. It has potential as an automated chemical synthesis or drug delivery system for pharmaceuticals.
    “We have broken a barrier in designing a liquid robotic system that can operate autonomously by using chemistry to control an object’s buoyancy,” said senior author Tom Russell, a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts Amherst who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab’s Materials Sciences Division.
    Russell said that the technology significantly advances a family of robotic devices called “liquibots.” In previous studies, other researchers demonstrated liquibots that autonomously perform a task, but just once; and some liquibots can perform a task continuously, but need electricity to keep on running. In contrast, “we don’t have to provide electrical energy because our liquibots get their power or ‘food’ chemically from the surrounding media,” Russell explained.

  •

    AI-powered computer model predicts disease progression during aging

    Using artificial intelligence, a team of University at Buffalo researchers has developed a novel system that models the progression of chronic diseases as patients age.
    Published in October in the Journal of Pharmacokinetics and Pharmacodynamics, the model assesses metabolic and cardiovascular biomarkers — measurable biological processes such as cholesterol levels, body mass index, glucose and blood pressure — to calculate health status and disease risks across a patient’s lifespan.
    The findings are critical due to the increased risk of developing metabolic and cardiovascular diseases with aging, a process that has adverse effects on cellular, psychological and behavioral processes.
    “There is an unmet need for scalable approaches that can provide guidance for pharmaceutical care across the lifespan in the presence of aging and chronic co-morbidities,” says lead author Murali Ramanathan, PhD, professor of pharmaceutical sciences in the UB School of Pharmacy and Pharmaceutical Sciences. “This knowledge gap may be potentially bridged by innovative disease progression modeling.”
    The model could facilitate the assessment of long-term chronic drug therapies, and help clinicians monitor treatment responses for conditions such as diabetes, high cholesterol and high blood pressure, which become more frequent with age, says Ramanathan.
    Additional investigators include first author and UB School of Pharmacy and Pharmaceutical Sciences alumnus Mason McComb, PhD; Rachael Hageman Blair, PhD, associate professor of biostatistics in the UB School of Public Health and Health Professions; and Martin Lysy, PhD, associate professor of statistics and actuarial science at the University of Waterloo.
    The research examined data from three case studies within the third National Health and Nutrition Examination Survey (NHANES) that assessed the metabolic and cardiovascular biomarkers of nearly 40,000 people in the United States.
    Biomarkers, which also include measurements such as temperature, body weight and height, are used to diagnose, treat and monitor overall health and numerous diseases.
    The researchers examined seven metabolic biomarkers: body mass index, waist-to-hip ratio, total cholesterol, high-density lipoprotein cholesterol, triglycerides, glucose and glycohemoglobin. The cardiovascular biomarkers examined include systolic and diastolic blood pressure, pulse rate and homocysteine.
    By analyzing changes in metabolic and cardiovascular biomarkers, the model “learns” how aging affects these measurements. With machine learning, the system uses a memory of previous biomarker levels to predict future measurements, which ultimately reveal how metabolic and cardiovascular diseases progress over time.
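    As a loose illustration of forecasting from a memory of past measurements — far simpler than the UB team’s actual model — one can extrapolate a single biomarker’s age trend with ordinary least squares. The readings below are hypothetical:

```python
# Illustrative trend extrapolation for one biomarker (hypothetical data;
# the published model handles many interacting biomarkers with ML).

def fit_trend(ages, values):
    # Ordinary least squares for value ~ a + b * age
    n = len(ages)
    mx, my = sum(ages) / n, sum(values) / n
    b = sum((x - mx) * (y - my) for x, y in zip(ages, values)) / \
        sum((x - mx) ** 2 for x in ages)
    return my - b * mx, b

ages = [40, 45, 50, 55]              # exam ages for one hypothetical patient
sbp = [118.0, 121.0, 124.0, 127.0]   # systolic blood pressure readings (mmHg)

a, b = fit_trend(ages, sbp)
print(round(a + b * 60, 1))          # extrapolated reading at age 60: 130.0
```

    A full disease-progression model additionally couples multiple biomarkers and nonlinear aging effects — which is where the machine learning in the study comes in.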
    Story Source:
    Materials provided by University at Buffalo. Original written by Marcene Robinson.