More stories

  • A tool to speed development of new solar cells

    In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.
    Now, researchers at MIT and Google Brain have developed a system that makes it possible not only to evaluate one proposed design at a time, but also to indicate which changes will provide the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.
    The new system, called a differentiable solar cell simulator, is described in a paper published in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.
    Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.
    In short, he says, “we didn’t discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it’s very complex to run those simulations.”
    While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
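    To make the idea concrete, here is a minimal sketch of gradient-based design search in Python. The one-parameter "efficiency model" is invented for illustration and stands in for the real device-physics simulator; the point is that an analytic derivative tells the optimizer which way to change each parameter, rather than probing designs at random.

        # Toy stand-in for a differentiable solar-cell simulator (invented
        # numbers, not the MIT/Google Brain code). The model maps one layer
        # thickness to an efficiency and also exposes a hand-derived
        # gradient, so an optimizer can follow the slope uphill.

        def efficiency(thickness_nm):
            """Hypothetical efficiency curve with a single optimum."""
            return 0.20 - 4e-6 * (thickness_nm - 250.0) ** 2

        def d_efficiency(thickness_nm):
            """Analytic derivative of the toy model w.r.t. thickness."""
            return -8e-6 * (thickness_nm - 250.0)

        # Gradient ascent: follow the simulator's sensitivity instead of
        # randomly perturbing the design one parameter at a time.
        t = 100.0  # arbitrary starting thickness in nanometres
        for _ in range(200):
            t += 1e4 * d_efficiency(t)

        print(f"optimized thickness = {t:.1f} nm, efficiency = {efficiency(t):.3f}")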

  • Stretchy, washable battery brings wearable devices closer to reality

    UBC researchers have created what could be the first battery that is both flexible and washable. It works even when twisted or stretched to twice its normal length, or after being tossed in the laundry.
    “Wearable electronics are a big market and stretchable batteries are essential to their development,” says Dr. Ngoc Tan Nguyen, a postdoctoral fellow at UBC’s faculty of applied science. “However, up until now, stretchable batteries have not been washable. This is an essential addition if they are to withstand the demands of everyday use.”
    The battery developed by Dr. Nguyen and his colleagues offers a number of engineering advances. In normal batteries, the internal layers are hard materials encased in a rigid exterior. The UBC team made the key compounds — in this case, zinc and manganese dioxide — stretchable by grinding them into small pieces and then embedding them in a rubbery plastic, or polymer. The battery comprises several ultra-thin layers of these polymers wrapped inside a casing of the same polymer. This construction creates an airtight, waterproof seal that ensures the integrity of the battery through repeated use.
    It was team member Bahar Iranpour, a PhD student, who suggested throwing the battery in the wash to test its seal. So far, the battery has withstood 39 wash cycles and the team expects to further improve its durability as they continue to develop the technology.
    “We put our prototypes through an actual laundry cycle in both home and commercial-grade washing machines. They came out intact and functional and that’s how we know this battery is truly resilient,” says Iranpour.
    The choice of zinc and manganese dioxide chemistry also confers another important advantage. “We went with zinc-manganese because for devices worn next to the skin, it’s a safer chemistry than lithium-ion batteries, which can produce toxic compounds when they break,” says Nguyen.
    An affordable option
    Ongoing work is underway to increase the battery’s power output and cycle life, but already the innovation has attracted commercial interest. The researchers believe that when the new battery is ready for consumers, it could cost the same as an ordinary rechargeable battery.
    “The materials used are incredibly low-cost, so if this is made in large numbers, it will be cheap,” says electrical and computer engineering professor Dr. John Madden, director of UBC’s Advanced Materials and Process Engineering Lab, who supervised the work. In addition to watches and patches for measuring vital signs, the battery might also be integrated with clothing that can actively change colour or temperature.
    “Wearable devices need power. By creating a cell that is soft, stretchable and washable, we are making wearable power comfortable and convenient.”
    Story Source:
    Materials provided by University of British Columbia.

  • Analog computers now just one step from digital

    The future of computing may be analog.
    The digital design of our everyday computers is good for reading email and gaming, but today’s problem-solving computers work with vast amounts of data, and the need to both store and process all of this information creates performance bottlenecks due to the way computers are built.
    The next computer revolution might be a new kind of hardware, called processing-in-memory (PIM), an emerging computing paradigm that merges the memory and processing unit and does its computations using the physical properties of the machine — no 1s or 0s needed to do the processing digitally.
    At Washington University in St. Louis, researchers from the lab of Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical & Systems Engineering at the McKelvey School of Engineering, have designed a new PIM circuit, which brings the flexibility of neural networks to bear on PIM computing. The circuit has the potential to increase PIM computing’s performance by orders of magnitude beyond its current theoretical capabilities.
    Their research was published online Oct. 27 in the journal IEEE Transactions on Computers. The work was a collaboration with Li Jiang at Shanghai Jiao Tong University in China.
    Traditionally designed computers are built using a von Neumann architecture. Part of this design separates the memory — where data is stored — from the processor — where the actual computing is performed.
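    A toy calculation hints at what computing with physical properties means in practice. The sketch below, with hypothetical conductance and voltage values rather than the Zhang lab's actual circuit, models a resistive crossbar: stored conductances and applied voltages yield a matrix-vector product directly as summed currents, so the computation happens in the memory itself.

        import numpy as np

        # Resistive-crossbar model: weights live in memory as conductances G,
        # inputs arrive as row voltages v, and Ohm's and Kirchhoff's laws sum
        # the currents on each output column: i_j = sum_k v_k * G[k, j].
        # All values are invented, arbitrary units.
        G = np.array([[1.0, 0.5],
                      [0.2, 0.8],
                      [0.6, 0.1]])     # conductances (the stored "weights")
        v = np.array([0.3, 0.7, 0.5])  # input voltages, one per row

        i = G.T @ v  # the analog array computes the product; no digital ALU
        print(i)     # [0.74 0.76]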

  • Engineers teach AI to navigate ocean with minimal energy

    Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
    “When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research that was published by Nature Communications on December 8.
    The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a palm-sized robot that runs the algorithm on a tiny computer chip, which could power seaborne drones both on Earth and on other planets. The goal would be to create an autonomous system to monitor the condition of the planet’s oceans, for example using the algorithm in combination with prosthetics the team previously developed to help jellyfish swim faster and on command. Fully mechanical robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.
    In either scenario, drones would need to be able to make decisions on their own about where to go and the most efficient way to get there. To do so, they will likely only have data that they can gather themselves — information about the water currents they are currently experiencing.
    To tackle this challenge, the researchers turned to reinforcement learning (RL) networks. Unlike conventional neural networks, reinforcement learning networks do not train on a static data set; they train as fast as they can collect experience. This scheme allows them to run on much smaller computers: for the purposes of this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.
    Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location with minimal power used. To aid its navigation, the simulated swimmer only had access to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly only have access to information that could be gathered from an onboard gyroscope and accelerometer, which are both relatively small and low-cost sensors for a robotic platform.
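    A heavily simplified sketch of the underlying idea follows. It is not the paper's method: the authors use reinforcement-learning networks and a swimmer that senses only its local flow, while this toy tabular Q-learning agent sees its grid position, and the "current" is a single invented drift column. Still, it shows an agent learning to ride a flow toward a goal instead of fighting it.

        import random

        SIZE, GOAL = 5, (4, 4)
        ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

        def step(pos, a):
            """Move, clip to the grid, then apply the invented drift column."""
            x = min(max(pos[0] + a[0], 0), SIZE - 1)
            y = min(max(pos[1] + a[1], 0), SIZE - 1)
            if x == 2:                      # cells at x=2 carry the agent up
                y = min(y + 1, SIZE - 1)
            done = (x, y) == GOAL
            return (x, y), (10.0 if done else -1.0), done

        Q = {((x, y), a): 0.0
             for x in range(SIZE) for y in range(SIZE) for a in ACTIONS}
        random.seed(0)

        for episode in range(2000):
            pos, done = (0, 0), False
            while not done:
                a = (random.choice(ACTIONS) if random.random() < 0.1
                     else max(ACTIONS, key=lambda act: Q[(pos, act)]))
                nxt, r, done = step(pos, a)
                best_next = max(Q[(nxt, b)] for b in ACTIONS)
                # Temporal-difference update toward reward plus discounted value.
                Q[(pos, a)] += 0.1 * (r + 0.9 * best_next - Q[(pos, a)])
                pos = nxt

        # Greedy rollout: the learned policy detours through the drift column.
        pos, path = (0, 0), [(0, 0)]
        while pos != GOAL and len(path) < 20:
            a = max(ACTIONS, key=lambda act: Q[(pos, act)])
            pos, _, _ = step(pos, a)
            path.append(pos)
        print(path)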
    This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.
    “We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.
    The technology is still in its infancy: currently, the team would like to test the AI on each different type of flow disturbance it would possibly encounter on a mission in the ocean — for example, swirling vortices versus streaming tidal currents — to assess its effectiveness in the wild. However, by incorporating their knowledge of ocean-flow physics within the reinforcement learning strategy, the researchers aim to overcome this limitation. The current research proves the potential effectiveness of RL networks in addressing this challenge — particularly because they can operate on such small devices. To try this in the field, the team is placing the Teensy on a custom-built drone dubbed the “CARL-Bot” (Caltech Autonomous Reinforcement Learning Robot). The CARL-Bot will be dropped into a newly constructed two-story-tall water tank on Caltech’s campus and taught to navigate the ocean’s currents.
    “Not only will the robot be learning, but we’ll be learning about ocean currents and how to navigate through them,” says Peter Gunnarson, graduate student at Caltech and lead author of the Nature Communications paper.
    Story Source:
    Materials provided by California Institute of Technology. Original written by Robert Perkins.

  • These tiny liquid robots never run out of juice as long as they have food

    When you think of a robot, images of R2-D2 or C-3PO might come to mind. But robots can serve up more than just entertainment on the big screen. In a lab, for example, robotic systems can improve safety and efficiency by performing repetitive tasks and handling harsh chemicals.
    But before a robot can get to work, it needs energy — typically from electricity or a battery. Yet even the most sophisticated robot can run out of juice. For many years, scientists have wanted to make a robot that can work autonomously and continuously, without electrical input.
    Now, as reported last week in the journal Nature Chemistry, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of Massachusetts Amherst have demonstrated just that — through “water-walking” liquid robots that, like tiny submarines, dive below water to retrieve precious chemicals, and then surface to deliver chemicals “ashore” again and again.
    The technology is the first self-powered, aqueous robot that runs continuously without electricity. It has potential as an automated chemical synthesis or drug delivery system for pharmaceuticals.
    “We have broken a barrier in designing a liquid robotic system that can operate autonomously by using chemistry to control an object’s buoyancy,” said senior author Tom Russell, a visiting faculty scientist and professor of polymer science and engineering from the University of Massachusetts Amherst who leads the Adaptive Interfacial Assemblies Towards Structuring Liquids program in Berkeley Lab’s Materials Sciences Division.
    Russell said that the technology significantly advances a family of robotic devices called “liquibots.” In previous studies, other researchers demonstrated liquibots that autonomously perform a task, but just once; and some liquibots can perform a task continuously, but need electricity to keep on running. In contrast, “we don’t have to provide electrical energy because our liquibots get their power or ‘food’ chemically from the surrounding media,” Russell explained.
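    The article does not spell out the chemistry, so the sketch below is only a toy buoyancy-cycle calculation under an assumed mechanism: reaction-generated gas attached to the robot lowers its average density below water's, and shedding the gas at the surface lets it sink again. All numbers are invented.

        RHO_WATER = 1.00   # g/cm^3
        RHO_BOT = 1.10     # g/cm^3: denser than water, so it sinks gas-free
        V_BOT = 0.50       # cm^3 (hypothetical robot volume)

        def floats(gas_volume_cm3):
            """True if average density of robot plus attached gas < water."""
            mass = RHO_BOT * V_BOT               # treat gas mass as negligible
            return mass / (V_BOT + gas_volume_cm3) < RHO_WATER

        for gas in (0.00, 0.03, 0.06, 0.10):
            state = "rises" if floats(gas) else "sinks"
            print(f"attached gas {gas:.2f} cm^3 -> {state}")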

  • AI-powered computer model predicts disease progression during aging

    Using artificial intelligence, a team of University at Buffalo researchers has developed a novel system that models the progression of chronic diseases as patients age.
    Published in October in the Journal of Pharmacokinetics and Pharmacodynamics, the model assesses metabolic and cardiovascular biomarkers — measurable biological processes such as cholesterol levels, body mass index, glucose and blood pressure — to calculate health status and disease risks across a patient’s lifespan.
    The findings are critical because the risk of developing metabolic and cardiovascular diseases increases with aging, a process that adversely affects cellular, psychological and behavioral function.
    “There is an unmet need for scalable approaches that can provide guidance for pharmaceutical care across the lifespan in the presence of aging and chronic co-morbidities,” says lead author Murali Ramanathan, PhD, professor of pharmaceutical sciences in the UB School of Pharmacy and Pharmaceutical Sciences. “This knowledge gap may be potentially bridged by innovative disease progression modeling.”
    The model could facilitate the assessment of long-term chronic drug therapies, and help clinicians monitor treatment responses for conditions such as diabetes, high cholesterol and high blood pressure, which become more frequent with age, says Ramanathan.
    Additional investigators include first author and UB School of Pharmacy and Pharmaceutical Sciences alumnus Mason McComb, PhD; Rachael Hageman Blair, PhD, associate professor of biostatistics in the UB School of Public Health and Health Professions; and Martin Lysy, PhD, associate professor of statistics and actuarial science at the University of Waterloo.
    The research examined data from three case studies within the third National Health and Nutrition Examination Survey (NHANES) that assessed the metabolic and cardiovascular biomarkers of nearly 40,000 people in the United States.
    Biomarkers, which also include measurements such as temperature, body weight and height, are used to diagnose, treat and monitor overall health and numerous diseases.
    The researchers examined seven metabolic biomarkers: body mass index, waist-to-hip ratio, total cholesterol, high-density lipoprotein cholesterol, triglycerides, glucose and glycohemoglobin. The cardiovascular biomarkers examined include systolic and diastolic blood pressure, pulse rate and homocysteine.
    By analyzing changes in metabolic and cardiovascular biomarkers, the model “learns” how aging affects these measurements. With machine learning, the system uses a memory of previous biomarker levels to predict future measurements, which ultimately reveal how metabolic and cardiovascular diseases progress over time.
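    As a minimal illustration of that "memory of previous levels" idea, and not the UB team's actual model, the sketch below fits a one-step autoregressive predictor to a synthetic glucose trajectory and forecasts the next measurement from the last one. The data and coefficients are invented.

        import numpy as np

        # Synthetic biomarker series: glucose drifting upward with age,
        # plus noise (hypothetical numbers, for illustration only).
        rng = np.random.default_rng(0)
        ages = np.arange(40, 80)
        glucose = 90 + 0.4 * (ages - 40) + rng.normal(0, 2, ages.size)

        # Fit x[t] = a * x[t-1] + b by least squares: the model "remembers"
        # the previous level to predict the next one.
        X = np.column_stack([glucose[:-1], np.ones(glucose.size - 1)])
        a, b = np.linalg.lstsq(X, glucose[1:], rcond=None)[0]

        forecast = a * glucose[-1] + b
        print(f"a={a:.3f}, b={b:.2f}, forecast at age 80 = {forecast:.1f} mg/dL")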
    Story Source:
    Materials provided by University at Buffalo. Original written by Marcene Robinson.

  • Liquid crystals for fast switching devices

    Liquid crystals are not solid, but some of their physical properties are directional — as in a crystal. This is because their molecules can arrange themselves into certain patterns. The best-known applications include flat screens and digital displays, which are based on pixels of liquid crystals whose optical properties can be switched by electric fields.
    Some liquid crystals form the so-called cholesteric phases: the molecules self-assemble into helical structures, which are characterised by pitch and rotate either to the right or to the left. “The pitch of the cholesteric spirals determines how quickly they react to an applied electric field,” explains Dr. Alevtina Smekhova, physicist at HZB and first author of the study, which has now been published in Soft Matter.
    Simple molecular chain
    In this work, she and partners from the Academies of Sciences in Prague, Moscow and Chernogolovka investigated a liquid crystalline cholesteric compound called EZL10/10, developed in Prague. “Such cholesteric phases are usually formed by molecules with several chiral centres, but here the molecule has only one chiral centre,” explains Dr. Smekhova. It is a simple molecular chain with one lactate unit.
    Ultrashort pitch
    At BESSY II, the team has now examined this compound with soft X-ray light and determined the pitch and spatial ordering of the spirals. The measured pitch is the shortest reported to date: only 104 nanometres, half the shortest previously known pitch of spiral structures in liquid crystals. Further analysis showed that in this material the cholesteric spirals form domains with characteristic lengths of about five pitches.
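    One quick, hedged number puts the pitch in context: for a cholesteric, the central wavelength of selective reflection is roughly the average refractive index times the pitch. Assuming a typical index of about 1.5 (the article does not give one), a 104-nanometre pitch corresponds to a resonance deep in the ultraviolet.

        # Selective-reflection estimate for a cholesteric: lambda = n_avg * pitch.
        # The index 1.5 is an assumed, typical value, not from the study.
        pitch_nm = 104.0
        n_avg = 1.5
        print(f"reflection wavelength = {n_avg * pitch_nm:.0f} nm")  # 156 nm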
    Outlook
    “This very short pitch makes the material unique and promising for optoelectronic devices with very fast switching times,” Dr. Smekhova points out. In addition, the EZL10/10 compound is thermally and chemically stable and can easily be further varied to obtain structures with customised pitch lengths.
    Story Source:
    Materials provided by Helmholtz-Zentrum Berlin für Materialien und Energie.

  • How statistics can aid in the fight against misinformation

    An American University math professor and his team created a statistical model that can be used to detect misinformation in social media posts. The model also avoids the black-box problem that arises in machine learning.
    With the use of algorithms and computer models, machine learning is increasingly playing a role in helping to stop the spread of misinformation, but a main challenge for scientists is the “black box” of unknowability: researchers don’t understand how the machine arrives at the same decisions as its human trainers.
    Using a Twitter dataset with misinformation tweets about COVID-19, Zois Boukouvalas, assistant professor in AU’s Department of Mathematics and Statistics, College of Arts and Sciences, shows how statistical models can detect misinformation in social media during events like a pandemic or a natural disaster. In newly published research, Boukouvalas and his colleagues, including AU student Caitlin Moroney and Computer Science Prof. Nathalie Japkowicz, also show how the model’s decisions align with those made by humans.
    “We would like to know what a machine is thinking when it makes decisions, and how and why it agrees with the humans that trained it,” Boukouvalas said. “We don’t want to block someone’s social media account because the model makes a biased decision.”
    Boukouvalas’ method is a type of machine learning using statistics. It’s not as popular a field of study as deep learning, the complex, multi-layered type of machine learning and artificial intelligence. Statistical models are effective and provide another, somewhat untapped, way to fight misinformation, Boukouvalas said.
    For a test set of 112 real and misinformation tweets, the model achieved high prediction performance, classifying the tweets correctly with an accuracy of nearly 90 percent. (Using such a compact dataset was an efficient way to verify how the method detected the misinformation tweets.)
    “What’s significant about this finding is that our model achieved accuracy while offering transparency about how it detected the tweets that were misinformation,” Boukouvalas added. “Deep learning methods cannot achieve this kind of accuracy with transparency.”
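    To illustrate the kind of transparency at stake, here is a generic sketch, not Boukouvalas’ model: a statistical classifier such as logistic regression exposes a readable coefficient for every input feature. The features and data below are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical per-tweet features: [exclamation marks, all-caps words,
        # links to known outlets]; label 1 = misinformation, 0 = reliable.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal([3.0, 4.0, 0.2], 1.0, (100, 3)),   # misinfo
                       rng.normal([0.5, 1.0, 1.5], 1.0, (100, 3))])  # reliable
        y = np.array([1] * 100 + [0] * 100)

        clf = LogisticRegression().fit(X, y)
        print("accuracy:", clf.score(X, y))
        # Unlike a deep network, the fitted weights can be read directly to
        # see which feature pushed a tweet toward the misinformation label.
        print("coefficients:", clf.coef_[0])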