More stories

  • COVID calculations spur solution to old problem in computer science

    During the coronavirus epidemic, many of us became amateur mathematicians. How quickly would the number of hospitalized patients rise, and when would herd immunity be achieved? Professional mathematicians were challenged as well, and a researcher at the University of Copenhagen became inspired to solve a 30-year-old problem in computer science. The breakthrough has just been published in the Journal of the ACM (Association for Computing Machinery).
    “Like many others, I was out to calculate how the epidemic would develop. I wanted to investigate certain ideas from theoretical computer science in this context. However, I realized that the lack of a solution to the old problem was a showstopper,” says Joachim Kock, Associate Professor at the Department of Mathematics, University of Copenhagen.
    His solution to the problem can be of use in epidemiology and computer science, and potentially in other fields as well. A common feature for these fields is the presence of systems where the various components exhibit mutual influence. For instance, when a healthy person meets a person infected with COVID, the result can be two people infected.
    Smart method invented by German teenager
    To understand the breakthrough, one needs to know that such complex systems can be described mathematically through so-called Petri nets. The method was invented in 1939 by the German Carl Adam Petri (then only 13 years old) for chemistry applications. Just as a healthy person meeting a person infected with COVID can trigger a change, the same may happen when two chemical substances mix and react.
    In a Petri net the various components are drawn as circles while events such as a chemical reaction or an infection are drawn as squares. Next, circles and squares are connected by arrows which show the interdependencies in the system.

    A simple version of a Petri net for COVID infection. The starting point is a non-infected person. “S” denotes “susceptible.” Contact with an infected person (“I”) is an event which leads to two persons being infected. Later another event will happen, removing a person from the group of infected. Here, “R” denotes “recovered” which in this context could be either cured or dead. Either outcome would remove the person from the infected group.
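    To make the token-based picture concrete, here is a minimal sketch (my own illustration, not a construction from the paper) of the S/I/R net above in Python: places hold tokens representing individuals, and a transition may fire only when its input places hold enough tokens.

```python
import random

# Minimal Petri net for the S/I/R example: places hold token counts,
# transitions consume tokens from input places and produce tokens in
# output places. All names and numbers here are illustrative.
places = {"S": 9, "I": 1, "R": 0}

# Each transition: (tokens consumed, tokens produced).
transitions = {
    "infect":  ({"S": 1, "I": 1}, {"I": 2}),  # S + I -> 2 I
    "recover": ({"I": 1},         {"R": 1}),  # I -> R (cured or dead)
}

def enabled(consumed):
    return all(places[p] >= n for p, n in consumed.items())

def fire(name):
    consumed, produced = transitions[name]
    for p, n in consumed.items():
        places[p] -= n
    for p, n in produced.items():
        places[p] += n

random.seed(0)
while True:
    choices = [t for t, (c, _) in transitions.items() if enabled(c)]
    if not choices:
        break  # no transition is enabled: the epidemic is over
    fire(random.choice(choices))

print(places)  # development happens in discrete steps; total count is conserved
```

    Because every firing either infects a susceptible person or permanently removes an infected one, the run always terminates with zero infected, mirroring the discrete, individual-oriented use of Petri nets in computer science.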
    Computer scientists regarded the problem as unsolvable
    In chemistry, Petri nets are applied for calculating how the concentrations of various chemical substances in a mixture will evolve. This manner of thinking has influenced the use of Petri nets in other fields such as epidemiology: one starts out with a high “concentration” of uninfected people, after which the “concentration” of infected people starts to rise. In computer science, the use of Petri nets is somewhat different: the focus is on individuals rather than concentrations, and the development happens in steps rather than continuously.
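    The concentration-oriented use from chemistry and epidemiology can be sketched with the classic SIR rate equations, integrated in small time steps. The rates below are hypothetical, chosen only to illustrate continuous evolution of “concentrations”:

```python
# Illustrative concentration-based SIR model: quantities evolve
# continuously, in contrast to the step-by-step token view.
beta, gamma = 0.3, 0.1     # hypothetical infection and recovery rates
s, i, r = 0.99, 0.01, 0.0  # population fractions: susceptible / infected / recovered
dt = 0.1

for _ in range(2000):      # integrate for 200 time units (forward Euler)
    new_infections = beta * s * i * dt
    new_recoveries = gamma * i * dt
    s -= new_infections
    i += new_infections - new_recoveries
    r += new_recoveries

print(round(s + i + r, 6))  # fractions still sum to 1: the population is conserved
```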
    What Joachim Kock had in mind was to apply the more individual-oriented Petri nets from computer science for COVID calculations. This was when he encountered the old problem:
    “Basically, the processes in a Petri net can be described through two separate approaches. The first approach regards a process as a series of events, while the second approach sees the net as a graphical expression of the interdependencies between components and events,” says Joachim Kock, adding:
    “The serial approach is well suited for performing calculations. However, it has a downside since it describes causalities less accurately than the graphical approach. Further, the serial approach tends to fall short when dealing with events that take place simultaneously.”

    “The problem was that nobody had been able to unify the two approaches. The computer scientists had more or less resigned, regarding the problem as unsolvable. This was because no-one had realized that you need to go all the way back and revise the very definition of a Petri net,” says Joachim Kock.
    Small modification with large impact
    The Danish mathematician realized that a minor modification to the definition of a Petri net would enable a solution to the problem:
    “By allowing parallel arrows rather than just counting them and writing a number, additional information is made available. Things work out and the two approaches can be unified.”
    The exact mathematical reason why this additional information matters is complex, but can be illustrated by an analogy:
    “Assigning numbers to objects has helped humanity greatly. For instance, it is highly practical that I can arrange the right number of chairs in advance for a dinner party instead of having to experiment with different combinations of chairs and guests after they have arrived. However, the number of chairs and guests does not reveal who will be sitting where. Some information is lost when we consider numbers instead of the real objects.”
    Similarly, information is lost when the individual arrows of the Petri net are replaced by a number.
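    The difference can be illustrated with a toy data structure (my own sketch, not Kock’s actual formalism): the classical definition stores only the multiplicity of arcs between a component and an event, while the revised definition keeps each parallel arrow as its own object, which can then carry an identity.

```python
from collections import Counter

# Classical encoding: parallel arcs collapsed into a single count.
classical_arcs = {("I", "infect"): 1, ("infect", "I"): 2}

# Revised encoding: every parallel arrow is an individual object and can
# carry extra information (like a named place card at the dinner party).
individual_arcs = [
    {"src": "I", "dst": "infect", "id": "a1"},
    {"src": "infect", "dst": "I", "id": "a2"},
    {"src": "infect", "dst": "I", "id": "a3"},
]

# The counts are recoverable from the individual arrows...
counts = Counter((a["src"], a["dst"]) for a in individual_arcs)
assert counts == Counter(classical_arcs)

# ...but not the other way around: the count "2" alone cannot say which
# produced token corresponds to which arrow.
print(sorted(a["id"] for a in individual_arcs if a["src"] == "infect"))
```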
    “It takes a bit more effort to treat the parallel arrows individually, but one is amply rewarded as it becomes possible to combine the two approaches so that the advantages of both can be obtained simultaneously.”
    The circle back to COVID has been closed
    The solution helps our mathematical understanding of how to describe complex systems with many interdependencies, but will not have much practical effect on the daily work of computer scientists using Petri nets, according to Joachim Kock:
    “This is because the necessary modifications are mostly backward-compatible and can be adopted without revising the entire theory of Petri nets.”
    “Somewhat surprisingly, some epidemiologists have started using the revised Petri nets. So, one might say the circle has been closed!”
    Joachim Kock does see a further point to the story:
    “I wasn’t out to find a solution to the old problem in computer science at all. I just wanted to do COVID calculations. This was a bit like looking for your pen but realizing that you must find your glasses first. So, I would like to take the opportunity to advocate the importance of research which does not have a predefined goal. Sometimes research driven by curiosity will lead to breakthroughs.”

  • Clinical trial results indicate low rate of adverse events associated with implanted brain computer interface

    For people with paralysis caused by neurologic injury or disease — such as ALS (also known as Lou Gehrig’s disease), stroke, or spinal cord injury — brain-computer interfaces (BCIs) have the potential to restore communication, mobility, and independence by transmitting information directly from the brain to a computer or other assistive technology.
    Although implanted brain sensors, the core component of many brain-computer interfaces, have been used in neuroscientific studies with animals for decades and have been approved for short-term use (…)

  • AI discovers new nanostructures

    Scientists at the U.S. Department of Energy’s (DOE) Brookhaven National Laboratory have successfully demonstrated that autonomous methods can discover new materials. The artificial intelligence (AI)-driven technique led to the discovery of three new nanostructures, including a first-of-its-kind nanoscale “ladder.” The research was published today in Science Advances.
    The newly discovered structures were formed by a process called self-assembly, in which a material’s molecules organize themselves into unique patterns. Scientists at Brookhaven’s Center for Functional Nanomaterials (CFN) are experts at directing the self-assembly process, creating templates for materials to form desirable arrangements for applications in microelectronics, catalysis, and more. Their discovery of the nanoscale ladder and other new structures further widens the scope of self-assembly’s applications.
    “Self-assembly can be used as a technique for nanopatterning, which is a driver for advances in microelectronics and computer hardware,” said CFN scientist and co-author Gregory Doerk. “These technologies are always pushing for higher resolution using smaller nanopatterns. You can get really small and tightly controlled features from self-assembling materials, but they do not necessarily obey the kind of rules that we lay out for circuits, for example. By directing self-assembly using a template, we can form patterns that are more useful.”
    Staff scientists at CFN, which is a DOE Office of Science User Facility, aim to build a library of self-assembled nanopattern types to broaden their applications. In previous studies, they demonstrated that new types of patterns are made possible by blending two self-assembling materials together.
    “The fact that we can now create a ladder structure, which no one has ever dreamed of before, is amazing,” said CFN group leader and co-author Kevin Yager. “Traditional self-assembly can only form relatively simple structures like cylinders, sheets, and spheres. But by blending two materials together and using just the right chemical grafting, we’ve found that entirely new structures are possible.”
    Blending self-assembling materials together has enabled CFN scientists to uncover unique structures, but it has also created new challenges. With many more parameters to control in the self-assembly process, finding the right combination of parameters to create new and useful structures is a battle against time. To accelerate their research, CFN scientists leveraged a new AI capability: autonomous experimentation.

    In collaboration with the Center for Advanced Mathematics for Energy Research Applications (CAMERA) at DOE’s Lawrence Berkeley National Laboratory, Brookhaven scientists at CFN and the National Synchrotron Light Source II (NSLS-II), another DOE Office of Science User Facility at Brookhaven Lab, have been developing an AI framework that can autonomously define and perform all the steps of an experiment. CAMERA’s gpCAM algorithm drives the framework’s autonomous decision-making. The latest research is the team’s first successful demonstration of the algorithm’s ability to discover new materials.
    “gpCAM is a flexible algorithm and software for autonomous experimentation,” said Berkeley Lab scientist and co-author Marcus Noack. “It was used particularly ingeniously in this study to autonomously explore different features of the model.”
    “With help from our colleagues at Berkeley Lab, we had this software and methodology ready to go, and now we’ve successfully used it to discover new materials,” Yager said. “We’ve now learned enough about autonomous science that we can take a materials problem and convert it into an autonomous problem pretty easily.”
    To accelerate materials discovery using their new algorithm, the team first developed a complex sample with a spectrum of properties for analysis. Researchers fabricated the sample using the CFN nanofabrication facility and carried out the self-assembly in the CFN material synthesis facility.
    “An old school way of doing material science is to synthesize a sample, measure it, learn from it, and then go back and make a different sample and keep iterating that process,” Yager said. “Instead, we made a sample that has a gradient of every parameter we’re interested in. That single sample is thus a vast collection of many distinct material structures.”
    Then, the team brought the sample to NSLS-II, which generates ultrabright x-rays for studying the structure of materials. CFN operates three experimental stations in partnership with NSLS-II, one of which was used in this study, the Soft Matter Interfaces (SMI) beamline.

    “One of the SMI beamline’s strengths is its ability to focus the x-ray beam on the sample down to microns,” said NSLS-II scientist and co-author Masa Fukuto. “By analyzing how these microbeam x-rays get scattered by the material, we learn about the material’s local structure at the illuminated spot. Measurements at many different spots can then reveal how the local structure varies across the gradient sample. In this work, we let the AI algorithm pick, on the fly, which spot to measure next to maximize the value of each measurement.”
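    The on-the-fly point selection can be sketched generically. The toy loop below is not gpCAM’s actual interface; it replaces the Gaussian-process posterior with a crude stand-in (distance to the nearest measured spot as an “uncertainty” proxy), but it shows the shape of an autonomous measure-update-decide cycle:

```python
# Toy autonomous-experiment loop. Everything here is illustrative: real
# frameworks such as gpCAM rank candidate points with a Gaussian-process
# model rather than this simple distance proxy.

def pretend_beamline(x):
    """Stand-in for an x-ray measurement at position x on the sample."""
    return (x - 0.3) ** 2  # hypothetical scattering response

candidates = [i / 100 for i in range(101)]  # spots along the gradient sample
measured = {0.0: pretend_beamline(0.0), 1.0: pretend_beamline(1.0)}

for _ in range(10):
    # Acquisition rule: measure next where we are least constrained by
    # existing data, i.e. farthest from any already-measured spot.
    next_x = max(candidates, key=lambda x: min(abs(x - m) for m in measured))
    measured[next_x] = pretend_beamline(next_x)

print(len(measured))  # 12 spots measured, chosen without human intervention
```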
    As the sample was measured at the SMI beamline, the algorithm, without human intervention, created a model of the material’s numerous and diverse structures. The model updated itself with each subsequent x-ray measurement, making every measurement more insightful and accurate.
    In a matter of hours, the algorithm had identified three key areas in the complex sample for the CFN researchers to study more closely. They used the CFN electron microscopy facility to image those key areas in exquisite detail, uncovering the rails and rungs of a nanoscale ladder, among other novel features.
    From start to finish, the experiment ran about six hours. The researchers estimate they would have needed about a month to make this discovery using traditional methods.
    “Autonomous methods can tremendously accelerate discovery,” Yager said. “It’s essentially ‘tightening’ the usual discovery loop of science, so that we cycle between hypotheses and measurements more quickly. Beyond just speed, however, autonomous methods increase the scope of what we can study, meaning we can tackle more challenging science problems.”
    “Moving forward, we want to investigate the complex interplay among multiple parameters. We conducted simulations using the CFN computer cluster that verified our experimental results, but they also suggested how other parameters, such as film thickness, can also play an important role,” Doerk said.
    The team is actively applying their autonomous research method to even more challenging material discovery problems in self-assembly, as well as other classes of materials. Autonomous discovery methods are adaptable and can be applied to nearly any research problem.
    “We are now deploying these methods to the broad community of users who come to CFN and NSLS-II to conduct experiments,” Yager said. “Anyone can work with us to accelerate the exploration of their materials research. We foresee this empowering a host of new discoveries in the coming years, including in national priority areas like clean energy and microelectronics.”
    This research was supported by the DOE Office of Science.

  • AI improves detail, estimate of urban air pollution

    Using artificial intelligence, Cornell University engineers have simplified and reinforced models that accurately calculate the fine particulate matter (PM2.5) — the soot, dust and exhaust emitted by trucks and cars that get into human lungs — contained in urban air pollution.
    Now, city planners and government health officials can obtain a more precise accounting about the well-being of urban dwellers and the air they breathe, from new research published December 2022 in the journal Transportation Research Part D.
    “Infrastructure determines our living environment, our exposure,” said senior author Oliver Gao, the Howard Simpson Professor of Civil and Environmental Engineering in the College of Engineering at Cornell University. “Air pollution impact due to transportation — put out as exhaust from the cars and trucks that drive on our streets — is very complicated. Our infrastructure, transportation and energy policies are going to impact air pollution and hence public health.”
    Previous methods to gauge air pollution were cumbersome and reliant on extraordinary amounts of data points. “Older models to calculate particulate matter were computationally and mechanically consuming and complex,” said Gao, a faculty fellow at the Cornell Atkinson Center for Sustainability. “But if you develop an easily accessible data model, with the help of artificial intelligence filling in some of the blanks, you can have an accurate model at a local scale.”
    Lead author Salil Desai and visiting scientist Mohammad Tayarani, together with Gao, published “Developing Machine Learning Models for Hyperlocal Traffic Related Particulate Matter Concentration Mapping,” to offer a leaner, less data-intensive method for making accurate models.
    Ambient air pollution is a leading cause of premature death around the world. Globally, more than 4.2 million annual fatalities — in the form of cardiovascular disease, ischemic heart disease, stroke and lung cancer — were attributed to air pollution in 2015, according to a Lancet study cited in the Cornell research.
    In this work, the group developed four machine learning models for traffic-related particulate matter concentrations, using data gathered in New York City’s five boroughs, which have a combined population of 8.2 million people and 55 million daily vehicle miles traveled.
    The equations use a few inputs, such as traffic data, topology and meteorology, in an AI algorithm to learn simulations for a wide range of traffic-related air-pollution concentration scenarios.
    Their best performing model was the Convolutional Long Short-term Memory, or ConvLSTM, which trained the algorithm to predict many spatially correlated observations.
    “Our data-driven approach — mainly based on vehicle emission data — requires considerably fewer modeling steps,” Desai said. Instead of focusing on stationary locations, the method provides a high-resolution estimation of the city street pollution surface. Higher resolution can help transportation and epidemiology studies assess health, environmental justice and air quality impacts.
    Funding for this research came from the U.S. Department of Transportation’s University Transportation Centers Program and Cornell Atkinson.

  • A precision arm for miniature robots

    We are all familiar with robots equipped with moving arms. They stand in factory halls, perform mechanical work and can be programmed. A single robot can be used to carry out a variety of tasks.
    Until now, miniature systems that transport minuscule amounts of liquid through fine capillaries have had little association with such robots. Developed by researchers as an aid for laboratory analysis, such systems are known as microfluidics or lab-on-a-chip, and they generally make use of external pumps to move the liquid through the chips. To date, such systems have been difficult to automate, and the chips have had to be custom-designed and manufactured for each specific application.
    Ultrasound needle oscillations
    Scientists led by ETH Professor Daniel Ahmed are now combining conventional robotics and microfluidics. They have developed a device that uses ultrasound and can be attached to a robotic arm. It is suitable for performing a wide range of tasks in microrobotic and microfluidic applications and can also be used to automate such applications. The scientists have reported on this development in Nature Communications.
    The device comprises a thin, pointed glass needle and a piezoelectric transducer that causes the needle to oscillate. Similar transducers are used in loudspeakers, ultrasound imaging and professional dental cleaning equipment. The ETH researchers can vary the oscillation frequency of their glass needle. By dipping the needle into a liquid they create a three-dimensional pattern composed of multiple vortices. Since this pattern depends on the oscillation frequency, it can be controlled accordingly.
    The researchers were able to use this to demonstrate several applications. First, they were able to mix tiny droplets of highly viscous liquids. “The more viscous liquids are, the more difficult it is to mix them,” Professor Ahmed explains. “However, our method succeeds in doing this because it allows us to not only create a single vortex, but to also efficiently mix the liquids using a complex three-dimensional pattern composed of multiple strong vortices.”
    Second, the scientists were able to pump fluids through a mini-channel system by creating a specific pattern of vortices and placing the oscillating glass needle close to the channel wall.
    Third, they succeeded in using their robot-assisted acoustic device to trap fine particles present in the fluid. This works because a particle’s size determines its reaction to the sound waves. Relatively large particles move towards the oscillating glass needle, where they accumulate. The researchers demonstrated how this method can capture not only inanimate particles but also fish embryos. They believe it should also be capable of capturing biological cells in the fluid. “In the past, manipulating microscopic particles in three dimensions was always challenging. Our microrobotic arm makes it easy,” Ahmed says.
    “Until now, advancements in large, conventional robotics and microfluidic applications have been made separately,” Ahmed says. “Our work helps to bring the two approaches together.” As a result, future microfluidic systems could be designed similarly to today’s robotic systems. An appropriately programmed single device would be able to handle a variety of tasks. “Mixing and pumping liquids and trapping particles — we can do it all with one device,” Ahmed says. This means tomorrow’s microfluidic chips will no longer have to be custom-developed for each specific application. The researchers would next like to combine several glass needles to create even more complex vortex patterns in liquids.
    In addition to laboratory analysis, Ahmed can envisage other applications for microrobotic arms, such as sorting tiny objects. The arms could conceivably also be used in biotechnology as a way of introducing DNA into individual cells. It should ultimately be possible to employ them in additive manufacturing and 3D printing.

  • Feathered robotic wing paves way for flapping drones

    Birds fly more efficiently by folding their wings during the upstroke, according to a recent study led by Lund University in Sweden. The results could mean that wing-folding is the next step in increasing the propulsive and aerodynamic efficiency of flapping drones.
    Even the precursors to birds — extinct bird-like dinosaurs — benefited from folding their wings during the upstroke, as they developed active flight. Among flying animals alive today, birds are the largest and most efficient. This makes them particularly interesting as inspiration for the development of drones. However, determining which flapping strategy is best requires aerodynamic studies of various ways of flapping the wings. Therefore, a Swedish-Swiss research team has constructed a robotic wing that can achieve just that — flapping like a bird, and beyond.
    “We have built a robot wing that can flap more like a bird than previous robots, but also flap in a way that birds cannot. By measuring the performance of the wing in our wind tunnel, we have studied how different ways of achieving the wing upstroke affect force and energy in flight,” says Christoffer Johansson, biology researcher at Lund University.
    Previous studies have shown that birds flap their wings more horizontally when flying slowly. The new study shows that the birds probably do so, even though it requires more energy, because it is easier to create sufficiently large forces to stay aloft and propel themselves. This is something drones can emulate to increase the range of speeds at which they can fly.
    “The new robotic wing can be used to answer questions about bird flight that would be impossible simply by observing flying birds. Research into the flight ability of living birds is limited to the flapping movement that the bird actually uses,” explains Christoffer Johansson.
    The research explains why birds flap the way they do, by finding out which movement patterns create the most force and are the most efficient. The results can also be used in other research areas, such as better understanding how the migration of birds is affected by climate change and access to food. There are also many potential uses for drones where these insights can be put to good use. One area might be using drones to deliver goods.
    “Flapping drones could be used for deliveries, but they would need to be efficient enough and able to lift the extra weight this entails. How the wings move is of great importance for performance, so this is where our research could come in handy,” concludes Christoffer Johansson.

  • Using machine learning to help monitor climate-induced hazards

    Combining satellite technology with machine learning may allow scientists to better track and prepare for climate-induced natural hazards, according to research presented last month at the annual meeting of the American Geophysical Union.
    Over the last few decades, rising global temperatures have caused many natural phenomena like hurricanes, snowstorms, floods and wildfires to grow in intensity and frequency.
    While humans can’t prevent these disasters from occurring, the rapidly increasing number of satellites orbiting the Earth offers a greater opportunity to monitor their evolution, said C.K. Shum, co-author of the study and a professor at the Byrd Polar Research Center and in earth sciences at The Ohio State University. Giving people in affected areas the information to make timely decisions, he said, could improve the effectiveness of local disaster response and management.
    “Predicting the future is a pretty difficult task, but by using remote sensing and machine learning, our research aims to help create a system that will be able to monitor these climate-induced hazards in a manner that enables a timely and informed disaster response,” said Shum.
    Shum’s research uses geodesy — the science of measuring the planet’s size, shape and orientation in space — to study phenomena related to global climate change.
    Using geodetic data gathered from various space agency satellites, researchers conducted several case studies to test whether a mix of remote sensing and deep machine learning analytics could accurately monitor abrupt weather episodes, including floods, droughts and storm surges in some areas of the world.
    In one experiment, the team used these methods to determine whether radar signals from Earth’s Global Navigation Satellite System (GNSS), reflected over the ocean and received by GNSS receivers in coastal towns along the Gulf of Mexico, could be used to track hurricane evolution by measuring rising sea levels after landfall. Between 2020 and 2021, the team studied how seven storms, such as Hurricane Hanna and Hurricane Delta, affected coastal sea levels before they made landfall in the Gulf of Mexico. By monitoring these complex changes, they found a positive correlation between higher sea levels and the intensity of the storm surges.
    The data they used was collected by NASA and the German Aerospace Center’s Gravity Recovery And Climate Experiment (GRACE) mission and its successor, GRACE Follow-On. Both satellites have been used to monitor changes in Earth’s mass over the past two decades, but so far have only been able to view the planet from a little more than 400 miles up. Using deep machine learning analytics, Shum’s team was able to sharpen the effective spatial resolution to about 15 miles, improving society’s ability to monitor natural hazards.
    “Taking advantage of deep machine learning means having to condition the algorithm to continuously learn from various data inputs to achieve the goal you want to accomplish,” Shum said. In this instance, satellites allowed researchers to quantify the path and evolution of two Category 4 Atlantic hurricane-induced storm surges during their landfalls over Texas and Louisiana, Hurricane Harvey in August 2017 and Hurricane Laura in August 2020, respectively.
    Accurate measurements of these natural hazards could one day help improve hurricane forecasting, said Shum. But in the short term, Shum would like to see countries and organizations make their satellite data more readily available to scientists, as projects that rely on deep machine learning often need large amounts of wide-ranging data to help make accurate forecasts.
    “Many of these novel satellite techniques require time and effort to process massive amounts of accurate data,” said Shum. “If researchers have access to more resources, we’ll be able to potentially develop technologies to better prepare people to adapt, as well as allow disaster management agencies to improve their response to intense and frequent climate-induced natural hazards.”
    Co-authors of the project were Yu Zhang, Yuanyuan Jia, Yihang Ding and Junyi Guo of Ohio State; Orhan Akyilmaz and Metehan Uz of Istanbul Technical University; and Kazim Atman of Queen Mary University of London. This work was supported by the United States Agency for International Development (USAID), the National Science Foundation (NSF), the National Aeronautics and Space Administration and the Scientific and Technological Research Council of Türkiye (TÜBİTAK).

  • Novel design helps develop powerful microbatteries

    Translating the electrochemical performance of large-format batteries to microscale power sources has been a long-standing technological challenge, limiting the ability of batteries to power microdevices, microrobots and implantable medical devices. University of Illinois Urbana-Champaign researchers have created a high-voltage microbattery (>9 V) with high energy and power density, unparalleled by any existing battery design.
    Materials Science and Engineering Professor Paul Braun (Grainger Distinguished Chair in Engineering, Materials Research Laboratory Director), Dr. Sungbong Kim (postdoc, MatSE, currently assistant professor at the Korea Military Academy, co-first author), and Arghya Patra (graduate student, MatSE, MRL, co-first author) recently published their paper “Serially integrated high-voltage and high-power miniature batteries” in Cell Reports Physical Science.
    The team demonstrated hermetically sealed (tightly closed to prevent exposure to ambient air), durable, compact, lithium batteries with exceptionally low package mass fraction in single-, double-, and triple-stacked configurations with unprecedented operating voltages, high power densities, and energy densities.
    Braun explains, “We need powerful tiny batteries to unlock the full potential of microscale devices, by improving the electrode architectures and coming up with innovative battery designs.” The problem is that as batteries become smaller, the packaging dominates the battery volume and mass while the electrode area becomes smaller. This results in drastic reductions in energy and power of the battery.
    In their unique design of powerful microbatteries, the team developed novel packaging technology that uses the positive and negative terminal current collectors as part of the packaging itself (rather than as a separate entity). This allowed for the compact volume (≈0.165 cm³) and low package mass fraction (10.2%) of the batteries. In addition, they vertically stacked the electrode cells in series (so the voltages of the cells add), which enabled the high operating voltage of the battery.
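    The effect of series stacking can be checked with back-of-the-envelope arithmetic. The cell values below are hypothetical, chosen only to illustrate how a triple stack clears 9 V: in series, voltages add while capacity stays that of a single cell.

```python
# Hypothetical single cell; the numbers are illustrative, not from the paper.
CELL_VOLTAGE_V = 3.2     # average discharge voltage of one cell
CELL_CAPACITY_MAH = 4.0  # capacity of one cell

def stack(n_cells):
    """Series stack: voltages add, capacity is unchanged."""
    voltage = n_cells * CELL_VOLTAGE_V
    capacity = CELL_CAPACITY_MAH  # the same charge flows through every cell
    energy_mwh = voltage * capacity
    return voltage, capacity, energy_mwh

for n in (1, 2, 3):
    v, c, e = stack(n)
    print(f"{n}-cell stack: {v:.1f} V, {c:.1f} mAh, {e:.1f} mWh")
```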
    Another improvement in these microbatteries comes from using very dense electrodes, which raises energy density. In normal electrodes, almost 40% of the volume is occupied by polymers and carbon additives rather than active material. Braun’s group has grown fully dense electrodes, free of polymer and carbon additives, by an intermediate-temperature direct electrodeposition technique. These fully dense electrodes offer more volumetric energy density than their commercial counterparts. The microbatteries in this research were fabricated using the dense electroplated DirectPlate™ LiCoO2 electrodes manufactured by Xerion Advanced Battery Corporation (XABC, Dayton, Ohio), a company that spun out of Braun’s research.
    Patra mentions, “To date, electrode architectures and cell designs at the micro-nano scale have been limited to power-dense designs that came at the cost of porosity and volumetric energy density. Our work has succeeded in creating a microscale energy source that exhibits both high power density and high volumetric energy density.”
    An important application space of these microbatteries includes powering insect-size microrobots to obtain valuable information during natural disasters, search and rescue missions, and in hazardous environments where direct human access is impossible. Co-author James Pikul (Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania) points out that “the high voltage is important for reducing the electronic payload that a microrobot needs to carry. 9 V can directly power motors and reduce the energy loss associated with boosting the voltage to the hundreds or thousands of volts needed by some actuators. This means that these batteries enable system level improvements beyond their energy density enhancement so that the small robots can travel farther or send more critical information to human operators.”
    Kim adds, “Our work bridges the knowledge gap at the intersection of materials chemistry, unique materials manufacturing requirements for energy dense planar microbattery configurations, and applied nano-microelectronics that require a high-voltage, on-board type power source to drive microactuators and micromotors.”
    Braun, a pioneer in the field of battery miniaturization, concludes, “our current microbattery design is well-suited for high-energy, high-power, high-voltage, single-discharge applications. The next step is to translate the design to all solid-state microbattery platforms, batteries which would inherently be safer and more energy dense than liquid-cell counterparts.”
    Other contributors to this work include Dr. James H. Pikul (Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania), Dr. John B. Cook (XABC), Dr. Ryan Kohlmeyer (XABC), Dr. Beniamin Zahiri (Research Assistant Professor, MRL, UIUC) and Dr. Pengcheng Sun (Research Scientist, MRL, UIUC).