More stories


    AI improves detail, estimate of urban air pollution

    Using artificial intelligence, Cornell University engineers have simplified and reinforced models that accurately calculate the fine particulate matter (PM2.5) — the soot, dust and exhaust emitted by trucks and cars that get into human lungs — contained in urban air pollution.
    Now, city planners and government health officials can obtain a more precise accounting of the well-being of urban dwellers and the air they breathe, according to new research published in December 2022 in the journal Transportation Research Part D.
    “Infrastructure determines our living environment, our exposure,” said senior author Oliver Gao, the Howard Simpson Professor of Civil and Environmental Engineering in the College of Engineering at Cornell University. “Air pollution impact due to transportation — put out as exhaust from the cars and trucks that drive on our streets — is very complicated. Our infrastructure, transportation and energy policies are going to impact air pollution and hence public health.”
    Previous methods of gauging air pollution were cumbersome and relied on an extraordinary number of data points. “Older models to calculate particulate matter were computationally and mechanically consuming and complex,” said Gao, a faculty fellow at the Cornell Atkinson Center for Sustainability. “But if you develop an easily accessible data model, with the help of artificial intelligence filling in some of the blanks, you can have an accurate model at a local scale.”
    Lead author Salil Desai and visiting scientist Mohammad Tayarani, together with Gao, published “Developing Machine Learning Models for Hyperlocal Traffic Related Particulate Matter Concentration Mapping,” to offer a leaner, less data-intensive method for making accurate models.
    Ambient air pollution is a leading cause of premature death around the world. Globally, more than 4.2 million annual fatalities — in the form of cardiovascular disease, ischemic heart disease, stroke and lung cancer — were attributed to air pollution in 2015, according to a Lancet study cited in the Cornell research.
    In this work, the group developed four machine learning models of traffic-related particulate matter concentrations, trained on data gathered in New York City’s five boroughs, which have a combined population of 8.2 million people and 55 million vehicle miles traveled each day.
    The models use only a few inputs, such as traffic data, topology and meteorology, which an AI algorithm uses to learn simulations of a wide range of traffic-related air-pollution concentration scenarios.
    Their best-performing model was a convolutional long short-term memory network, or ConvLSTM, which learns to predict many spatially correlated observations at once.
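    The paper’s code is not reproduced here, but a ConvLSTM of the kind described can be sketched compactly. The snippet below is only an illustrative outline, not the authors’ model; the sequence length, grid size and input channels are assumed placeholders.

    ```python
    # Minimal ConvLSTM sketch for gridded PM2.5 prediction (illustrative only).
    # Input: a short sequence of city grids with channels such as traffic volume,
    # topology and meteorology; output: a single PM2.5 concentration surface.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    TIMESTEPS, HEIGHT, WIDTH, CHANNELS = 12, 64, 64, 3   # assumed dimensions

    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, HEIGHT, WIDTH, CHANNELS)),
        layers.ConvLSTM2D(32, kernel_size=3, padding="same", return_sequences=True),
        layers.BatchNormalization(),
        layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False),
        layers.Conv2D(1, kernel_size=1, activation="relu"),  # PM2.5 value per grid cell
    ])
    model.compile(optimizer="adam", loss="mse")
    ```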
    “Our data-driven approach — mainly based on vehicle emission data — requires considerably fewer modeling steps,” Desai said. Instead of focusing on stationary locations, the method provides a high-resolution estimation of the city street pollution surface. Higher resolution can help transportation and epidemiology studies assess health, environmental justice and air quality impacts.
    Funding for this research came from the U.S. Department of Transportation’s University Transportation Centers Program and Cornell Atkinson.


    A precision arm for miniature robots

    We are all familiar with robots equipped with moving arms. They stand in factory halls, perform mechanical work and can be programmed. A single robot can be used to carry out a variety of tasks.
    Until now, miniature systems that transport minuscule amounts of liquid through fine capillaries have had little to do with such robots. Developed by researchers as an aid for laboratory analysis, such systems are known as microfluidics or lab-on-a-chip and generally make use of external pumps to move the liquid through the chips. To date, such systems have been difficult to automate, and the chips have had to be custom-designed and manufactured for each specific application.
    Ultrasound needle oscillations
    Scientists led by ETH Professor Daniel Ahmed are now combining conventional robotics and microfluidics. They have developed a device that uses ultrasound and can be attached to a robotic arm. It is suitable for performing a wide range of tasks in microrobotic and microfluidic applications and can also be used to automate such applications. The scientists have reported on this development in Nature Communications.
    The device comprises a thin, pointed glass needle and a piezoelectric transducer that causes the needle to oscillate. Similar transducers are used in loudspeakers, ultrasound imaging and professional dental cleaning equipment. The ETH researchers can vary the oscillation frequency of their glass needle. By dipping the needle into a liquid they create a three-dimensional pattern composed of multiple vortices. Since this pattern depends on the oscillation frequency, it can be controlled accordingly.
    The researchers were able to use this to demonstrate several applications. First, they were able to mix tiny droplets of highly viscous liquids. “The more viscous liquids are, the more difficult it is to mix them,” Professor Ahmed explains. “However, our method succeeds in doing this because it allows us to not only create a single vortex, but to also efficiently mix the liquids using a complex three-dimensional pattern composed of multiple strong vortices.”
    Second, the scientists were able to pump fluids through a mini-channel system by creating a specific pattern of vortices and placing the oscillating glass needle close to the channel wall.
    Third, they succeeded in using their robot-assisted acoustic device to trap fine particles present in the fluid. This works because a particle’s size determines its reaction to the sound waves. Relatively large particles move towards the oscillating glass needle, where they accumulate. The researchers demonstrated how this method can capture not only inanimate particles but also fish embryos. They believe it should also be capable of capturing biological cells in the fluid. “In the past, manipulating microscopic particles in three dimensions was always challenging. Our microrobotic arm makes it easy,” Ahmed says.
    “Until now, advancements in large, conventional robotics and microfluidic applications have been made separately,” Ahmed says. “Our work helps to bring the two approaches together.” As a result, future microfluidic systems could be designed similarly to today’s robotic systems. An appropriately programmed single device would be able to handle a variety of tasks. “Mixing and pumping liquids and trapping particles — we can do it all with one device,” Ahmed says. This means tomorrow’s microfluidic chips will no longer have to be custom-developed for each specific application. The researchers would next like to combine several glass needles to create even more complex vortex patterns in liquids.
    In addition to laboratory analysis, Ahmed can envisage other applications for microrobotic arms, such as sorting tiny objects. The arms could conceivably also be used in biotechnology as a way of introducing DNA into individual cells. It should ultimately be possible to employ them in additive manufacturing and 3D printing.


    Feathered robotic wing paves way for flapping drones

    Birds fly more efficiently by folding their wings during the upstroke, according to a recent study led by Lund University in Sweden. The results could mean that wing-folding is the next step in increasing the propulsive and aerodynamic efficiency of flapping drones.
    Even the precursors to birds — extinct bird-like dinosaurs — benefited from folding their wings during the upstroke, as they developed active flight. Among flying animals alive today, birds are the largest and most efficient. This makes them particularly interesting as inspiration for the development of drones. However, determining which flapping strategy is best requires aerodynamic studies of various ways of flapping the wings. Therefore, a Swedish-Swiss research team has constructed a robotic wing that can achieve just that — flapping like a bird, and beyond.
    “We have built a robot wing that can flap more like a bird than previous robots, but also flap in a way that birds cannot. By measuring the performance of the wing in our wind tunnel, we have studied how different ways of achieving the wing upstroke affect force and energy in flight,” says Christoffer Johansson, biology researcher at Lund University.
    Previous studies have shown that birds flap their wings more horizontally when flying slowly. The new study shows that birds probably do this, even though it requires more energy, because it makes it easier to create sufficiently large forces to stay aloft and propel themselves. This is something drones can emulate to increase the range of speeds at which they can fly.
    “The new robotic wing can be used to answer questions about bird flight that would be impossible simply by observing flying birds. Research into the flight ability of living birds is limited to the flapping movement that the bird actually uses,” explains Christoffer Johansson.
    The research explains why birds flap the way they do, by finding out which movement patterns create the most force and are the most efficient. The results can also be used in other research areas, such as better understanding how the migration of birds is affected by climate change and access to food. There are also many potential uses for drones where these insights can be put to good use. One area might be using drones to deliver goods.
    “Flapping drones could be used for deliveries, but they would need to be efficient enough and able to lift the extra weight this entails. How the wings move is of great importance for performance, so this is where our research could come in handy,” concludes Christoffer Johansson.


    Using machine learning to help monitor climate-induced hazards

    Combining satellite technology with machine learning may allow scientists to better track and prepare for climate-induced natural hazards, according to research presented last month at the annual meeting of the American Geophysical Union.
    Over the last few decades, rising global temperatures have caused many natural phenomena like hurricanes, snowstorms, floods and wildfires to grow in intensity and frequency.
    While humans can’t prevent these disasters from occurring, the rapidly increasing number of satellites orbiting the Earth offers a greater opportunity to monitor their evolution, said C.K. Shum, co-author of the study and a professor at the Byrd Polar Research Center and in earth sciences at The Ohio State University. Better monitoring, he said, could allow people in affected areas to make informed decisions and improve the effectiveness of local disaster response and management.
    “Predicting the future is a pretty difficult task, but by using remote sensing and machine learning, our research aims to help create a system that will be able to monitor these climate-induced hazards in a manner that enables a timely and informed disaster response,” said Shum.
    Shum’s research uses geodesy — the science of measuring the planet’s size, shape and orientation in space — to study phenomena related to global climate change.
    Using geodetic data gathered from various space agency satellites, researchers conducted several case studies to test whether a mix of remote sensing and deep machine learning analytics could accurately monitor abrupt weather episodes, including floods, droughts and storm surges in some areas of the world.
    In one experiment, the team used these methods to determine whether radar signals from Earth’s Global Navigation Satellite System (GNSS), reflected off the ocean surface and received by GNSS receivers in coastal towns along the Gulf of Mexico, could be used to track hurricane evolution by measuring rising sea levels ahead of landfall. Between 2020 and 2021, the team studied how seven storms, including Hurricane Hanna and Hurricane Delta, affected coastal sea levels before they made landfall in the Gulf of Mexico. By monitoring these changes, they found a positive correlation between higher sea levels and the intensity of the storm surges.
    The data they used also included measurements collected by NASA and the German Aerospace Center’s Gravity Recovery and Climate Experiment (GRACE) mission and its successor, GRACE Follow-On. Both missions have been used to monitor changes in Earth’s mass over the past two decades, but so far have only been able to resolve those changes at a scale of a little more than 400 miles. Using deep machine learning analytics, Shum’s team was able to sharpen that resolution to about 15 miles, effectively improving society’s ability to monitor natural hazards.
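    The release does not describe the network the team used. As a generic illustration of this kind of machine-learning resolution enhancement, the sketch below upsamples a coarse grid and learns a convolutional correction toward a finer reference field; the grid sizes and upscaling factor are assumptions chosen only to mirror the quoted scales, not the study’s method.

    ```python
    # Generic coarse-to-fine "downscaling" sketch (assumptions only, not the
    # study's model): upsample a coarse GRACE-like mass-change grid, then let
    # convolutions learn a correction toward a finer-resolution reference field.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    COARSE = 16          # assumed coarse grid (~400-mile cells)
    SCALE = 28           # assumed upscaling factor (~400 miles -> ~15 miles)

    model = models.Sequential([
        layers.Input(shape=(COARSE, COARSE, 1)),
        layers.UpSampling2D(size=SCALE, interpolation="bilinear"),  # naive first guess
        layers.Conv2D(32, 5, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(1, 3, padding="same"),                        # refined fine-scale field
    ])
    model.compile(optimizer="adam", loss="mse")
    ```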
    “Taking advantage of deep machine learning means having to condition the algorithm to continuously learn from various data inputs to achieve the goal you want to accomplish,” Shum said. In this instance, satellite data allowed the researchers to quantify the path and evolution of the storm surges induced by two Category 4 Atlantic hurricanes during their landfalls over Texas and Louisiana: Hurricane Harvey in August 2017 and Hurricane Laura in August 2020, respectively.
    Accurate measurements of these natural hazards could one day help improve hurricane forecasting, said Shum. But in the short term, Shum would like to see countries and organizations make their satellite data more readily available to scientists, as projects that rely on deep machine learning often need large amounts of wide-ranging data to help make accurate forecasts.
    “Many of these novel satellite techniques require time and effort to process massive amounts of accurate data,” said Shum. “If researchers have access to more resources, we’ll be able to potentially develop technologies to better prepare people to adapt, as well as allow disaster management agencies to improve their response to intense and frequent climate-induced natural hazards.”
    Co-authors of the project were Yu Zhang, Yuanyuan Jia, Yihang Ding and Junyi Guo of Ohio State; Orhan Akyilmaz and Metehan Uz of Istanbul Technical University; and Kazim Atman of Queen Mary University of London. This work was supported by the United States Agency for International Development (USAID), the National Science Foundation (NSF), the National Aeronautics and Space Administration and the Scientific and Technological Research Council of Türkiye (TÜBİTAK).


    Novel design helps develop powerful microbatteries

    Translating the electrochemical performance of large-format batteries to microscale power sources has been a long-standing technological challenge, limiting the ability of batteries to power microdevices, microrobots and implantable medical devices. University of Illinois Urbana-Champaign researchers have created a high-voltage microbattery (>9 V) with high energy and power density, unparalleled by any existing battery design.
    Materials Science and Engineering Professor Paul Braun (Grainger Distinguished Chair in Engineering and director of the Materials Research Laboratory), Dr. Sungbong Kim (postdoctoral researcher in MatSE, now an assistant professor at the Korea Military Academy; co-first author) and Arghya Patra (graduate student in MatSE and the MRL; co-first author) recently published their paper “Serially integrated high-voltage and high-power miniature batteries” in Cell Reports Physical Science.
    The team demonstrated hermetically sealed (tightly closed to prevent exposure to ambient air), durable, compact lithium batteries with exceptionally low package mass fraction in single-, double- and triple-stacked configurations with unprecedented operating voltages, high power densities, and energy densities.
    Braun explains, “We need powerful tiny batteries to unlock the full potential of microscale devices, by improving the electrode architectures and coming up with innovative battery designs.” The problem is that as batteries become smaller, the packaging dominates the battery volume and mass while the electrode area becomes smaller. This results in drastic reductions in energy and power of the battery.
    In their unique design of powerful microbatteries, the team developed novel packaging technology that used the positive and negative terminal current collectors as part of the packaging itself (rather than a separate entity). This allowed for the compact volume (≈0.165 cm³) and low package mass fraction (10.2%) of the batteries. In addition, they vertically stacked the electrode cells in series (so the voltage of each cell adds), which enabled the high operating voltage of the battery.
    Another way these microbatteries are improved is through the use of very dense electrodes, which boosts energy density. In normal electrodes, almost 40% of the volume is occupied by polymers and carbon additives rather than active material. Braun’s group has grown fully dense electrodes, free of polymer and carbon additives, using an intermediate-temperature direct electrodeposition technique. These fully dense electrodes offer more volumetric energy density than their commercial counterparts. The microbatteries in this research were fabricated using the dense electroplated DirectPlate™ LiCoO2 electrodes manufactured by Xerion Advanced Battery Corporation (XABC, Dayton, Ohio), a company that spun out of Braun’s research.
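    As a rough back-of-the-envelope illustration of those two design levers (the cell voltage and inactive-volume figures below are typical values, not numbers from the paper):

    ```python
    # Rough arithmetic sketch of the two design levers (typical values assumed,
    # not figures from the paper).

    # 1) Series stacking: cell voltages add.
    licoo2_cell_voltage = 4.0          # volts, roughly typical for a charged LiCoO2 cell
    for cells in (1, 2, 3):
        print(f"{cells}-cell stack: ~{cells * licoo2_cell_voltage:.0f} V")
    # A triple stack lands above the >9 V the team reports.

    # 2) Fully dense electrodes: removing ~40% inactive volume increases how much
    # active material fits in the same space.
    inactive_fraction = 0.40
    gain = 1 / (1 - inactive_fraction)
    print(f"Volumetric active-material gain: ~{gain:.1f}x")   # ~1.7x
    ```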
    Patra says, “To date, electrode architectures and cell designs at the micro-nano scale have been limited to power-dense designs that come at the cost of porosity and volumetric energy density. Our work has succeeded in creating a microscale energy source that exhibits both high power density and volumetric energy density.”
    An important application space of these microbatteries includes powering insect-size microrobots to obtain valuable information during natural disasters, search and rescue missions, and in hazardous environments where direct human access is impossible. Co-author James Pikul (Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania) points out that “the high voltage is important for reducing the electronic payload that a microrobot needs to carry. 9 V can directly power motors and reduce the energy loss associated with boosting the voltage to the hundreds or thousands of volts needed from some actuators. This means that these batteries enable system level improvements beyond their energy density enhancement so that the small robots can travel farther or send more critical information to human operators.”
    Kim adds, “Our work bridges the knowledge gap at the intersection of materials chemistry, unique materials manufacturing requirements for energy dense planar microbattery configurations, and applied nano-microelectronics that require a high-voltage, on-board type power source to drive microactuators and micromotors.”
    Braun, a pioneer in the field of battery miniaturization, concludes, “Our current microbattery design is well-suited for high-energy, high-power, high-voltage, single-discharge applications. The next step is to translate the design to all-solid-state microbattery platforms, batteries which would inherently be safer and more energy dense than liquid-cell counterparts.”
    Other contributors to this work include Dr. James H. Pikul (Assistant Professor, Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania), Dr. John B. Cook (XABC), Dr. Ryan Kohlmeyer (XABC), Dr. Beniamin Zahiri (Research Assistant Professor, MRL, UIUC) and Dr. Pengcheng Sun (Research Scientist, MRL, UIUC).


    New studies suggest social isolation is a risk factor for dementia in older adults, point to ways to reduce risk

    In two studies using nationally representative data from the National Health and Aging Trends Study gathered on thousands of Americans, researchers from the Johns Hopkins University School of Medicine and Bloomberg School of Public Health have significantly added to evidence that social isolation is a substantial risk factor for dementia in community-dwelling (noninstitutionalized) older adults, and identified technology as an effective way to intervene.
    Collectively, the studies do not establish a direct cause and effect between dementia and social isolation, defined as lack of social contact and interactions with people on a regular basis. But, the researchers say, the studies strengthen observations that such isolation increases the risk of dementia, and suggest that relatively simple efforts to increase social support of older adults — such as texting and use of email — may reduce that risk. In the United States, an estimated 1 in 4 people over age 65 experience social isolation, according to the National Institute on Aging.
    “Social connections matter for our cognitive health, and it is potentially easily modifiable for older adults without the use of medication,” says Thomas Cudjoe, M.D., M.P.H., assistant professor of medicine at the Johns Hopkins University School of Medicine and senior author of both of the new studies.
    The first study, described Jan. 11 in the Journal of the American Geriatrics Society, used data collected on a group of 5,022 Medicare beneficiaries for a long-term study known as the National Health and Aging Trends Study, which began in 2011. All participants were 65 or older and were asked to complete an annual two-hour, in-person interview to assess cognitive function, health status and overall well-being.
    At the initial interview, 23% of the 5,022 participants were socially isolated and showed no signs of dementia. By the end of the nine-year study, however, 21% of the total sample had developed dementia. The researchers concluded that the risk of developing dementia over nine years was 27% higher among socially isolated older adults than among older adults who were not socially isolated.
    “Socially isolated older adults have smaller social networks, live alone and have limited participation in social activities,” says Alison Huang, Ph.D., M.P.H., senior research associate at the Johns Hopkins Bloomberg School of Public Health. “One possible explanation is that having fewer opportunities to socialize with others decreases cognitive engagement as well, potentially contributing to increased risk of dementia.”
    Interventions to reduce that risk are possible, according to results of the second study, published Dec. 15 in the Journal of the American Geriatrics Society. Specifically, researchers found the use of communications technology such as telephone and email lowered the risk for social isolation.
    Researchers for the second study used data from participants in the same National Health and Aging Trends Study, and found that more than 70% of people age 65 and up who were not socially isolated at their initial appointment had a working cellphone and/or computer, and regularly used email or texting to initiate contact with and respond to others. Over the four-year research period for this second study, older adults who had access to such technology consistently showed a 31% lower risk for social isolation than the rest of the cohort.
    “Basic communications technology is a great tool to combat social isolation,” says Mfon Umoh, M.D., Ph.D., postdoctoral fellow in geriatric medicine at the Johns Hopkins University School of Medicine. “This study shows that access and use of simple technologies are important factors that protect older adults against social isolation, which is associated with significant health risks. This is encouraging because it means simple interventions may be meaningful.”
    Social isolation has gained significant attention in the past decade, especially due to restrictions implemented for the COVID-19 pandemic, but more work needs to be done to identify at-risk populations and create tools for providers and caregivers to minimize risk, the researchers say. Future research in this area should focus on increased risks based on biological sex, physical limitations, race and income level.
    Other scientists who contributed to this research are Laura Prichett, Cynthia Boyd, David Roth, Tom Cidav, Shang-En Chung, Halima Amjad, and Roland Thorpe of the Johns Hopkins University School of Medicine and Bloomberg School of Public Health.
    This research was funded by the Caryl & George Bernstein Human Aging Project, the Johns Hopkins University Center for Innovative Medicine, the National Center for Advancing Translational Sciences, the National Institute on Aging, the Secunda Family Foundation, the Patient-Centered Care for Older Adults with Multiple Chronic Conditions, and the National Institute on Minority Health and Health Disparities.


    Computers that power self-driving cars could be a huge driver of global carbon emissions

    In the future, the energy needed to run the powerful computers on board a global fleet of autonomous vehicles could generate as many greenhouse gas emissions as all the data centers in the world today.
    That is one key finding of a new study from MIT researchers that explored the potential energy consumption and related carbon emissions if autonomous vehicles are widely adopted.
    The data centers that house the physical computing infrastructure used for running applications are widely known for their large carbon footprint: They currently account for about 0.3 percent of global greenhouse gas emissions, or about as much carbon as the country of Argentina produces annually, according to the International Energy Agency. Realizing that less attention has been paid to the potential footprint of autonomous vehicles, the MIT researchers built a statistical model to study the problem. They determined that 1 billion autonomous vehicles, each driving for one hour per day with a computer consuming 840 watts, would consume enough energy to generate about the same amount of emissions as data centers currently do.
    The researchers also found that in over 90 percent of modeled scenarios, to keep autonomous vehicle emissions from zooming past current data center emissions, each vehicle must use less than 1.2 kilowatts of power for computing, which would require more efficient hardware. In one scenario — where 95 percent of the global fleet of vehicles is autonomous in 2050, computational workloads double every three years, and the world continues to decarbonize at the current rate — they found that hardware efficiency would need to double faster than every 1.1 years to keep emissions under those levels.
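    To see how those two rates interact, here is a quick compounding check; it is a simplified sketch that ignores fleet growth and grid decarbonization, which the study’s full scenarios do account for.

    ```python
    # Simplified compounding check of the rates quoted above (ignores fleet growth
    # and grid decarbonization, which the study's scenarios also include): if
    # hardware efficiency doubles every 1.1 years while workload doubles every
    # 3 years, per-vehicle computing power shrinks on roughly this schedule.
    for years in (0, 3, 6, 9, 12):
        workload = 2 ** (years / 3)        # computational workload doubles every 3 years
        efficiency = 2 ** (years / 1.1)    # hardware efficiency doubles every 1.1 years
        relative_power = workload / efficiency
        print(f"year {years:>2}: per-vehicle computing power x{relative_power:.2f} of today")
    ```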
    “If we just keep the business-as-usual trends in decarbonization and the current rate of hardware efficiency improvements, it doesn’t seem like it is going to be enough to constrain the emissions from computing onboard autonomous vehicles. This has the potential to become an enormous problem. But if we get ahead of it, we could design more efficient autonomous vehicles that have a smaller carbon footprint from the start,” says first author Soumya Sudhakar, a graduate student in aeronautics and astronautics.
    Sudhakar wrote the paper with her co-advisors Vivienne Sze, associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the Research Laboratory of Electronics (RLE); and Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems (LIDS). The research appears in the January-February issue of IEEE Micro.

    Modeling emissions
    The researchers built a framework to explore the operational emissions from computers on board a global fleet of electric vehicles that are fully autonomous, meaning they don’t require a back-up human driver.
    The model is a function of the number of vehicles in the global fleet, the power of each computer on each vehicle, the hours driven by each vehicle, and the carbon intensity of the electricity powering each computer.
    “On its own, that looks like a deceptively simple equation. But each of those variables contains a lot of uncertainty because we are considering an emerging application that is not here yet,” Sudhakar says.
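    Written out, the model is essentially a product of those uncertain inputs. The sketch below is a plain restatement of that relationship, not the authors’ code, and the grid carbon-intensity value in the example is an assumed figure.

    ```python
    # Plain restatement of the relationship described above (not the authors' code).
    def fleet_computing_emissions_mt(n_vehicles, computer_kw, hours_per_day,
                                     kg_co2_per_kwh):
        """Annual CO2 from onboard computing, in megatonnes."""
        kwh_per_year = n_vehicles * computer_kw * hours_per_day * 365
        return kwh_per_year * kg_co2_per_kwh / 1e9

    # The article's headline scenario: 1 billion vehicles, 840 W computers,
    # one hour of driving per day, with an assumed grid intensity of 0.4 kg CO2/kWh.
    print(fleet_computing_emissions_mt(1e9, 0.84, 1, 0.4))   # ~123 Mt CO2 per year
    ```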
    For instance, some research suggests that the amount of time driven in autonomous vehicles might increase because people can multitask while driving and the young and the elderly could drive more. But other research suggests that time spent driving might decrease because algorithms could find optimal routes that get people to their destinations faster.

    In addition to considering these uncertainties, the researchers also needed to model advanced computing hardware and software that doesn’t exist yet.
    To accomplish that, they modeled the workload of a popular algorithm for autonomous vehicles, known as a multitask deep neural network because it can perform many tasks at once. They explored how much energy this deep neural network would consume if it were processing many high-resolution inputs from many cameras with high frame rates, simultaneously.
    When they used the probabilistic model to explore different scenarios, Sudhakar was surprised by how quickly the algorithms’ workload added up.
    For example, if an autonomous vehicle has 10 deep neural networks processing images from 10 cameras, and that vehicle drives for one hour a day, it will make 21.6 million inferences each day. One billion vehicles would make 21.6 quadrillion inferences. To put that into perspective, all of Facebook’s data centers worldwide make a few trillion inferences each day (1 quadrillion is 1,000 trillion).
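    That figure is straightforward to reproduce; the camera frame rate below is an assumption (the article says only “high frame rates”), chosen because 60 frames per second per camera yields the stated total.

    ```python
    # Reproducing the article's inference counts. The 60 fps camera rate is an
    # assumption; the article says only "high frame rates".
    cameras, networks, fps = 10, 10, 60
    seconds_per_day = 1 * 3600                        # one hour of driving per day
    per_vehicle = cameras * networks * fps * seconds_per_day
    print(f"{per_vehicle:,} inferences per vehicle per day")                    # 21,600,000
    print(f"{per_vehicle * 1_000_000_000:.3e} for a 1-billion-vehicle fleet")   # 2.160e+16, i.e. 21.6 quadrillion
    ```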
    “After seeing the results, this makes a lot of sense, but it is not something that is on a lot of people’s radar. These vehicles could actually be using a ton of computer power. They have a 360-degree view of the world, so while we have two eyes, they may have 20 eyes, looking all over the place and trying to understand all the things that are happening at the same time,” Karaman says.
    Autonomous vehicles would be used for moving goods, as well as people, so there could be a massive amount of computing power distributed along global supply chains, he says. And their model only considers computing — it doesn’t take into account the energy consumed by vehicle sensors or the emissions generated during manufacturing.
    Keeping emissions in check
    To keep emissions from spiraling out of control, the researchers found that each autonomous vehicle needs to consume less than 1.2 kilowatts of power for computing. For that to be possible, computing hardware must become more efficient at a significantly faster pace, doubling in efficiency about every 1.1 years.
    One way to boost that efficiency could be to use more specialized hardware, which is designed to run specific driving algorithms. Because researchers know the navigation and perception tasks required for autonomous driving, it could be easier to design specialized hardware for those tasks, Sudhakar says. But vehicles tend to have 10- or 20-year lifespans, so one challenge in developing specialized hardware would be to “future-proof” it so it can run new algorithms.
    In the future, researchers could also make the algorithms more efficient, so they would need less computing power. However, this is also challenging because trading off some accuracy for more efficiency could hamper vehicle safety.
    Now that they have demonstrated this framework, the researchers want to continue exploring hardware efficiency and algorithm improvements. In addition, they say their model can be enhanced by characterizing embodied carbon from autonomous vehicles — the carbon emissions generated when a car is manufactured — and emissions from a vehicle’s sensors.
    While there are still many scenarios to explore, the researchers hope that this work sheds light on a potential problem people may not have considered.
    “We are hoping that people will think of emissions and carbon efficiency as important metrics to consider in their designs. The energy consumption of an autonomous vehicle is really critical, not just for extending the battery life, but also for sustainability,” says Sze.
    This research was funded, in part, by the National Science Foundation and the MIT-Accenture Fellowship.


    Screen-printing method can make wearable electronics less expensive

    The glittering, serpentine structures that power wearable electronics can be created with the same technology used to print rock concert t-shirts, new research shows.
    The study, led by Washington State University researchers, demonstrates that electrodes can be made using just screen printing, creating a stretchable, durable circuit pattern that can be transferred to fabric and worn directly on human skin. Such wearable electronics can be used for health monitoring in hospitals or at home.
    “We wanted to make flexible, wearable electronics in a way that is much easier, more convenient and lower cost,” said corresponding author Jong-Hoon Kim, associate professor at WSU Vancouver’s School of Engineering and Computer Science. “That’s why we focused on screen printing: it’s easy to use. It has a simple setup, and it is suitable for mass production.”
    Current commercial manufacturing of wearable electronics requires expensive processes involving clean rooms. While some use screen printing for parts of the process, this new method relies wholly on screen printing, which has advantages for manufacturers and ultimately, consumers.
    In the study, published in the journal ACS Applied Materials & Interfaces, Kim and his colleagues detail the electrode screen-printing process and demonstrate how the resulting electrodes can be used for electrocardiogram monitoring, also known as ECG.
    They used a multi-step process to layer polymer and metal inks, creating the electrode’s snake-like structures. While the resulting thin pattern appears delicate, the electrodes are not fragile. The study showed they could be stretched by 30% and bent to 180 degrees.
    Multiple electrodes are printed onto a pre-treated glass slide, which allows them to be easily peeled off and transferred onto fabric or other material. After printing the electrodes, the researchers transferred them onto an adhesive fabric that was then worn directly on the skin by volunteers. The wireless electrodes accurately recorded heart and respiratory rates, sending the data to a mobile phone.
    While this study focused on ECG monitoring, the screen-printing process can be used to create electrodes for a range of uses, including those that serve similar functions to smart watches or fitness trackers, Kim said.
    Kim’s lab is currently working on expanding this technology to print different electrodes as well as entire electronic chips and even potentially whole circuit boards.
    In addition to Kim, co-authors on the study include researchers from the Georgia Institute of Technology and Pukyong National University in South Korea, as well as others from WSU Vancouver. This research received support from the National Science Foundation.