More stories

  • An app can transform smartphones into thermometers that accurately detect fevers

    If you’ve ever thought you may be running a temperature yet couldn’t find a thermometer, you aren’t alone. A fever is the most commonly cited symptom of COVID-19 and an early sign of many other viral infections. For quick diagnoses and to prevent viral spread, a temperature check can be crucial. Yet accurate at-home thermometers aren’t commonplace, despite the rise of telehealth consultations.
    There are a few potential reasons for that. The devices can range from $15 to $300, and many people need them only a few times a year. In times of sudden demand — such as the early days of the COVID-19 pandemic — thermometers can sell out. Many people, particularly those in under-resourced areas, can end up without a vital medical device when they need it most.
    To address this issue, a team led by researchers at the University of Washington has created an app called FeverPhone, which transforms smartphones into thermometers without adding new hardware. Instead, it uses the phone’s touchscreen and repurposes the existing battery temperature sensors to gather data that a machine learning model uses to estimate people’s core body temperatures. When the researchers tested FeverPhone on 37 patients in an emergency department, the app estimated core body temperatures with accuracy comparable to some consumer thermometers. The team published its findings March 28 in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.
    “In undergrad, I was doing research in a lab where we wanted to show that you could use the temperature sensor in a smartphone to measure air temperature,” said lead author Joseph Breda, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “When I came to the UW, my adviser and I wondered how we could apply a similar technique for health. We decided to measure fever in an accessible way. The primary concern with temperature isn’t that it’s a difficult signal to measure; it’s just that people don’t have thermometers.”
    The app is the first to use existing phone sensors and screens to estimate whether people have fevers. It needs more training data to be widely used, Breda said, but for doctors, the potential of such technology is exciting.
    “People come to the ER all the time saying, ‘I think I was running a fever.’ And that’s very different than saying ‘I was running a fever,’” said Dr. Mastafa Springston, a co-author on the study and a UW clinical instructor at the Department of Emergency Medicine in the UW School of Medicine. “In a wave of influenza, for instance, people running to the ER can take five days, or even a week sometimes. So if people were to share fever results with public health agencies through the app, similar to how we signed up for COVID exposure warnings, this earlier sign could help us intervene much sooner.”
    Clinical-grade thermometers use tiny sensors known as thermistors to estimate body temperature. Off-the-shelf smartphones also happen to contain thermistors; they’re mostly used to monitor the temperature of the battery. But the UW researchers realized they could use these sensors to track heat transfer between a person and a phone. The phone touchscreen could sense skin-to-phone contact, and the thermistors could gauge the air temperature and the rise in heat when the phone touched a body.

    To test this idea, the team started by gathering data in a lab. To simulate a warm forehead, the researchers heated a plastic bag of water with a sous-vide machine and pressed phone screens against the bag. To account for variations in circumstances, such as different people using different phones, the researchers tested three phone models. They also added accessories such as a screen protector and a case and changed the pressure on the phone.
    The researchers used the data from different test cases to train a machine learning model that used the complex interactions to estimate body temperature. Since the sensors are supposed to gauge the phone’s battery heat, the app tracks how quickly the phone heats up and then uses the touchscreen data to account for how much of that comes from a person touching it. As they added more test cases, the researchers were able to calibrate the model to account for the variations in things such as phone accessories.
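    To make the idea concrete, here is a rough sketch in Python of how such an estimation pipeline could fit together. The feature choices and the regressor are illustrative assumptions, not the published FeverPhone model:

    ```python
    # Hypothetical sketch of a FeverPhone-style estimator; the feature choices
    # and the regressor are illustrative assumptions, not the published pipeline.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def extract_features(thermistor_temps, touch_contact, dt=1.0):
        """Summarize a ~90-second sensing window into model features.

        thermistor_temps: battery thermistor readings (deg C) over the window
        touch_contact: fraction of screen in skin contact at each sample (0..1)
        dt: sampling interval in seconds
        """
        thermistor_temps = np.asarray(thermistor_temps, dtype=float)
        touch_contact = np.asarray(touch_contact, dtype=float)
        warming_rate = np.gradient(thermistor_temps, dt)    # deg C per second
        return np.array([
            thermistor_temps[0],                            # ambient baseline before contact
            thermistor_temps[-1] - thermistor_temps[0],     # total temperature rise
            warming_rate.max(),                             # peak heating rate during contact
            touch_contact.mean(),                           # consistency of skin contact
        ])

    # X: one feature row per recording session; y: oral-thermometer ground truth.
    model = GradientBoostingRegressor()
    # model.fit(X, y)
    # estimate = model.predict(extract_features(temps, contact).reshape(1, -1))
    ```

    A sketch like this would need ground truth comparable to the study’s oral-thermometer readings to calibrate before its estimates meant anything clinically.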
    Then the team was ready to test the app on people. The researchers took FeverPhone to the UW School of Medicine’s Emergency Department for a clinical trial where they compared its temperature estimates against an oral thermometer reading. They recruited 37 participants, 16 of whom had at least a mild fever.
    To use FeverPhone, the participants held the phones like point-and-shoot cameras — with forefingers and thumbs touching the corner edges to reduce heat from the hands being sensed (some had the researcher hold the phone for them). Then participants pressed the touchscreen against their foreheads for about 90 seconds, which the researchers found to be the ideal time to sense body heat transferring to the phone.
    Overall, FeverPhone estimated patient core body temperatures with an average error of about 0.41 degrees Fahrenheit (0.23 degrees Celsius), which is within the clinically acceptable margin of 0.5 degrees Celsius (0.9 degrees Fahrenheit).
    The researchers have highlighted a few areas for further investigation. The study didn’t include participants with severe fevers above 101.5 F (38.6 C), because these temperatures are easy to diagnose and because sweaty skin tends to confound other skin-contact thermometers, according to the team. Also, FeverPhone was tested on only three phone models. Training it to run on other smartphones, as well as devices such as smartwatches, would increase its potential for public health applications, the team said.
    “We started with smartphones since they’re ubiquitous and easy to get data from,” Breda said. “I am already working on seeing if we can get a similar signal with a smartwatch. What’s nice, because watches are much smaller, is their temperature will change more quickly. So you could imagine having a user put a Fitbit to their forehead and measure in 10 seconds whether they have a fever or not.”
    Shwetak Patel, a UW professor in the Allen School and the electrical and computer engineering department, was a senior author on the paper, and Alex Mariakakis, an assistant professor in the University of Toronto’s computer science department, was a co-author. This research was supported by the University of Washington Gift Fund.

  • AI that uses sketches to detect objects within an image could boost tumor detection and the search for rare bird species

    Teaching machine learning tools to detect specific objects in a specific image and discount others is a “game-changer” that could lead to advancements in cancer detection, according to leading researchers from the University of Surrey.
    Surrey is set to present its unique sketch-based object detection tool at this year’s Computer Vision and Pattern Recognition Conference (CVPR). The tool allows the user to sketch an object, which the AI will use as a basis to search within an image to find something that matches the sketch — while discounting more general options.
    Professor Yi-Zhe Song, who leads this research at the University of Surrey’s Institute for People-Centred AI, commented:
    “An artist’s sketch is full of individual cues that words cannot convey concisely, reiterating the phrase ‘a picture paints a thousand words’. For newer AI systems, simple descriptive words help to generate images, but none can express the individualism of the user or the exact match the user is looking for.
    “This is where our sketch-based tool comes into play. AI is instructed by the artist via sketches to find an exact object and discount others, which can be amazingly helpful in medicine, by finding more aggressive tumours, or in wildlife conservation, by detecting rare animals.”
    An example the researchers use in their conference paper is the tool searching a picture full of zebras — with only a sketch of a single zebra eating to direct its search. The AI tool takes visual cues into account, such as pose and structure, but bases its decisions on the exact requirements given by the amateur artist.
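    Conceptually, this kind of sketch-conditioned detection can be framed as matching candidate image regions against the sketch in a shared embedding space. The Python sketch below is a generic illustration of that idea; the two encoders are hypothetical placeholders, not the model presented at CVPR:

    ```python
    # Generic illustration of sketch-conditioned detection; the encoders are
    # hypothetical placeholders, not the Surrey model.
    import torch
    import torch.nn.functional as F

    def detect_by_sketch(sketch_encoder, region_encoder, sketch, regions, threshold=0.5):
        """Return indices of candidate regions that match the query sketch.

        sketch: the drawn query, shape (1, C, H, W)
        regions: cropped candidate regions from the image, shape (N, C, H, W)
        """
        with torch.no_grad():
            q = F.normalize(sketch_encoder(sketch), dim=-1)    # (1, D) query embedding
            r = F.normalize(region_encoder(regions), dim=-1)   # (N, D) region embeddings
        scores = (r @ q.T).squeeze(-1)                         # cosine similarity per region
        keep = (scores > threshold).nonzero(as_tuple=True)[0]  # only close matches survive
        return keep, scores
    ```

    The thresholded similarity is what lets the tool discount regions that merely contain the right category (any zebra) in favour of the one matching the sketch’s pose and structure.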
    Professor Song continued:
    “The ability for AI to detect objects based on individual amateur sketches introduces a significant leap in harnessing human creativity in Computer Vision. It allows humans to interact with AI from a whole different perspective, no longer letting AI dictate the decisions, but asking it to behave exactly as instructed, keeping necessary human intervention.”
    This research will be presented at the Computer Vision and Pattern Recognition Conference (CVPR) 2023, which showcases world-leading AI research on a global stage. The University of Surrey had an exceptional number of papers accepted to CVPR 2023, far above most other educational institutions, with over 18 papers accepted and one nominated for the Best Paper Award.
    The University of Surrey is a research-intensive university, producing world-leading research and delivering innovation in teaching to transform lives and change the world for the better. The University of Surrey’s Institute for People-Centred AI combines over 30 years of technical excellence in the field of machine learning with multi-disciplinary research to answer the technical, ethical and governance questions that will enable the future of AI to be truly people-centred. A focus on research that makes a difference to the world has contributed to Surrey being ranked 55th in the world in the Times Higher Education (THE) University Impact Rankings 2022, which assesses more than 1,400 universities’ performance against the United Nations’ Sustainable Development Goals (SDGs).

  • AI reveals hidden traits about our planet’s flora to help save species

    In a world-first, scientists from UNSW and the Botanic Gardens of Sydney have trained AI to unlock data from millions of plant specimens kept in herbaria around the world, in order to study and combat the impacts of climate change on flora.
    “Herbarium collections are amazing time capsules of plant specimens,” says lead author on the study, Associate Professor Will Cornwell. “Each year over 8000 specimens are added to the National Herbarium of New South Wales alone, so it’s not possible to go through things manually anymore.”
    Using a new machine learning algorithm to process over 3000 leaf samples, the team discovered that contrary to frequently observed interspecies patterns, leaf size doesn’t increase in warmer climates within a single species.
    Published in the American Journal of Botany, this research not only reveals that factors other than climate have a strong effect on leaf size within a plant species, but demonstrates how AI can be used to transform static specimen collections and to quickly and effectively document climate change effects.
    Herbarium collections move to the digital world
    Herbaria are scientific libraries of plant specimens that have existed since at least the 16th century.

    “Historically, a valuable scientific effort was to go out, collect plants, and then keep them in a herbarium. Every record has a time and a place and a collector and a putative species ID,” says A/Prof. Cornwell, a researcher at the School of BEES and a member of UNSW Data Science Hub.
    A couple of years ago, a movement began to transfer these collections online to help facilitate scientific collaboration.
    “The herbarium collections were locked in small boxes in particular places, but the world is very digital now. So to get the information about all of the incredible specimens to the scientists who are now scattered across the world, there was an effort to scan the specimens to produce high resolution digital copies of them.”
    The largest herbarium imaging project was undertaken at the Botanic Gardens of Sydney when over 1 million plant specimens at the National Herbarium of New South Wales were transformed into high-resolution digital images.
    “The digitisation project took over two years and shortly after completion, one of the researchers — Dr Jason Bragg — contacted me from the Botanic Gardens of Sydney. He wanted to see how we could incorporate machine learning with some of these high-resolution digital images of the Herbarium specimens.”
    “I was excited to work with A/Prof. Cornwell in developing models to detect leaves in the plant images, and to then use those big datasets to study relationships between leaf size and climate,” says Dr Bragg.

    “Computer vision” measures leaf sizes
    Together with Dr Bragg at the Botanic Gardens of Sydney and UNSW Honours student Brendan Wilde, A/Prof. Cornwell created an algorithm that could be automated to detect and measure the size of leaves of scanned herbarium samples for two plant genera — Syzygium (generally known as lillipillies, brush cherries or satinash) and Ficus (a genus of about 850 species of woody trees, shrubs and vines).
    “This type of AI is called a convolutional neural network, a core tool of the field known as computer vision,” says A/Prof. Cornwell. The process essentially teaches the AI to see and identify the components of a plant in the same way a human would.
    “We had to build a training data set to teach the computer: this is a leaf, this is a stem, this is a flower,” says A/Prof. Cornwell. “So we basically taught the computer to locate the leaves and then measure the size of them.
    “Measuring the size of leaves is not novel, because lots of people have done this. But the speed with which these specimens can be processed and their individual characteristics can be logged is a new development.”
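    The measurement step after detection is straightforward to sketch. Assuming a CNN has already produced a binary leaf mask for a digitized sheet, counting the pixels in each connected region and converting by the scan resolution yields per-leaf areas; the 600 dpi value below is illustrative:

    ```python
    # Minimal sketch of the measurement step, assuming a CNN has already produced
    # a binary leaf mask for a scanned herbarium sheet; the dpi value is illustrative.
    import numpy as np
    from scipy import ndimage

    def leaf_areas_cm2(leaf_mask, dpi=600):
        """Measure each detected leaf on a digitized herbarium sheet.

        leaf_mask: 2-D boolean array, True where the CNN labeled 'leaf'
        dpi: scan resolution, used to convert pixel counts to cm^2
        """
        labeled, n_leaves = ndimage.label(leaf_mask)      # separate individual leaves
        counts = ndimage.sum(leaf_mask, labeled, index=range(1, n_leaves + 1))
        cm_per_px = 2.54 / dpi                            # 1 inch = 2.54 cm
        return np.asarray(counts) * cm_per_px ** 2        # pixel counts -> areas in cm^2
    ```

    Running a routine like this over a million digitized sheets is what turns a static collection into a trait dataset large enough to test climate relationships within species.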
    A break in frequently observed patterns
    A general rule of thumb in the botanical world is that in wetter climates, like tropical rainforests, the leaves of plants are bigger compared to drier climates, such as deserts.
    “And that’s a very consistent pattern that we see in leaves between species all across the globe,” says A/Prof. Cornwell. “The first test we did was to see if we could reconstruct that relationship from the machine learned data, which we could. But the second question was, because we now have so much more data than we had before, do we see the same thing within species?”
    The machine learning algorithm was developed, validated, and applied to analyse the relationship between leaf size and climate within and among species for Syzygium and Ficus plants.
    The results from this test were surprising — the team discovered that while this pattern can be seen between different plant species, the same correlation isn’t seen within a single species across the globe, likely because a different process, known as gene flow, is operating within species. That process weakens plant adaptation on a local scale and could be preventing the leaf size-climate relationship from developing within species.
    Using AI to predict future climate change responses
    The machine learning approach used here to detect and measure leaves, though not pixel perfect, provided levels of accuracy suitable for examining links between leaf traits and climate.
    “But because the world is changing quite fast, and there is so much data, these kinds of machine learning methods can be used to effectively document climate change effects,” says A/Prof. Cornwell.
    What’s more, the machine learning algorithms can be trained to identify trends that might not be immediately obvious to human researchers. This could lead to new insights into plant evolution and adaptations, as well as predictions about how plants might respond to future effects of climate change.

  • Open-source software to speed up quantum research

    Quantum technology is expected to fundamentally change many key areas of society. Researchers are convinced that there are many more useful quantum properties and applications to explore than those we know today. A team of researchers at Chalmers University of Technology in Sweden have now developed open-source, freely available software that will pave the way for new discoveries in the field and accelerate quantum research significantly.
    Within a few decades, quantum technology is expected to become a key technology in areas such as health, communication, defence and energy. The power and potential of the technology lie in the odd and very special properties of quantum particles. Of particular interest to researchers in the field are the superconducting properties of quantum particles that give components perfect conductivity with unique magnetic properties. These superconducting properties are considered conventional today and have already paved the way for entirely new technologies used in applications such as magnetic resonance imaging equipment, maglev trains and quantum computer components. However, years of research and development remain before a quantum computer can be expected to solve real computing problems in practice, for example. The research community is convinced that there are many more revolutionary discoveries to be made in quantum technology than those we know today.
    Open-source code to explore new superconducting properties
    Basic research in quantum materials is the foundation of all quantum technology innovation, from the birth of the transistor in 1947, through the laser in the 1960s to the quantum computers of today. However, experiments on quantum materials are often very resource-intensive to develop and conduct, take many years to prepare and mostly produce results that are difficult to interpret. Now, however, a team of researchers at Chalmers have developed the open-source software SuperConga, which is free for everyone to use, and specifically designed to perform advanced simulations and analyses of quantum components. The programme operates at the mesoscopic level, which means that it can carry out simulations that are capable of ‘picking up’ the strange properties of quantum particles, and also apply them in practice. The open-source code is the first of its kind in the world and is expected to be able to explore completely new superconducting properties and eventually pave the way for quantum computers that can use advanced computing to tackle societal challenges in several areas.
    “We are specifically interested in unconventional superconductors, which are an enigma in terms of how they even work and what their properties are. We know that they have some desirable properties that allow quantum information to be protected from interference and fluctuations. Interference is what currently limits us from having a quantum computer that can be used in practice. And this is where basic research into quantum materials is crucial if we are to make any progress,” says Mikael Fogelström, Professor of Theoretical Physics at Chalmers.
    These new superconductors continue to be highly enigmatic materials — just as their conventional siblings once were when they were discovered in a laboratory more than a hundred years ago. After that discovery, it would be more than 40 years before researchers could describe them in theory. The Chalmers researchers now hope that their open-source code can contribute to completely new findings and areas of application.
    “We want to find out about all the other exciting properties of unconventional superconductors. Our software is powerful, educational and user-friendly, and we hope that it will help generate new understanding and suggest entirely new applications for these unexplored superconductors,” says Patric Holmvall, postdoctoral researcher in condensed matter physics at Uppsala University.
    Desire to make life easier for quantum researchers and students
    To explore revolutionary new discoveries, researchers need tools that can study and utilise the extraordinary quantum properties at the level of individual particles, yet can also be scaled up far enough to be used in practice. In other words, researchers need to work at the mesoscopic scale. This lies at the interface between the microscopic scale, i.e. the atomic level at which the quantum properties of particles can still be utilised, and the macroscopic scale of everyday objects, which, unlike quantum particles, are subject to the laws of classical physics. Because the software works at this mesoscopic level, the Chalmers researchers now hope to make life easier for researchers and students working with quantum physics.
    “Extremely simplified models based on either the microscopic or macroscopic scale are often used at present. This means that they do not manage to identify all the important physics or that they cannot be used in practice. With this free software, we want to make it easier for others to accelerate and improve their quantum research without having to reinvent the wheel every time,” says Tomas Löfwander, Professor of Applied Quantum Physics at Chalmers.

  • Bridging traditional economics and econophysics

    How do asset markets work? Which stocks behave similarly? Economists, physicists, and mathematicians all work intensively to draw a picture, but often know little about what is happening outside their own discipline. A new paper now builds a bridge.
    In a new study, researchers of the Complexity Science Hub highlight the connecting elements between traditional financial market research and econophysics. “We want to create an overview of the models that exist in financial economics and those that researchers in physics and mathematics have developed so that everybody can benefit from it,” explains Matthias Raddant from the Complexity Science Hub and the University for Continuing Education Krems.
    Scientists from both fields try to classify or even predict how the market will behave. They aim to create a large-scale correlation matrix describing how each stock correlates with every other stock. “Progress, however, is often barely noticed, if at all, by researchers in other disciplines. Researchers in finance hardly know that physicists are researching similar topics and just call it something different. That’s why we want to build a bridge,” says Raddant.
    WHAT ARE THE DIFFERENCES?
    Experts in the traditional financial markets field are very concerned with accurately describing how volatile stocks are statistically. However, their fine-grained models no longer work adequately when the data set becomes too large and includes tens of thousands of stocks.
    Physicists, on the other hand, can handle large amounts of data very well. Their motto is: “The more data I have, the nicer it is because then I can see certain regularities better,” explains Raddant. They also work based on correlations, but they model financial markets as evolving complex networks. These networks describe dependencies that can reveal asset comovement, i.e., which stocks behave fundamentally similarly and therefore group together. However, physicists and mathematicians may not know what insights already exist in the finance literature and what factors need to be considered.
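    A minimal example of this network view, of the kind the paper surveys, is to turn a correlation matrix of returns into a distance matrix and extract its minimum spanning tree, a classic econophysics construction for revealing which assets group together. The prices below are random stand-ins; real closing prices would replace them:

    ```python
    # A standard econophysics recipe: turn a stock correlation matrix into a
    # network and extract its minimum spanning tree. Prices here are random
    # stand-in data, not real market data.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    prices = rng.lognormal(mean=0.0, sigma=0.01, size=(250, 20)).cumprod(axis=0)

    returns = np.diff(np.log(prices), axis=0)     # daily log returns per stock
    corr = np.corrcoef(returns, rowvar=False)     # stock-by-stock correlation matrix
    dist = np.sqrt(2.0 * (1.0 - corr))            # correlation -> distance (Mantegna)

    G = nx.from_numpy_array(dist)                 # fully connected weighted graph
    mst = nx.minimum_spanning_tree(G)             # backbone of asset comovement
    print(sorted(mst.edges(data="weight"))[:5])   # the tightest structural links
    ```

    The tree keeps only the strongest dependencies, which is how network methods stay tractable even when the data set grows to tens of thousands of stocks.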
    DIFFERENT LANGUAGE
    In their study, Raddant and his co-author, CSH external faculty member Tiziana Di Matteo of King’s College London, note that the mechanical parts that go into these models are often relatively similar, but their language is different. On the one hand, researchers in finance try to discover companies’ connecting features. On the other hand, physicists and mathematicians are working on creating order out of many time series of stocks, where certain regularities occur. “What physicists and mathematicians call regularities, economists call properties of companies, for example,” says Raddant.
    AVOIDING RESEARCH THAT GETS LOST
    “Through this study, we wish to sensitize young scientists, in particular, who are working on an interdisciplinary basis in financial markets, to the connecting elements between the disciplines,” says Raddant. The aim is for researchers who do not come from financial economics to know the vocabulary and the essential research questions they have to address. Otherwise, there is a risk of producing research that is of no interest to anyone in finance and financial economics.
    On the other hand, scientists from the disciplines traditionally involved with financial markets must understand how to describe large data sets and statistical regularities with methods from physics and network science.

  • AI could replace humans in social science research

    In an article published yesterday in the journal Science, leading researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania look at how AI (large language models or LLMs in particular) could change the nature of their work.
    “What we wanted to explore in this article is how social science research practices can be adapted, even reinvented, to harness the power of AI,” said Igor Grossmann, professor of psychology at Waterloo.
    Grossmann and colleagues note that large language models trained on vast amounts of text data are increasingly capable of simulating human-like responses and behaviours. This offers novel opportunities for testing theories and hypotheses about human behaviour at great scale and speed.
    Traditionally, social sciences rely on a range of methods, including questionnaires, behavioral tests, observational studies, and experiments. A common goal in social science research is to obtain a generalized representation of characteristics of individuals, groups, cultures, and their dynamics. With the advent of advanced AI systems, the landscape of data collection in social sciences may shift.
    “AI models can represent a vast array of human experiences and perspectives, possibly giving them a higher degree of freedom to generate diverse responses than conventional human participant methods, which can help to reduce generalizability concerns in research,” said Grossmann.
    “LLMs might supplant human participants for data collection,” said UPenn psychology professor Philip Tetlock. “In fact, LLMs have already demonstrated their ability to generate realistic survey responses concerning consumer behaviour. Large language models will revolutionize human-based forecasting in the next 3 years. It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90% chance on that. Of course, how humans react to all of that is another matter.”
    While opinions on the feasibility of this application of advanced AI systems vary, studies using simulated participants could be used to generate novel hypotheses that could then be confirmed in human populations.
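    As a purely conceptual illustration of simulated participants, the Python sketch below prompts a text-generation backend once per invented persona; the `complete` function, the personas, and the question are all hypothetical placeholders, not a real survey instrument:

    ```python
    # Conceptual sketch of LLM-simulated survey participants; `complete`, the
    # personas, and the question are invented placeholders.
    personas = [
        "a 34-year-old nurse in a rural town",
        "a retired engineer who distrusts new technology",
    ]
    question = "On a scale of 1-5, how likely are you to adopt a telehealth service?"

    def complete(prompt):
        # Placeholder: swap in any real text-generation backend here.
        return "3"

    responses = []
    for persona in personas:
        prompt = f"You are {persona}. {question}"
        responses.append((persona, complete(prompt)))  # one simulated participant each

    print(responses)
    ```

    Such simulated answers are only as good as the model behind them, which is precisely where the caveats begin.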
    But the researchers warn of some of the possible pitfalls in this approach — including the fact that LLMs are often trained to exclude socio-cultural biases that exist for real-life humans. This means that sociologists using AI in this way couldn’t study those biases.
    Professor Dawn Parker, a co-author on the article from the University of Waterloo, notes that researchers will need to establish guidelines for the governance of LLMs in research.
    “Pragmatic concerns with data quality, fairness, and equity of access to the powerful AI systems will be substantial,” Parker said. “So, we must ensure that social science LLMs, like all scientific models, are open-source, meaning that their algorithms and ideally data are available to all to scrutinize, test, and modify. Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience.”

  • Advanced universal control system may revolutionize lower limb exoskeleton control and optimize user experience

    A team of researchers has developed a new method for controlling lower limb exoskeletons using deep reinforcement learning. The method, described in a study published in the Journal of NeuroEngineering and Rehabilitation on March 19, 2023, enables more robust and natural walking control for users of lower limb exoskeletons. “Robust walking control of a lower limb rehabilitation exoskeleton coupled with a musculoskeletal model via deep reinforcement learning” is available open access.
    While advances in wearable robotics have helped restore mobility for people with lower limb impairments, current control methods for exoskeletons are limited in their ability to provide natural and intuitive movements for users. This can compromise balance and contribute to user fatigue and discomfort. Few studies have focused on the development of robust controllers that can optimize the user’s experience in terms of safety and independence.
    Existing exoskeletons for lower limb rehabilitation employ a variety of technologies to help the user maintain balance, including special crutches and sensors, according to co-author Ghaith Androwis, PhD, senior research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation and director of the Center’s Rehabilitation Robotics and Research Laboratory. Exoskeletons that operate without such aids allow more independent walking, but at the cost of added weight and slower walking speed.
    “Advanced control systems are essential to developing a lower limb exoskeleton that enables autonomous, independent walking under a range of conditions,” said Dr. Androwis. The novel method developed by the research team uses deep reinforcement learning to improve exoskeleton control. Reinforcement learning is a type of artificial intelligence that enables machines to learn from their own experiences through trial and error.
    “Using a musculoskeletal model coupled with an exoskeleton, we simulated the movements of the lower limb and trained the exoskeleton control system to achieve natural walking patterns using reinforcement learning,” explained corresponding author Xianlian Zhou, PhD, associate professor and director of the BioDynamics Lab in the Department of Biomedical Engineering at New Jersey Institute of Technology (NJIT). “We are testing the system in real-world conditions with a lower limb exoskeleton being developed by our team and the results show the potential for improved walking stability and reduced user fatigue.”
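    In outline, the training loop behind such a controller is standard reinforcement learning: the policy proposes joint torques, the coupled human-exoskeleton simulation returns the resulting state and a reward that favours stable gait, and the policy is updated from the collected experience. The Python sketch below is a generic illustration of that loop; `ExoWalkEnv` and `Policy` are hypothetical stand-ins for the team’s musculoskeletal simulation and learned controller:

    ```python
    # Generic deep-RL training skeleton; ExoWalkEnv and Policy are hypothetical
    # stand-ins for the paper's musculoskeletal simulation and controller.
    def train(env, policy, episodes=1000):
        for _ in range(episodes):
            obs = env.reset()                  # joint angles, velocities, foot contact
            done, trajectory = False, []
            while not done:
                action = policy.act(obs)       # torque commands for the exoskeleton
                obs, reward, done, info = env.step(action)  # reward favours stable gait
                trajectory.append((obs, action, reward))
            policy.update(trajectory)          # e.g. one policy-gradient step

    # train(ExoWalkEnv(), Policy())  # hypothetical usage
    ```

    Training against a simulated musculoskeletal model rather than a human wearer is what allows the trial-and-error phase to run safely for many thousands of episodes.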
    The team determined that their proposed model generated a universal robust walking controller capable of handling various levels of human-exoskeleton interactions without the need for tuning parameters. The new system has the potential to benefit a wide range of users, including those with spinal cord injuries, multiple sclerosis, stroke, and other neurological conditions. The researchers plan to continue testing the system with users and further refine the control algorithms to improve walking performance.
    “We are excited about the potential of this new system to improve the quality of life for people with lower limb impairments,” said Dr. Androwis. “By enabling more natural and intuitive walking patterns, we hope to help users of exoskeletons to move with greater ease and confidence.”

  • Energy harvesting via vibrations: Researchers develop highly durable and efficient device

    An international research group has engineered a new energy-generating device by combining piezoelectric composites with carbon fiber-reinforced polymer (CFRP), a commonly used material that is both light and strong. The new device transforms vibrations from the surrounding environment into electricity, providing an efficient and reliable means for self-powered sensors.
    Details of the group’s research were published in the journal Nano Energy on June 13, 2023.
    Energy harvesting involves converting energy from the environment into usable electrical energy and is crucial for ensuring a sustainable future.
    “Everyday items, from fridges to street lamps, are connected to the internet as part of the Internet of Things (IoT), and many of them are equipped with sensors that collect data,” says Fumio Narita, co-author of the study and professor at Tohoku University’s Graduate School of Environmental Studies. “But these IoT devices need power to function, which is challenging if they are in remote places, or if there are lots of them.”
    The sun’s rays, heat, and vibration all can generate electrical power. Vibrational energy can be utilized thanks to piezoelectric materials’ ability to generate electricity when physically stressed. Meanwhile, CFRP lends itself to applications in the aerospace and automotive industries, sports equipment, and medical equipment because of its durability and lightness.
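    A quick way to see why stressing such a material produces charge is the standard strain-charge form of the linear piezoelectric constitutive relations; this is textbook background, not equations taken from the paper itself:

    ```latex
    % Direct effect: mechanical stress T produces electric displacement D.
    % Converse effect: an electric field E produces strain S.
    D_i    = d_{ikl}\,T_{kl} + \varepsilon^{T}_{ik}\,E_k
    S_{ij} = s^{E}_{ijkl}\,T_{kl} + d_{kij}\,E_k
    ```

    Here d is the piezoelectric charge tensor, s^E the elastic compliance at constant field, and ε^T the permittivity at constant stress; a vibration harvester works on the first line, converting cyclic bending stress into a displacement current.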
    “We pondered whether a piezoelectric vibration energy harvester (PVEH), harnessing the robustness of CFRP together with a piezoelectric composite, could be a more efficient and durable means of harvesting energy,” says Narita.
    The group fabricated the device using a combination of CFRP and potassium sodium niobate (KNN) nanoparticles mixed with epoxy resin. The CFRP served as both an electrode and a reinforcement substrate.
    The so-called C-PVEH device lived up to expectations. Tests and simulations revealed that it could maintain high performance even after being bent more than 100,000 times. It proved capable of storing the generated electricity and powering LED lights. Additionally, it outperformed other KNN-based polymer composites in terms of energy output density.
    The C-PVEH will help propel the development of self-powered IoT sensors, leading to more energy-efficient IoT devices.
    Narita and his colleagues are also excited about the technological advancements of their breakthrough. “As well as the societal benefits of our C-PVEH device, we are thrilled with the contributions we have made to the field of energy harvesting and sensor technology. The blend of excellent energy output density and high resilience can guide future research into other composite materials for diverse applications.”