More stories

  • Deep neural networks show promise as models of human hearing

    Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.
    In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.
    The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.
    “What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
    MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.
    Models of hearing
    Deep neural networks are computational models that consist of many layers of information-processing units and can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.

    “These models that are built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.
    When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
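    To make that comparison concrete, here is a minimal sketch of this kind of analysis under assumed data layouts: a matrix of one layer’s activations and a matrix of fMRI voxel responses to the same sounds. It fits a cross-validated regression from activations to voxel responses and scores how well held-out responses are predicted; it is illustrative only, not the study’s actual pipeline.
    ```python
    # Hypothetical arrays: layer_activations (n_sounds x n_units) and
    # voxel_responses (n_sounds x n_voxels) for the same natural sounds.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def brain_similarity(layer_activations, voxel_responses, n_splits=5):
        """Mean cross-validated median correlation between predicted and
        measured voxel responses (higher = closer match to the brain data)."""
        scores = []
        for train, test in KFold(n_splits, shuffle=True, random_state=0).split(layer_activations):
            reg = RidgeCV(alphas=np.logspace(-3, 3, 13))
            reg.fit(layer_activations[train], voxel_responses[train])
            pred = reg.predict(layer_activations[test])
            # correlate predicted vs. measured response for each voxel
            r = [np.corrcoef(pred[:, v], voxel_responses[test][:, v])[0, 1]
                 for v in range(voxel_responses.shape[1])]
            scores.append(np.nanmedian(r))
        return float(np.mean(scores))

    # Scoring every model stage this way is what reveals which layers best
    # match primary auditory cortex versus later, non-primary regions.
    ```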
    In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.
    Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.
    For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, and identifying musical genre — while two of them were trained to perform multiple tasks.
    When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.

    “If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.
    Hierarchical processing
    The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.
    Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.
    “Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.
    McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.
    “A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.
    The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, and a Department of Energy Computational Science Graduate Fellowship.

  • Smartwatches can pick up abnormal heart rhythms in kids, study finds

    Smartwatches can help physicians detect and diagnose irregular heart rhythms in children, according to a new study from the Stanford School of Medicine.
    The finding comes from a survey of electronic medical records for pediatric cardiology patients receiving care at Stanford Medicine Children’s Health. The study will publish online Dec. 13 in Communications Medicine.
    Over a four-year period, patients’ medical records mentioned “Apple Watch” 145 times. Among patients whose medical records mentioned the smartwatch, 41 had abnormal heart rhythms confirmed by traditional diagnostic methods; of these, 29 children had their arrhythmias diagnosed for the first time.
    “I was surprised by how often our standard monitoring didn’t pick up arrhythmias and the watch did,” said senior study author Scott Ceresnak, MD, professor of pediatrics. Ceresnak is a pediatric cardiologist who treats patients at Stanford Medicine. “It’s awesome to see that newer technology can really make a difference in how we’re able to care for patients.”
    The study’s lead author is Aydin Zahedivash, MD, a clinical instructor in pediatrics.
    Most of the abnormal rhythms detected were not life-threatening, Ceresnak said. However, he added that the arrhythmias detected can cause distressing symptoms such as a racing heartbeat, dizziness and fainting.
    Skipping a beat, sometimes
    Doctors face two challenges in diagnosing children’s cardiac arrhythmias, or heart rhythm abnormalities.

    The first is that cardiac diagnostic devices, though they have improved in recent years, still aren’t ideal for kids. Ten to 20 years ago, a child had to wear, for 24 to 48 hours, a Holter monitor consisting of a device about the size of a smartphone attached by wires to five electrodes that were adhered to the child’s chest. Patients can now wear event monitors — in the form of a single sticker placed on the chest — for a few weeks. Although the event monitors are more comfortable and can be worn longer than a Holter monitor, they sometimes fall off early or cause problems such as skin irritation from adhesives.
    The second challenge is that even a few weeks of continuous monitoring may not capture the heart’s erratic behavior, as children experience arrhythmias unpredictably. Kids may go months between episodes, making it tricky for their doctors to determine what’s going on.
    Connor Heinz and his family faced both challenges when he experienced periods of a racing heartbeat starting at age 12: An adhesive monitor was too irritating, and he was having irregular heart rhythms only once every few months. Ceresnak thought he knew what was causing the racing rhythms, but he wanted confirmation. He suggested that Connor and his mom, Amy Heinz, could try using Amy’s smartwatch to record the rhythm the next time Connor’s heart began racing.
    Using smartwatches for measuring children’s heart rhythms is limited by the fact that existing smartwatch algorithms that detect heart problems have not been optimized for kids. Children have faster heartbeats than adults; they also tend to experience different types of abnormal rhythms than do adults who have cardiac arrhythmias.
    The paper showed that the smartwatches appear to help detect arrhythmias in kids, suggesting that it would be useful to design versions of the smartwatch algorithms based on real-world heart rhythm data from children.
    Evaluating medical records
    The researchers searched patients’ electronic medical records from 2018 to 2022 for the phrase “Apple Watch,” then checked to see which patients with this phrase in their records had submitted smartwatch data and received a diagnosis of a cardiac arrhythmia.
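    As a rough illustration of that screening step (assuming the notes had been exported to a table with hypothetical columns patient_id, note_date and note_text; the actual review was done within the hospital record system, with flagged charts checked by hand):
    ```python
    # Minimal sketch of flagging charts that mention the smartwatch.
    import pandas as pd

    notes = pd.read_csv("cardiology_notes_2018_2022.csv", parse_dates=["note_date"])

    # 1. Keep every note that mentions the watch.
    watch_mentions = notes[notes["note_text"].str.contains("Apple Watch", case=False, na=False)]

    # 2. Collapse to unique patients, whose charts are then reviewed to see whether
    #    watch data (an ECG capture or a high-heart-rate alert) preceded a
    #    confirmed arrhythmia diagnosis.
    patients_to_review = watch_mentions["patient_id"].drop_duplicates()
    print(f"{len(watch_mentions)} mentions across {patients_to_review.size} patients")
    ```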

    Data from watches included alerts about patients’ heart rates and patient-initiated electrocardiograms, or ECGs, from an app that uses the electrical sensors in the watch. When patients activate the app, the ECG function records the heart’s electrical signals; physicians can use this pattern of electrical pulses to diagnose different types of heart problems.
    From 145 mentions of the smartwatch in patient records, 41 patients had arrhythmias confirmed. Of these, 18 patients had collected an ECG with their watches, and 23 patients had received a notification from the watch about a high heart rate.
    The information from the smartwatches prompted the children’s physicians to conduct medical workups, from which 29 children received new arrhythmia diagnoses. In 10 patients, the smartwatch detected arrhythmias that traditional monitoring methods never picked up.
    One of those patients was Connor Heinz.
    “At a basketball tryout, he had another episode,” Amy Heinz recalled. “I put the watch on him and emailed a bunch of captures [of his heartbeat] to Dr. Ceresnak.” The information from the watch confirmed Ceresnak’s suspicion that Connor had supraventricular tachycardia.
    Most children with arrhythmias had the same condition as Connor, a pattern of racing heartbeats originating in the heart’s upper chambers.
    “These irregular heartbeats are not life-threatening, but they make kids feel terrible,” Ceresnak said. “They can be a problem and they’re scary, and if wearable devices can help us get to the bottom of what this arrhythmia is, that’s super helpful.”
    In many cases of supraventricular tachycardia, the abnormal heart rhythm is caused by a small short-circuit in the heart’s electrical circuitry. The problem can often be cured by a medical procedure called catheter ablation that destroys a small, precisely targeted region of heart cells causing the short circuit.
    Now 15, Connor has been successfully treated with catheter ablation and is playing basketball for his high school team in Menlo Park, California.
    The study also found smartwatch use noted in the medical records of 73 patients who did not ultimately receive diagnoses of arrhythmias.
    “A lot of kids have palpitations, a feeling of funny heartbeats, but the vast majority don’t have medically significant arrhythmias,” Ceresnak said. “In the future, I think this technology may help us rule out anything serious.”
    A new study
    The Stanford Medicine research team plans to conduct a study to further assess the utility of the Apple Watch for detecting children’s heart problems. The study will measure whether, in kids, heart rate and heart rhythm measurements from the watches match measurements from standard diagnostic devices.
    The study is open only to children who are already cardiology patients at Stanford Medicine Children’s Health.
    “The wearable market is exploding, and our kids are going to use them,” Ceresnak said. “We want to make sure the data we get from these devices is reliable and accurate for children. Down the road, we’d love to help develop pediatric-specific algorithms for monitoring heart rhythm.”
    The study was conducted without external funding. Apple was not involved in the work. Apple’s Investigator Support Program has agreed to donate watches for the next phase of the research.
    Apple’s Irregular Rhythm Notification and ECG app are cleared by the Food and Drug Administration for use by people 22 years of age or older. The high heart rate notification is available only to users 13 years of age or older.

  • Highly resolved precipitation maps based on AI

    Strong precipitation may cause natural disasters, such as flooding or landslides. Global climate models are required to forecast the frequency of these extreme events, which is expected to change as a result of climate change. Researchers at the Karlsruhe Institute of Technology (KIT) have now developed a first AI-based method for increasing the resolution of the coarse precipitation fields generated by global climate models. The researchers succeeded in improving the spatial resolution of precipitation fields from 32 kilometers to two kilometers and the temporal resolution from one hour to ten minutes. This higher resolution is required to better forecast the more frequent occurrence of heavy local precipitation, and the resulting natural disasters, in the future.
    Many natural disasters, such as flooding or landslides, are directly caused by extreme precipitation. Researchers expect that increasing average temperatures will cause extreme precipitation events to increase further. To adapt to a changing climate and prepare for disasters at an early stage, precise local and global data on the current and future water cycle are indispensable. “Precipitation is highly variable in space and time and, hence, difficult to forecast, in particular on the local level,” says Dr. Christian Chwala from the Atmospheric Environmental Research Division of KIT’s Institute of Meteorology and Climate Research (IMK-IFU) at KIT’s Campus Alpine in Garmisch-Partenkirchen. “For this reason, we want to enhance the resolution of precipitation fields generated, for example, by global climate models and improve their classification with regard to possible threats, such as flooding.”
    Higher Resolution for More Precise Regional Climate Models
    Currently used global climate models are based on a grid that is not fine enough to precisely represent the variability of precipitation. Highly resolved precipitation maps can only be produced with computationally expensive and, hence, spatially or temporally limited models. “For this reason, we have developed an AI-based generative neural network, called a GAN, and trained it with high-resolution radar precipitation fields. In this way, the GAN learns how to generate realistic precipitation fields and derive their temporal sequence from coarsely resolved data,” says Luca Glawion from IMK-IFU. “The network is able to generate highly resolved radar precipitation films from very coarsely resolved maps.” These refined radar maps not only show how rain cells develop and move, but also precisely reconstruct local rain statistics and the corresponding extreme value distribution.
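    The following sketch shows the core idea in schematic form: a generator that maps a coarse precipitation field (roughly a 32-kilometer grid) to a field 16 times finer (roughly two kilometers). It is a toy stand-in, not the KIT model; a real setup would add a discriminator, temporal layers for the ten-minute sequences, a noise input for ensembles, and training on radar data.
    ```python
    # Schematic 16x super-resolution generator for precipitation fields (PyTorch).
    import torch
    import torch.nn as nn

    class PrecipGenerator(nn.Module):
        def __init__(self, channels=64):
            super().__init__()
            layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
            # Four x2 upsampling stages give the overall 16x gain in resolution.
            for _ in range(4):
                layers += [nn.Upsample(scale_factor=2, mode="nearest"),
                           nn.Conv2d(channels, channels, 3, padding=1),
                           nn.ReLU()]
            layers += [nn.Conv2d(channels, 1, 3, padding=1), nn.Softplus()]  # rain rates >= 0
            self.net = nn.Sequential(*layers)

        def forward(self, coarse):
            # coarse: (batch, 1, H, W) -> fine: (batch, 1, 16*H, 16*W)
            return self.net(coarse)

    coarse = torch.rand(1, 1, 8, 8)        # one coarse tile
    fine = PrecipGenerator()(coarse)       # shape (1, 1, 128, 128)
    ```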
    “Our method serves as a basis to increase the resolution of coarsely grained precipitation fields, such that the high spatial and temporal variability of precipitation can be reproduced adequately and local effects can be studied,” says Julius Polz from IMK-IFU. “Our deep learning method is quicker by several orders of magnitude than the calculation of such highly resolved precipitation fields with numerical weather models usually applied to regionally refine data of global climate models.” The researchers point out that their method also generates an ensemble of different potential precipitation fields. This is important, as a multitude of physically plausible highly resolved solutions exists for each coarsely resolved precipitation field. Similar to a weather forecast, an ensemble allows for a more precise determination of the associated uncertainty.
    Higher Resolution for Better Forecasts under Climate Change
    The results show that the AI model and methodology developed by the researchers will enable future use of neural networks to improve the spatial and temporal resolution of precipitation calculated by climate models. This will allow for a more precise analysis of the impacts and developments of precipitation in a changing climate.
    “In a next step, we will apply the method to global climate simulations that transfer specific large-scale weather situations to a future world with a changed climate, for example to the year 2100. The higher resolution of precipitation events simulated with our method will allow for a better estimate of the impacts that the weather conditions which caused the flooding of the river Ahr in 2021 would have had in a world warmer by two degrees,” Glawion explains. Such information is of decisive importance for developing climate adaptation methods.

  • Saving endangered species: New AI method counts manatee clusters in real time

    Manatees are an endangered species that is highly sensitive to its environment. Because of their voracious appetites, they often spend up to eight hours a day grazing for food in shallow waters, making them vulnerable to environmental changes and other risks.
    Accurately counting manatee aggregations within a region is not only biologically meaningful in observing their habit, but also crucial for designing safety rules for boaters and divers as well as scheduling nursing, intervention, and other plans. Nevertheless, counting manatees is challenging.
    Because manatees tend to live in herds, they often block each other when viewed from the surface. As a result, small manatees are likely to be partially or completely blocked from view. In addition, water reflections tend to make manatees invisible, and they also can be mistaken for other objects such as rocks and branches.
    While aerial survey data are used in some regions to count manatees, this method is time-consuming and costly, and its accuracy depends on factors such as observer bias, weather conditions and time of day. Moreover, it is crucial to have a low-cost method that provides a real-time count, alerting ecologists to threats early so they can act proactively to protect manatees.
    Artificial intelligence is used in a wide spectrum of fields, and now, researchers from Florida Atlantic University’s College of Engineering and Computer Science have harnessed its powers to help save the beloved manatee. They are among the first to use a deep learning-based crowd counting approach to automatically count the number of manatees in a designated region, using images captured from CCTV cameras, which are readily available, as input.
    This pioneering study, published in Scientific Reports, not only addresses the technical challenges of counting in complex outdoor environments but also offers potential ways to aid endangered species.
    To determine manatee densities and calculate their numbers, researchers used generic images captured from surveillance videos of the water surface. They then used a kernel design matched to the manatees’ shape — an anisotropic Gaussian kernel (AGK) — to transform the images into manatee-customized density maps that represent the animals’ unique body shape.
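    As a rough illustration of the density-map idea (the paper’s exact AGK formulation and parameters may differ), the sketch below turns line labels (one straight line per manatee) into elongated Gaussian blobs oriented along each animal’s body, so that the resulting map sums to the number of animals:
    ```python
    # Illustrative anisotropic-Gaussian density map from line annotations (NumPy).
    import numpy as np

    def line_density_map(shape, lines, sigma_ratio=3.0):
        """shape: (H, W); lines: one ((x1, y1), (x2, y2)) segment per manatee.
        Returns a map whose sum equals the number of annotated manatees."""
        H, W = shape
        yy, xx = np.mgrid[0:H, 0:W].astype(float)
        density = np.zeros(shape)
        for (x1, y1), (x2, y2) in lines:
            cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # kernel centre
            length = max(np.hypot(x2 - x1, y2 - y1), 1.0)
            theta = np.arctan2(y2 - y1, x2 - x1)           # body orientation
            s_major = length / 2.0                         # spread along the body
            s_minor = s_major / sigma_ratio                # spread across the body
            # rotate pixel coordinates into the body-aligned frame
            dx, dy = xx - cx, yy - cy
            u = dx * np.cos(theta) + dy * np.sin(theta)
            v = -dx * np.sin(theta) + dy * np.cos(theta)
            kernel = np.exp(-0.5 * ((u / s_major) ** 2 + (v / s_minor) ** 2))
            density += kernel / kernel.sum()               # each manatee contributes 1
        return density

    dmap = line_density_map((256, 256), [((40, 60), (120, 80)), ((150, 200), (220, 190))])
    print(round(dmap.sum(), 2))  # ~2.0, the ground-truth count
    ```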

    Although many counting methods exist, most are applied to crowds to count the number of people, owing to the relevance of crowd counting to important applications such as urban planning and public safety.
    To save labeling costs, the researchers used line-based annotation, marking each manatee with a single straight line. The goal of the study was to learn to count the number of objects within a scene using labels that are cheap to obtain.
    Results of the study reveal that the FAU-developed method outperformed other baselines, including the traditional Gaussian kernel-based approach. Transitioning from dot to line labeling also improved the accuracy of wheat head counting, which plays an important role in crop yield estimation, suggesting broader applications for convex-shaped objects in diverse contexts. This approach worked particularly well when the image had a high density of manatees against a complicated background.
    By framing manatee counting as a density estimation learning task for a deep neural network, this approach balanced labeling costs against counting efficiency. As a result, the method delivers a simple, high-throughput solution for manatee counting that requires very little labeling effort. A direct impact is that state parks can leverage this method to understand the number of manatees in different regions, in real time, using their existing CCTV cameras.
    “There are many ways to use computational methods to help save endangered species, such as detecting the presence of the species and counting them to collect information about numbers and density,” said Xingquan (Hill) Zhu, Ph.D., senior author, an IEEE Fellow and a professor in FAU’s Department of Electrical Engineering and Computer Science. “Our method considered distortions caused by the perspective between the water space and the image plane. Since the shape of the manatee is closer to an ellipse than a circle, we used AGK to best represent the manatee contour and estimate manatee density in the scene. This allows the density map to be more accurate, in terms of mean absolute error and root mean square error, than other alternatives in estimating manatees’ numbers.”
    To validate their method and facilitate further research in this domain, the researchers developed a comprehensive manatee counting dataset, along with their source code, published through GitHub for public access at github.com/yeyimilk/deep-learning-for-manatee-counting.

    “Manatees are one of the wildlife species being affected by human-related threats. Therefore, calculating their numbers and gathering patterns in real time is vital for understanding their population dynamics,” said Stella Batalama, Ph.D., dean, FAU College of Engineering and Computer Science. “The methodology developed by professor Zhu and our graduate students provides a promising trajectory for broader applications, especially for convex-shaped objects, to improve counting techniques that may foretell better ecological results from management decisions.”
    Manatees can be found from Brazil to Florida and all the way around the Caribbean islands. Some populations, including the Florida manatee, are considered endangered by the International Union for Conservation of Nature.
    Study co-authors are FAU graduate students Zhiqiang Wang; Yiran Pang; and Cihan Ulus, also a teaching assistant, all within the Department of Electrical Engineering and Computer Science.
    The research was sponsored by the United States National Science Foundation.

  • Can AI be too good to use?

    Much of the discussion around implementing artificial intelligence systems focuses on whether an AI application is “trustworthy”: Does it produce useful, reliable results, free of bias, while ensuring data privacy? But a new paper published Dec. 7 in Frontiers in Artificial Intelligence poses a different question: What if an AI is just too good?
    Carrie Alexander, a postdoctoral researcher at the AI Institute for Next Generation Food Systems, or AIFS, at the University of California, Davis, interviewed a wide range of food industry stakeholders, including business leaders and academic and legal experts, on the attitudes of the food industry toward adopting AI. A notable issue was whether gaining extensive new knowledge about their operations might inadvertently create new liability risks and other costs.
    For example, an AI system in a food business might reveal potential contamination with pathogens. Having that information could be a public benefit but also open the firm to future legal liability, even if the risk is very small.
    “The technology most likely to benefit society as a whole may be the least likely to be adopted, unless new legal and economic structures are adopted,” Alexander said.
    An on-ramp for AI
    Alexander and co-authors Professor Aaron Smith of the UC Davis Department of Agricultural and Resource Economics and Professor Renata Ivanek of Cornell University, argue for a temporary “on-ramp” that would allow companies to begin using AI, while exploring the benefits, risks and ways to mitigate them. This would also give the courts, legislators and government agencies time to catch up and consider how best to use the information generated by AI systems in legal, political and regulatory decisions.
    “We need ways for businesses to opt in and try out AI technology,” Alexander said. Subsidies, for example for digitizing existing records, might be helpful especially for small companies.
    “We’re really hoping to generate more research and discussion on what could be a significant issue,” Alexander said. “It’s going to take all of us to figure it out.”
    The work was supported in part by a grant from the USDA National Institute of Food and Agriculture. The AI Institute for Next Generation Food Systems is funded by a grant from USDA-NIFA and is one of 25 AI institutes established by the National Science Foundation in partnership with other agencies.

  • Artificial intelligence systems excel at imitation, but not innovation

    Artificial intelligence (AI) systems are often depicted as sentient agents poised to overshadow the human mind. But AI lacks the crucial human ability of innovation, researchers at the University of California, Berkeley have found.
    While children and adults alike can solve problems by finding novel uses for everyday objects, AI systems often lack the ability to view tools in a new way, according to findings published in Perspectives on Psychological Science, a journal of the Association for Psychological Science.
    AI language models like ChatGPT are passively trained on data sets containing billions of words and images produced by humans. This allows AI systems to function as a “cultural technology” similar to writing that can summarize existing knowledge, Eunice Yiu, a co-author of the article, explained in an interview. But unlike humans, they struggle when it comes to innovating on these ideas, she said.
    “Even young human children can produce intelligent responses to certain questions that [language learning models] cannot,” Yiu said. “Instead of viewing these AI systems as intelligent agents like ourselves, we can think of them as a new form of library or search engine. They effectively summarize and communicate the existing culture and knowledge base to us.”
    Yiu and Eliza Kosoy, along with their doctoral advisor and senior author on the paper, developmental psychologist Alison Gopnik, tested how the AI systems’ ability to imitate and innovate differs from that of children and adults. They presented 42 children ages 3 to 7 and 30 adults with text descriptions of everyday objects. In the first part of the experiment, 88% of children and 84% of adults were able to correctly identify which objects would “go best” with one another. For example, they paired a compass with a ruler instead of a teapot.
    In the next stage of the experiment, 85% of children and 95% of adults were also able to innovate on the expected use of everyday objects to solve problems. In one task, for example, participants were asked how they could draw a circle without using a typical tool such as a compass. Given the choice between a similar tool like a ruler, a dissimilar tool such as a teapot with a round bottom, and an irrelevant tool such as a stove, the majority of participants chose the teapot, a conceptually dissimilar tool that could nonetheless fulfill the same function as the compass by allowing them to trace the shape of a circle.
    When Yiu and colleagues provided the same text descriptions to five large language models, the models performed similarly to humans on the imitation task, with scores ranging from 59% for the worst-performing model to 83% for the best-performing model. The AIs’ answers to the innovation task were far less accurate, however. Effective tools were selected anywhere from 8% of the time by the worst-performing model to 75% by the best-performing model.
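    A small sketch of how such multiple-choice answers might be scored (illustrative only; ask_model is a hypothetical stand-in for querying any particular language model, and the item format is assumed):
    ```python
    # Score a model's tool choices on imitation- or innovation-style items.
    from typing import Callable, Dict, List

    def choice_accuracy(items: List[Dict], ask_model: Callable[[str], str]) -> float:
        """items: [{'question': ..., 'options': [...], 'answer': 'teapot'}, ...]"""
        correct = 0
        for item in items:
            prompt = (f"{item['question']}\nOptions: {', '.join(item['options'])}\n"
                      "Reply with the single best option.")
            reply = ask_model(prompt).strip().lower()
            correct += int(item["answer"].lower() in reply)
        return correct / len(items)

    # An innovation-style item might ask how to draw a circle without a compass,
    # with options ["ruler", "teapot", "stove"] and answer "teapot".
    ```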

    “Children can imagine completely novel uses for objects that they have not witnessed or heard of before, such as using the bottom of a teapot to draw a circle,” Yiu said. “Large models have a much harder time generating such responses.”
    In a related experiment, the researchers noted, children were able to discover how a new machine worked just by experimenting and exploring. But when the researchers gave several large language models text descriptions of the evidence that the children produced, they struggled to make the same inferences, likely because the answers were not explicitly included in their training data, Yiu and colleagues wrote.
    These experiments demonstrate that AI’s reliance on statistically predicting linguistic patterns is not enough to discover new information about the world, Yiu and colleagues wrote.
    “AI can help transmit information that is already known, but it is not an innovator,” Yiu said. “These models can summarize conventional wisdom but they cannot expand, create, change, abandon, evaluate, and improve on conventional wisdom in the way a young human can.” The development of AI is still in its early days, though, and much remains to be learned about how to expand the learning capacity of AI, Yiu said. Taking inspiration from children’s curious, active, and intrinsically motivated approach to learning could help researchers design new AI systems that are better prepared to explore the real world, she said.

  • Made-to-order diagnostic tests may be on the horizon

    McGill University researchers have made a breakthrough in diagnostic technology, inventing a ‘lab on a chip’ that can be 3D-printed in just 30 minutes. The chip has the potential to make on-the-spot testing widely accessible.
    As part of a recent study, the results of which were published in the journal Advanced Materials, the McGill team developed capillaric chips that act as miniature laboratories. Unlike computer microprocessors, these chips are single-use and require no external power source — a simple paper strip suffices. They function through capillary action — the very phenomenon by which a spilled liquid on the kitchen table spontaneously wicks into the paper towel used to wipe it up.
    “Traditional diagnostics require peripherals, while ours can circumvent them. Our diagnostics are a bit like what the cell phone was to traditional desktop computers that required a separate monitor, keyboard and power supply to operate,” explains Prof. David Juncker, Chair of the Department of Biomedical Engineering at McGill and senior author on the study.
    At-home testing became crucial during the COVID-19 pandemic. But rapid tests have limited availability and can only drive one liquid across the strip, meaning most diagnostics are still done in central labs. Notably, the capillaric chips can be 3D-printed for various tests, including COVID-19 antibody quantification.
    The study brings 3D-printed home diagnostics one step closer to reality, though some challenges remain, such as regulatory approvals and securing necessary test materials. The team is actively working to make their technology more accessible, adapting it for use with affordable 3D printers. The innovation aims to speed up diagnoses, enhance patient care, and usher in a new era of accessible testing.
    “This advancement has the capacity to empower individuals, researchers, and industries to explore new possibilities and applications in a more cost-effective and user-friendly manner,” says Prof. Juncker. “This innovation also holds the potential to eventually empower health professionals with the ability to rapidly create tailored solutions for specific needs right at the point-of-care.”

  • New conductive, cotton-based fiber developed for smart textiles

    A single strand of fiber developed at Washington State University has the flexibility of cotton and the electrical conductivity of a polymer called polyaniline.
    The newly developed material showed good potential for wearable e-textiles. The WSU researchers tested the fibers with a system that powered an LED light and another that sensed ammonia gas, detailing their findings in the journal Carbohydrate Polymers.
    “We have one fiber in two sections: one section is the conventional cotton: flexible and strong enough for everyday use, and the other side is the conductive material,” said Hang Liu, WSU textile researcher and the study’s corresponding author. “The cotton can support the conductive material which can provide the needed function.”
    While more development is needed, the idea is to integrate fibers like these into apparel as sensor patches with flexible circuits. These patches could be part of uniforms for firefighters, soldiers or workers who handle chemicals, to detect hazardous exposures. Other applications include health monitoring or exercise shirts that can do more than current fitness monitors.
    “We have some smart wearables, like smart watches, that can track your movement and human vital signs, but we hope that in the future your everyday clothing can do these functions as well,” said Liu. “Fashion is not just color and style, as a lot of people think about it: fashion is science.”
    In this study, the WSU team worked to overcome the challenges of mixing the conductive polymer with cotton cellulose. Polymers are substances with very large molecules that have repeating patterns. In this case, the researchers used polyaniline, also known as PANI, a synthetic polymer with conductive properties already used in applications such as printed circuit board manufacturing.
    While intrinsically conductive, polyaniline is brittle and, by itself, cannot be made into a fiber for textiles. To solve this, the WSU researchers dissolved cotton cellulose from recycled T-shirts into one solution and the conductive polymer into another. These two solutions were then merged side by side, and the material was extruded to make a single fiber.

    The result showed good interfacial bonding, meaning the molecules from the different materials would stay together through stretching and bending.
    Achieving the right mixture at the interface of cotton cellulose and polyaniline was a delicate balance, Liu said.
    “We wanted these two solutions to work so that when the cotton and the conductive polymer contact each other they mix to a certain degree to kind of glue together, but we didn’t want them to mix too much, otherwise the conductivity would be reduced,” she said.
    Additional WSU authors on this study included first author Wangcheng Liu as well as Zihui Zhao, Dan Liang, Wei-Hong Zhong and Jinwen Zhang. This research received support from the National Science Foundation and the Walmart Foundation Project.