More stories


    Largest-ever antibiotic discovery effort uses AI to uncover potential cures in microbial dark matter

    Almost a century ago, the discovery of antibiotics like penicillin revolutionized medicine by harnessing the natural bacteria-killing abilities of microbes. Today, a new study co-led by researchers at the Perelman School of Medicine at the University of Pennsylvania suggests that natural-product antibiotic discovery is about to accelerate into a new era, powered by artificial intelligence (AI).
    The study, published in Cell, details how the researchers used a form of AI called machine learning to search for antibiotics in a vast dataset containing the recorded genomes of tens of thousands of bacteria and other primitive organisms. This unprecedented effort yielded nearly one million potential antibiotic compounds, with dozens showing promising activity in initial tests against disease-causing bacteria.
    “AI in antibiotic discovery is now a reality and has significantly accelerated our ability to discover new candidate drugs. What once took years can now be achieved in hours using computers,” said study co-senior author César de la Fuente, PhD, a Presidential Assistant Professor in Psychiatry, Microbiology, Chemistry, Chemical and Biomolecular Engineering, and Bioengineering.
    Nature has always been a good place to look for new medicines, especially antibiotics. Bacteria, ubiquitous on our planet, have evolved numerous antibacterial defenses, often in the form of short proteins (“peptides”) that can disrupt bacterial cell membranes and other critical structures. While the discovery of penicillin and other natural-product-derived antibiotics revolutionized medicine, the growing threat of antibiotic resistance has underscored the urgent need for new antimicrobial compounds.
    In recent years, de la Fuente and colleagues have pioneered AI-powered searches for antimicrobials. They have identified preclinical candidates in the genomes of contemporary humans, extinct Neanderthals and Denisovans, woolly mammoths, and hundreds of other organisms. One of the lab’s primary goals is to mine the world’s biological information for useful molecules, including antibiotics.
    For this new study, the research team used a machine learning platform to sift through multiple public databases containing microbial genomic data. The analysis covered 87,920 genomes from specific microbes as well as 63,410 mixes of microbial genomes — “metagenomes” — from environmental samples. This comprehensive exploration spanned diverse habitats around the planet.
    This extensive exploration succeeded in identifying 863,498 candidate antimicrobial peptides, more than 90 percent of which had never been described before. To validate these findings, the researchers synthesized 100 of these peptides and tested them against 11 disease-causing bacterial strains, including antibiotic-resistant strains of E. coli and Staphylococcus aureus.
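    The paper’s actual models aren’t reproduced in this summary, but the general shape of the approach — turning candidate peptide sequences into numeric features and ranking them with a trained classifier — can be sketched briefly. Everything below (the features, toy training data, and classifier choice) is an illustrative assumption, not the study’s pipeline:

    ```python
    # Hypothetical sketch of peptide scoring: featurize candidate peptides by
    # amino-acid composition and rank them with a trained classifier.
    # The study's actual models and features are not reproduced here.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def composition_features(peptide: str) -> np.ndarray:
        """Fraction of each of the 20 standard amino acids in the peptide."""
        counts = np.array([peptide.count(aa) for aa in AMINO_ACIDS], dtype=float)
        return counts / max(len(peptide), 1)

    # Toy training data: labeled peptides (1 = antimicrobial, 0 = not).
    train_peptides = ["KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK",  # AMP-like
                      "GIGKFLHSAKKFGKAFVGEIMNS",                 # AMP-like
                      "MSTNPKPQRKTKRNTNRRPQDVK",                 # housekeeping-like
                      "MAEGEITTFTALTEKFNLPPGNYKKPKLLYCSNG"]      # housekeeping-like
    train_labels = [1, 1, 0, 0]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit([composition_features(p) for p in train_peptides], train_labels)

    # Score new candidates mined from (meta)genomes; keep the high scorers.
    candidates = ["FLPIIAKLLSGLL", "MADEEKLPPGWEKRMS"]
    scores = clf.predict_proba([composition_features(p) for p in candidates])[:, 1]
    for pep, s in zip(candidates, scores):
        print(f"{pep}\tP(antimicrobial) = {s:.2f}")
    ```

    The real pipeline operated at the scale of tens of thousands of genomes and produced nearly a million candidates; the sketch shows only the scoring step.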

    “Our initial screening revealed that 63 of these 100 candidates completely eradicated the growth of at least one of the pathogens tested, and often multiple strains,” de la Fuente said. “In some cases, these molecules were effective against bacteria at very low doses.”
    Promising results were also observed in preclinical animal models, where some of the potent compounds successfully stopped infections. Further analysis suggested that many of these candidate molecules destroy bacteria by disrupting their outer protective membranes, effectively popping them like balloons.
    The identified compounds originated from microbes living in a wide variety of habitats, including human saliva, pig guts, soil and plants, corals, and many other terrestrial and marine organisms. This validates the researchers’ broad approach to exploring the world’s biological data.
    Overall, the findings demonstrate the power of AI in discovering new antibiotics, providing multiple new leads for antibiotic developers, and signaling the start of a promising new era in antibiotic discovery.
    The team has published its repository of putative antimicrobial sequences, AMPSphere, which is open access and freely available at https://ampsphere.big-data-biology.org/.


    Accelerating the R&D of wearable tech: Combining collaborative robotics, AI

    Engineers at the University of Maryland (UMD) have developed a model that combines machine learning and collaborative robotics to overcome challenges in the design of materials used in wearable green tech.
    Led by Po-Yen Chen, assistant professor in UMD’s Department of Chemical and Biomolecular Engineering, the accelerated method to create aerogel materials used in wearable heating applications — published June 1 in the journal Nature Communications — could automate design processes for new materials.
    Similar to water-based gels but made with air instead of liquid, aerogels are lightweight, porous materials used in thermal insulation and wearable technologies thanks to their mechanical strength and flexibility. But despite their seemingly simple nature, the aerogel assembly line is complex: researchers rely on time-intensive experiments and experience-based approaches to explore a vast design space and design the materials.
    To overcome these challenges, the research team combined robotics, machine learning algorithms, and materials science expertise to enable the accelerated design of aerogels with programmable mechanical and electrical properties. Their prediction model generates designs for these sustainable products with a 95 percent accuracy rate.
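    The announcement doesn’t detail the workflow’s internals, but the basic loop it describes — fit a model on robot-collected recipe/property data, then use predictions to pick the next formulation to try — might look roughly like this sketch, in which the features, values, and model choice are all hypothetical:

    ```python
    # Hypothetical sketch: learn a map from aerogel recipe parameters to a
    # measured property, then screen new candidate recipes. Feature names and
    # values are illustrative, not the UMD team's dataset.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Each row: [nanosheet fraction, cellulose fraction, gelatin fraction, freeze temp C]
    X = np.array([[0.30, 0.50, 0.20, -20],
                  [0.40, 0.40, 0.20, -40],
                  [0.20, 0.60, 0.20, -20],
                  [0.50, 0.30, 0.20, -60]])
    y = np.array([12.0, 25.0, 6.0, 41.0])   # toy conductivity measurements (S/m)

    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # Screen candidate recipes and pick the most promising for the robot to
    # synthesize next (a simple active-learning-style loop).
    candidates = np.array([[0.35, 0.45, 0.20, -40],
                           [0.45, 0.35, 0.20, -50]])
    preds = model.predict(candidates)
    print("Predicted conductivities:", preds)
    print("Next recipe to test:", candidates[np.argmax(preds)])
    ```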
    “Materials science engineers often struggle to adopt machine learning design due to the scarcity of high-quality experimental data. Our workflow, which combines robotics and machine learning, not only enhances data quality and collection rates, but also assists researchers in navigating the complex design space,” said Chen.
    The team’s strong and flexible aerogels were made using conductive titanium nanosheets, as well as naturally occurring components such as cellulose (an organic compound found in plant cells) and gelatin (a collagen-derived protein found in animal tissue and bones).
    The team says their tool can also be expanded to meet other applications in aerogel design — such as green technologies used in oil spill cleanup, sustainable energy storage, and thermal energy products like insulating windows.
    “The blending of these approaches is putting us at the frontier of materials design with tailorable complex properties. We foresee leveraging this new scaleup production platform to design aerogels with unique mechanical, thermal, and electrical properties for harsh working environments,” said Eleonora Tubaldi, an assistant professor in mechanical engineering and collaborator in the study.
    Looking ahead, Chen’s group will conduct studies to understand the microstructures responsible for aerogel flexibility and strength. His work has been supported by a UMD Grand Challenges Team Project Grant for the programmable design of natural plastic substitutes, awarded jointly with UMD Mechanical Engineering Professor Teng Li.


    Internet addiction affects the behavior and development of adolescents

    Adolescents with an internet addiction undergo changes in the brain that could lead to additional addictive behaviour and tendencies, finds a new study by UCL researchers.
    The study, published in PLOS Mental Health, reviewed 12 articles published between 2013 and 2023, involving 237 young people aged 10-19 with a formal diagnosis of internet addiction.
    Internet addiction has been defined as a person’s inability to resist the urge to use the internet, negatively impacting their psychological wellbeing, as well as their social, academic and professional lives.
    The studies used functional magnetic resonance imaging (fMRI) to inspect the functional connectivity (how regions of the brain interact with each other) of participants with internet addiction, both while resting and completing a task.
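    For readers unfamiliar with the measure, functional connectivity is commonly quantified as the correlation between the fMRI time series of brain regions; a bare-bones version of that computation, with synthetic data standing in for real scans, looks like this:

    ```python
    # Minimal sketch of resting-state functional connectivity: correlate the
    # BOLD time series of brain regions of interest (ROIs). Synthetic data here.
    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_rois = 200, 6
    ts = rng.standard_normal((n_timepoints, n_rois))   # stand-in for fMRI ROI signals
    ts[:, 1] += 0.8 * ts[:, 0]                         # make two ROIs co-fluctuate

    # Pearson correlation between every pair of ROIs = functional connectivity matrix.
    fc = np.corrcoef(ts, rowvar=False)
    print(np.round(fc, 2))   # entry (i, j): connectivity between ROI i and ROI j
    ```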
    The effects of internet addiction were seen throughout multiple neural networks in the brains of adolescents. There was a mixture of increased and decreased activity in the parts of the brain that are activated when resting (the default mode network).
    Meanwhile, there was an overall decrease in the functional connectivity in the parts of the brain involved in active thinking (the executive control network).
    These changes were found to lead to addictive behaviours and tendencies in adolescents, as well as behaviour changes associated with intellectual ability, physical coordination, mental health and development.

    Lead author, MSc student, Max Chang (UCL Great Ormond Street Institute for Child Health) said: “Adolescence is a crucial developmental stage during which people go through significant changes in their biology, cognition, and personalities. As a result, the brain is particularly vulnerable to internet addiction related urges during this time, such as compulsive internet usage, cravings towards usage of the mouse or keyboard and consuming media.
    “The findings from our study show that this can lead to potentially negative behavioural and developmental changes that could impact the lives of adolescents. For example, they may struggle to maintain relationships and social activities, lie about online activity and experience irregular eating and disrupted sleep.”
    With smartphones and laptops being ever more accessible, internet addiction is a growing problem across the globe. Previous research has shown that people in the UK spend over 24 hours every week online and, of those surveyed, more than half self-reported being addicted to the internet.
    Meanwhile, Ofcom found that of the 50 million internet users in the UK, over 60% said their internet usage had a negative effect on their lives — such as being late or neglecting chores.
    Senior author, Irene Lee (UCL Great Ormond Street Institute of Child Health), said: “There is no doubt that the internet has certain advantages. However, when it begins to affect our day-to-day lives, it is a problem.
    “We would advise that young people enforce sensible time limits for their daily internet usage and ensure that they are aware of the psychological and social implications of spending too much time online.”
    Mr Chang added: “We hope our findings will demonstrate how internet addiction alters the connection between the brain networks in adolescence, allowing physicians to screen and treat the onset of internet addiction more effectively.

    “Clinicians could potentially prescribe treatment to aim at certain brain regions or suggest psychotherapy or family therapy targeting key symptoms of internet addiction.
    “Importantly, parental education on internet addiction is another possible avenue of prevention from a public health standpoint. Parents who are aware of the early signs and onset of internet addiction will more effectively handle screen time, impulsivity, and minimise the risk factors surrounding internet addiction.”
    Study limitations
    Research into the use of fMRI scans to investigate internet addiction is currently limited, and the existing studies had small adolescent samples drawn primarily from Asian countries. Future studies should compare results from Western samples to provide more insight into therapeutic interventions.


    Using AI to decode dog vocalizations

    Have you ever wished you could understand what your dog is trying to say to you? University of Michigan researchers are exploring the possibilities of AI, developing tools that can identify whether a dog’s bark conveys playfulness or aggression.
    The same models can also glean other information from animal vocalizations, such as the animal’s age, breed and sex. A collaboration with Mexico’s National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, the study finds that AI models originally trained on human speech can be used as a starting point to train new systems that target animal communication.
    The results were presented at the Joint International Conference on Computational Linguistics, Language Resources and Evaluation.
    “By using speech processing models initially trained on human speech, our research opens a new window into how we can leverage what we built so far in speech processing to start understanding the nuances of dog barks,” said Rada Mihalcea, the Janice M. Jenkins Collegiate Professor of Computer Science and Engineering, and director of U-M’s AI Laboratory.
    “There is so much we don’t yet know about the animals that share this world with us. Advances in AI can be used to revolutionize our understanding of animal communication, and our findings suggest that we may not have to start from scratch.”
    One of the prevailing obstacles to developing AI models that can analyze animal vocalizations is the lack of publicly available data. While there are numerous resources and opportunities for recording human speech, collecting such data from animals is more difficult.
    “Animal vocalizations are logistically much harder to solicit and record,” said Artem Abzaliev, lead author and U-M doctoral student in computer science and engineering. “They must be passively recorded in the wild or, in the case of domestic pets, with the permission of owners.”
    Because of this dearth of usable data, techniques for analyzing dog vocalizations have proven difficult to develop, and the ones that do exist are limited by a lack of training material. The researchers overcame these challenges by repurposing an existing model that was originally designed to analyze human speech.

    This approach enabled the researchers to tap into robust models that form the backbone of the various voice-enabled technologies we use today, including voice-to-text and language translation. These models are trained to distinguish nuances in human speech, like tone, pitch and accent, and convert this information into a format that a computer can use to identify what words are being said, recognize the individual speaking, and more.
    “These models are able to learn and encode the incredibly complex patterns of human language and speech,” Abzaliev said. “We wanted to see if we could leverage this ability to discern and interpret dog barks.”
    The researchers used a dataset of dog vocalizations recorded from 74 dogs of varying breed, age and sex, in a variety of contexts. Humberto Pérez-Espinosa, a collaborator at INAOE, led the team who collected the dataset. Abzaliev then used the recordings to modify a machine-learning model — a type of computer algorithm that identifies patterns in large data sets. The team chose a speech representation model called Wav2Vec2, which was originally trained on human speech data.
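    The summary doesn’t give the training recipe, but repurposing Wav2Vec2 for bark classification would follow the standard audio-classification pattern in the Hugging Face transformers library, roughly as below; the label set, audio, and checkpoint usage are placeholders rather than the study’s exact setup:

    ```python
    # Rough sketch: adapt a pretrained Wav2Vec2 speech model to classify dog
    # vocalizations. Labels and data are placeholders; the study's exact
    # training setup may differ.
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

    labels = ["play", "aggression", "fear", "alert"]   # hypothetical label set
    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    model = Wav2Vec2ForSequenceClassification.from_pretrained(
        "facebook/wav2vec2-base", num_labels=len(labels))

    # One 2-second clip of synthetic audio at 16 kHz stands in for a recorded bark.
    waveform = torch.randn(32000)
    inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits
    print("Predicted class:", labels[int(logits.argmax(dim=-1))])
    # Fine-tuning would proceed by minimizing cross-entropy on labeled clips,
    # e.g., with the Trainer API or a standard PyTorch loop.
    ```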
    With this model, the researchers were able to generate representations of the acoustic data collected from the dogs and interpret these representations. They found that Wav2Vec2 not only succeeded at four classification tasks, but also outperformed other models trained specifically on dog bark data, with accuracy figures of up to 70%.
    “This is the first time that techniques optimized for human speech have been built upon to help with the decoding of animal communication,” Mihalcea said. “Our results show that the sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations.”
    In addition to establishing human speech models as a useful tool in analyzing animal communication — which could benefit biologists, animal behaviorists and more — this research has important implications for animal welfare. Understanding the nuances of dog vocalizations could greatly improve how humans interpret and respond to the emotional and physical needs of dogs, thereby enhancing their care and preventing potentially dangerous situations, the researchers said.


    New model allows a computer to understand human emotions

    Researchers at the University of Jyväskylä, Finland, have developed a model that enables computers to interpret and understand human emotions, utilizing principles of mathematical psychology. This advancement could significantly improve the interface between humans and smart technologies, including artificial intelligence systems, making them more intuitive and responsive to user feelings.
    According to Jussi Jokinen, Associate Professor of Cognitive Science, the model could be used by a computer in the future to predict, for example, when a user will become annoyed or anxious. In such situations, the computer could, for example, give the user additional instructions or redirect the interaction.
    In everyday interactions with computers, users commonly experience emotions such as joy, irritation, and boredom. Despite the growing prevalence of artificial intelligence, current technologies often fail to acknowledge these user emotions.
    The model developed in Jyväskylä can currently predict whether the user is feeling happiness, boredom, irritation, rage, despair or anxiety.
    “Humans naturally interpret and react to each other’s emotions, a capability that machines fundamentally lack,” Jokinen explains. “This discrepancy can make interactions with computers frustrating, especially if the machine remains oblivious to the user’s emotional state.”
    The research project led by Jokinen uses mathematical psychology to find solutions to the problem of misalignment between intelligent computer systems and their users.
    “Our model can be integrated into AI systems, granting them the ability to psychologically understand emotions and thus better relate to their users,” Jokinen says.

    The research is based on emotion theory — the next step is to influence the user’s emotions
    The research is anchored in a theory postulating that emotions are generated when human cognition evaluates events from various perspectives.
    Jokinen elaborates: “Consider a computer error during a critical task. This event is assessed by the user’s cognition as being counterproductive. An inexperienced user might react with anxiety and fear due to uncertainty on how to resolve the error, whereas an experienced user might feel irritation, annoyed at having to waste time resolving the issue. Our model predicts the user’s emotional response by simulating this cognitive evaluation process.”
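    The article doesn’t include the model’s mathematics, but the appraisal process Jokinen describes — scoring an event on dimensions such as how helpful it is and how well the user can cope, then mapping the scores to an emotion — can be caricatured in a few lines. The dimensions, thresholds, and labels below are purely illustrative:

    ```python
    # Purely illustrative caricature of appraisal-based emotion prediction:
    # score an event on a few appraisal dimensions, then map to an emotion label.
    # The Jyväskylä model's actual variables and mathematics are not shown here.

    def predict_emotion(goal_conducive: float, coping_ability: float) -> str:
        """Both inputs in [0, 1]: how helpful the event is to the user's goal,
        and how well the user believes they can handle it."""
        if goal_conducive > 0.5:
            return "happiness"
        if coping_ability > 0.7:
            return "irritation"   # setback, but the user knows how to fix it
        if coping_ability > 0.3:
            return "anxiety"      # setback with uncertain recovery
        return "despair"          # setback the user feels unable to resolve

    # The computer-error example from the text: a counterproductive event,
    # as appraised by an experienced user versus a novice.
    print(predict_emotion(goal_conducive=0.1, coping_ability=0.9))  # irritation
    print(predict_emotion(goal_conducive=0.1, coping_ability=0.4))  # anxiety
    ```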
    The next phase of this project will explore potential applications of this emotional understanding.
    “With our model, a computer could preemptively predict user distress and attempt to mitigate negative emotions,” Jokinen suggests.
    “This proactive approach could be utilized in various settings, from office environments to social media platforms, improving user experience by sensitively managing emotional dynamics.”
    The implications of such technology are profound, offering a glimpse into a future where computers are not merely tools, but empathetic partners in user interaction.


    New open-source platform allows users to evaluate performance of AI-powered chatbots

    Researchers have developed a platform for the interactive evaluation of AI-powered chatbots such as ChatGPT.
    A team of computer scientists, engineers, mathematicians and cognitive scientists, led by the University of Cambridge, developed an open-source evaluation platform called CheckMate, which allows human users to interact with and evaluate the performance of large language models (LLMs).
    The researchers tested CheckMate in an experiment where human participants used three LLMs — InstructGPT, ChatGPT and GPT-4 — as assistants for solving undergraduate-level mathematics problems.
    The team studied how well LLMs can assist participants in solving problems. Despite a generally positive correlation between a chatbot’s correctness and perceived helpfulness, the researchers also found instances where the LLMs were incorrect, but still useful for the participants. However, certain incorrect LLM outputs were thought to be correct by participants. This was most notable in LLMs optimised for chat.
    The researchers suggest that models which communicate uncertainty, respond well to user corrections, and provide a concise rationale for their recommendations make better assistants. Given these current shortcomings, human users of LLMs should verify their outputs carefully.
    The results, reported in the Proceedings of the National Academy of Sciences (PNAS), could be useful both in informing AI literacy training and in helping developers improve LLMs for a wider range of uses.
    While LLMs are becoming increasingly powerful, they can also make mistakes and provide incorrect information, which could have negative consequences as these systems become more integrated into our everyday lives.

    “LLMs have become wildly popular, and evaluating their performance in a quantitative way is important, but we also need to evaluate how well these systems work with and can support people,” said co-first author Albert Jiang, from Cambridge’s Department of Computer Science and Technology. “We don’t yet have comprehensive ways of evaluating an LLM’s performance when interacting with humans.”
    The standard way to evaluate LLMs relies on static pairs of inputs and outputs, which disregards the interactive nature of chatbots, and how that changes their usefulness in different scenarios. The researchers developed CheckMate to help answer these questions, designed for but not limited to applications in mathematics.
    “When talking to mathematicians about LLMs, many of them fall into one of two main camps: either they think that LLMs can produce complex mathematical proofs on their own, or that LLMs are incapable of simple arithmetic,” said co-first author Katie Collins from the Department of Engineering. “Of course, the truth is probably somewhere in between, but we wanted to find a way of evaluating which tasks LLMs are suitable for and which they aren’t.”
    The researchers recruited 25 mathematicians, from undergraduate students to senior professors, to interact with three different LLMs (InstructGPT, ChatGPT, and GPT-4) and evaluate their performance using CheckMate. Participants worked through undergraduate-level mathematical theorems with the assistance of an LLM and were asked to rate each individual LLM response for correctness and helpfulness. Participants did not know which LLM they were interacting with.
    The researchers recorded the sorts of questions asked by participants, how participants reacted when they were presented with a fully or partially incorrect answer, whether and how they attempted to correct the LLM, or if they asked for clarification. Participants had varying levels of experience with writing effective prompts for LLMs, and this often affected the quality of responses that the LLMs provided.
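    CheckMate itself is open source; purely to illustrate the kind of record an interactive evaluation collects, a stripped-down harness might log each rated turn like this (the schema, rating scales, and backend call are assumptions, not CheckMate’s actual format):

    ```python
    # Stripped-down illustration of interactive LLM evaluation logging, in the
    # spirit of CheckMate. The schema and rating scales here are assumptions,
    # not CheckMate's actual data format.
    import json, time

    def query_llm(prompt: str) -> str:
        """Placeholder for a call to an anonymized LLM backend."""
        return "To prove the theorem, first recall the definition of ..."

    prompt = "What is the definition of a normal subgroup?"
    response = query_llm(prompt)
    print(response)

    # After seeing the response, the participant rates it on two axes.
    record = {
        "timestamp": time.time(),
        "model_id": "model_A",            # hidden from the participant
        "prompt": prompt,
        "response": response,
        "correctness": 5,                 # e.g., on a 0-5 scale
        "helpfulness": 4,                 # rated independently of correctness
    }
    with open("interaction_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    ```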
    An example of an effective prompt is “What is the definition of X?” (where X is a concept in the problem), as chatbots can be very good at retrieving concepts they know and explaining them to the user.

    “One of the things we found is the surprising fallibility of these models,” said Collins. “Sometimes, these LLMs will be really good at higher-level mathematics, and then they’ll fail at something far simpler. It shows that it’s vital to think carefully about how to use LLMs effectively and appropriately.”
    However, like the LLMs, the human participants also made mistakes. The researchers asked participants to rate how confident they were in their own ability to solve the problem they were using the LLM for. In cases where the participant was less confident in their own abilities, they were more likely to rate incorrect generations by the LLM as correct.
    “This kind of gets to a big challenge of evaluating LLMs, because they’re getting so good at generating nice, seemingly correct natural language, that it’s easy to be fooled by their responses,” said Jiang. “It also shows that while human evaluation is useful and important, it’s nuanced, and sometimes it’s wrong. Anyone using an LLM, for any application, should always pay attention to the output and verify it themselves.”
    Based on the results from CheckMate, the researchers say that newer generations of LLMs are increasingly able to collaborate helpfully and correctly with human users on undergraduate-level maths problems, as long as the user can assess the correctness of LLM-generated responses. Even if the answers may be memorised and findable somewhere on the internet, LLMs have the advantage over traditional search engines of being flexible in their inputs and outputs (though they should not replace search engines in their current form).
    While CheckMate was tested on mathematical problems, the researchers say their platform could be adapted to a wide range of fields. In the future, this type of feedback could be incorporated into the LLMs themselves, although none of the CheckMate feedback from the current study has been fed back into the models.
    “These kinds of tools can help the research community to have a better understanding of the strengths and weaknesses of these models,” said Collins. “We wouldn’t use them as tools to solve complex mathematical problems on their own, but they can be useful assistants, if the users know how to take advantage of them.”
    The research was supported in part by the Marshall Commission, the Cambridge Trust, Peterhouse, Cambridge, The Alan Turing Institute, the European Research Council, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).


    Microscope system sharpens scientists’ view of neural circuit connections

    The brain’s ability to learn comes from “plasticity,” in which neurons constantly edit and remodel the tiny connections called synapses that they make with other neurons to form circuits. To study plasticity, neuroscientists seek to track it at high resolution across whole cells, but plasticity unfolds faster than slow microscopes can follow, and brain tissue is notorious for scattering light and blurring images. In a paper in Scientific Reports, a collaboration of MIT engineers and neuroscientists describes a new microscopy system designed for fast, clear, and frequent imaging of the living brain.
    The system, called “multiline orthogonal scanning temporal focusing” (mosTF), works by scanning brain tissue with lines of light in perpendicular directions. As with other live brain imaging systems that rely on “two-photon microscopy,” this scanning light “excites” photon emission from brain cells that have been engineered to fluoresce when stimulated. In the team’s tests, the new system proved to be eight times faster than a two-photon scope that scans point by point, and achieved a four-fold better signal-to-background ratio (a measure of image clarity) than a two-photon system that scans in only one direction.
    “Tracking rapid changes in circuit structure in the context of the living brain remains a challenge,” said co-author Elly Nedivi, William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory and MIT’s Departments of Biology and Brain and Cognitive Sciences. “While two-photon microscopy is the only method that allows high resolution visualization of synapses deep in scattering tissue, such as the brain, the required point by point scanning is mechanically slow. The mosTF system significantly reduces scan time without sacrificing resolution.”
    Scanning a whole line of a sample is inherently faster than scanning one point at a time, but it kicks up a lot of scattering. To manage that scattering, some scope systems simply discard scattered photons as noise, but then they are lost, said lead author Yi Xue, an assistant professor at UC Davis and a former graduate student in the lab of corresponding author Peter T.C. So, professor of mechanical engineering and biological engineering at MIT. Newer single-line systems and the mosTF system produce a stronger signal (thereby resolving smaller and fainter features of stimulated neurons) by algorithmically reassigning scattered photons back to their origin. In a two-dimensional image, that process is better accomplished using the information produced by a two-dimensional, perpendicular-direction system such as mosTF than by a one-dimensional, single-direction system, Xue said.
    “Our excitation light is a line rather than a point — more like a light tube than a light bulb — but the reconstruction process can only reassign photons to the excitation line and cannot handle scattering within the line,” Xue explained. “Therefore, scattering correction is only performed along one dimension for a 2D image. To correct scattering in both dimensions, we need to scan the sample and correct scattering along the other dimension as well, resulting in an orthogonal scanning strategy.”
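    The paper’s reconstruction is more sophisticated, but the gist of the orthogonal strategy can be caricatured with arrays: each scan direction reassigns scattered photons back onto its own excitation lines, correcting one dimension, and the two one-dimensionally corrected images are then combined. The toy below, including the blur model and the fusion step, is a conceptual sketch only, not the mosTF algorithm:

    ```python
    # Conceptual toy of orthogonal line-scan photon reassignment (NOT the mosTF
    # reconstruction). Each camera frame comes from one excitation line; summing
    # the frame perpendicular to that line reassigns scattered photons back to
    # it, correcting scatter along one dimension. Two orthogonal scans correct both.
    import numpy as np

    truth = np.zeros((64, 64)); truth[30, 40] = 1.0     # one fluorescent point

    def blur(img, sigma=3):
        """Crude isotropic scattering model via a separable Gaussian blur."""
        x = np.arange(-10, 11)
        k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
        img = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 0, img)
        return np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, img)

    def line_scan(sample, axis):
        """Scan line by line along `axis`; reassign each frame's photons to
        the known excitation line by summing over that axis."""
        out = np.zeros_like(sample)
        for i in range(sample.shape[axis]):
            line = np.zeros_like(sample)
            idx = [slice(None)] * 2; idx[axis] = i
            line[tuple(idx)] = sample[tuple(idx)]        # excite only this line
            out[tuple(idx)] = blur(line).sum(axis=axis)  # reassign scatter
        return out

    img_rows = line_scan(truth, axis=0)      # corrects scatter across rows
    img_cols = line_scan(truth, axis=1)      # corrects scatter across columns
    combined = np.sqrt(img_rows * img_cols)  # toy fusion of the two corrections
    print("peak at:", np.unravel_index(combined.argmax(), combined.shape))
    ```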
    In the study the team tested their system head-to-head against a point-by-point scope (a two-photon laser scanning microscope — TPLSM) and a line-scanning temporal focusing microscope (lineTF). They imaged fluorescent beads through water and through a lipid-infused solution that better simulates the kind of scattering that arises in biological tissue. In the lipid solution, mosTF produced images with a 36-times better signal-to-background ratio than lineTF.
    For a more definitive proof, Xue worked with Josiah Boivin in the Nedivi lab to image neurons in the brain of a live, anesthetized mouse, using mosTF. Even in this much more complex environment, where the pulsations of blood vessels and the movement of breathing provide additional confounds, the mosTF scope still achieved a four-fold better signal-to-background ratio. Importantly, it was able to reveal the features where many synapses dwell: the spines that protrude along the vine-like processes, or dendrites, that grow out of the neuron cell body. Monitoring plasticity requires being able to watch those spines grow, shrink, come and go, across the entire cell, Nedivi said.
    “Our continued collaboration with the So lab and their expertise with microscope development has enabled in vivo studies that are unapproachable using conventional, out-of-the-box two photon microscopes,” she added.
    So said he is already planning further improvements to the technology.
    “We’re continuing to work toward the goal of developing even more efficient microscopes to look at plasticity even more efficiently,” he said. “The speed of mosTF is still limited by needing to use high-sensitivity, low-noise cameras that are often slow. We are now working on a next-generation system with new types of detectors, such as hybrid photomultiplier or avalanche photodiode arrays, that are both sensitive and fast.”


    Unraveling the physics of knitting

    Knitting, the age-old craft of looping and stitching natural fibers into fabrics, has received renewed attention for its potential applications in advanced manufacturing. Far beyond their use for garments, knitted textiles are ideal for designing and fabricating emerging technologies like wearable electronics or soft robotics — structures that need to move and bend.
    Knitting transforms one-dimensional yarn into two-dimensional fabrics that are flexible, durable, and highly customizable in shape and elasticity. But to create smart textile design techniques that engineers can use, understanding the mechanics behind knitted materials is crucial.
    Physicists from the Georgia Institute of Technology have taken the technical know-how of knitting and added mathematical backing to it. In a study led by Elisabetta Matsumoto, associate professor in the School of Physics, and Krishma Singal, a graduate researcher in Matsumoto’s lab, the team used experiments and simulations to quantify and predict how knit fabric response can be programmed. By establishing a mathematical theory of knitted materials, the researchers hope that knitting — and textiles in general — can be incorporated into more engineering applications.
    Their research paper, “Programming Mechanics in Knitted Materials, Stitch by Stitch,” was published in the journal Nature Communications.
    “For centuries, hand knitters have used different types of stitches and stitch combinations to specify the geometry and ‘stretchiness’ of garments, and much of the technical knowledge surrounding knitting has been handed down by word of mouth,” said Matsumoto.
    But while knitting has often been dismissed as unskilled, poorly paid “women’s work,” the properties of knits can be more complex than traditional engineering materials like rubbers or metals.
    For this project, the team wanted to decode the underlying principles that direct the elastic behavior of knitted fabrics. These principles are governed by the nuanced interplay of stitch patterns, geometry, and yarn topology — the undercrossings or overcrossings in a knot or stitch. “A lot of yarn isn’t very stretchy, yet once knit into a fabric, the fabric exhibits emergent elastic behavior,” Singal said.

    “Experienced knitters can identify which fabrics are stretchier than others and have an intuition for its best application,” she added. “But by understanding how these fabrics can be programmed and how they behave, we can expand knitting’s application into a variety of fields beyond clothing.”
    Through a combination of experiments and simulations, Matsumoto and Singal explored the relationships among yarn manipulation, stitch patterns, and fabric elasticity, and how these factors work together to affect bulk fabric behavior. They began with physical yarn and fabric stretching experiments to identify main parameters, such as how bendable or fluffy the yarn is, and the length and radius of yarn in a given stitch.
    They then used the experimental results to design simulations that examine the yarn inside a stitch, much like an X-ray. Because it is difficult to see inside stitches during physical measurements, the simulations reveal which parts of the yarn interact with one another, and they are tuned to recreate the physical measurements as accurately as possible.
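    As a flavor of what yarn-level simulation involves (the authors’ actual code is not shown here), a generic bead-and-spring sketch represents a strand as beads, penalizes stretching and bending, and relaxes the shape by gradient descent:

    ```python
    # Generic bead-and-spring yarn sketch (not the authors' simulation code):
    # represent a strand as beads, penalize stretching and bending energies,
    # and relax the configuration by gradient descent.
    import numpy as np

    n = 20
    pts = np.cumsum(np.ones((n, 2)) * 0.5, axis=0)      # initial straight strand
    pts += np.random.default_rng(0).normal(0, 0.05, pts.shape)
    rest_len, k_stretch, k_bend, lr = 0.7, 10.0, 1.0, 0.01

    def energy(p):
        seg = p[1:] - p[:-1]
        lengths = np.linalg.norm(seg, axis=1)
        e_stretch = k_stretch * np.sum((lengths - rest_len) ** 2)
        # Bending: penalize the turning angle between consecutive segments.
        t = seg / lengths[:, None]
        e_bend = k_bend * np.sum(1.0 - np.sum(t[1:] * t[:-1], axis=1))
        return e_stretch + e_bend

    def grad(p, eps=1e-5):
        """Numerical gradient of the energy, coordinate by coordinate."""
        g = np.zeros_like(p)
        for i in range(p.size):
            d = np.zeros(p.size); d[i] = eps
            g.flat[i] = (energy(p + d.reshape(p.shape))
                         - energy(p - d.reshape(p.shape))) / (2 * eps)
        return g

    for _ in range(500):                                # relax toward equilibrium
        pts -= lr * grad(pts)
    print("final energy:", round(float(energy(pts)), 4))
    ```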
    Through these experiments and simulations, Singal and Matsumoto showed the profound impact that design variations can have on fabric response and uncovered the remarkable programmability of knitting. “We discovered that by using simple adjustments in how you design a fabric pattern, you can change how stretchy or stiff the bulk fabric is,” Singal said. “How the yarn is manipulated, what stitches are formed, and how the stitches are patterned completely alter the response of the final fabric.”
    Matsumoto envisions that the insights gleaned from their research will enable knitted textile design to become more commonly used in manufacturing and product design. Their discovery that simple stitch patterning can alter a fabric’s elasticity points to knitting’s potential for cutting-edge interactive technologies like soft robotics, wearables, and haptics.
    “We think of knitting as an additive manufacturing technique — like 3D printing, and you can change the material properties just by picking the right stitch pattern,” Singal said.
    Matsumoto and Singal plan to push the boundaries of knitted fabric science even further, as there are still numerous questions about knitted fabrics to be answered.
    “Textiles are ubiquitous and we use them everywhere in our lives,” Matsumoto said. “Right now, the hard part is that designing them for specific properties relies on having a lot of experience and technical intuition. We hope our research helps make textiles a versatile tool for engineers and scientists too.”