More stories

  • Injectable tissue prosthesis to aid in damaged muscle/nerve regeneration

    In a recent publication in the journal Nature, researchers from the Institute of Basic Science (IBS) in South Korea have made significant strides in biomaterial technology and rehabilitation medicine. They’ve developed a novel approach to healing muscle injury by employing an “injectable tissue prosthesis” in the form of a conductive hydrogel and combining it with a robot-assisted rehabilitation system.
    Let’s imagine you are swimming in the ocean. A giant shark approaches and bites a huge chunk of meat out of your thigh, resulting in a complete loss of motor/sensor function in your leg. If left untreated, such severe muscle damage would result in permanent loss of function and disability. How on Earth will you be able to recover from this kind of injury?
    Traditional rehabilitation methods for these kinds of muscle injuries have long sought an efficient closed-loop gait rehabilitation system that merges lightweight exoskeletons and wearable/implantable devices. Such an assistive prosthetic system is needed to aid patients through the process of recovering the sensory and motor functions affected by nerve and muscle damage.
    Unfortunately, the mechanical properties and rigid nature of existing electronic materials render them incompatible with soft tissues. This leads to friction and potential inflammation, stalling patient rehabilitation.
    To overcome these limitations, the IBS researchers turned to a material commonly used as a wrinkle-smoothing filler, called hyaluronic acid. Using this substance, an injectable hydrogel was developed for “tissue prostheses,” which can temporarily fill the gap left by missing muscle/nerve tissue while it regenerates. The injectable nature of this material gives it a significant advantage over traditional bioelectronic devices, which are unsuitable for narrow, deep, or small areas and necessitate invasive surgeries.
    Thanks to its highly “tissue-like” properties, this hydrogel seamlessly interfaces with biological tissues and can be easily administered to hard-to-reach body areas without surgery. The reversible and irreversible crosslinks within the hydrogel adapt to the high shear stress of injection, ensuring excellent mechanical stability. The hydrogel also incorporates gold nanoparticles, which give it good electrical properties. Its conductive nature allows for the effective transmission of electrophysiological signals between the two ends of injured tissues. In addition, the hydrogel is biodegradable, meaning patients do not need a second surgery to remove it.
    Researchers believe that this material, with its mechanical properties akin to natural tissues, exceptional tissue adhesion, and injectability, offers a novel approach to rehabilitation.

  • New twist on optical tweezers

    Optical tweezers manipulate tiny things like cells and nanoparticles using lasers. They might sound like tractor beams from science fiction, but their development earned scientists the 2018 Nobel Prize in Physics.
    Scientists have now used supercomputers to make optical tweezers safer to use on living cells with applications to cancer therapy, environmental monitoring, and more.
    “We believe our research is one significant step closer towards the industrialization of optical tweezers in biological applications, specifically in both selective cellular surgery and targeted drug delivery,” said Pavana Kollipara, a recent graduate of The University of Texas at Austin. Kollipara co-authored a study on optical tweezers published August 2023 in Nature Communications, written just before he completed his PhD in mechanical engineering under fellow study co-author Yuebing Zheng of UT Austin, the corresponding author of the paper.
    Optical tweezers trap and move small particles because light carries momentum, which can transfer to a particle it strikes. The intense light of a laser amplifies the effect.
    Kollipara and colleagues took optical tweezers one step further by developing a method to keep the targeted particle cool, using a heat sink and thermoelectric cooler. Their method, called hypothermal optothermophoretic tweezers (HOTTs), can achieve low-power trapping of diverse colloids and biological cells in their native fluids.
    This latest advancement could help overcome a problem with current laser tweezers: they heat the sample too much for biological applications.
    “The main idea of this work is simple,” Kollipara said. “If the sample is getting damaged because of the heat, just cool the entire thing down, and then heat it with the laser beam. Eventually, when the target such as a biological cell gets trapped, the temperature is still close to the ambient temperature of 27-34 °C. You can trap it at lower laser power and control the temperature, thereby removing photon or thermal damage to the cells.”
    The science team tested their HOTTs on human red blood cells, which are sensitive to temperature changes.

  • Nanowire ‘brain’ network learns and remembers ‘on the fly’

    For the first time, a physical neural network has successfully been shown to learn and remember ‘on the fly’, in a way inspired by and similar to how the brain’s neurons work.
    The result opens a pathway for developing efficient and low-energy machine intelligence for more complex, real-world learning and memory tasks.
    Published today in Nature Communications, the research is a collaboration between scientists at the University of Sydney and University of California at Los Angeles.
    Lead author Ruomin Zhu, a PhD student from the University of Sydney Nano Institute and School of Physics, said: “The findings demonstrate how brain-inspired learning and memory functions using nanowire networks can be harnessed to process dynamic, streaming data.”
    Nanowire networks are made up of tiny wires that are just billionths of a metre in diameter. The wires arrange themselves into patterns reminiscent of the children’s game ‘Pick Up Sticks’, mimicking neural networks, like those in our brains. These networks can be used to perform specific information processing tasks.
    Memory and learning tasks are achieved using simple algorithms that respond to changes in electronic resistance at junctions where the nanowires overlap. Known as ‘resistive memory switching’, this function is created when electrical inputs encounter changes in conductivity, similar to what happens with synapses in our brain.
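    As an illustrative sketch only (not the model used in the study), a single junction’s resistive memory switching can be caricatured as a conductance that is strengthened by strong voltage pulses and relaxes back toward rest in their absence; all names and constants below are hypothetical:

```python
# Toy model of resistive memory switching at one nanowire junction.
# Conductance grows when the applied voltage exceeds a switching
# threshold and decays otherwise, giving the junction a short-term
# "memory" of recent inputs, loosely analogous to a synapse.

def step(conductance, voltage, threshold=0.5, grow=0.2, decay=0.05):
    """Advance the junction state by one time step."""
    if abs(voltage) > threshold:
        # potentiation, saturating toward a maximum conductance of 1
        conductance += grow * (1.0 - conductance)
    else:
        # relaxation back toward the low-conductance rest state
        conductance -= decay * conductance
    return conductance

g = 0.1
for v in [1.0, 1.0, 1.0, 0.0, 0.0]:   # three pulses, then silence
    g = step(g, v)
# after the pulses the junction retains an elevated conductance for a
# while -- the trace that readout algorithms can exploit as memory
```

    In a real network, many such junctions sit where nanowires overlap, and the collective pattern of conductances is what the readout algorithms interpret.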
    In this study, researchers used the network to recognise and remember sequences of electrical pulses corresponding to images, inspired by the way the human brain processes information.

  • Reverse engineering Jackson Pollock

    Can a machine be trained to paint like Jackson Pollock? More specifically, can 3D printing harness Pollock’s distinctive techniques to quickly and accurately print complex shapes?
    “I wanted to know, can one replicate Jackson Pollock, and reverse engineer what he did,” said L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and Professor of Organismic and Evolutionary Biology, and of Physics in the Faculty of Arts and Sciences (FAS).
    Mahadevan and his team combined physics and machine learning to develop a new 3D-printing technique that can quickly create complex physical patterns — including replicating a segment of a Pollock painting — by leveraging the same natural fluid instability that Pollock used in his work.
    The research is published in Soft Matter.
    3D and 4D printing have revolutionized manufacturing, but the process is still painstakingly slow.
    The issue, as it usually is, is physics. Liquid inks are bound by the rules of fluid dynamics, which means when they fall from a height, they become unstable, folding and coiling in on themselves. You can observe this at home by drizzling honey on a piece of toast.
    More than two decades ago, Mahadevan provided a simple physical explanation of this process, and later suggested how Pollock could have intuitively used these ideas to paint from a distance.

  • New techniques efficiently accelerate sparse tensors for massive AI models

    Researchers from MIT and NVIDIA have developed two techniques that accelerate the processing of sparse tensors, a type of data structure that’s used for high-performance computing tasks. The complementary techniques could result in significant improvements to the performance and energy-efficiency of systems like the massive machine-learning models that drive generative artificial intelligence.
    Tensors are data structures used by machine-learning models. Both of the new methods seek to efficiently exploit what’s known as sparsity — zero values — in the tensors. When processing these tensors, a system can skip over the zeros and save on both computation and memory. For instance, anything multiplied by zero is zero, so that operation can be skipped. The system can also compress the tensor (zeros don’t need to be stored) so a larger portion can be kept in on-chip memory.
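    The two savings can be sketched in plain Python with a compressed sparse vector (nonzero values plus their indices). This format and these function names are illustrative only, not the encoding used by the MIT/NVIDIA accelerators:

```python
# Compressed sparse vector: keep only nonzero values and their positions.

def compress(dense):
    """Store only the nonzero values and where they live."""
    indices = [i for i, x in enumerate(dense) if x != 0]
    values = [dense[i] for i in indices]
    return indices, values

def sparse_dot(indices, values, dense_other):
    """Dot product that touches only nonzero entries: every skipped
    position would have contributed x * 0 = 0 anyway."""
    return sum(v * dense_other[i] for i, v in zip(indices, values))

a = [0, 3, 0, 0, 5, 0, 0, 2]           # mostly zeros
idx, val = compress(a)                  # stores 3 of 8 entries
b = [1, 2, 3, 4, 5, 6, 7, 8]
print(sparse_dot(idx, val, b))          # 3*2 + 5*5 + 2*8 = 47
```

    The hard parts the researchers tackle begin where this sketch ends: locating nonzeros efficiently in hardware without restricting their positions, and sizing on-chip buffers when the nonzero count varies across regions.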
    However, there are several challenges to exploiting sparsity. Finding the nonzero values in a large tensor is no easy task. Existing approaches often limit the locations of nonzero values by enforcing a sparsity pattern to simplify the search, but this limits the variety of sparse tensors that can be processed efficiently.
    Another challenge is that the number of nonzero values can vary in different regions of the tensor. This makes it difficult to determine how much space is required to store different regions in memory. To make sure the region fits, more space is often allocated than is needed, causing the storage buffer to be underutilized. This increases off-chip memory traffic, which requires extra computation.
    The MIT and NVIDIA researchers crafted two solutions to address these problems. For one, they developed a technique that allows the hardware to efficiently find the nonzero values for a wider variety of sparsity patterns.
    For the other solution, they created a method that can handle the case where the data do not fit in memory, which increases the utilization of the storage buffer and reduces off-chip memory traffic.
    Both methods boost the performance and reduce the energy demands of hardware accelerators specifically designed to speed up the processing of sparse tensors.

  • Human input boosts citizens’ acceptance of AI and perceptions of fairness, study shows

    Increasing human input when AI is used for public services boosts acceptance of the technology, a new study shows.
    The research shows citizens are not only concerned about AI fairness but also about potential human biases. They are in favour of AI being used in cases when administrative discretion is perceived as too large.
    Researchers found citizens’ knowledge about AI does not alter their acceptance of the technology. More accurate and lower-cost systems also increased acceptance; cost and accuracy mattered more to respondents than human involvement.
    The study, by Laszlo Horvath from Birkbeck, University of London and Oliver James, Susan Banducci and Ana Beduschi from the University of Exeter, is published in the journal Government Information Quarterly.
    Academics carried out an experiment with 2,143 people in the UK. Respondents were asked to select if they would prefer more or less AI in systems to process immigration visas and parking permits.
    Researchers found more human involvement tended to increase acceptance of AI. Yet, when substantial human discretion was introduced in parking permit scenarios, respondents preferred more limited human input.
    System-level factors such as high accuracy, the presence of an appeals system, increased transparency, reduced cost, non-sharing of data, and the absence of private company involvement all boosted both acceptance and perceived procedural fairness.

  • Hey, Siri: Moderate AI voice speed encourages digital assistant use

    Voice speed and interaction style may determine whether a user sees a digital assistant like Alexa or Siri as a helpful partner or something to control, according to a team led by Penn State researchers. The findings reveal insights into the parasocial, or one-sided, relationships that people can form with digital assistants.
    They reported their findings in the Journal of Business Research.
    “We endow these digital assistants with personalities and human characteristics, and it impacts how we interact with the devices,” said Brett Christenson, assistant clinical professor of marketing at Penn State and first author of the study. “If you could design the perfect voice for every consumer, it could be a very useful tool.”
    The researchers found that a digital assistant’s moderate talking speed, compared to faster and slower speeds, increased the likelihood that a person would use the assistant. In addition, conversation-like interactions, rather than monologues, mitigated the negative effects of faster and slower voice speeds and increased user trust in the digital assistant, according to the researchers.
    “As people adopt devices that can speak to them, having a consistent, branded voice can be used as a strategic competitive tool,” Christenson said. “What this paper shows is that when you’re designing the voice of a digital assistant, not all voices are equal in terms of their impact on the customer.”
    Christenson and his colleagues conducted three experiments to measure how changing the voice speed and interaction style of a digital assistant affected a user’s likelihood to use and trust the device. In the first study, they asked 753 participants to use a digital assistant to help them create a personal budget. The digital assistant recited a monological, or one-way, script at either a slow, moderate or fast pace.
    The researchers then asked the participants how likely they would be to use the digital assistant to create a personal budget, on a scale from one (not at all likely) to seven (very likely). They found that participants who heard the moderate voice speed were more likely to use the digital assistant than those who heard the slow or fast voices.

  • Scientists train AI to illuminate drugs’ impact

    An ideal medicine for one person may prove ineffective or harmful for someone else, and predicting who could benefit from a given drug has been difficult. Now, an international team led by neuroscientist Kirill Martemyanov, Ph.D., based at The Herbert Wertheim UF Scripps Institute for Biomedical Innovation & Technology, is training artificial intelligence to assist.
    Martemyanov’s group used a powerful molecular tracking technology to profile the action of more than 100 prominent cellular drug targets, including their more common genetic variations. The scientists then used that data to develop and train an AI-anchored platform. In a study that appears in the Oct. 31 issue of the journal Cell Reports, Martemyanov and colleagues report that their algorithm predicted with more than 80% accuracy how cell surface receptors would respond to drug-like molecules.
    The data used to train the algorithm was gathered over a decade of experimentation. The team’s long-range goal is to refine the tool and use it to help power the design of true precision medications, said Martemyanov, who chairs the institute’s neuroscience department.
    “We all think of ourselves as more or less normal, but we are not. We are all basically mutants. We have tremendous variability in our cell receptors,” Martemyanov said. “If doctors don’t know what exact genetic alteration you have, you just have this one-size-fits-all approach to prescribing, so you have to experiment to find what works for you.”
    One-third of all drugs work by binding to cell-surface receptors called G protein-coupled receptors, or GPCRs. These are complexes that cross the cell membrane, with a “docking station” on the cell’s exterior and a branch that extends into the cell. When a drug pulls into its GPCR dock, the branch moves, triggering a G protein inside the cell and setting off a cascade of changes, like falling dominoes.
    The result of activating or blocking this process might be anything from pain relief to quieting allergies or reducing blood pressure. Besides medications, hormones, neurotransmitters and even scents dock with GPCRs to direct biological activities.
    Scientists have catalogued about 800 GPCRs in humans. About half are dedicated to senses, especially smell. About 250 more receive medicines or other known molecules. Martemyanov’s team had to invent a new protocol to observe and document them. They found many surprises. Some GPCRs worked as expected, but others didn’t, notably those for neurotransmitters called glutamate.