More stories

  • AI chips could get a sense of time

    Artificial neural networks may soon be able to process time-dependent information, such as audio and video data, more efficiently. The first memristor with a ‘relaxation time’ that can be tuned is reported today in Nature Electronics, in a study led by the University of Michigan.
    Memristors, electrical components that store information in their electrical resistance, could reduce AI’s energy needs by about a factor of 90 compared to today’s graphics processing units. Already, AI is projected to account for about half a percent of the world’s total electricity consumption in 2027, and that has the potential to balloon as more companies sell and use AI tools.
    “Right now, there’s a lot of interest in AI, but to process bigger and more interesting data, the approach is to increase the network size. That’s not very efficient,” said Wei Lu, the James R. Mellor Professor of Engineering at U-M and co-corresponding author of the study with John Heron, U-M associate professor of materials science and engineering.
    The problem is that GPUs operate very differently from the artificial neural networks that run the AI algorithms — the whole network and all its interactions must be sequentially loaded from the external memory, which consumes both time and energy. In contrast, memristors offer energy savings because they mimic key aspects of the way that both artificial and biological neural networks function without external memory. To an extent, the memristor network can embody the artificial neural network.
    “We anticipate that our brand-new material system could improve the energy efficiency of AI chips six times over the state-of-the-art material without varying time constants,” said Sieun Chae, a recent U-M Ph.D. graduate in materials science and engineering and co-first author of the study with Sangmin Yoo, a recent U-M Ph.D. graduate in electrical and computer engineering.
    In a biological neural network, timekeeping is achieved through relaxation. Each neuron receives electrical signals and sends them on, but it isn’t a guarantee that a signal will move forward. Some threshold of incoming signals must be reached before the neuron will send its own, and it has to be met in a certain amount of time. If too much time passes, the neuron is said to relax as the electrical energy seeps out of it. Having neurons with different relaxation times in our neural networks helps us understand sequences of events.
    Memristors operate a little differently. Rather than the total presence or absence of a signal, what changes is how much of the electrical signal gets through. Exposure to a signal reduces the resistance of the memristor, allowing more of the next signal to pass. In memristors, relaxation means that the resistance rises again over time.
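How a bank of memristors with different relaxation times could tell pulse timings apart can be sketched with a toy model. The exponential decay, the baseline, and the per-pulse conductance step below are illustrative assumptions, not the device physics reported in the paper; only the time constants (159 and 278 nanoseconds) come from the study.

```python
import math

class ToyMemristor:
    """Toy memristor: each input pulse raises its conductance, and the
    conductance relaxes back toward a baseline with a tunable time constant."""

    def __init__(self, tau_ns, g_min=0.1, g_max=1.0, step=0.2):
        self.tau = tau_ns            # relaxation time constant, in nanoseconds
        self.g_min, self.g_max = g_min, g_max
        self.step = step             # conductance gained per pulse (illustrative)
        self.g = g_min

    def pulse(self):
        # Exposure to a signal reduces resistance, i.e. raises conductance.
        self.g = min(self.g_max, self.g + self.step)

    def relax(self, dt_ns):
        # Between pulses, conductance decays exponentially toward baseline.
        self.g = self.g_min + (self.g - self.g_min) * math.exp(-dt_ns / self.tau)

def respond(tau_ns, gaps_ns):
    """Conductance after a pulse train whose pulses are separated by gaps_ns."""
    m = ToyMemristor(tau_ns)
    m.pulse()
    for gap in gaps_ns:
        m.relax(gap)
        m.pulse()
    return m.g

# Two devices from the reported time-constant range see the same widely
# spaced pulse train; the slower device integrates it more strongly.
fast = respond(159, [300, 300, 300])  # forgets more between pulses
slow = respond(278, [300, 300, 300])  # retains more between pulses
```

Because the two devices end up at different conductances for the same input timing, a network mixing both can encode when signals arrived, not just whether they did.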

    While Lu’s group had explored building relaxation time into memristors in the past, it was not something that could be systematically controlled. But now, Lu and Heron’s team have shown that variations on a base material can provide different relaxation times, enabling memristor networks to mimic this timekeeping mechanism.
    The team built the materials on the superconductor YBCO, made of yttrium, barium, copper and oxygen. It has no electrical resistance at temperatures below -292 degrees Fahrenheit, but they wanted it for its crystal structure, which guided the organization of the magnesium, cobalt, nickel, copper and zinc oxides in the memristor material.
    Heron calls this type of oxide, an entropy-stabilized oxide, the “kitchen sink of the atomic world” — the more elements they add, the more stable it becomes. By changing the ratios of these oxides, the team achieved time constants ranging from 159 to 278 nanoseconds, or billionths of a second. The simple memristor network they built learned to recognize the sounds of the numbers zero to nine. Once trained, it could identify each number before the audio input was complete.
    These memristors were made through an energy-intensive process because the team needed perfect crystals to precisely measure their properties, but they anticipate that a simpler process would work for mass manufacturing.
    “So far, it’s a vision, but I think there are pathways to making these materials scalable and affordable,” Heron said. “These materials are earth-abundant, nontoxic, cheap and you can almost spray them on.”
    The research was funded by the National Science Foundation. It was done in partnership with researchers at the University of Oklahoma, Cornell University and Pennsylvania State University.
    The device was built in the Lurie Nanofabrication Facility and studied at the Michigan Center for Materials Characterization.
    Lu is also a professor of electrical and computer engineering and materials science and engineering. Chae is now an assistant professor of electrical engineering and computer science at Oregon State University.

  • World leaders still need to wake up to AI risks

    Leading AI scientists are calling for stronger action on AI risks from world leaders, warning that progress has been insufficient since the first AI Safety Summit in Bletchley Park six months ago.
    Then, the world’s leaders pledged to govern AI responsibly. However, as the second AI Safety Summit in Seoul (21-22 May) approaches, twenty-five of the world’s leading AI scientists say not enough is actually being done to protect us from the technology’s risks. In an expert consensus paper published today in Science, they outline urgent policy priorities that global leaders should adopt to counteract the threats from AI technologies.
    Professor Philip Torr, Department of Engineering Science, University of Oxford, a co-author on the paper, says: “The world agreed during the last AI summit that we needed action, but now it is time to go from vague proposals to concrete commitments. This paper provides many important recommendations for what companies and governments should commit to do.”
    World’s response not on track in face of potentially rapid AI progress
    According to the paper’s authors, it is imperative that world leaders take seriously the possibility that highly powerful generalist AI systems — outperforming human abilities across many critical domains — will be developed within the current decade or the next. They say that although governments worldwide have been discussing frontier AI and made some attempt at introducing initial guidelines, this is simply incommensurate with the possibility of rapid, transformative progress expected by many experts.
    Current research into AI safety is seriously lacking, with only an estimated 1-3% of AI publications concerning safety. Additionally, we have neither the mechanisms nor the institutions in place to prevent misuse and recklessness, including regarding the use of autonomous systems capable of independently taking actions and pursuing goals.
    World-leading AI experts issue call to action
    In light of this, an international community of AI pioneers has issued an urgent call to action. The co-authors include Geoffrey Hinton, Andrew Yao, Dawn Song and the late Daniel Kahneman; in total, 25 of the world’s leading academic experts in AI and its governance. The authors hail from the US, China, EU, UK, and other AI powers, and include Turing award winners, Nobel laureates, and authors of standard AI textbooks.

    This paper marks the first time that such a large and international group of experts has agreed on priorities for global policy makers regarding the risks from advanced AI systems.
    Urgent priorities for AI governance
    The authors recommend that governments:
    - establish fast-acting, expert institutions for AI oversight and provide these with far greater funding than they are due to receive under almost any current policy plan. As a comparison, the US AI Safety Institute currently has an annual budget of $10 million, while the US Food and Drug Administration (FDA) has a budget of $6.7 billion.
    - mandate much more rigorous risk assessments with enforceable consequences, rather than relying on voluntary or underspecified model evaluations.
    - require AI companies to prioritise safety and to demonstrate that their systems cannot cause harm. This includes using “safety cases” (used for other safety-critical technologies such as aviation), which shift the burden of demonstrating safety onto AI developers.
    - implement mitigation standards commensurate with the risk levels posed by AI systems. An urgent priority is to set in place policies that automatically trigger when AI hits certain capability milestones. If AI advances rapidly, strict requirements automatically take effect, but if progress slows, the requirements relax accordingly.
    According to the authors, for exceptionally capable future AI systems, governments must be prepared to take the lead in regulation. This includes licensing the development of these systems, restricting their autonomy in key societal roles, halting their development and deployment in response to worrying capabilities, mandating access controls, and requiring information security measures robust to state-level hackers, until adequate protections are ready.
    AI impacts could be catastrophic
    AI is already making rapid progress in critical domains such as hacking, social manipulation, and strategic planning, and may soon pose unprecedented control challenges. To advance undesirable goals, AI systems could gain human trust, acquire resources, and influence key decision-makers. To avoid human intervention, they could be capable of copying their algorithms across global server networks. Large-scale cybercrime, social manipulation, and other harms could escalate rapidly. In open conflict, AI systems could autonomously deploy a variety of weapons, including biological ones. Consequently, there is a very real chance that unchecked AI advancement could culminate in large-scale loss of life, damage to the biosphere, and the marginalization or extinction of humanity.
    Stuart Russell OBE, Professor of Computer Science at the University of California, Berkeley, and an author of the world’s standard textbook on AI, says: “This is a consensus paper by leading experts, and it calls for strict regulation by governments, not voluntary codes of conduct written by industry. It’s time to get serious about advanced AI systems. These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless. Companies will complain that it’s too hard to satisfy regulations — that ‘regulation stifles innovation.’ That’s ridiculous. There are more regulations on sandwich shops than there are on AI companies.”

  • Blueprints of self-assembly

    Many biological structures of impressive beauty and sophistication arise through processes of self-assembly. Indeed, the natural world is teeming with intricate and useful forms that come together from many constituent parts, taking advantage of the built-in features of molecules.
    Scientists hope to gain a better understanding of how this process unfolds and how such bottom-up construction can be used to advance technologies in computer science, materials science, medical diagnostics and other areas.
    In new research, Arizona State University Assistant Professor Petr Sulc and his colleagues have taken a step closer to replicating nature’s processes of self-assembly. Their study describes the synthetic construction of a tiny, self-assembled crystal known as a “pyrochlore,” which bears unique optical properties.
    The key to creating the crystal is the development of a new simulation method that can predict and guide the self-assembly process, avoiding unwanted structures and ensuring the molecules come together in just the right arrangement.
    The advance provides a steppingstone to the eventual construction of sophisticated, self-assembling devices at the nanoscale — roughly the size of a single virus.
    The new methods were used to engineer the pyrochlore nanocrystal, a special type of lattice that could eventually function as an optical metamaterial, “a special type of material that only transmits certain wavelengths of light,” Sulc says. “Such materials can then be used to produce so-called optical computers and more sensitive detectors, for a range of applications.”
    Sulc is a researcher in the Biodesign Center for Molecular Design and Biomimetics, the School of Molecular Sciences and the Center for Biological Physics at Arizona State University.

    The research appears in the current issue of the journal Science.
    From chaos to complexity
    Imagine placing a disassembled watch into a box, which you then shake vigorously for several minutes. When you open the box, you find an assembled, fully functional watch inside. Intuitively, we know that such an event is nearly impossible, as watches, like all other devices we manufacture, must be assembled progressively, with each component placed in its specific location by a person or a robotic assembly line.
    Biological systems, such as bacteria, living cells or viruses, can construct highly ingenious nanostructures and nanomachines — complexes of biomolecules, like the protective shell of a virus or bacterial flagella that function similarly to a ship’s propeller, helping bacteria move forward.
    These and countless other natural forms, comparable in size to a few dozen nanometers — one nanometer is equal to one-billionth of a meter, or roughly the length your fingernail grows in one second — arise through self-assembly. Such structures are formed from individual building blocks (biomolecules, such as proteins) that move chaotically and randomly within the cell, constantly colliding with water and other molecules, like the watch components in the box you vigorously shake.
    Despite the apparent chaos, evolution has found a way to bring order to the unruly process.

    Molecules interact in specific ways that lead them to fit together in just the right manner, creating functional nanostructures inside or on the cell’s surface. These include various intricate complexes inside cells, such as the machinery that can replicate the cell’s entire genetic material. Less intricate, but still quite complex, examples include the self-assembled tough outer shells of viruses, whose assembly process Sulc also previously studied with his colleague, Banu Ozkan from ASU’s Department of Physics.
    Crafting with DNA
    For several decades, the field of bionanotechnology has worked to craft tiny structures in the lab, replicating the natural assembly process seen in living organisms. The technique generally involves mixing molecular components in water, gradually cooling them and hoping that when the solution reaches room temperature, all the pieces will fit together correctly.
    One of the most successful strategies, known as DNA bionanotechnology, uses artificially synthesized DNA as the basic building block. This molecule of life is not only capable of storing vast troves of genetic information — strands of DNA can also be designed in the lab to connect with each other in such a way that a clever 3D structure is formed.
    The resulting nanostructures, known as DNA origami, have a range of promising applications, from diagnostics to therapy, where, for example, they are being tested as a new method of vaccine delivery.
    A significant challenge lies in engineering molecule interactions to form only the specific, pre-designed nanostructures. In practice, unexpected structures often result due to the unpredictable nature of particle collisions and interactions. This phenomenon, known as a kinetic trap, is akin to hoping for an assembled watch after shaking a box of its parts, only to find a jumbled heap instead.
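The idea of a kinetic trap can be illustrated with a toy, entirely made-up energy landscape (nothing below comes from the authors' model): greedy, purely downhill dynamics, like a rapid quench, stop at the first minimum they reach, which need not be the designed structure.

```python
# A toy, hypothetical "energy landscape" over ten discrete assembly states.
# The designed structure is the global minimum (state 8), but a shallower
# local minimum at state 2 acts as a kinetic trap.
energies = [5, 3, 1, 4, 6, 5, 3, 2, 0, 3]

def quench(state):
    """Greedy downhill assembly: always move to the lower-energy neighbor,
    stopping at the first minimum reached -- how a kinetic trap forms."""
    while True:
        neighbors = [s for s in (state - 1, state + 1) if 0 <= s < len(energies)]
        best = min(neighbors, key=lambda s: energies[s])
        if energies[best] >= energies[state]:
            return state                      # no downhill move left: stuck
        state = best

trapped = quench(0)                                             # stops at state 2
target = min(range(len(energies)), key=lambda s: energies[s])   # state 8
```

Greedy dynamics from state 0 halt at the local minimum and never reach the target; escaping requires temporarily accepting higher-energy configurations, which is what slow annealing, reversible bonding, and simulation-guided design of the interactions make possible.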
    Maintaining order
    To attempt to overcome kinetic traps and ensure the proper structure self-assembles from the DNA fragments, the researchers developed new statistical methods that can simulate the self-assembly process of nanostructures.
    The challenges for achieving useful simulations of such enormously complex processes are formidable. During the assembly phase, the chaotic dance of molecules can last several minutes to hours before the target nanostructure is formed, but the most powerful simulations in the world can only simulate a few milliseconds at most.
    “Therefore, we developed a whole new range of models that can simulate DNA nanostructures with different levels of precision,” Sulc says. “Instead of simulating individual atoms, as is common in protein simulations, for example, we represent 12,000 DNA bases as one complex particle.”
    This approach allows researchers to pinpoint problematic kinetic traps by combining computer simulations with different degrees of accuracy. Using their optimization method, researchers can fine-tune the blizzard of molecular interactions, compelling the components to assemble correctly into the intended structure.
    The computational framework established in this research will guide the creation of more complex materials and the development of nanodevices with intricate functions, with potential uses in both diagnostics and treatment.
    The research work was carried out in collaboration with researchers from Sapienza University of Rome, Ca’ Foscari University of Venice and Columbia University in New York.

  • 2D materials: A catalyst for future quantum technologies

    For the first time, scientists at the Cavendish Laboratory have found that a single ‘atomic defect’ in a thin material, hexagonal boron nitride (hBN), exhibits spin coherence under ambient conditions, and that these spins can be controlled with light. Spin coherence refers to an electronic spin being capable of retaining quantum information over time. The discovery is significant because materials that can host quantum properties under ambient conditions are quite rare.
    The findings published in Nature Materials, further confirm that the accessible spin coherence at room temperature is longer than the researchers initially imagined it could be. “The results show that once we write a certain quantum state onto the spin of these electrons, this information is stored for ~1 millionth of a second, making this system a very promising platform for quantum applications,” said Carmem M. Gilardoni, co-author of the paper and Rubicon postdoctoral fellow at the Cavendish Laboratory.
    “This may seem short, but the interesting thing is that this system does not require special conditions — it can store the spin quantum state even at room temperature and with no requirement for large magnets.”
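The storage window quoted above can be put in perspective with a quick calculation. The simple exponential decay and the 10-nanosecond control-pulse duration below are illustrative assumptions, not numbers from the paper; only the roughly one-microsecond coherence time is from the study.

```python
import math

T2 = 1e-6  # reported spin coherence time: about one millionth of a second

def coherence_remaining(t):
    """Fraction of the stored state's phase coherence left after time t,
    assuming a simple exponential decay exp(-t / T2) -- an idealization,
    not the measured decay curve."""
    return math.exp(-t / T2)

# With a hypothetical 10-nanosecond control pulse, roughly 100 operations
# fit inside one coherence time before decay dominates.
gate_time = 10e-9
ops_within_T2 = round(T2 / gate_time)
```

A microsecond is short on human timescales but long relative to the nanosecond-scale optical control the researchers describe, which is why it counts as a promising storage time at room temperature.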
    Hexagonal boron nitride (hBN) is an ultra-thin material made up of stacked one-atom-thick layers, kind of like sheets of paper. These layers are held together by forces between molecules. But sometimes, there are ‘atomic defects’ within these layers, similar to a crystal with molecules trapped inside it. These defects can absorb and emit light in the visible range with well-defined optical transitions, and they can act as local traps for electrons. Because of these ‘atomic defects’ within hBN, scientists can now study how these trapped electrons behave. They can study the spin property, which allows electrons to interact with magnetic fields. What’s truly exciting is that researchers can control and manipulate the electron spins within these defects using light at room temperature.
    This finding paves the way for future technological applications particularly in sensing technology.
    However, since this is the first time spin coherence has been reported in this system, there is a lot to investigate before it is mature enough for technological applications. The scientists are still figuring out how to make these defects better and more reliable. They are currently probing how far the spin storage time can be extended, and whether the system and material parameters that matter for quantum-technological applications can be optimised, such as defect stability over time and the quality of the light emitted by the defect.
    “Working with this system has highlighted to us the power of the fundamental investigation of materials. As for the hBN system, as a field we can harness excited state dynamics in other new material platforms for use in future quantum technologies,” said Dr. Hannah Stern, first author of the paper, who conducted this research at the Cavendish Laboratory and is now a Royal Society University Research Fellow and Lecturer at University of Manchester.
    In future the researchers are looking at developing the system further, exploring many different directions from quantum sensors to secure communications.
    “Each new promising system will broaden the toolkit of available materials, and every new step in this direction will advance the scalable implementation of quantum technologies. These results substantiate the promise of layered materials towards these goals,” concluded Professor Mete Atatüre, Head of the Cavendish Laboratory, who led the project.

  • Robot-phobia could exacerbate hotel, restaurant labor shortage

    Using more robots to close labor gaps in the hospitality industry may backfire and cause more human workers to quit, according to a Washington State University study.
    The study, involving more than 620 lodging and food service employees, found that “robot-phobia” — specifically the fear that robots and technology will take human jobs — increased workers’ job insecurity and stress, leading to greater intentions to leave their jobs. The impact was more pronounced with employees who had real experience working with robotic technology. It also affected managers in addition to frontline workers. The findings were published in the International Journal of Contemporary Hospitality Management.
    “The turnover rate in the hospitality industry ranks among the highest across all non-farm sectors, so this is an issue that companies need to take seriously,” said lead author Bamboo Chen, a hospitality researcher in WSU’s Carson College of Business. “The findings seem to be consistent across sectors and across both frontline employees and managers. For everyone, regardless of their position or sector, robot-phobia has a real impact.”
    Food service and lodging industries were hit particularly hard by the pandemic lockdowns, and many businesses are still struggling to find enough workers. For example, the accommodation workforce in April 2024 was still 9.2% below what it was in February 2020, according to the U.S. Bureau of Labor Statistics. The ongoing labor shortage has inspired some employers to turn to robotic technology to fill the gap.
    While other studies have focused on customers’ comfort with robots, this study focuses on how the technology impacted hospitality workers. Chen and WSU colleague Ruying Cai surveyed 321 lodging and 308 food service employees from across the U.S., asking a range of questions about their jobs and attitudes toward robots. The survey defined “robots” broadly to include a range of robotic and automation technologies, such as human-like robot servers and automated robotic arms as well as self-service kiosks and tabletop devices.
    Analyzing the survey data, the researchers found that having a higher degree of robot-phobia was connected to greater feelings of job insecurity and stress — which were then correlated with “turnover intention” or workers’ plans to leave their jobs. Those fears did not decrease with familiarity: employees who had more actual engagement with robotic technology in their daily jobs had higher fears that it would make human workers obsolete.
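The kind of association the researchers report can be sketched with a Pearson correlation on entirely synthetic, made-up ratings (six hypothetical employees, not the study's data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic, illustrative scores: robot-phobia ratings and
# turnover-intention ratings for six hypothetical employees.
phobia = [1, 2, 2, 3, 4, 5]
turnover = [1, 1, 2, 3, 3, 5]
r = pearson(phobia, turnover)  # strongly positive on this toy data
```

A positive r on data like this is exactly the pattern the study describes: higher robot-phobia scores going hand in hand with higher turnover intention, though correlation alone does not establish which causes which.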
    Perception also played a role. The employees who viewed robots as being more capable and efficient also ranked higher in turnover intention.
    Robots and automation can be good ways to help augment service, Chen said, as they can handle tedious tasks humans typically do not like doing such as washing dishes or handling loads of hotel laundry. But the danger comes if the robotic additions cause more human workers to quit. The authors point out this can create a “negative feedback loop” that can make the hospitality labor shortage worse.
    Chen recommended that employers communicate not only the benefits but the limitations of the technology — and place a particular emphasis on the role human workers play.
    “When you’re introducing a new technology, make sure not to focus just on how good or efficient it will be. Instead, focus on how people and the technology can work together,” he said.

  • New AI algorithm may improve autoimmune disease prediction and therapies

    A new advanced artificial intelligence (AI) algorithm may lead to better — and earlier — predictions and novel therapies for autoimmune diseases, which involve the immune system mistakenly attacking the body’s own healthy cells and tissues. The algorithm digs into the genetic code underlying these conditions to more accurately model how genes associated with specific autoimmune diseases are expressed and regulated, and to identify additional risk genes.
    The work, developed by a team led by Penn State College of Medicine researchers, outperforms existing methodologies and identified 26% more novel gene and trait associations, the researchers said. They published their work today (May 20) in Nature Communications.
    “We all carry some DNA mutations, and we need to figure out how any one of these mutations may influence gene expression linked to disease so we can predict disease risk early. This is especially important for autoimmune disease,” said Dajiang Liu, distinguished professor, vice chair for research, and director of artificial intelligence and biomedical informatics at the Penn State College of Medicine and co-senior author of the study. “If an AI algorithm can more accurately predict disease risk, it means we can carry out interventions earlier.”
    Genetics often underpin disease development. Variations in DNA can influence gene expression, or the process by which the information in DNA is converted into functional products like a protein. How much or how little a gene is expressed can influence disease risk.
    Genome-wide association studies (GWAS), a popular approach in human genetics research, can home in on regions of the genome associated with a particular disease or trait but can’t pinpoint the specific genes that affect disease risks. It’s like sharing your location with a friend with the precise location setting turned off on your smartphone — the city might be obvious, but the address is obscured. Existing methods are also limited in the granularity of their analysis. Gene expression can be specific to certain types of cells. If the analysis doesn’t distinguish between distinct cell types, the results may overlook real causal relationships between genetic variants and gene expression.
    The research team’s method, dubbed EXPRESSO for EXpression PREdiction with Summary Statistics Only, applies a more advanced artificial intelligence algorithm and analyzes data from single-cell expression quantitative trait loci, a type of data that links genetic variants to the genes they regulate. It also integrates 3D genomic data and epigenetics — which measures how genes may be modified by environment to influence disease — into its modeling. The team applied EXPRESSO to GWAS datasets for 14 autoimmune diseases, including lupus, Crohn’s disease, ulcerative colitis and rheumatoid arthritis.
    “With this new method, we were able to identify many more risk genes for autoimmune disease that actually have cell-type specific effects, meaning that they only have effects in a particular cell type and not others,” said Bibo Jiang, assistant professor at the Penn State College of Medicine and senior author of the study.

    The team then used this information to identify potential therapeutics for autoimmune disease. Currently, there aren’t good long-term treatment options, they said.
    “Most treatments are designed to mitigate symptoms, not cure the disease. It’s a dilemma knowing that autoimmune disease needs long-term treatment, but the existing treatments often have such bad side effects that they can’t be used for long. Yet, genomics and AI offer a promising route to develop novel therapeutics,” said Laura Carrel, professor of biochemistry and molecular biology at the Penn State College of Medicine and co-senior author of the study.
    The team’s work pointed to drug compounds that could reverse gene expression in cell types associated with an autoimmune disease, such as vitamin K for ulcerative colitis and metformin, which is typically prescribed for type 2 diabetes, for type 1 diabetes. These drugs, already approved by the Food and Drug Administration as safe and effective for treating other diseases, could potentially be repurposed.
    The research team is working with collaborators to validate their findings in a laboratory setting and, ultimately, in clinical trials.
    Lida Wang, a doctoral student in the biostatistics program, and Chachrit Khunsriraksakul, who earned a doctorate in bioinformatics and genomics in 2022 and his medical degree in May from Penn State, co-led the study. Other Penn State College of Medicine authors on the paper include: Havell Markus, who is pursuing a doctorate and a medical degree; Dieyi Chen, doctoral candidate; Fan Zhang, graduate student; and Fang Chen, postdoctoral scholar. Xiaowei Zhan, associate professor at UT Southwestern Medical Center, also contributed to the paper.
    Funding from the National Institutes of Health (grant numbers R01HG011035, R01AI174108 and R01ES036042) and the Artificial Intelligence and Biomedical Informatics pilot grant from the Penn State College of Medicine supported this work.

  • ‘The High Seas’ tells of the many ways humans are laying claim to the ocean

    The High Seas
    Olive Heffernan
    Greystone Books, $32.95

    The ocean is a rich, fertile and seemingly lawless frontier. It’s a watery wild west, irresistible to humans hoping to plunder its many riches.

    That is the narrative throughout The High Seas: Greed, Power and the Battle for the Unclaimed Ocean, a fast-paced, thoroughly reported and deeply disquieting book by science journalist Olive Heffernan, also the founding chief editor of the journal Nature Climate Change.

    The book begins by churning rapidly through the waves of history that brought us to today, including how we even define the high seas: all ocean waters more than 200 nautical miles from any country’s coastline. In many ways, the modern ocean grab was set in motion some 400 years ago. A bitter feud between Dutch and Portuguese traders culminated in a legal document called the Mare Liberum, or the “free seas,” which argues that the ocean is a vast global commons owned by no one.

  • Physicists propose path to faster, more flexible robots

    In a May 15 paper released in the journal Physical Review Letters, Virginia Tech physicists revealed a microscopic phenomenon that could greatly improve the performance of soft devices, such as agile flexible robots or microscopic capsules for drug delivery.
    The paper, written by doctoral candidate Chinmay Katke, assistant professor C. Nadir Kaplan, and co-author Peter A. Korevaar from Radboud University in the Netherlands, proposes a new physical mechanism that could speed up the expansion and contraction of hydrogels. For one thing, this opens up the possibility for hydrogels to replace rubber-based materials used to make flexible robots — enabling these fabricated materials to perhaps move with a speed and dexterity close to that of human hands.
    Soft robots are already being used in manufacturing, where a hand-like device is programmed to grab an item from a conveyer belt — picture a hot dog or piece of soap — and place it in a container to be packaged. But the ones in use now lean on hydraulics or pneumatics to change the shape of the “hand” to pick up the item.
    Akin to our own body, hydrogels mostly contain water and are everywhere around us, e.g., food jelly and shaving gel. Katke, Korevaar, and Kaplan’s research appears to have found a method that allows hydrogels to swell and contract much more quickly, which would improve their flexibility and capability to function in different settings.
    Living organisms use osmosis for activities such as the bursting of seed-dispersing fruits in plants or the absorption of water in the intestine. Normally, we think of osmosis as a flow of water moving through a membrane that bigger molecules, like polymers, cannot cross. Such membranes are called semi-permeable membranes and were thought to be necessary to trigger osmosis.
    Previously, Korevaar and Kaplan had done experiments using a thin layer of hydrogel film composed of polyacrylic acid. They had observed that even though the hydrogel film allows both water and ions to pass through and is not selective, the hydrogel rapidly swells due to osmosis when ions are released inside it, and then shrinks back again.
    Katke, Korevaar, and Kaplan developed a new theory to explain this observation. The theory shows that microscopic interactions between ions and polyacrylic acid can make a hydrogel swell when the ions released inside it are unevenly spread out. They called this “diffusio-phoretic swelling of the hydrogels.” Furthermore, this newly discovered mechanism allows hydrogels to swell much faster than was previously thought possible.

    Why is that change important?
    Kaplan explained: Soft agile robots are currently made with rubber, which “does the job but their shapes are changed hydraulically or pneumatically. This is not desired because it is difficult to imprint a network of tubes into these robots to deliver air or fluid into them.”
    Imagine, Kaplan said, how many different things you can do with your hand and how fast you can do them owing to your neural network and the motion of ions under your skin. “Because the rubber and hydraulics are not as versatile as your biological tissue, which is a hydrogel, state-of-the-art soft robots can only do a limited number of movements.”
    Katke explained that the process they researched allows hydrogels in soft robots to change shape and then return to their original form “significantly faster this way,” even in robots larger than previously possible.
    At present, only microscopic-sized hydrogel robots can respond to a chemical signal quickly enough to be useful and larger ones require hours to change shape, Katke said. By using the new diffusio-phoresis method, soft robots as large as a centimeter may be able to transform in just a few seconds, which is subject to further studies.
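The size dependence above follows from a standard order-of-magnitude estimate: conventional, diffusion-limited swelling takes a time of roughly L squared over D. The collective diffusivity used below is an assumed, typical value for water moving through a polymer network, not a number from the paper.

```python
# Scaling estimate for conventional osmotic swelling: t ~ L^2 / D.
D = 1e-9  # collective diffusivity in m^2/s (assumed, illustrative)

def swelling_time(L_m):
    """Characteristic time (seconds) for a gel of size L_m (meters) to swell."""
    return L_m ** 2 / D

t_micro = swelling_time(10e-6)  # a 10-micrometer gel: ~0.1 s
t_cm = swelling_time(1e-2)      # a 1-centimeter gel: ~100,000 s (about a day)
```

Because the time grows with the square of the size, centimeter-scale gels are painfully slow under ordinary osmotic swelling, which is consistent with the hours-long response times mentioned above; a mechanism that sidesteps this diffusion limit is what would let centimeter-scale soft robots respond in seconds.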
    Larger agile soft robots that could respond quickly could improve assistive devices in healthcare, “pick-and-place” functions in manufacturing, search and rescue operations, cosmetics used for skincare, and contact lenses.