More stories

  • Advance in 'optical tweezers' to boost biomedical research

    Much like the Jedi in Star Wars use ‘the force’ to control objects from a distance, scientists can use light, or ‘optical force’, to move very small particles.
    The inventors of this ground-breaking laser technology, known as ‘optical tweezers’, were awarded the 2018 Nobel Prize in physics.
    Optical tweezers are used in biology, medicine and materials science to assemble and manipulate nanoparticles, such as those made of gold. However, the technology relies on a difference in refractive properties between the trapped particle and the surrounding environment.
    Now scientists have discovered a new technique that allows them to manipulate particles that have the same refractive properties as the background environment, overcoming a fundamental technical challenge.
    The study ‘Optical tweezers beyond refractive index mismatch using highly doped upconversion nanoparticles’ has just been published in Nature Nanotechnology.
    “This breakthrough has huge potential, particularly in fields such as medicine,” says leading co-author Dr Fan Wang from the University of Technology Sydney (UTS).

    “The ability to push, pull and measure the forces of microscopic objects inside cells, such as strands of DNA or intracellular enzymes, could lead to advances in understanding and treating many different diseases such as diabetes or cancer.
    “Traditional mechanical micro-probes used to manipulate cells are invasive, and the positioning resolution is low. They can only measure things like the stiffness of a cell membrane, not the force of molecular motor proteins inside a cell,” he says.
    The research team developed a unique method to control the refractive properties and luminescence of nanoparticles by doping nanocrystals with rare-earth metal ions.
    Having overcome this first fundamental challenge, the team then optimised the doping concentration of ions to trap nanoparticles at much lower laser power, and with 30 times greater efficiency.
    “Traditionally, you need hundreds of milliwatts of laser power to trap a 20 nanometre gold particle. With our new technology, we can trap a 20 nanometre particle using tens of milliwatts of power,” says Xuchen Shan, first co-author and UTS PhD candidate in the UTS School of Electrical and Data Engineering.

    advertisement

    “Our optical tweezers also achieved a record high degree of sensitivity or ‘stiffness’ for nanoparticles in a water solution. Remarkably, the heat generated by this method was negligible compared with older methods, so our optical tweezers offer a number of advantages,” he says.
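    The trap “stiffness” mentioned here is typically calibrated from a trapped bead’s thermal jitter: by the equipartition theorem, a particle in a harmonic trap satisfies ⟨x²⟩ = k_BT/k. A minimal sketch of that standard calibration, using made-up numbers rather than the paper’s data:

    ```python
    import numpy as np

    # Equipartition calibration of an optical trap: a bead in a harmonic
    # trap at temperature T satisfies <x^2> = k_B * T / k, so the trap
    # stiffness k can be read off the variance of the bead's position.
    # All numbers below are illustrative only.

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def trap_stiffness(positions_m, temperature_k=298.0):
        """Estimate trap stiffness (N/m) from bead position samples in metres."""
        return KB * temperature_k / np.var(positions_m)

    # Simulate thermal jitter in a trap of known stiffness, then recover it.
    rng = np.random.default_rng(0)
    k_true = 1e-6  # N/m, a typical order of magnitude for optical traps
    sigma = np.sqrt(KB * 298.0 / k_true)  # expected RMS displacement
    samples = rng.normal(0.0, sigma, size=100_000)

    k_est = trap_stiffness(samples)
    print(f"true k = {k_true:.2e} N/m, estimated k = {k_est:.2e} N/m")
    ```

    A stiffer trap confines the bead to a smaller positional variance, which is why stiffness doubles as a sensitivity figure of merit.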
    Fellow leading co-author Dr Peter Reece, from the University of New South Wales, says this proof-of-concept research is a significant advancement in a field that is becoming increasingly sophisticated for biological researchers.
    “The prospect of developing a highly efficient nanoscale force probe is very exciting. The hope is that the force probe can be labelled to target intracellular structures and organelles, enabling the optical manipulation of these intracellular structures,” he says.
    Distinguished Professor Dayong Jin, Director of the UTS Institute for Biomedical Materials and Devices (IBMD) and a leading co-author, says this work opens up new opportunities for super resolution functional imaging of intracellular biomechanics.
    “IBMD research is focused on the translation of advances in photonics and material technology into biomedical applications, and this type of technology development is well aligned to this vision,” says Professor Jin.
    “Once we have answered the fundamental science questions and discovered new mechanisms of photonics and material science, we then move to apply them. This new advance will allow us to use lower-power and less-invasive ways to trap nanoscopic objects, such as live cells and intracellular compartments, for high precision manipulation and nanoscale biomechanics measurement.”

  • Researchers discover that privacy-preserving tools leave private data anything but

    Machine-learning (ML) systems are becoming pervasive not only in technologies affecting our day-to-day lives, but also in those observing them, including facial expression recognition systems. Companies that make and use such widely deployed services rely on so-called privacy preservation tools that often use generative adversarial networks (GANs), typically produced by a third party to scrub images of individuals’ identity. But how good are they?
    Researchers at the NYU Tandon School of Engineering, who explored the machine-learning frameworks behind these tools, found that the answer is “not very.” In the paper “Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images,” presented last month at the 35th AAAI Conference on Artificial Intelligence, a team led by Siddharth Garg, Institute Associate Professor of electrical and computer engineering at NYU Tandon, examined whether private data could still be recovered from images that had been “sanitized” by deep-learning discriminators such as privacy-protecting GANs (PP-GANs), even when those images had passed empirical privacy tests. The team, which included lead author Kang Liu, a Ph.D. candidate, and Benjamin Tan, research assistant professor of electrical and computer engineering, found that PP-GAN designs can, in fact, be subverted to pass privacy checks while still allowing secret information to be extracted from sanitized images.
    Machine-learning-based privacy tools have broad applicability, potentially in any privacy-sensitive domain, including removing location-relevant information from vehicular camera data, obfuscating the identity of a person who produced a handwriting sample, or removing barcodes from images. The design and training of GAN-based tools are outsourced to vendors because of the complexity involved.
    “Many third-party tools for protecting the privacy of people who may show up on a surveillance or data-gathering camera use these PP-GANs to manipulate images,” said Garg. “Versions of these systems are designed to sanitize images of faces and other sensitive data so that only application-critical information is retained. While our adversarial PP-GAN passed all existing privacy checks, we found that it actually hid secret data pertaining to the sensitive attributes, even allowing for reconstruction of the original private image.”
    The study provides background on PP-GANs and associated empirical privacy checks, formulates an attack scenario to ask if empirical privacy checks can be subverted, and outlines an approach for circumventing empirical privacy checks.
    The team provides the first comprehensive security analysis of privacy-preserving GANs and demonstrates that existing privacy checks are inadequate to detect leakage of sensitive information.
    Using a novel steganographic approach, they adversarially modify a state-of-the-art PP-GAN to hide a secret (the user ID) within purportedly sanitized face images.
    They show that their proposed adversarial PP-GAN can successfully hide sensitive attributes in “sanitized” output images that pass privacy checks, with a 100% secret recovery rate.
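    The paper’s attack embeds the secret adversarially during GAN training, which is far subtler than classic steganography; as a much simpler stand-in for the core idea of smuggling recoverable bits through an innocuous-looking image, here is a least-significant-bit scheme (image and user ID are invented):

    ```python
    import numpy as np

    # Classic least-significant-bit (LSB) steganography: overwrite the
    # lowest bit of each pixel with one bit of the secret. Each pixel
    # changes by at most 1, yet the secret is recoverable exactly.
    # (The PP-GAN attack in the paper hides secrets via an adversarially
    # trained network, not LSBs; this toy only shows hide-and-recover.)

    def embed(image, secret_bits):
        flat = image.flatten().copy()
        flat[:len(secret_bits)] = (flat[:len(secret_bits)] & 0xFE) | secret_bits
        return flat.reshape(image.shape)

    def extract(image, n_bits):
        return image.flatten()[:n_bits] & 1

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "sanitized" image
    user_id_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical ID

    stego = embed(img, user_id_bits)
    recovered = extract(stego, len(user_id_bits))
    print(recovered)  # identical to user_id_bits: 100% recovery
    ```

    The point the study makes is that empirical privacy checks cannot distinguish such a carrier image from a genuinely sanitized one.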
    Noting that empirical metrics are dependent on discriminators’ learning capacities and training budgets, Garg and his collaborators argue that such privacy checks lack the necessary rigor for guaranteeing privacy.
    “From a practical standpoint, our results sound a note of caution against the use of data sanitization tools, and specifically PP-GANs, designed by third parties,” explained Garg. “Our experimental results highlighted the insufficiency of existing DL-based privacy checks and the potential risks of using untrusted third-party PP-GAN tools.”

    Story Source:
    Materials provided by NYU Tandon School of Engineering. Note: Content may be edited for style and length.

  • High end of climate sensitivity in new climate models seen as less plausible

    A recent analysis of the latest generation of climate models — known as CMIP6 — provides a cautionary tale on interpreting climate simulations as scientists develop more sensitive and sophisticated projections of how the Earth will respond to increasing levels of carbon dioxide in the atmosphere.
    Researchers at Princeton University and the University of Miami reported that newer models with a high “climate sensitivity” — meaning they predict much greater global warming from the same levels of atmospheric carbon dioxide as other models — do not provide a plausible scenario of Earth’s future climate.
    Those models overstate the global cooling effect that arises from interactions between clouds and aerosols and project that clouds will moderate greenhouse gas-induced warming — particularly in the northern hemisphere — much more than climate records show actually happens, the researchers reported in the journal Geophysical Research Letters.
    Instead, the researchers found that models with lower climate sensitivity are more consistent with observed differences in temperature between the northern and southern hemispheres, and, thus, are more accurate depictions of projected climate change than the newer models. The study was supported by the Carbon Mitigation Initiative (CMI) based in Princeton’s High Meadows Environmental Institute (HMEI).
    These findings are potentially significant when it comes to climate-change policy, explained co-author Gabriel Vecchi, a Princeton professor of geosciences and the High Meadows Environmental Institute and principal investigator in CMI. Because models with higher climate sensitivity forecast greater warming from greenhouse gas emissions, they also project more dire — and imminent — consequences such as more extreme sea-level rise and heat waves.
    The high climate-sensitivity models forecast an increase in global average temperature of 2 to 6 degrees Celsius in response to a doubling of atmospheric carbon dioxide. The current scientific consensus is that warming must be kept under 2 degrees Celsius to avoid catastrophic effects, and the Paris Agreement sets an even more ambitious target of 1.5 degrees Celsius.
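    Climate sensitivity is conventionally defined as the equilibrium warming per doubling of CO2, so projected warming scales with the logarithm of concentration. A back-of-envelope sketch, with illustrative numbers that are not the study’s:

    ```python
    import math

    # Equilibrium climate sensitivity S is the warming per doubling of
    # CO2, so warming grows with the log of concentration:
    #   delta_T = S * log2(C / C0)
    # Values below are illustrative only.

    def warming(co2_ppm, sensitivity_c, co2_preindustrial_ppm=280.0):
        """Equilibrium warming (deg C) relative to the pre-industrial baseline."""
        return sensitivity_c * math.log2(co2_ppm / co2_preindustrial_ppm)

    for s in (2.0, 4.5):  # low- vs high-sensitivity model, deg C per doubling
        print(f"S = {s:.1f} C/doubling -> {warming(420.0, s):.2f} C at 420 ppm")
    ```

    The logarithmic form is why the spread between low- and high-sensitivity models widens so sharply as emissions accumulate.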

    “A higher climate sensitivity would obviously necessitate much more aggressive carbon mitigation,” Vecchi said. “Society would need to reduce carbon emissions much more rapidly to meet the goals of the Paris Agreement and keep global warming below 2 degrees Celsius. Reducing the uncertainty in climate sensitivity helps us make a more reliable and accurate strategy to deal with climate change.”
    The researchers found that both the high and low climate-sensitivity models match global temperatures observed during the 20th century. The higher-sensitivity models, however, include a stronger cooling effect from aerosol-cloud interaction that offsets the greater warming due to greenhouse gases. Moreover, the models have aerosol emissions occurring primarily in the northern hemisphere, which is not consistent with observations.
    “Our results remind us that we should be cautious about a model result, even if the models accurately represent past global warming,” said first author Chenggong Wang, a Ph.D. candidate in Princeton’s Program in Atmospheric and Oceanic Sciences. “We show that the global average hides important details about the patterns of temperature change.”
    In addition to the main findings, the study helps shed light on how clouds can moderate warming both in models and the real world at large and small scales.
    “Clouds can amplify global warming and may cause warming to accelerate rapidly during the next century,” said co-author Wenchang Yang, an associate research scholar in geosciences at Princeton. “In short, improving our understanding and ability to correctly simulate clouds is really the key to more reliable predictions of the future.”
    Scientists at Princeton and other institutions have recently turned their focus to the effect that clouds have on climate change. Related research includes two papers by Amilcare Porporato, Princeton’s Thomas J. Wu ’94 Professor of Civil and Environmental Engineering and the High Meadows Environmental Institute and a member of the CMI leadership team, that reported on the future effect of heat-induced clouds on solar power and how climate models underestimate the cooling effect of the daily cloud cycle.
    “Understanding how clouds modulate climate change is at the forefront of climate research,” said co-author Brian Soden, a professor of atmospheric sciences at the University of Miami. “It is encouraging that, as this study shows, there are still many treasures we can exploit from historical climate observations that help refine the interpretations we get from global mean-temperature change.”

    Story Source:
    Materials provided by Princeton University. Original written by Morgan Kelly. Note: Content may be edited for style and length.

  • Filming a 3D video of a virus with instantaneous light and AI

    It is millions of trillions of times brighter than sunlight, and its pulses last just one quadrillionth of a second, which is why it is aptly called the instantaneous light. It is the X-ray Free Electron Laser (XFEL), a light source that opens a new scientific paradigm. Combining it with AI, an international research team has succeeded in filming and restoring the 3D structure of nanoparticles that share structural similarities with viruses. With fear of a new pandemic growing around the world due to COVID-19, this discovery is attracting attention in academic circles as a way to image the structure of a virus with both high accuracy and speed.
    An international team of researchers from POSTECH, National University of Singapore (NUS), KAIST, GIST, and IBS has successfully analyzed the structural heterogeneities in the 3D structures of nanoparticles by irradiating thousands of nanoparticles per hour using the XFEL at Pohang Accelerator Laboratory (PAL) in Korea and restoring 3D multi-models through machine learning. The research team, led by Professor Changyong Song and Ph.D. candidate Do Hyung Cho of the Department of Physics at POSTECH, drove the international collaboration that realized it.
    Nanoparticles have a peculiar function that may not be available from native bulk materials, and one can control their physical and chemical properties by designing 3D structures and compositions of constituting elements.
    What nanoparticles and viruses have in common is that they exist as independent particles, rather than in regular, periodic crystalline arrangements, and, as such, their structures are not uniform at the nanometer level. To understand their structures precisely, it is necessary to statistically analyze the structures of individual particles across the whole ensemble, from thousands to hundreds of thousands of specimens. However, electron microscopes lack the penetration depth to probe larger samples, which limits the specimen size, while conventional X-rays damage the sample through the radiation itself, making it difficult to obtain sufficient resolution.
    The research team overcame the practical limitations of the conventional method by using the X-ray free electron laser and the machine learning method to observe the statistical distribution of the 3D structure of thousands of nanoparticles at the nanometer level. As a result, 3D structures of nanoparticles having a size of 300 nm were obtained with a resolution better than 20 nm.
    This achievement was particularly significant for restoring the 3D structures of thousands of nanoparticles using machine learning. Since conventional single-particle imaging techniques assume that all specimens share an identical 3D structure, it was difficult to restore structures from real experimental data, where the samples are not homogeneous. By introducing a multi-model approach, the researchers succeeded in restoring representative 3D structures, which enabled the classification of the nanoparticles into four major shapes and confirmed that about 40% of them had similar structures.
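    The multi-model step, first grouping heterogeneous particles and then reconstructing one representative structure per group, can be caricatured with k-means clustering on synthetic shape descriptors. This toy is only a stand-in for the idea, not the team’s actual diffraction-data pipeline:

    ```python
    import numpy as np

    def kmeans(points, k, iters=50, seed=0):
        """Plain k-means; a center stays put if its cluster goes empty."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            centers = np.array([
                points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                for j in range(k)
            ])
        return labels, centers

    # Four synthetic "shape classes" of particles, 100 specimens each,
    # described by invented 2-D shape descriptors.
    rng = np.random.default_rng(1)
    true_centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
    points = np.concatenate([c + rng.normal(0.0, 0.4, size=(100, 2))
                             for c in true_centers])

    labels, centers = kmeans(points, k=4)
    sizes = np.bincount(labels, minlength=4)
    print(sizes)
    ```

    Once particles are grouped this way, one 3D model can be reconstructed per class instead of forcing a single average structure onto a heterogeneous ensemble.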
    In addition, through quantitative analysis of the restored 3D structure, the international research collaboration team also uncovered the internal elastic strain distribution accompanied by the characteristic polyhedron structure of the nanoparticles and the inhomogeneous density distribution.
    “These findings enable the observation of 3D structure of noncrystalline viral specimens with inhomogeneously distributed internal molecules,” explained Professor Changyong Song of POSTECH. “Adding the 3D image restoration algorithm to this through machine learning shows promise to be applicable to studies of macromolecule structures or viruses in living organisms.”

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Heat-free optical switch would enable optical quantum computing chips

    In a potential boost for quantum computing and communication, a European research collaboration reported a new method of controlling and manipulating single photons without generating heat. The solution makes it possible to integrate optical switches and single-photon detectors in a single chip.
    Publishing in Nature Communications, the team reported to have developed an optical switch that is reconfigured with microscopic mechanical movement rather than heat, making the switch compatible with heat-sensitive single-photon detectors.
    Optical switches in use today work by locally heating light guides inside a semiconductor chip. “This approach does not work for quantum optics,” says co-author Samuel Gyger, a PhD student at KTH Royal Institute of Technology in Stockholm.
    “Because we want to detect every single photon, we use quantum detectors that work by measuring the heat a single photon generates when absorbed by a superconducting material,” Gyger says. “If we use traditional switches, our detectors will be flooded by heat, and thus not work at all.”
    The new method enables control of single photons without the disadvantage of heating up a semiconductor chip and thereby rendering single-photon detectors useless, says Carlos Errando Herranz, who conceived the research idea and led the work at KTH as part of the European Quantum Flagship project, S2QUIP.
    Using microelectromechanical systems (MEMS) actuation, the solution enables optical switching and photon detection on a single semiconductor chip while maintaining the cold temperatures required by single-photon detectors.
    “Our technology will help to connect all building blocks required for integrated optical circuits for quantum technologies,” Errando Herranz says.
    “Quantum technologies will enable secure message encryption and methods of computation that solve problems today’s computers cannot,” he says. “And they will provide simulation tools that enable us to understand fundamental laws of nature, which can lead to new materials and medicines.”
    The group will further develop the technology to make it compatible with typical electronics, which will involve reducing the voltages used in the experimental setup.
    Errando Herranz says that the group aims to integrate the fabrication process in semiconductor foundries that already fabricate on-chip optics — a necessary step in order to make quantum optic circuits large enough to fulfill some of the promises of quantum technologies.

    Story Source:
    Materials provided by KTH, Royal Institute of Technology. Note: Content may be edited for style and length.

  • New search engine for single cell atlases

    A new software tool allows researchers to quickly query datasets generated from single-cell sequencing. Users can identify the cell types in which any combination of genes is active. Published in Nature Methods on 1st March, the open-access ‘scfind’ software enables swift analysis of multiple datasets containing millions of cells by a wide range of users, on a standard computer.
    Processing times for such datasets are just a few seconds, saving time and computing costs. The tool, developed by researchers at the Wellcome Sanger Institute, can be used much like a search engine, as users can input free text as well as gene names.
    Techniques to sequence the genetic material of an individual cell have advanced rapidly over the last 10 years. Single-cell RNA sequencing (scRNAseq), used to assess which genes are active in individual cells, can be applied to millions of cells at once and generates vast amounts of data (2.2 GB for the Human Kidney Atlas alone). Projects including the Human Cell Atlas and the Malaria Cell Atlas are using such techniques to uncover and characterise all of the cell types present in an organism or population. To get the most value from these data, they must be easy for a wide range of researchers to access and query.
    To allow fast and efficient access, the new software tool, scfind, uses a two-step strategy to compress data roughly 100-fold. Efficient decompression makes it possible to query the data quickly. Developed by researchers at the Wellcome Sanger Institute, scfind can perform large-scale analysis of datasets involving millions of cells on a standard computer without special hardware. Queries that used to take days to return a result now take seconds.
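    The search-engine behaviour can be pictured as an inverted index from genes to the cells expressing them, with a multi-gene query answered by set intersection. scfind’s real index is compressed and far more elaborate; the sketch below uses invented expression calls:

    ```python
    from functools import reduce

    # Toy inverted index in the spirit of scfind: map each gene to the set
    # of cell types in which it is detected, then answer "which cell types
    # express ALL of these genes?" by intersecting the sets. scfind's
    # actual data structure is a two-step compressed index, not a dict.

    index = {  # hypothetical expression calls for pancreatic cell types
        "INS":  {"pancreatic beta", "pancreatic delta"},
        "GCG":  {"pancreatic alpha"},
        "SST":  {"pancreatic delta"},
        "CHGA": {"pancreatic alpha", "pancreatic beta", "pancreatic delta"},
    }

    def cell_types_expressing(genes):
        """Cell types in which every queried gene is detected."""
        return reduce(set.intersection, (index[g] for g in genes))

    print(cell_types_expressing(["INS", "CHGA"]))  # beta and delta cells
    print(cell_types_expressing(["SST", "CHGA"]))  # delta cells only
    ```

    Because each lookup touches only the sets for the queried genes, query time is independent of the total number of cells indexed, which is what makes second-scale queries over millions of cells plausible.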
    The new tool can also be used for analyses of multi-omics data, for example by combining single-cell ATAC-seq data, which measures epigenetic activity, with scRNAseq data.
    Dr Jimmy Lee, Postdoctoral Fellow at the Wellcome Sanger Institute, and lead author of the research, said: “Advances in multi-omics methods have opened up an unprecedented opportunity to appreciate the landscape and dynamics of gene regulatory networks. Scfind will help us identify the genomic regions that regulate gene activity — even if those regions are distant from their targets.”
    Scfind can also be used to identify new genetic markers that are associated with, or define, a cell type. The researchers show that scfind is a more accurate and precise method to do this, compared with manually curated databases or other computational methods available.
    To make scfind more user friendly, it incorporates techniques from natural language processing to allow for arbitrary queries.
    Dr Martin Hemberg, former Group Leader at the Wellcome Sanger Institute, now at Harvard Medical School and Brigham and Women’s Hospital, said: “Analysis of single-cell datasets usually requires basic programming skills and expertise in genetics and genomics. To ensure that large single-cell datasets can be accessed by a wide range of users, we developed a tool that can function like a search engine — allowing users to input any query and find relevant cell types.”
    Dr Jonah Cool, Science Program Officer at the Chan Zuckerberg Initiative, said: “New, faster analysis methods are crucial for finding promising insights in single-cell data, including in the Human Cell Atlas. User-friendly tools like scfind are accelerating the pace of science and the ability of researchers to build off of each other’s work, and the Chan Zuckerberg Initiative is proud to support the team that developed this technology.”

    Story Source:
    Materials provided by Wellcome Trust Sanger Institute. Note: Content may be edited for style and length.

  • This soft robot withstands crushing pressures at the ocean’s greatest depths

    Inspired by a strange fish that can withstand the punishing pressures of the deepest reaches of the ocean, scientists have devised a soft autonomous robot capable of keeping its fins flapping — even in the deepest part of the Mariana Trench.
    The team, led by roboticist Guorui Li of Zhejiang University in Hangzhou, China, successfully field-tested the robot’s ability to swim at depths ranging from 70 meters to nearly 11,000 meters, it reports March 4 in Nature.
    Challenger Deep is the lowest of the low, the deepest part of the Mariana Trench. It bottoms out at about 10,900 meters below sea level (SN: 12/11/12). The pressure from all that overlying water is about a thousand times the atmospheric pressure at sea level, translating to about 103 million pascals (or 15,000 pounds per square inch). “It’s about the equivalent of an elephant standing on top of your thumb,” says deep-sea physiologist and ecologist Mackenzie Gerringer of State University of New York at Geneseo, who was not involved in the new study.
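    The quoted pressure follows from the hydrostatic relation P = ρgh. A quick check, ignoring seawater compressibility (which is why this simple estimate lands slightly above the ~103 million pascal figure quoted above):

    ```python
    # Hydrostatic pressure at the bottom of Challenger Deep: P = rho * g * h.
    # Using a constant surface seawater density slightly overshoots, since
    # real seawater compresses (and densifies) with depth.

    rho = 1025.0   # kg/m^3, typical seawater density near the surface
    g = 9.81       # m/s^2
    h = 10_900.0   # m, approximate depth of Challenger Deep

    pressure_pa = rho * g * h
    print(f"{pressure_pa / 1e6:.0f} MPa, about {pressure_pa / 101_325:.0f} atmospheres")
    ```

    Either way, the answer is roughly a thousand atmospheres, consistent with the elephant-on-your-thumb comparison.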

    The tremendous pressures at these hadal depths — the deepest ocean zone, between 6,000 and 11,000 meters — present a tough engineering challenge, Gerringer says. Traditional deep-sea robots or manned submersibles are heavily reinforced with rigid metal frames so as not to crumple — but these vessels are bulky and cumbersome, and the risk of structural failure remains high.
    To design robots that can maneuver gracefully through shallower waters, scientists have previously looked to soft-bodied ocean creatures, such as the octopus, for inspiration (SN: 9/17/14). As it happens, such a deep-sea muse also exists: Pseudoliparis swirei, or the Mariana hadal snailfish, a mostly squishy, translucent fish that lives as much as 8,000 meters deep in the Mariana Trench.
    In 2018, researchers described three newly discovered species of deep-sea snailfish (one shown) found in the Pacific Ocean’s Atacama Trench, living at depths down to about 7,500 meters. Also found in the Mariana Trench, such fish are well adapted for living in high-pressure, deep-sea environments, with only partially hardened skulls and soft, streamlined, energy-efficient bodies. (Image: Newcastle University)
    Gerringer, one of the researchers who first described the deep-sea snailfish in 2014, constructed a 3-D printed soft robot version of it several years later to better understand how it swims. Her robot contained a synthesized version of the watery goo inside the fish’s body that most likely adds buoyancy and helps it swim more efficiently (SN: 1/3/18).
    But devising a robot that can swim under extreme pressure to investigate the deep-sea environment is another matter. Autonomous exploration robots require electronics not only to power their movement, but also to perform various tasks, whether testing water chemistry, lighting up and filming the denizens of deep ocean trenches, or collecting samples to bring back to the surface. Under the squeeze of water pressure, these electronics can grind against one another.
    So Li and his colleagues decided to borrow one of the snailfish’s adaptations to high-pressure life: Its skull is not completely fused together with hardened bone. That extra bit of malleability allows the pressure on the skull to equalize. In a similar vein, the scientists decided to distribute the electronics — the “brain” — of their robot fish farther apart than they normally would, and then encase them in soft silicone to keep them from touching.
    The design of the new soft robot (left) was inspired by the deep-sea snailfish (illustrated, right), which is adapted to live in the very high-pressure environments of the deepest parts of the ocean. The snailfish’s skull is incompletely ossified, or hardened, which allows external and internal pressures to equalize. Spreading apart the robot’s sensitive electronics and encasing them in silicone keeps the parts from squeezing together. The robot’s flapping fins are inspired by the thin pectoral fins of the fish (although the real fish doesn’t use its fins to swim). (Image: Li et al/Nature 2021)
    The team also designed a soft body that slightly resembles the snailfish, with two fins that the robot can use to propel itself through the water. (Gerringer notes that the actual snailfish doesn’t flap its fins, but wriggles its body like a tadpole.) To flap the fins, the robot is equipped with batteries that power artificial muscles: electrodes sandwiched between two membranes that deform in response to the electrical charge.
    The team tested the robot in several environments: 70 meters deep in a lake; about 3,200 meters deep in the South China Sea; and finally, at the very bottom of the ocean. The robot was allowed to swim freely in the first two trials. For the Challenger Deep trial, however, the researchers kept a tight grip, using the extendable arm of a deep-sea lander to hold the robot while it flapped its fins.
    This machine “pushes the boundaries of what can be achieved” with biologically inspired soft robots, write roboticists Cecilia Laschi of the National University of Singapore and Marcello Calisti of the University of Lincoln in England. The pair have a commentary on the research in the same issue of Nature. That said, the machine is still a long way from deployment, they note. It swims more slowly than other underwater robots, and doesn’t yet have the power to withstand powerful underwater currents. But it “lays the foundations” for future such robots to help answer lingering questions about these mysterious reaches of the ocean, they write.
    Researchers successfully ran a soft autonomous robot through several field tests at different depths in the ocean. At 3,224 meters deep in the South China Sea, the tests demonstrated that the robot could swim autonomously (free swim test). The team also tested the robot’s ability to move under even the most extreme pressures in the ocean. A deep-sea lander’s extendable arm held the robot as it flapped its wings at a depth of 10,900 meters in the Challenger Deep, the lowest part of the Mariana Trench (extreme pressure test). These tests suggest that such robots may, in future, be able to aid in autonomous exploration of the deepest parts of the ocean, the researchers say.
    Deep-sea trenches are known to be teeming with microbial life, which happily feeds on the bonanza of organic material — from algae to animal carcasses — that finds its way to the bottom of the sea. That microbial activity hints that the trenches may play a significant role in Earth’s carbon cycle, which is in turn linked to the planet’s regulation of its climate.
    The discovery of microplastics in Challenger Deep is also incontrovertible evidence that even the bottom of the ocean isn’t really that far away, Gerringer says (SN: 11/20/20). “We’re impacting these deep-water systems before we’ve even found out what’s down there. We have a responsibility to help connect these seemingly otherworldly systems, which are really part of our planet.”

  • Helping soft robots turn rigid on demand

    Imagine a robot.
    Perhaps you’ve just conjured a machine with a rigid, metallic exterior. While robots armored with hard exoskeletons are common, they’re not always ideal. Soft-bodied robots, inspired by fish or other squishy creatures, might better adapt to changing environments and work more safely with people.
    Roboticists generally have to decide whether to design a hard- or soft-bodied robot for a particular task. But that tradeoff may no longer be necessary.
    Working with computer simulations, MIT researchers have developed a concept for a soft-bodied robot that can turn rigid on demand. The approach could enable a new generation of robots that combine the strength and precision of rigid robots with the fluidity and safety of soft ones.
    “This is the first step in trying to see if we can get the best of both worlds,” says James Bern, the paper’s lead author and a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
    Bern will present the research at the IEEE International Conference on Soft Robotics next month. Bern’s advisor, Daniela Rus, who is the CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the paper’s other author.

    Roboticists have experimented with myriad mechanisms to operate soft robots, including inflating balloon-like chambers in a robot’s arm or grabbing objects with vacuum-sealed coffee grounds. However, a key unsolved challenge for soft robotics is control — how to drive the robot’s actuators in order to achieve a given goal.
    Until recently, most soft robots were controlled manually, but in 2017 Bern and his colleagues proposed that an algorithm could take the reins. Using a simulation to help control a cable-driven soft robot, they picked a target position for the robot and had a computer figure out how much to pull on each of the cables in order to get there. A similar sequence happens in our bodies each time we reach for something: A target position for our hand is translated into contractions of the muscles in our arm.
    Now, Bern and his colleagues are using similar techniques to ask a question that goes beyond the robot’s movement: “If I pull the cables in just the right way, can I get the robot to act stiff?” Bern says he can — at least in a computer simulation — thanks to inspiration from the human arm. While contracting the biceps alone can bend your elbow to a certain degree, contracting the biceps and triceps simultaneously can lock your arm rigidly in that position. Put simply, “you can get stiffness by pulling on both sides of something,” says Bern. So, he applied the same principle to his robots.
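    The pull-on-both-sides intuition can be reproduced in a one-dimensional toy model: a mass held between two opposing cables that can only pull, where co-contraction (shortened rest lengths) keeps both cables taut so their stiffnesses add. This is only an illustration of the principle, not the paper’s control method:

    ```python
    # 1-D toy of antagonistic cable stiffening: a mass sits at position x
    # between anchors at -L and +L, held by two cables that can only pull.
    # Shortening both rest lengths (co-contraction) keeps both cables taut
    # at x = 0, so the effective stiffness doubles: the arm-locking trick.

    def cable_force(stretch, k=1.0):
        """Hookean tension when taut, zero when slack (cables cannot push)."""
        return k * stretch if stretch > 0 else 0.0

    def net_force(x, rest_length, anchor=1.0, k=1.0):
        pull_right = cable_force((anchor - x) - rest_length, k)  # toward +anchor
        pull_left = cable_force((anchor + x) - rest_length, k)   # toward -anchor
        return pull_right - pull_left

    def stiffness_at_center(rest_length, eps=1e-4):
        """Numerical stiffness -dF/dx evaluated at x = 0."""
        return (net_force(-eps, rest_length) - net_force(eps, rest_length)) / (2 * eps)

    print(stiffness_at_center(rest_length=1.0))  # just-taut cables: stiffness ~ k
    print(stiffness_at_center(rest_length=0.9))  # co-contracted: stiffness ~ 2k
    ```

    With slack cables only one side resists a small displacement, while co-contraction engages both, which is why pulling the cables "in just the right way" lets the same hardware act soft or stiff.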
    The researchers’ paper lays out a way to simultaneously control the position and stiffness of a cable-driven soft robot. The method takes advantage of the robots’ multiple cables — using some to twist and turn the body, while using others to counterbalance each other to tweak the robot’s rigidity. Bern emphasizes that the advance isn’t a revolution in mechanical engineering, but rather a new twist on controlling cable-driven soft robots.
    “This is an intuitive way of expanding how you can control a soft robot,” he says. “It’s just encoding that idea [of on-demand rigidity] into something a computer can work with.” Bern hopes his roadmap will one day allow users to control a robot’s rigidity as easily as its motion.
    On the computer, Bern used his roadmap to simulate movement and rigidity adjustment in robots of various shapes. He tested how well the robots, when stiffened, could resist displacement when pushed. Generally, the robots remained rigid as intended, though they were not equally resistant from all angles.
    Bern is building a prototype robot to test out his rigidity-on-demand control system. But he hopes to one day take the technology out of the lab. “Interacting with humans is definitely a vision for soft robotics,” he says. Bern points to potential applications in caring for human patients, where a robot’s softness could enhance safety, while its ability to become rigid could allow for lifting when necessary.
    “The core message is to make it easy to control robots’ stiffness,” says Bern. “Let’s start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform.”