More stories

  • New study shows how AI can help us better understand global threats to wildlife

    A new study published today (Tuesday 12 March) by the University of Sussex shows how researchers are using AI technology and social media to help identify global threats to wildlife.
    Researchers at Sussex have used AI to access online records from Facebook, X/Twitter, Google and Bing to map the global extent of threats to bats from hunting and trade.
    The new study demonstrates how social media and online content generated by news outlets and the public can help to increase our understanding of threats to wildlife across the world, and refocus conservation efforts.
    The Sussex team identified 22 countries involved in bat exploitation, covering both hunting and trade, that had not previously been identified by traditional academic research, including Bahrain, Spain, Sri Lanka, New Zealand and Singapore, which had the highest number of new records.
    The team developed an automated system that allowed them to conduct large-scale searches across multiple platforms. Using AI, they filtered tens of thousands of results to find relevant data. Any observations or anecdotes of bat exploitation were used to develop a global database of ‘bat exploitation records’.
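    As a rough illustration of the filtering step, the sketch below assumes the search results have already been scraped into short text snippets and uses a small supervised text classifier to flag the ones likely to describe bat hunting or trade. The example snippets, labels and model choice are hypothetical stand-ins, not the Sussex team's actual pipeline.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled snippets (1 = describes bat hunting/trade, 0 = does not)
    train_texts = [
        "Fruit bats sold at the weekend market for bushmeat",
        "Dried bats offered online as traditional medicine",
        "Bats seen roosting in the old church tower",
        "New study tracks bat echolocation calls in the forest",
    ]
    train_labels = [1, 1, 0, 0]

    # TF-IDF features plus a linear classifier act as the relevance filter
    filter_model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                                 LogisticRegression())
    filter_model.fit(train_texts, train_labels)

    # Snippets returned by the automated searches (also hypothetical)
    candidates = [
        "Vendor photographed selling flying foxes by the roadside",
        "Bat conservation talk at the local library this Friday",
    ]
    for text, keep in zip(candidates, filter_model.predict(candidates)):
        if keep:
            print("Add to exploitation database:", text)
    ```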
    To better understand threats to bats, the team compared online records with academic records, knowing that data and information shared online is influenced by factors including global events and where people have access to the internet.
    Lead author, Bronwen Hunter at the University of Sussex says:
    “Using data sources like this provides a low-cost way to help us understand threats to wildlife globally. AI allowed us to access the data at scale and complete a global analysis, which isn’t something we would have been able to achieve using traditional field studies.

    “Another benefit of using online data combined with automated data filtering is that more information can be obtained in real-time, ensuring that we can keep up to date with current threats.”
    Bats make up about a fifth of all mammal species globally and have a vital role in ecosystems: they pollinate plants, disperse seeds and help with pest control.
    Over half of bat species are classed as either ‘Threatened with Extinction’ or ‘Data Deficient’ by the International Union for Conservation of Nature (IUCN). Much less is known about the impact of hunting and trade of bats compared with other mammals. However, their very low reproductive rate and long lifespans (usually 10-30 years) make them likely to be vulnerable on a scale more commonly associated with much larger mammals such as chimpanzees, bears or lions.
    Being able to expand knowledge of bat exploitation using crowd-sourced digital records can help identify bat populations most in need of conservation action, or feed into global assessments, such as the IUCN Red List.
    Prof. Fiona Mathews at the University of Sussex, who leads the research group says:
    “The hunting and sale of bats for meat was highlighted during the Covid pandemic. But there is also a worrying trade of bats as curios or medicines. It is vital that we understand where bat exploitation is happening, and this has been very difficult historically because it often happens in remote places, and illicit trade can be hidden. This research shows that posts on the internet and social media can provide vital evidence that can now be followed up on the ground.”
    This research highlights the value of contributions from social media and online platforms and argues that they could be used for future conservation decision making. Using online data combined with current research studies provides a more complete picture of the global extent of bat exploitation.

    Kit Stoner, CEO at The Bat Conservation Trust says:
    “Unsustainable wildlife trade can pose a threat to bat species being hunted or harvested. Often, species are sold much further afield from where they are found. This trade can undermine bat conservation directly and pose a wider threat in terms of increasing the risk of zoonosis. We welcome the results of this research in providing a possible new low-cost way of detecting trade in bats which could offer a way of monitoring how this wildlife trade operates and examining ways of disrupting it.”

  • Spiral wrappers switch nanotubes from conductors to semiconductors and back

    It might look like a roll of chicken wire, but this tiny cylinder of carbon atoms — too small to see with the naked eye — could one day be used for making electronic devices ranging from night vision goggles and motion detectors to more efficient solar cells, thanks to techniques developed by researchers at Duke University.
    First discovered in the early 1990s, carbon nanotubes are made from single sheets of carbon atoms rolled up like a straw.
    Carbon isn’t exactly a newfangled material. All life on Earth is based on carbon. It’s the same stuff found in diamonds, charcoal, and pencil lead.
    What makes carbon nanotubes special are their remarkable properties. These tiny cylinders are stronger than steel, and yet so thin that 50,000 of them would equal the thickness of a human hair.
    They’re also amazingly good at conducting electricity and heat, which is why, in the push for faster, smaller, more efficient electronics, carbon nanotubes have long been touted as potential replacements for silicon.
    But producing nanotubes with specific properties is a challenge.
    Depending on how they’re rolled up, some nanotubes are considered metallic, meaning electrons can flow through them at any energy. The problem is they can’t be switched off. This limits their use in digital electronics, which rely on electrical signals that are either on or off to store binary states, much as silicon transistors switch between 0 and 1 to carry out computations.
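    As background, standard carbon-nanotube physics (not a result of this study) summarises how a tube is rolled up with two integers, the chiral indices (n, m); to a first approximation, a tube is metallic when n minus m is a multiple of three and semiconducting otherwise. A small sketch:

    ```python
    def nanotube_character(n: int, m: int) -> str:
        """Rule-of-thumb electronic character of an (n, m) carbon nanotube."""
        return "metallic" if (n - m) % 3 == 0 else "semiconducting"

    # Armchair (10,10) and (9,0) tubes come out metallic; (10,0) and (7,5) do not.
    for n, m in [(10, 10), (9, 0), (10, 0), (7, 5)]:
        print(f"({n},{m}) -> {nanotube_character(n, m)}")
    ```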

    Duke chemistry professor Michael Therien and his team say they’ve found a way around this.
    The approach takes a metallic nanotube, which always lets current through, and transforms it into a semiconducting form that can be switched on and off.
    The secret lies in special polymers — substances whose molecules are hooked together in long chains — that wind around the nanotube in an orderly spiral, “like wrapping a ribbon around a pencil,” said first author Francesco Mastrocinque, who earned his chemistry Ph.D. in Therien’s lab at Duke.
    The effect is reversible, they found. Wrapping the nanotube in a polymer changes its electronic properties from a conductor to a semiconductor. But if the nanotube is unwrapped, it goes back to its original metallic state.
    The researchers also showed that by changing the type of polymer that encircles a nanotube, they could engineer new types of semiconducting nanotubes. They can conduct electricity, but only when the right amount of external energy is applied.
    “This method provides a subtle new tool,” Therien said. “It allows you to make a semiconductor by design.”
    Practical applications of the method are likely far off. “We’re a long way from making devices,” Therien said.

    Mastrocinque and his co-authors say the work is important because it’s a way to design semiconductors that can conduct electricity when struck by light of certain low-energy wavelengths that are common but invisible to human eyes.
    In the future for instance, the Duke team’s work might help others engineer nanotubes that detect heat released as infrared radiation, to reveal people or vehicles hidden in the shadows. When infrared light — such as that emitted by warm-blooded animals — strikes one of these nanotube-polymer hybrids, it would generate an electric signal.
    Or take solar cells: this technique could be used to make nanotube semiconductors that convert a broader range of wavelengths into electricity, to harness more of the Sun’s energy.
    Because of the spiral wrapper on the nanotube surface, these structures could also be ideal materials for new forms of computing and data storage that use the spins of electrons, in addition to their charge, to process and carry information.
    The researchers describe their results March 11 in the journal Proceedings of the National Academy of Sciences.
    This research was supported by the Air Force Office of Scientific Research (FA9550-18-1-0222), the National Institutes of Health (1R01HL146849), the United States National Science Foundation (CHE-2140249, DGE-2040435) and the John Simon Guggenheim Memorial Foundation.

  • AI making waves in marine data collection

    Numerous measurement stations around the world provide us with data about air quality, allowing us to improve it. Although we are increasingly collecting data from marine areas, access to such data is considerably more challenging: signals travel poorly through water, differences in pressure and currents interfere with measurement devices, and there is no pre-existing computing infrastructure.
    Could intelligent technologies help us improve marine data collection? Professor of Computer Science Petteri Nurmi and his research group at the University of Helsinki have joined forces with researchers at the University of Tartu, University of Madeira, and MARE-Madeira, ARDITI, a non-profit marine research institute, to develop solutions combining sensor technologies and embedded Artificial Intelligence.
    The researchers are striving to make the data collection methods now used in fields such as environmental research more efficient and expansive.
    “The higher the quantity and quality of data about the oceans obtained, the better we can use it to understand and protect the oceans. Our methods help expand the total amount of data gathered from marine areas and reduce the effort required to collect and analyse them,” Nurmi says.
    AI identifies animal species
    In a recently published study, Nurmi and his colleagues used data collected from whale-watching excursions in Madeira, Portugal. The vessels used for these excursions usually carry people who record observations of the species seen or film the surroundings during the tour.
    In the study, an AI model assisted individuals in real-time environmental observation. AI was also used to identify whether the video footage showed certain animals, such as dolphins or whales. In addition, the researchers compared the AI assistance received by experienced and less experienced observers, and explored how AI-assisted animal observations served as data for training the model.

    “We analysed how AI assistance affected the quality of data and human observations. AI improved the accuracy of animal observations by amateurs, but had no effect on expert observations. On the other hand, when the data collected were used for training purposes, the best results were achieved by combining AI classifications with expert observations. Thus, interactions between humans and AI can influence each other and they need to be better understood,” Nurmi explains. The method could ideally be used for faster identification of animals moving in marine areas. The results and methods can also be expanded to observe other organisms.
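    As a rough illustration of the frame-level classification involved, the sketch below runs a generic pretrained image classifier over a single video frame. The file name is hypothetical, and the study's own model, which is not reproduced here, would instead be trained on observations of local marine species.

    ```python
    from PIL import Image
    import torch
    from torchvision.models import resnet50, ResNet50_Weights

    # Generic ImageNet classifier standing in for a model fine-tuned on marine species
    weights = ResNet50_Weights.DEFAULT
    model = resnet50(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]

    frame = Image.open("excursion_frame_0001.jpg")   # hypothetical video frame
    batch = preprocess(frame).unsqueeze(0)           # resize, crop, normalise
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]

    top = probs.topk(5)
    for p, idx in zip(top.values, top.indices):
        print(f"{labels[int(idx)]:>25s}  {p.item():.2f}")
    ```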
    Identifying marine plastics
    Another recent study focused on identifying and classifying underwater plastic debris. These differ from microplastics, or tiny plastic particles, in being visible to the naked eye. One method currently used to obtain information on marine plastic pollution is to have divers or devices collect samples for laboratory analysis, but this usually takes a great deal of time. Surface-layer plastics can be observed with aerial photography as well.
    The researchers developed an AI model functioning with sensors and based on the analysis of light spectrum data. The model could be connected to diver equipment or a diving robot to determine the type of plastic waste underwater. The researchers discovered that the model was able to distinguish types of plastic with 85% accuracy.
    “We are capable of identifying four out of five objects directly, which means we need to send fewer samples to a lab for identification. This provides us with more data and thus a more comprehensive overview of marine plastic pollution.”
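    A minimal sketch of what such a classification step could look like, assuming reflectance spectra have already been measured: each sample is a vector of intensities across wavelength bands, and a classifier predicts the plastic type. The synthetic spectra, class names and random-forest model are illustrative stand-ins rather than the study's sensor data or method.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_per_class, n_bands = 100, 50
    plastic_types = ["PET", "HDPE", "PVC", "PP"]      # hypothetical classes

    # Synthetic spectra: each class gets its own smooth baseline plus noise
    X, y = [], []
    for label, _ in enumerate(plastic_types):
        baseline = np.sin(np.linspace(0, 3, n_bands) + label)
        X.append(baseline + 0.3 * rng.normal(size=(n_per_class, n_bands)))
        y += [label] * n_per_class
    X = np.vstack(X)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")
    ```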
    Nurmi emphasises that his and his colleagues’ aim is to create new ways of collecting data. Experts in other fields can then come up with the best uses for the new methods. For instance, once the type of plastic is known, it is easier to find out its source and consider ways of preventing pollution. The data also help researchers understand the effects on ecosystems of different types of plastic, as they contain different chemicals and break down in different ways.

    Better data for better protection
    Nurmi believes that the situation with marine data may be similar to the one in air quality research years ago.
    “Originally, air quality research relied on a few large measuring towers, but now you can find measuring devices even at bus stops. In marine sciences this is not yet the case, even if the methods for collecting marine data and the number of actors involved are increasing. Our research ensures that we can obtain an even greater amount of even more accurate marine measurement data as the capabilities to collect data improve.”
    In the future, the use of underwater drones, smart buoys, vessels and coastal base stations is likely to increase in marine data collection. The expanding network of such tools will offer further ways to gather underwater data.
    “Despite their vital importance for humanity, marine protection and regulation unfortunately tend to be drowned out by other issues. The higher the quantity and quality of marine data collected, the better we are able to develop solutions and regulations improving the status of the marine environment.”

  • Powerful new tool ushers in new era of quantum materials research

    Research in quantum materials is paving the way for groundbreaking discoveries and is poised to drive technological advancements that will redefine the landscapes of industries like mining, energy, transportation, and medtech.
    A technique called time- and angle-resolved photoemission spectroscopy (TR-ARPES) has emerged as a powerful tool, allowing researchers to explore the equilibrium and dynamical properties of quantum materials via light-matter interaction.
    Published in Reviews of Modern Physics, a recent review paper by Professor Fabio Boschini from the Institut national de la recherche scientifique (INRS), along with colleagues Marta Zonno from the Canadian Light Source (CLS) and Andrea Damascelli from UBC’s Stewart Blusson Quantum Matter Institute (Blusson QMI), illustrates that TR-ARPES has rapidly matured into a powerful technique over the last two decades.
    “TR-ARPES is an effective technique not only for fundamental studies, but also for characterizing out-of-equilibrium properties of quantum materials for future applications,” says Professor Boschini, who specializes in ultrafast spectroscopies of condensed matter at the Énergie Matériaux Télécommunications Research Centre.
    A revolutionary tool for quantum materials research
    The new paper provides a comprehensive review of research using TR-ARPES and its evolving significance in exploring light-induced electron dynamics and phase transitions in a wide range of quantum materials.
    “The scientific community is currently investigating new ‘tuning knobs’ to control the electronic, transport, and magnetic properties of quantum materials on demand. One of these ‘tuning knobs’ is the light-matter interaction, which promises to provide fine control of the properties of quantum materials on ultrafast timescales,” says Professor Boschini, who is also a QMI affiliate investigator. “TR-ARPES is the ideal technique for this purpose, since it provides direct insight into how light excitation modifies electronic states with time, energy, and momentum resolution.”
    “TR-ARPES has ushered in a new era of quantum materials research, allowing us to ‘knock on the system’ and observe how it responds, and pushing the materials out of equilibrium to uncover their hidden properties,” adds Blusson QMI Scientific Director Andrea Damascelli.

    Collaboration at the heart of TR-ARPES’ success
    TR-ARPES combines condensed matter spectroscopy (ARPES) with ultrafast lasers (photonics), bringing together research groups from both fields. The technique owes much of its success to significant advancements in developing new laser sources capable of producing light with precise characteristics.
    Boschini is working closely on the subject with Professor François Légaré, a full professor at INRS and an expert in ultrafast laser science and technology. Together, Boschini’s and Légaré’s groups built and are operating a state-of-the-art TR-ARPES endstation with unique intense long-wavelength excitation capabilities at the Advanced Laser Light Source (ALLS) laboratory.
    “Thanks to the support from the Canada Foundation for Innovation (CFI), the governments of Québec (MEIE) and Canada, and LaserNetUS, as well as the recent CFI Major Science Initiatives program, we are now in the privileged position to open the TR-ARPES endstation at ALLS to national and international users,” states Professor Légaré, Director of the Énergie Matériaux Télécommunications Research Centre at INRS and Scientific Head of ALLS.
    According to Professor Boschini, TR-ARPES is now a mature technique with a proven impact on various branches of physics and chemistry. “Further experimental and theoretical developments, similar to what we are doing at ALLS, suggest that even more exciting times lie ahead,” he concludes.

  • How do neural networks learn? A mathematical formula explains how they detect relevant patterns

    Neural networks have been powering breakthroughs in artificial intelligence, including the large language models that are now being used in a wide range of applications, from finance to human resources to healthcare. But these networks remain a black box whose inner workings engineers and scientists struggle to understand. Now, a team led by data and computer scientists at the University of California San Diego has given neural networks the equivalent of an X-ray to uncover how they actually learn.
    The researchers found that a formula used in statistical analysis provides a streamlined mathematical description of how neural networks, such as GPT-2, a precursor to ChatGPT, learn relevant patterns in data, known as features. This formula also explains how neural networks use these relevant patterns to make predictions.
    “We are trying to understand neural networks from first principles,” said Daniel Beaglehole, a Ph.D. student in the UC San Diego Department of Computer Science and Engineering and co-first author of the study. “With our formula, one can simply interpret which features the network is using to make predictions.”
    The team presented their findings in the March 7 issue of the journal Science.
    Why does this matter? AI-powered tools are now pervasive in everyday life. Banks use them to approve loans. Hospitals use them to analyze medical data, such as X-rays and MRIs. Companies use them to screen job applicants. But it’s currently difficult to understand the mechanism neural networks use to make decisions and the biases in the training data that might impact this.
    “If you don’t understand how neural networks learn, it’s very hard to establish whether neural networks produce reliable, accurate, and appropriate responses,” said Mikhail Belkin, the paper’s corresponding author and a professor at the UC San Diego Halicioglu Data Science Institute. “This is particularly significant given the rapid recent growth of machine learning and neural net technology.”
    The study is part of a larger effort in Belkin’s research group to develop a mathematical theory that explains how neural networks work. “Technology has outpaced theory by a huge amount,” he said. “We need to catch up.”
    The team also showed that the statistical formula they used to understand how neural networks learn, known as Average Gradient Outer Product (AGOP), could be applied to improve performance and efficiency in other types of machine learning architectures that do not include neural networks.
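    For readers curious what the quantity looks like, below is a minimal numpy sketch of the AGOP for a toy scalar-output model with random weights; the paper analyses the AGOP of trained networks, but the definition is the same: average, over the inputs, the outer product of the model's gradient with respect to each input. Directions with large AGOP eigenvalues correspond to the input features the model relies on.

    ```python
    import numpy as np

    # Toy model f(x) = w2 . tanh(W1 x); AGOP = (1/n) * sum_i grad f(x_i) grad f(x_i)^T
    rng = np.random.default_rng(0)
    d, h, n = 10, 32, 200                       # input dim, hidden width, samples
    W1 = rng.normal(size=(h, d)) / np.sqrt(d)
    w2 = rng.normal(size=h) / np.sqrt(h)
    X = rng.normal(size=(n, d))

    def grad_f(x):
        """Gradient of f(x) = w2 . tanh(W1 x) with respect to the input x."""
        pre = W1 @ x
        return W1.T @ (w2 * (1.0 - np.tanh(pre) ** 2))

    G = np.stack([grad_f(x) for x in X])        # one gradient per sample, shape (n, d)
    agop = G.T @ G / n                          # (d, d) feature matrix
    eigvals = np.linalg.eigvalsh(agop)[::-1]    # leading eigenvalues = dominant features
    print("Top AGOP eigenvalues:", np.round(eigvals[:3], 4))
    ```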

    “If we understand the underlying mechanisms that drive neural networks, we should be able to build machine learning models that are simpler, more efficient and more interpretable,” Belkin said. “We hope this will help democratize AI.”
    The machine learning systems that Belkin envisions would need less computational power, and therefore less power from the grid, to function. These systems also would be less complex and so easier to understand.
    Illustrating the new findings with an example
    (Artificial) neural networks are computational tools to learn relationships between data characteristics (e.g. identifying specific objects or faces in an image). One example task is determining whether a person in a new image is wearing glasses. Machine learning approaches this problem by providing the neural network with many example (training) images labeled as images of “a person wearing glasses” or “a person not wearing glasses.” The neural network learns the relationship between images and their labels, and extracts data patterns, or features, that it needs to focus on to make a determination. One of the reasons AI systems are considered a black box is that it is often difficult to describe mathematically what criteria the systems are actually using to make their predictions, including potential biases. The new work provides a simple mathematical explanation for how the systems learn these features.
    Features are relevant patterns in the data. In the example above, there is a wide range of features that the neural network learns, and then uses, to determine whether a person in a photograph is wearing glasses. One feature it would need to pay attention to for this task is the upper part of the face. Other features could be the eye or the nose area where glasses often rest. The network selectively pays attention to the features that it learns are relevant and then discards the other parts of the image, such as the lower part of the face, the hair and so on.
    Feature learning is the ability to recognize relevant patterns in data and then use those patterns to make predictions. In the glasses example, the network learns to pay attention to the upper part of the face. In the new Science paper, the researchers identified a statistical formula that describes how the neural networks are learning features.

    Alternative neural network architectures: The researchers went on to show that inserting this formula into computing systems that do not rely on neural networks allowed these systems to learn faster and more efficiently.
    “How do I ignore what’s not necessary? Humans are good at this,” said Belkin. “Machines are doing the same thing. Large Language Models, for example, are implementing this ‘selective paying attention’ and we haven’t known how they do it. In our Science paper, we present a mechanism explaining at least some of how the neural nets are ‘selectively paying attention.'”
    Study funders included the National Science Foundation and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning. Belkin is part of the Institute for Learning-enabled Optimization at Scale (TILOS), an NSF-funded institute led by UC San Diego.

  • Mathematicians use AI to identify emerging COVID-19 variants

    Scientists at The Universities of Manchester and Oxford have developed an AI framework that can identify and track new and concerning COVID-19 variants and could help with other infections in the future.
    The framework combines dimension reduction techniques and a new explainable clustering algorithm called CLASSIX, developed by mathematicians at The University of Manchester. This enables the quick identification of groups of viral genomes that might present a risk in the future from huge volumes of data.
    The study, presented this week in the journal PNAS, could support traditional methods of tracking viral evolution, such as phylogenetic analysis, which currently require extensive manual curation.
    Roberto Cahuantzi, a researcher at The University of Manchester and first and corresponding author of the paper, said: “Since the emergence of COVID-19, we have seen multiple waves of new variants, heightened transmissibility, evasion of immune responses, and increased severity of illness.
    “Scientists are now intensifying efforts to pinpoint these worrying new variants, such as alpha, delta and omicron, at the earliest stages of their emergence. If we can find a way to do this quickly and efficiently, it will enable us to be more proactive in our response, such as tailored vaccine development and may even enable us to eliminate the variants before they become established.”
    Like many other RNA viruses, the virus that causes COVID-19 has a high mutation rate and a short generation time, meaning it evolves extremely rapidly. This means identifying new strains that are likely to be problematic in the future requires considerable effort.
    Currently, there are almost 16 million sequences available on the GISAID database (the Global Initiative on Sharing All Influenza Data), which provides access to genomic data of influenza viruses and the coronavirus responsible for COVID-19.

    Mapping the evolution and history of all COVID-19 genomes from this data is currently done using extremely large amounts of computer and human time.
    The described method allows automation of such tasks. The researchers processed 5.7 million high-coverage sequences in only one to two days on a standard modern laptop; this would not be possible for existing methods, putting identification of concerning pathogen strains in the hands of more researchers due to reduced resource needs.
    Thomas House, Professor of Mathematical Sciences at The University of Manchester, said: “The unprecedented amount of genetic data generated during the pandemic demands improvements to our methods to analyse it thoroughly. The data is continuing to grow rapidly but without showing a benefit to curating this data, there is a risk that it will be removed or deleted.
    “We know that human expert time is limited, so our approach should not replace the work of humans altogether but work alongside them to enable the job to be done much quicker and free our experts for other vital developments.”
    The proposed method works by breaking down the genetic sequences of the COVID-19 virus into smaller “words” (called 3-mers), which are represented numerically by counting how often each occurs. It then groups similar sequences together based on their word patterns using machine learning techniques.
    Stefan Güttel, Professor of Applied Mathematics at the University of Manchester, said: “The clustering algorithm CLASSIX we developed is much less computationally demanding than traditional methods and is fully explainable, meaning that it provides textual and visual explanations of the computed clusters.”
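    A minimal sketch of the preprocessing just described, with k-means standing in for the dimension-reduction-plus-CLASSIX step used in the study; the toy sequences are hypothetical stand-ins for viral genomes.

    ```python
    from itertools import product
    import numpy as np
    from sklearn.cluster import KMeans

    BASES = "ACGT"
    KMERS = ["".join(p) for p in product(BASES, repeat=3)]   # all 64 possible 3-mers

    def kmer_vector(seq):
        """Count overlapping 3-letter words in a sequence."""
        counts = dict.fromkeys(KMERS, 0)
        for i in range(len(seq) - 2):
            word = seq[i:i + 3]
            if word in counts:
                counts[word] += 1
        return np.array([counts[k] for k in KMERS], dtype=float)

    # Toy sequences standing in for viral genomes
    sequences = ["ACGTACGTGGCC" * 5, "ACGTACGTGGCA" * 5, "TTTTGGGGCCCC" * 5]
    X = np.stack([kmer_vector(s) for s in sequences])
    X /= X.sum(axis=1, keepdims=True)            # normalise counts to frequencies

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print("Cluster assignments:", labels)
    ```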
    Roberto Cahuantzi added: “Our analysis serves as a proof of concept, demonstrating the potential use of machine learning methods as an alert tool for the early discovery of emerging major variants without relying on the need to generate phylogenies.
    “Whilst phylogenetics remains the ‘gold standard’ for understanding the viral ancestry, these machine learning methods can accommodate several orders of magnitude more sequences than the current phylogenetic methods and at a low computational cost.”

  • Cicadas’ unique urination unlocks new understanding of fluid dynamics

    Cicadas are the soundtrack of summer, but their pee is more special than their music. Rather than sprinkling droplets, they emit jets of urine from their small frames. For years, Georgia Tech researchers have wanted to understand the cicada’s unique urination.
    Saad Bhamla, an assistant professor in the School of Chemical and Biomolecular Engineering, and his research group hoped for an opportunity to study a cicada’s fluid excretion. However, while cicadas are easily heard, they hide in trees, making them hard to observe. As such, seeing a cicada pee is an event. Bhamla’s team had only watched the process on YouTube.
    Then, while doing field work in Peru, the team got lucky: They saw numerous cicadas in a tree, peeing.
    This moment of observation was enough to disprove two main insect pee paradigms. First, cicadas eat xylem sap, and most xylem feeders only pee in droplets because it uses less energy to excrete the sap. Cicadas, however, are such voracious eaters that individually flicking away each drop of pee would be too taxing and would not extract enough nutrients from the sap.
    “The assumption was that if an insect transitions from droplet formation into a jet, it will require more energy because the insect would have to inject more speed,” said Elio Challita, a former Ph.D. student in Bhamla’s lab and current postdoctoral researcher at Harvard University.
    Second, smaller animals are expected to pee in droplets because their orifice is too tiny to emit anything thicker. Because of cicadas’ larger size — with wingspans that can rival a small hummingbird’s — they use less energy to expel pee in jets.
    “Previously, it was understood that if a small animal wants to eject jets of water, then this becomes a bit challenging, because the animal expends more energy to force the fluid’s exit at a higher speed. This is due to surface tension and viscous forces. But a larger animal can rely on gravity and inertial forces to pee,” Challita said.
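    The trade-off Challita describes is often summarised with standard dimensionless groups comparing inertia, gravity and viscosity to surface tension (a textbook fluid-mechanics framing; the paper's own analysis may parametrise things differently):

    \[ \mathrm{We} = \frac{\rho v^{2} d}{\sigma}, \qquad \mathrm{Bo} = \frac{\rho g d^{2}}{\sigma}, \qquad \mathrm{Oh} = \frac{\mu}{\sqrt{\rho \sigma d}} \]

    where \(\rho\) is the fluid density, \(v\) the ejection speed, \(d\) the orifice diameter, \(\sigma\) the surface tension, \(\mu\) the viscosity and \(g\) gravity. For a small orifice, the Weber and Bond numbers are small, surface tension dominates, and dripping is the cheaper option; sustaining a jet requires driving the Weber number above order one, which is why jetting by an animal as small as a cicada is notable.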

    The cicadas’ ability to jet water offered the researchers a new understanding of how fluid dynamics impacts these tiny insects — and even large mammals. The researchers published this challenge to the paradigm as a brief, “Unifying Fluidic Excretion Across Life from Cicadas to Elephants,” in Proceedings of the National Academy of Sciences the week of March 11.
    For years, the research group has been studying fluid ejection across species, culminating in a recent arXiv preprint that characterizes this phenomenon from microscopic fungi to colossal whales. Their framework reveals diverse functions — such as excretion, venom spraying, prey hunting, spore dispersal, and plant guttation — highlighting potential applications in soft robotics, additive manufacturing, and drug delivery.
    Cicadas are the smallest animals known to create high-speed jets, so they could inform the design of jets in tiny robots and nozzles. And because their population reaches trillions, the ecosystem impact of their fluid ejection is substantial but unknown. Beyond bio-inspired engineering, Bhamla believes the critters could also inform bio-monitoring applications.
    “Our research has mapped the excretory patterns of animals, spanning eight orders of scale from tiny cicadas to massive elephants,” he said. “We’ve identified the fundamental constraints and forces that dictate these processes, offering a new lens through which to understand the principles of excretion, a critical function of all living systems. This work not only deepens our comprehension of biological functions but also paves the way for unifying the underlying principles that govern life’s essential processes.”

  • Robotic interface masters a soft touch

    The perception of softness can be taken for granted, but it plays a crucial role in many actions and interactions — from judging the ripeness of an avocado to conducting a medical exam, or holding the hand of a loved one. But understanding and reproducing softness perception is challenging, because it involves so many sensory and cognitive processes.
    Robotics researchers have tried to address this challenge with haptic devices, but previous attempts have not distinguished between two primary elements of softness perception: cutaneous cues (sensory feedback from the skin of the fingertip), and kinesthetic cues (feedback about the amount of force on the finger joint).
    “If you press on a marshmallow with your fingertip, it’s easy to tell that it’s soft. But if you place a hard biscuit on top of that marshmallow and press again, you can still tell that the soft marshmallow is underneath, even though your fingertip is touching a hard surface,” explains Mustafa Mete, a PhD student in the Reconfigurable Robotics Lab in the School of Engineering. “We wanted to see if we could create a robotic platform that can do the same.”
    With SORI (Softness Rendering Interface), the RRL, led by Jamie Paik, has achieved just that. By decoupling cutaneous and kinesthetic cues, SORI faithfully recreates the softness of a range of real materials, filling a gap in the robotics field and enabling many applications where the sensation of softness is critical, from deep-sea exploration to robot-assisted surgery.
    The research appears in the Proceedings of the National Academy of Sciences (PNAS).
    We all feel softness differently
    Mete explains that neuroscientific and psychological studies show that cutaneous cues are largely based on how much skin is in contact with a surface, which is often related in part to the deformation of the object. In other words, a surface that envelopes a greater area of your fingertip will be perceived as softer. But because human fingertips vary widely in size and firmness, one finger may make greater contact with a given surface than another.

    “We realized that the softness I feel may not be the same as the softness you feel, because of our different finger shapes. So, for our study, we first had to develop parameters for the geometries of a fingertip and its contact surface in order to estimate the softness cues for that fingertip,” Mete explains. Then, the researchers extracted the softness parameters from a range of different materials, and mapped both sets of parameters onto the SORI device.
    Building on the RRL’s trademark origami robot research, which has fueled spinoffs for reconfigurable environments and a haptic joystick, SORI is equipped with motor-driven origami joints that can be modulated to become stiffer or more supple. Perched atop the joints is a dimpled silicone membrane. A flow of air inflates the membrane to varying degrees, to envelop a fingertip placed at its center.
    With this novel decoupling of kinesthetic and cutaneous functionality, SORI succeeded in recreating the softness of a range of materials — including beef, salmon, and marshmallow — over the course of several experiments with two human volunteers. It also mimicked materials with both soft and firm attributes (such as a biscuit on top of a marshmallow, or a leather-bound book). In one virtual experiment, SORI even reproduced the sensation of a beating heart, to demonstrate its efficacy at rendering soft materials in motion.
    Medicine is therefore a primary area of potential application for this technology; for example, to train medical students to detect cancerous tumors, or to provide crucial sensory feedback to surgeons using robots to perform operations.
    Other applications include robot-assisted exploration of space or the deep ocean, where the device could enable scientists to feel the softness of a discovered object from a remote location. SORI is also a potential answer to one of the biggest challenges in robot-assisted agriculture: harvesting tender fruits and vegetables without crushing them.
    “This is not intended to act as a softness sensor for robots, but to transfer the feeling of ‘touch’ digitally, just like sending photos or music,” Mete summarizes.