More stories


    Skills development in Physical AI could give birth to lifelike intelligent robots

The research suggests that teaching materials science, mechanical engineering, computer science, biology and chemistry as a combined discipline could help students develop the skills they need, as researchers, to create lifelike artificially intelligent (AI) robots.
    Known as Physical AI, these robots would be designed to look and behave like humans or other animals while possessing intellectual capabilities normally associated with biological organisms. These robots could in future help humans at work and in daily living, performing tasks that are dangerous for humans, and assisting in medicine, caregiving, security, building and industry.
Although machines and biological beings exist separately, their intelligence capabilities have not yet been combined. So far, there have been no autonomous robots that interact with the surrounding environment and with humans in a way comparable to current computer- and smartphone-based AI.
    Co-lead author Professor Mirko Kovac of Imperial’s Department of Aeronautics and the Swiss Federal Laboratories for Materials Science and Technology (Empa)’s Materials and Technology Centre of Robotics said: “The development of robot ‘bodies’ has significantly lagged behind the development of robot ‘brains’. Unlike digital AI, which has been intensively explored in the last few decades, breathing physical intelligence into them has remained comparatively unexplored.”
    The researchers say that the reason for this gap might be that no systematic educational approach has yet been developed for teaching students and researchers to create robot bodies and brains combined as whole units.
This new research, which is published today in Nature Machine Intelligence, defines the term Physical AI. It also suggests an approach for closing the skills gap by integrating scientific disciplines, so that future researchers can create lifelike robots with capabilities associated with intelligent organisms, developing bodily control, autonomy and sensing at the same time.


    The authors identified five main disciplines that are essential for creating Physical AI: materials science, mechanical engineering, computer science, biology and chemistry.
    Professor Kovac said: “The notion of AI is often confined to computers, smartphones and data intensive computation. We are proposing to think of AI in a broader sense and co-develop physical morphologies, learning systems, embedded sensors, fluid logic and integrated actuation. This Physical AI is the new frontier in robotics research and will have major impact in the decades to come, and co-evolving students’ skills in an integrative and multidisciplinary way could unlock some key ideas for students and researchers alike.”
    The researchers say that achieving nature-like functionality in robots requires combining conventional robotics and AI with other disciplines to create Physical AI as its own discipline.
    Professor Kovac said: “We envision Physical AI robots being evolved and grown in the lab by using a variety of unconventional materials and research methods. Researchers will need a much broader stock of skills for building lifelike robots. Cross-disciplinary collaborations and partnerships will be very important.”
    One example of such a partnership is the Imperial-Empa joint Materials and Technology Centre of Robotics that links up Empa’s material science expertise with Imperial’s Aerial Robotics Laboratory.


    The authors also propose intensifying research activities in Physical AI by supporting teachers on both the institutional and community level. They suggest hiring and supporting faculty members whose priority will be multidisciplinary Physical AI research.
Co-lead author Dr Aslan Miriyev of Empa and the Department of Aeronautics at Imperial said: “Such backing is especially needed as working in the multidisciplinary playground requires daring to leave the comfort zones of narrow disciplinary knowledge for the sake of high-risk research and career uncertainty.
    “Creating lifelike robots has thus far been an impossible task, but it could be made possible by including Physical AI in the higher education system. Developing skills and research in Physical AI could bring us closer than ever to redefining human-robot and robot-environment interaction.”
    The researchers hope that their work will encourage active discussion of the topic and will lead to integration of Physical AI disciplines in the educational mainstream.
    The researchers intend to implement the Physical AI methodology in their research and education activities to pave the way to human-robot ecosystems.

    Story Source:
Materials provided by Imperial College London. Original written by Caroline Brogan. Note: Content may be edited for style and length.


    Five mistakes people make when sharing COVID-19 data visualizations on Twitter

    The frantic swirl of coronavirus-related information sharing that took place this year on social media is the subject of a new analysis led by researchers at the School of Informatics and Computing at IUPUI.
    Published in the open-access journal Informatics, the study focuses on the sharing of data visualizations on Twitter — by health experts and average citizens alike — during the initial struggle to grasp the scope of the COVID-19 pandemic, and its effects on society. Many social media users continue to encounter similar charts and graphs every day, especially as a new wave of coronavirus cases has begun to surge across the globe.
    The work found that more than half of the analyzed visualizations from average users contained one of five common errors that reduced their clarity, accuracy or trustworthiness.
    “Experts have not yet begun to explore the world of casual visualizations on Twitter,” said Francesco Cafaro, an assistant professor in the School of Informatics and Computing, who led the study. “Studying the new ways people are sharing information online to understand the pandemic and its effect on their lives is an important step in navigating these uncharted waters.”
    Casual data visualizations refer to charts and graphs that rely upon tools available to average users in order to visually depict information in a personally meaningful way. These visualizations differ from traditional data visualization because they aren’t generated or distributed by the traditional “gatekeepers” of health information, such as the Centers for Disease Control and Prevention or the World Health Organization, or by the media.
    “The reality is that people depend upon these visualizations to make major decisions about their lives: whether or not it’s safe to send their kids back to school, whether or not it’s safe to take a vacation, and where to go,” Cafaro said. “Given their influence, we felt it was important to understand more about them, and to identify common issues that can cause people creating or viewing them to misinterpret data, often unintentionally.”
    For the study, IU researchers crawled Twitter to identify 5,409 data visualizations shared on the social network between April 14 and May 9, 2020. Of these, 540 were randomly selected for analysis — with full statistical analysis reserved for 435 visualizations based upon additional criteria. Of these, 112 were made by average citizens.
    Broadly, Cafaro said the study identified five pitfalls common to the data visualizations analyzed. In addition to identifying these problems, the study’s authors suggest steps to overcome or reduce their negative impact:
Mistrust: Over 25 percent of the posts analyzed failed to clearly identify the source of their data, sowing distrust in their accuracy. This information was often obscured by poor design — such as bad color choices, busy layout, or typos — rather than intentional obfuscation. To overcome these issues, the study’s authors suggest clearly labeling data sources and placing this information on the graphic itself rather than in the accompanying text, as images are often unpaired from their original post during social sharing.
    Proportional reasoning: Eleven percent of posts exhibited issues related to proportional reasoning, which refers to the users’ ability to compare variables based on ratios or fractions. Understanding infection rates across different geographic locations is a challenge of proportional reasoning, for example, since similar numbers of infections can indicate different levels of severity in low- versus high-population settings. To overcome this challenge, the study’s authors suggest using labels such as number of infections per 1,000 people to compare regions with disparate populations, as this metric is easier to understand than absolute numbers or percentages.
Temporal reasoning: The researchers identified 7 percent of the posts with issues related to temporal reasoning, which refers to users’ ability to understand change over time. These included visualizations that compared the number of deaths from flu in a full year to the number of deaths from COVID-19 in a few months, or visualizations that failed to account for the delay between the date of infection and death. Recommendations to address these issues included breaking metrics that depend upon different time scales into separate charts, as opposed to conveying the data in a single chart.
    Cognitive bias: A small percentage of posts (0.5 percent) contained text that seemed to encourage users to misinterpret data based upon the creator’s “biases related to race, country and immigration.” The researchers state that information should be presented with clear, objective descriptions carefully separated from any accompanying political commentary.
Misunderstanding about the virus: Two percent of visualizations were based upon misunderstandings about the novel coronavirus, such as the use of data related to SARS or influenza.
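The per-capita comparison recommended under "Proportional reasoning" above is a one-line calculation. A minimal sketch with invented numbers shows how absolute counts can invert the real ranking:

```python
# Per-capita normalization: raw case counts mislead when regions differ
# in population. All region names and numbers below are invented.
regions = {
    # region name: (confirmed_cases, population)
    "Riverton": (900, 60_000),
    "Lakeside": (1_000, 1_000_000),
}

def cases_per_1000(cases, population):
    """Infections per 1,000 residents."""
    return 1000 * cases / population

for name, (cases, pop) in regions.items():
    rate = cases_per_1000(cases, pop)
    print(f"{name}: {cases} cases, {rate:.1f} per 1,000")

# Lakeside has more absolute cases (1,000 vs 900), but Riverton's
# outbreak is 15 times more severe per resident (15.0 vs 1.0 per 1,000).
```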
    The study also found certain types of data visualizations performed strongest on social media. Data visualizations that showed change over time, such as line or bar graphs, were most commonly shared. They also found that users engaged more frequently with charts conveying numbers of deaths as opposed to numbers of infections or impact on the economy, suggesting that people were more interested in the virus’s lethality than its other negative health or societal effects.
    “The challenge of accurately conveying information visually is not limited to information-sharing on Twitter, but we feel these communications should be considered especially carefully given the influence of social media on people’s decision-making,” Cafaro said. “We believe our findings can help government agencies, news media and average people better understand the types of information about which people care the most, as well as the challenges people may face while interpreting visual information related to the pandemic.”
Additional leading authors on the study are Milka Trajkova, A’aeshah Alhakamy, Sanika Vedak, Rashmi Mallappa and Sreekanth R. Kankara, research assistants in the School of Informatics and Computing at IUPUI at the time of the study. Alhakamy is currently a lecturer at the University of Tabuk in Saudi Arabia.

    Story Source:
Materials provided by Indiana University. Note: Content may be edited for style and length.


    Scientists develop AI-powered 'electronic nose' to sniff out meat freshness

    A team of scientists led by Nanyang Technological University, Singapore (NTU Singapore) has invented an artificial olfactory system that mimics the mammalian nose to assess the freshness of meat accurately.
    The ‘electronic nose’ (e-nose) comprises a ‘barcode’ that changes colour over time in reaction to the gases produced by meat as it decays, and a barcode ‘reader’ in the form of a smartphone app powered by artificial intelligence (AI). The e-nose has been trained to recognise and predict meat freshness from a large library of barcode colours.
When tested on commercially packaged chicken, fish and beef samples that were left to age, the team found that the deep convolutional neural network that powers the e-nose predicted the freshness of the meats with 98.5 per cent accuracy. For comparison, the research team assessed a commonly used algorithm for measuring the response of sensors like the barcode used in this e-nose; that analysis showed an overall accuracy of 61.7 per cent.
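As a sketch of what those accuracy figures mean (the labels below are invented; the study's dataset and model are not reproduced here), classification accuracy is simply the fraction of samples whose predicted label matches the ground truth:

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(predicted) == len(actual)
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical freshness labels for five meat samples.
truth       = ["fresh", "fresh", "less fresh", "spoiled", "spoiled"]
predictions = ["fresh", "fresh", "less fresh", "spoiled", "less fresh"]

print(f"accuracy: {accuracy(predictions, truth):.1%}")  # 4 of 5 -> 80.0%
```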
    The e-nose, described in a paper published in the scientific journal Advanced Materials in October, could help to reduce food wastage by confirming to consumers whether meat is fit for consumption, more accurately than a ‘Best Before’ label could, said the research team from NTU Singapore, who collaborated with scientists from Jiangnan University, China, and Monash University, Australia.
    Co-lead author Professor Chen Xiaodong, the Director of Innovative Centre for Flexible Devices at NTU, said: “Our proof-of-concept artificial olfactory system, which we tested in real-life scenarios, can be easily integrated into packaging materials and yields results in a short time without the bulky wiring used for electrical signal collection in some e-noses that were developed recently.
    “These barcodes help consumers to save money by ensuring that they do not discard products that are still fit for consumption, which also helps the environment. The biodegradable and non-toxic nature of the barcodes also means they could be safely applied in all parts of the food supply chain to ensure food freshness.”
    A patent has been filed for this method of real-time monitoring of food freshness, and the team is now working with a Singapore agribusiness company to extend this concept to other types of perishables.


    A nose for freshness
    The e-nose developed by NTU scientists and their collaborators comprises two elements: a coloured ‘barcode’ that reacts with gases produced by decaying meat; and a barcode ‘reader’ that uses AI to interpret the combination of colours on the barcode. To make the e-nose portable, the scientists integrated it into a smartphone app that can yield results in 30 seconds.
    The e-nose mimics how a mammalian nose works. When gases produced by decaying meat bind to receptors in the mammalian nose, signals are generated and transmitted to the brain. The brain then collects these responses and organises them into patterns, allowing the mammal to identify the odour present as meat ages and rots.
    In the e-nose, the 20 bars in the barcode act as the receptors. Each bar is made of chitosan (a natural sugar) embedded on a cellulose derivative and loaded with a different type of dye. These dyes react with the gases emitted by decaying meat and change colour in response to the different types and concentrations of gases, resulting in a unique combination of colours that serves as a ‘scent fingerprint’ for the state of any meat.
For instance, the first bar in the barcode contains a weakly acidic yellow dye. When exposed to nitrogen-containing compounds produced by decaying meat (called bioamines), this yellow dye turns blue, and the colour intensity grows with the concentration of bioamines as the meat decays further.
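To illustrate how a colour pattern maps to a freshness label, here is a deliberately simplified sketch. The real system feeds barcode images to a deep convolutional neural network trained on a large colour library; a nearest-neighbour lookup over a toy library of 20-bar RGB fingerprints (all colour values hypothetical) captures the same matching idea:

```python
import math

N_BARS = 20  # the barcode has 20 dye-loaded bars

def flatten(barcode):
    """List of 20 (R, G, B) bars -> flat list of 60 numbers."""
    return [channel for bar in barcode for channel in bar]

def distance(a, b):
    """Euclidean distance between two flattened colour fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy reference library: fingerprint -> freshness label (invented colours).
library = [
    ([(200, 180, 40)] * N_BARS, "fresh"),       # acidic bars still yellow
    ([(120, 140, 120)] * N_BARS, "less fresh"),
    ([(40, 60, 200)] * N_BARS, "spoiled"),      # bars turned blue by bioamines
]

def classify(reading):
    """Return the label of the nearest reference fingerprint."""
    ref, label = min(library, key=lambda e: distance(flatten(reading), flatten(e[0])))
    return label

sample = [(50, 70, 190)] * N_BARS  # mostly blue bars: high bioamine levels
print(classify(sample))  # -> spoiled
```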
For this study, the scientists first developed a classification system (fresh, less fresh, or spoiled) using an international standard for determining meat freshness. They did this by extracting and measuring the amounts of ammonia and two other bioamines in fish packages wrapped in widely used transparent PVC (polyvinyl chloride) packaging film, stored at 4°C (39°F) and sampled at different intervals over five days.
They concurrently monitored the freshness of these fish packages with barcodes glued on the inner side of the PVC film without touching the fish. Images of these barcodes were taken at different intervals over five days.


    'Electronic skin' promises cheap and recyclable alternative to wearable devices

    Researchers at the University of Colorado Boulder are developing a wearable electronic device that’s “really wearable” — a stretchy and fully-recyclable circuit board that’s inspired by, and sticks onto, human skin.
    The team, led by Jianliang Xiao and Wei Zhang, describes its new “electronic skin” in a paper published today in the journal Science Advances. The device can heal itself, much like real skin. It also reliably performs a range of sensory tasks, from measuring the body temperature of users to tracking their daily step counts.
    And it’s reconfigurable, meaning that the device can be shaped to fit anywhere on your body.
    “If you want to wear this like a watch, you can put it around your wrist,” said Xiao, an associate professor in the Paul M. Rady Department of Mechanical Engineering at CU Boulder. “If you want to wear this like a necklace, you can put it on your neck.”
    He and his colleagues are hoping that their creation will help to reimagine what wearable devices are capable of. The group said that, one day, such high-tech skin could allow people to collect accurate data about their bodies — all while cutting down on the world’s surging quantities of electronic waste.
    “Smart watches are functionally nice, but they’re always a big chunk of metal on a band,” said Zhang, a professor in the Department of Chemistry. “If we want a truly wearable device, ideally it will be a thin film that can comfortably fit onto your body.”
    Stretching out


    Those thin, comfortable films have long been a staple of science fiction. Picture skin peeling off the face of Arnold Schwarzenegger in the Terminator film franchise. “Our research is kind of going in that direction, but we still have a long way to go,” Zhang said.
    His team’s goals, however, are both robot and human. The researchers previously described their design for electronic skin in 2018. But their latest version of the technology makes a lot of improvements on the concept — for a start, it’s far more elastic, not to mention functional.
    To manufacture their bouncy product, Xiao and his colleagues use screen printing to create a network of liquid metal wires. They then sandwich those circuits in between two thin films made out of a highly flexible and self-healing material called polyimine.
    The resulting device is a little thicker than a Band-Aid and can be applied to skin with heat. It can also stretch by 60% in any direction without disrupting the electronics inside, the team reports.
    “It’s really stretchy, which enables a lot of possibilities that weren’t an option before,” Xiao said.


The team’s electronic skin can do a lot of the same things that popular wearable fitness devices like Fitbits do: reliably measuring body temperature, heart rate, movement patterns and more.
    Less waste
    Arnold may want to take note: The team’s artificial epidermis is also remarkably resilient.
    If you slice a patch of electronic skin, Zhang said, all you have to do is pinch the broken areas together. Within a few minutes, the bonds that hold together the polyimine material will begin to reform. Within 13 minutes, the damage will be almost entirely undetectable.
    “Those bonds help to form a network across the cut. They then begin to grow together,” Zhang said. “It’s similar to skin healing, but we’re talking about covalent chemical bonds here.”
Xiao added that the project also represents a new approach to manufacturing electronics — one that could be much better for the planet. By 2021, estimates suggest that humans will have produced over 55 million tons of discarded smartphones, laptops and other electronics.
    His team’s stretchy devices, however, are designed to skip the landfills. If you dunk one of these patches into a recycling solution, the polyimine will depolymerize, or separate into its component molecules, while the electronic components sink to the bottom. Both the electronics and the stretchy material can then be reused.
    “Our solution to electronic waste is to start with how we make the device, not from the end point, or when it’s already been thrown away,” Xiao said. “We want a device that is easy to recycle.”
    The team’s electronic skin is a long way away from being able to compete with the real thing. For now, these devices still need to be hooked up to an external source of power to work. But, Xiao said, his group’s research hints that cyborg skin could soon be the fashion fad of the future.
“We haven’t realized all of these complex functions yet,” he said. “But we are marching toward that device function.”


    Getting single-crystal diamond ready for electronics

Silicon has been the workhorse of electronics for decades because it is a common element, is easy to process, and has useful electronic properties. However, silicon is damaged by high temperatures, which caps the operating speed of silicon-based electronics. Single-crystal diamond is a possible alternative to silicon. Researchers recently fabricated a single-crystal diamond wafer, but the common methods of polishing the surface — a requirement for use in electronics — are slow, damaging, or both.
    In a study recently published in Scientific Reports, researchers from Osaka University and collaborating partners polished a single-crystal diamond wafer to be nearly atomically smooth. This procedure will be useful for helping diamond replace at least some of the silicon components of electronic devices.
Diamond is the hardest known substance and essentially does not react with chemicals. Polishing it with a similarly hard tool damages the surface, and conventional polishing chemistry is slow. In this study, the researchers in essence first used a plasma to modify the carbon atoms on the diamond surface and then gently polished them away with quartz glass tools.
    “Plasma-assisted polishing is an ideal technique for single-crystal diamond,” explains lead author Nian Liu. “The plasma activates the carbon atoms on the diamond surface without destroying the crystal structure, which lets a quartz glass plate gently smooth away surface irregularities.”
    The single-crystal diamond, before polishing, had many step-like features and was wavy overall, with an average root mean square roughness of 0.66 micrometers. After polishing, the topographical defects were gone, and the surface roughness was far less: 0.4 nanometers.
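The roughness figures quoted here are root-mean-square (RMS) values. A short sketch of how that metric is computed from a measured height profile (the numbers below are illustrative, not the study's data):

```python
import math

def rms_roughness(heights):
    """Root-mean-square deviation of surface heights from their mean plane."""
    mean = sum(heights) / len(heights)
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / len(heights))

# Illustrative line profiles in micrometres: a wavy, stepped surface
# before polishing vs a nearly flat one after.
before = [0.0, 0.9, 1.8, 0.4, 1.5, 0.1, 1.2]
after = [0.0003, 0.0005, 0.0001, 0.0004, 0.0006, 0.0002, 0.0004]

print(f"before: {rms_roughness(before):.2f} micrometres RMS")
print(f"after:  {rms_roughness(after) * 1000:.2f} nanometres RMS")
```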
“Polishing decreased the surface roughness to near-atomic smoothness,” says senior author Kazuya Yamamura. “There were none of the scratches on the surface that are seen with scaife mechanical smoothing approaches.”
    Furthermore, the researchers confirmed that the polished surface was unaltered chemically. For example, they detected no graphite — therefore, no damaged carbon. The only detected impurity was a very small amount of nitrogen from the original wafer preparation.
    “Using Raman spectroscopy, the full width at half maximum of the diamond lines in the wafer were the same, and the peak positions were almost identical,” says Liu. “Other polishing techniques show clear deviations from pure diamond.”
    With this research development, high-performance power devices and heat sinks based on single-crystal diamond are now attainable. Such technologies will dramatically lower the power use and carbon input, and improve the performance, of future electronic devices.

    Story Source:
Materials provided by Osaka University. Note: Content may be edited for style and length.


    New black hole merger simulations could help power next-gen gravitational wave detectors

Rochester Institute of Technology scientists have developed new simulations of merging black holes with widely varying masses that could help power the next generation of gravitational wave detectors. RIT Professor Carlos Lousto and Research Associate James Healy from RIT’s School of Mathematical Sciences outline these record-breaking simulations in a new Physical Review Letters paper.
As scientists develop more advanced detectors, such as the Laser Interferometer Space Antenna (LISA), they will need more sophisticated simulations against which to compare the signals they receive. The simulations calculate properties of the merged black holes, including the final mass, spin and recoil velocity, as well as the peak frequency, amplitude and luminosity of the gravitational waveforms the mergers produce.
    “Right now, we can only observe black holes of comparable masses because they are bright and generate a lot of radiation,” said Lousto. “We know there should be black holes of very different masses that we don’t have access to now through current technology and we will need these third generational detectors to find them. In order for us to confirm that we are observing holes of these different masses, we need these theoretical predictions and that’s what we are providing with these simulations.”
    The scientists from RIT’s Center for Computational Relativity and Gravitation created a series of simulations showing what happens when black holes of increasingly disparate masses — up to a record-breaking ratio of 128:1 — orbit 13 times and merge.
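The mass ratio q = m1/m2 is what makes these runs hard. One standard way to quantify how extreme a 128:1 binary is uses the symmetric mass ratio, a textbook gravitational-wave quantity (not taken from the paper itself):

```python
def symmetric_mass_ratio(q):
    """eta = q / (1 + q)^2 for mass ratio q = m1/m2 (with q >= 1)."""
    return q / (1 + q) ** 2

# eta peaks at 0.25 for equal masses and falls toward zero as the
# binary becomes more unequal, which is what strains the numerics.
for q in (1, 4, 16, 128):
    print(f"q = {q:>3}:1  eta = {symmetric_mass_ratio(q):.5f}")
```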
    “From a computational point of view, it really is testing the limits of our method to solve Einstein’s general relativity equations on supercomputers,” said Lousto. “It pushes to the point that no other group in the world has been able to come close to. Technically, it’s very difficult to handle two different objects like two black holes, in this case one is 128 times larger than the other.”

    Story Source:
Materials provided by Rochester Institute of Technology. Original written by Luke Auburn. Note: Content may be edited for style and length.


    Electrified magnets: Researchers uncover a new way to handle data

The properties of synthesised magnets can be changed and controlled by charge currents, as suggested by a study and simulations conducted by physicists at Martin Luther University Halle-Wittenberg (MLU) and Central South University in China. In the journal Nature Communications, the team reports on how magnets and magnetic signals can be coupled more effectively and steered by electric fields. This could result in new, environmentally friendly concepts for efficient communication and data processing.
Magnets are used to store large amounts of data. They can also be employed in transmitting and processing signals, for example in spintronic devices. External magnetic fields are used to modify the data or the signals, but this approach has drawbacks. “Generating magnetic fields, for example with the help of a current-carrying coil, requires a lot of energy and is relatively slow,” says Professor Jamal Berakdar from the Institute for Physics at MLU.
Electric fields could help. “However, magnets react very weakly — if at all — to electrical fields, which is why it is so hard to control magnetically based data using electrical voltage,” continues the researcher. The team from Germany and China therefore looked for a new way to enhance the response of magnetism to electrical fields. “We wanted to find out whether stacked magnetic layers reacted fundamentally differently to electrical fields,” explains Berakdar.
The idea: the layers could serve as data channels for magnetically based signals. If a metal layer, for example platinum, is inserted between two magnetic layers, the current flowing in it attenuates the magnetic signal in one layer but amplifies it in the other. Through detailed analysis and simulations, the team was able to show that this mechanism can be precisely controlled by tuning the voltage that drives the current, allowing precise and efficient electrical control of the magnetic signals. In addition, it can be implemented on a nanoscale, making it interesting for nanoelectronic applications.
    The researchers went one step further in their work. They were able to show that the newly designed structure also responds more strongly to light or, more generally, to electromagnetic waves. This is important if electromagnetic waves are to be guided through magnetic layers or if these waves are to be used to control magnetic signals. “Another feature of our new concept is that this mechanism works for many material classes, as simulations under realistic conditions show,” says Berakdar. The findings could thus help to develop energy-saving and efficient solutions for data transmission and processing.
    The study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), the National Natural Science Foundation of China, and the Natural Science Foundation of Hunan Province in China.

    Story Source:
Materials provided by Martin-Luther-Universität Halle-Wittenberg. Note: Content may be edited for style and length.


    New 'robotic snake' device grips, picks up objects

    Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant’s trunk to grasp, pick up and release objects without breaking them.
    The researchers say the versatile technology could be widely applied in sectors where fragile objects are handled, such as agriculture, food and the scientific and resource exploration industries — even for human rescue operations or personal assistive devices.
    Dr Thanh Nho Do, Scientia Lecturer and UNSW Medical Robotics Lab director, said the gripper could be commercially available in the next 12 to 16 months, if his team secured an industry partner.
    He is the senior author of a study featuring the invention, published in Advanced Materials Technologies this month.
Dr Do worked with the study’s lead author, PhD candidate Trung Thien Hoang, along with Phuoc Thien Phan and Mai Thanh Thai, and his collaborator Scientia Professor Nigel Lovell, Head of the Graduate School of Biomedical Engineering.
    “Our new soft fabric gripper is thin, flat, lightweight and can grip and retrieve various objects — even from confined hollow spaces — for example, a pen inside a tube,” Dr Do said.


    “This device also has an enhanced real-time force sensor which is 15 times more sensitive than conventional designs and detects the grip strength required to prevent damage to objects it’s handling.
“There is also a thermally-activated mechanism that can change the gripper body from flexible to stiff and vice versa, enabling it to grasp and hold objects of various shapes and weights — up to 220 times the gripper’s own mass.”
    Nature-inspired robotics
    Dr Do said the researchers found inspiration in nature when designing their soft fabric gripper.
    “Animals such as an elephant, python or octopus use the soft, continuum structures of their bodies to coil their grip around objects while increasing contact and stability — it’s easy for them to explore, grasp and manipulate objects,” he said.


    “These animals can do this because of a combination of highly sensitive organs, sense of touch and the strength of thousands of muscles without rigid bone — for example, an elephant’s trunk has up to 40,000 muscles.
    “So, we wanted to mimic these gripping capabilities — holding and manipulating objects are essential motor skills for many robots.”
    Improvement on existing grippers
    Dr Do said the researchers’ new soft gripper was an improvement on existing designs which had disadvantages that limited their application.
    “Many soft grippers are based on claws or human hand-like structures with multiple inward-bending fingers, but this makes them unsuitable to grip objects that are oddly shaped, heavy or bulky, or objects smaller or larger than the gripper’s opening,” he said.
    “Many existing soft grippers also lack sensory feedback and adjustable stiffness capabilities, which means you can’t use them with fragile objects or in confined environments.
    “Our technology can grip long, slender objects and retrieve them from confined, narrow spaces, as well as hook through holes in objects to pick them up — for example, a mug handle.”
Lead author Trung Thien Hoang said the researchers’ fabrication method was also simple and scalable, which allowed the gripper to be easily produced at different sizes and volumes — for example, a one-metre-long gripper could handle objects at least 300 millimetres in diameter.
    During testing, a gripper prototype weighing 8.2 grams could lift an object of 1.8 kilograms — more than 220 times the gripper’s mass — while a prototype 13 centimetres long could wrap around an object with a diameter of 30 mm.
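A quick check of the payload figures quoted above:

```python
# Payload-to-mass ratio of the prototype: 1.8 kg lifted by an 8.2 g gripper.
gripper_mass_g = 8.2
payload_g = 1800.0

ratio = payload_g / gripper_mass_g
print(f"payload-to-mass ratio: {ratio:.0f}x")  # about 220x, as reported
```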
    Prof. Nigel Lovell said: “We used a manufacturing process involving computerised apparel engineering and applied newly designed, highly sensitive liquid metal-based tactile sensors for detecting the grip force required.
    “The gripper’s flat continuum also gives it superior contact with surfaces as it wraps around an object, while increasing the holding force.
    “What’s more, the total heating and cooling cycle for the gripper to change structure from flexible to rigid takes less than half a minute, which is among the fastest reported so far.”
    Integrating robotic arms and the sense of touch
    Dr Do has filed a provisional patent for the new gripper, having successfully tested and validated the technology as a complete device.
    He expects the gripper to be commercially available in the next 12 to 16 months, if he finds an industry partner.
    “We now aim to optimise the integrated materials, develop a closed-loop control algorithm, and integrate the gripper into the ends of robotic arms for gripping and manipulating objects autonomously,” Dr Do said.
    “If we can achieve these next steps, there will be no need to manually lift the gripper which will help for handling very large, heavy objects.
    “We are also working on combining the gripper with our recently announced wearable haptic glove device, which would enable the user to remotely control the gripper while experiencing what an object feels like at the same time.”
Video: https://www.youtube.com/watch?v=kIelv-iABQs&feature=emb_logo