More stories

  • How a horse whisperer can help engineers build better robots

    Humans and horses have enjoyed a strong working relationship for nearly 10,000 years — a partnership that transformed how food was produced, how people were transported and even how wars were fought and won. Today, we look to horses for companionship and recreation, and as teammates in competitive activities like racing, dressage and showing.
    Can these age-old interactions between people and their horses teach us something about building robots designed to improve our lives? Researchers with the University of Florida say yes.
    “There are no fundamental guiding principles for how to build an effective working relationship between robots and humans,” said Eakta Jain, an associate professor of computer and information science and engineering at UF’s Herbert Wertheim College of Engineering. “As we work to improve how humans interact with autonomous vehicles and other forms of AI, it occurred to me that we’ve done this before with horses. This relationship has existed for millennia but was never leveraged to provide insights for human-robot interaction.”
    Jain, who did her doctoral work at the Robotics Institute at Carnegie Mellon University, conducted a year of field work observing the special interactions between horses and humans at the UF Horse Teaching Unit in Gainesville, Florida. She will present her findings today at the ACM Conference on Human Factors in Computing Systems in Hamburg, Germany.
    As horses did thousands of years ago, robots are entering our lives and workplaces as companions and teammates. They vacuum our floors, help educate and entertain our children, and studies show that social robots can be effective therapy tools for improving mental and physical health. Increasingly, robots are found in factories and warehouses, working collaboratively with human workers and sometimes even called co-bots.
    As a member of the UF Transportation Institute, Jain was leading the human factors subgroup that examines how humans should interact with autonomous vehicles, or AVs.

    “For the first time, cars and trucks can observe nearby vehicles and keep an appropriate distance from them as well as monitor the driver for signs of fatigue and attentiveness,” Jain said. “However, the horse has had these capabilities for a long time. I thought why not learn from our partnership with horses for transportation to help solve the problem of natural interaction between humans and AVs.”
    Looking at our history with animals to help shape our future with robots is not a new concept, though most studies have been inspired by the relationship humans have with dogs. Jain and her colleagues in the College of Engineering and UF Equine Sciences are the first to bring together engineering and robotics researchers with horse experts and trainers to conduct on-the-ground field studies with the animals.
    The multidisciplinary collaboration involved expertise in engineering, animal sciences and qualitative research methodologies, Jain explained. She first reached out to Joel McQuagge, from UF’s equine behavior and management program, who oversees the UF Horse Teaching Unit. He hadn’t thought about this connection between horses and robots, but he provided Jain with full access, and she spent months observing classes. She interviewed and observed horse experts, including thoroughbred trainers and devoted horse owners. Christina Gardner-McCune, an associate professor in UF’s department of computer and information science and engineering, provided expertise in qualitative data analysis.
    Data collected through observations and thematic analyses resulted in findings that can be applied by human-robot interaction researchers and robot designers.
    “Some of the findings are concrete and easy to visualize, while others are more abstract,” she says. “For example, we learned that a horse speaks with its body. You can see its ears pointing to where something caught its attention. We could build in similar types of nonverbal expressions in our robots, like ears that point when there is a knock on the door or something visual in the car when there’s a pedestrian on that side of the street.”
    A more abstract and groundbreaking finding is the notion of respect. When a trainer first works with a horse, the trainer looks for signs that the horse respects its human partner.

    “We don’t typically think about respect in the context of human-robot interactions,” Jain says. “What ways can a robot show you that it respects you? Can we design behaviors similar to what the horse uses? Will that make the human more willing to work with the robot?”
    Jain, originally from New Delhi, says she grew up with robots the way people grow up with animals. Her father is an engineer who made educational and industrial robots, and her mother was a computer science teacher who ran her school’s robotics club.
    “Robots were the subject of many dinner table conversations,” she says, “so I was exposed to human-robot interactions early.”
    However, during her yearlong study of the human-horse relationship, she learned how to ride a horse and says she hopes to one day own a horse.
    “At first, I thought I could learn by observing and talking to people,” she says. “There is no substitute for doing, though. I had to feel for myself how the horse-human partnership works. From the first time I got on a horse, I fell in love with them.”

  • Jellyfish-like robots could one day clean up the world’s oceans

    Most of the world is covered by oceans, which are unfortunately highly polluted. One strategy for combating the mounds of waste found in these very sensitive ecosystems — especially around coral reefs — is to employ robots to carry out the cleanup. However, existing underwater robots are mostly bulky with rigid bodies, unable to explore and sample in complex and unstructured environments, and noisy due to electric motors or hydraulic pumps. For a more suitable design, scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart looked to nature for inspiration. They designed a jellyfish-inspired, versatile, energy-efficient and nearly noise-free robot the size of a hand. Jellyfish-Bot is a collaboration between the Physical Intelligence and Robotic Materials departments at MPI-IS. “A Versatile Jellyfish-like Robotic Platform for Effective Underwater Propulsion and Manipulation” was published in Science Advances.
    To build the robot, the team used electrohydraulic actuators through which electricity flows. The actuators serve as artificial muscles which power the robot. Surrounding these muscles are air cushions as well as soft and rigid components which stabilize the robot and make it waterproof. This way, the high voltage running through the actuators cannot contact the surrounding water. A power supply periodically provides electricity through thin wires, causing the muscles to contract and expand. This allows the robot to swim gracefully and to create swirls underneath its body.
    “When a jellyfish swims upwards, it can trap objects along its path as it creates currents around its body. In this way, it can also collect nutrients. Our robot, too, circulates the water around it. This function is useful in collecting objects such as waste particles. It can then transport the litter to the surface, where it can later be recycled. It is also able to collect fragile biological samples such as fish eggs. Meanwhile, there is no negative impact on the surrounding environment. The interaction with aquatic species is gentle and nearly noise-free,” Tianlu Wang explains. He is a postdoc in the Physical Intelligence Department at MPI-IS and first author of the publication.
    His co-author Hyeong-Joon Joo from the Robotic Materials Department continues: “70% of marine litter is estimated to sink to the seabed. Plastics make up more than 60% of this litter, taking hundreds of years to degrade. Therefore, we saw an urgent need to develop a robot to manipulate objects such as litter and transport it upwards. We hope that underwater robots could one day assist in cleaning up our oceans.”
    Jellyfish-Bots are capable of moving and trapping objects without physical contact, operating either alone or in groups. Each robot works faster than other comparable inventions, reaching a speed of up to 6.1 cm/s. Moreover, Jellyfish-Bot only requires a low input power of around 100 mW. And it is safe for humans and fish should the polymer material insulating the robot ever be torn. Meanwhile, the noise from the robot cannot be distinguished from background levels. In this way Jellyfish-Bot interacts gently with its environment without disturbing it — much like its natural counterpart.
    The robot consists of several layers: some stiffen the robot, others serve to keep it afloat or insulate it. A further polymer layer functions as a floating skin. Electrically powered artificial muscles known as HASELs are embedded in the middle of the different layers. HASELs are plastic pouches filled with a liquid dielectric and partially covered by electrodes. Applying a high voltage across an electrode charges it positively, while the surrounding water is charged negatively. This generates a force between the positively charged electrode and the negatively charged water that pushes the dielectric oil inside the pouches back and forth, causing the pouches to contract and relax — resembling a real muscle. HASELs can sustain the high electrical stresses generated by the charged electrodes and are protected against water by an insulating layer. This is important, as HASEL muscles had never before been used to build an underwater robot.
    The first step was to develop Jellyfish-Bot with a single electrode and six fingers, or arms. In the second step, the team divided the single electrode into separate groups that could be actuated independently.
    “We achieved grasping objects by making four of the arms function as a propeller, and the other two as a gripper. Or we actuated only a subset of the arms, in order to steer the robot in different directions. We also looked into how we can operate a collective of several robots. For instance, we took two robots and let them pick up a mask, which is very difficult for a single robot alone. Two robots can also cooperate in carrying heavy loads. However, at this point, our Jellyfish-Bot needs a wire. This is a drawback if we really want to use it one day in the ocean,” Hyeong-Joon Joo says.
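    To make the division of labour between propeller arms and gripper arms more concrete, the minimal Python sketch below drives two independently addressable electrode groups: one group is pulsed periodically for propulsion while the other is held contracted to grasp an object. The channel names, drive voltage, pulse frequency and the set_channel_voltage helper are hypothetical placeholders for illustration, not the interface or parameters used by the MPI-IS team.

```python
import time

# Hypothetical sketch of actuating two electrode groups of a jellyfish-like
# robot. Channel names, voltages and timing are illustrative placeholders,
# not values from the Science Advances paper.

PROPULSION_ARMS = "group_A"   # e.g. four arms acting together as a propeller
GRIPPER_ARMS = "group_B"      # e.g. two arms acting as a gripper

def set_channel_voltage(channel: str, volts: float) -> None:
    """Placeholder for a high-voltage amplifier interface."""
    print(f"{channel}: {volts:.0f} V")

def swim_and_hold(cycles: int = 5, frequency_hz: float = 1.0, hv: float = 6000.0) -> None:
    """Pulse the propulsion arms periodically while the gripper arms stay contracted."""
    period = 1.0 / frequency_hz
    set_channel_voltage(GRIPPER_ARMS, hv)          # keep the gripper closed
    for _ in range(cycles):
        set_channel_voltage(PROPULSION_ARMS, hv)   # contract: power stroke
        time.sleep(period / 2)
        set_channel_voltage(PROPULSION_ARMS, 0.0)  # relax: recovery stroke
        time.sleep(period / 2)
    set_channel_voltage(GRIPPER_ARMS, 0.0)         # release the object

if __name__ == "__main__":
    swim_and_hold()
```

    Running the sketch only prints a voltage schedule; in a real system, calls like these would drive the power supply that, as described above, feeds the actuators through thin wires.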
    Perhaps wires powering robots will soon be a thing of the past. “We aim to develop wireless robots. Luckily, we have achieved the first step towards this goal. We have incorporated all the functional modules like the battery and wireless communication parts so as to enable future wireless manipulation,” Tianlu Wang continues. The team attached a buoyancy unit at the top of the robot and a battery and microcontroller to the bottom. They then took their invention for a swim in the pond of the Max Planck Stuttgart campus, and could successfully steer it along. So far, however, they could not direct the wireless robot to change course and swim the other way.

  • Creating a tsunami early warning system using artificial intelligence

    Tsunamis are incredibly destructive waves that can destroy coastal infrastructure and cause loss of life. Early warnings for such natural disasters are difficult because the risk of a tsunami is highly dependent on the features of the underwater earthquake that triggers it.
    In Physics of Fluids, by AIP Publishing, researchers from the University of California, Los Angeles and Cardiff University in the U.K. developed an early warning system that combines state-of-the-art acoustic technology with artificial intelligence to immediately classify earthquakes and determine potential tsunami risk.
    Underwater earthquakes can trigger tsunamis if a large amount of water is displaced, so determining the type of earthquake is critical to assessing the tsunami risk.
    “Tectonic events with a strong vertical slip element are more likely to raise or lower the water column compared to horizontal slip elements,” said co-author Bernabe Gomez. “Thus, knowing the slip type at the early stages of the assessment can reduce false alarms and enhance the reliability of the warning systems through independent cross-validation.”
    In these cases, time is of the essence, and relying on deep ocean wave buoys to measure water levels often leaves insufficient evacuation time. Instead, the researchers propose measuring the acoustic radiation (sound) produced by the earthquake, which carries information about the tectonic event and travels significantly faster than tsunami waves. Underwater microphones, called hydrophones, record the acoustic waves and monitor tectonic activity in real time.
    “Acoustic radiation travels through the water column much faster than tsunami waves. It carries information about the originating source and its pressure field can be recorded at distant locations, even thousands of kilometers away from the source. The derivation of analytical solutions for the pressure field is a key factor in the real-time analysis,” co-author Usama Kadri said.
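    A rough back-of-the-envelope comparison illustrates the head start the acoustic signal provides. The figures below are generic textbook values rather than numbers from the study: sound travels through seawater at roughly 1,500 m/s, while a tsunami in water 4,000 m deep moves at about the shallow-water wave speed sqrt(g*h), close to 200 m/s.

```python
import math

# Back-of-the-envelope comparison of acoustic vs. tsunami travel times.
# Generic textbook values, not parameters from the Physics of Fluids study.
g = 9.81                 # gravitational acceleration, m/s^2
depth = 4000.0           # typical deep-ocean depth, m
distance = 1_000_000.0   # 1,000 km from epicentre to coast, m

sound_speed = 1500.0                  # speed of sound in seawater, m/s
tsunami_speed = math.sqrt(g * depth)  # shallow-water wave speed, ~198 m/s

t_sound = distance / sound_speed      # ~11 minutes
t_tsunami = distance / tsunami_speed  # ~84 minutes

print(f"acoustic signal arrives after about {t_sound / 60:.0f} minutes")
print(f"tsunami arrives after about {t_tsunami / 60:.0f} minutes")
print(f"warning margin: roughly {(t_tsunami - t_sound) / 60:.0f} minutes")
```

    Even in this crude estimate, the sound arrives more than an hour before the wave at a coast 1,000 km away, which is the window the hydrophone-based classification is meant to exploit.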
    The computational model triangulates the source of the earthquake from the hydrophone recordings, and AI algorithms classify its slip type and magnitude. It then calculates important properties like effective length and width, uplift speed, and duration, which dictate the size of the tsunami.
    The authors tested their model with available hydrophone data and found that it described the earthquake parameters almost instantaneously, successfully and with low computational demand. They are improving the model by factoring in more information to increase the accuracy of its tsunami characterization.
    Their work predicting tsunami risk is part of a larger project to enhance hazard warning systems. The tsunami classification is a back-end component of software that can improve the safety of offshore platforms and ships.

  • Scientists have full state of a quantum liquid down cold

    A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers an intricate understanding of a complex phenomenon that is foundational to quantum computing — a method that can perform certain calculations significantly more efficiently than conventional computing.
    “In an era of quantum computing it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which appears in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory — similar to those that describe the fundamental particles in our universe.”
    Sels adds that the breakthrough offers promise for technological advancement.
    “Quantum computing relies on the ability to generate entanglement between different subsystems, and that’s exactly what we can probe with our method,” he notes. “The ability to do such precise characterization could also lead to better quantum sensors — another application area of quantum technologies.”
    The research team, which included scientists from Vienna University of Technology, ETH Zurich, Free University of Berlin, and the Max-Planck Institute of Quantum Optics, performed a tomography of a quantum system — the reconstruction of a specific quantum state with the aim of seeking experimental evidence of a theory.
    The studied quantum system consisted of ultracold atoms — atoms whose near-zero temperature makes their movement slow and easier to analyze — trapped on an atom chip.
    In their work, the scientists created two “copies” of this quantum system — cigar-shaped clouds of atoms that evolve over time without influencing each other. At different stages of this process, the team performed a series of experiments that revealed the two copies’ correlations.
    “By constructing an entire history of these correlations, we can infer what is the initial quantum state of the system and extract its properties,” explains Sels. “Initially, we have a very strongly coupled quantum liquid, which we split into two so that it evolves as two independent liquids, and then we recombine it to reveal the ripples that are in the liquid.
    “It’s like watching the ripples in a pond after throwing a rock in it and inferring the properties of the rock, such as its size, shape, and weight.”
    This research was supported by grants from the Air Force Office of Scientific Research (FA9550-21-1-0236) and the U.S. Army Research Office (W911NF-20-1-0163), as well as the Austrian Science Fund (FWF) and the German Research Foundation (DFG).

  • Researchers use AI to discover new planet outside solar system

    A University of Georgia research team has confirmed evidence of a previously unknown planet outside of our solar system, and they used machine learning tools to detect it.
    A recent study by the team showed that machine learning can correctly determine whether an exoplanet is present by looking at protoplanetary disks, the disks of gas around newly formed stars.
    The newly published findings represent a first step toward using machine learning to identify previously overlooked exoplanets.
    “We confirmed the planet using traditional techniques, but our models directed us to run those simulations and showed us exactly where the planet might be,” said Jason Terry, doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study.
    “When we applied our models to a set of older observations, they identified a disk that wasn’t known to have a planet despite having already been analyzed. Like previous discoveries, we ran simulations of the disk and found that a planet could re-create the observation.”
    According to Terry, the models suggested a planet’s presence, indicated by several images that strongly highlighted a particular region of the disk that turned out to have the characteristic sign of a planet — an unusual deviation in the velocity of the gas near the planet.
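    The “characteristic sign” Terry describes is a localized departure from the smooth, nearly Keplerian rotation expected of an unperturbed disk. The toy sketch below is not the authors’ machine-learning pipeline; it only illustrates, with a synthetic velocity map and made-up numbers, what flagging “an unusual deviation in the velocity of the gas” can look like when an observed field is compared against a Keplerian model.

```python
import numpy as np

# Toy illustration (not the authors' pipeline): flag where an "observed"
# disk velocity field deviates from smooth Keplerian rotation v ~ 1/sqrt(r).
# All numbers are arbitrary and for demonstration only.

n = 200
x, y = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
r = np.hypot(x, y) + 1e-6

v_kep = 1.0 / np.sqrt(r)  # unperturbed Keplerian speed (arbitrary units)

# Fake "observation": the Keplerian field plus a localized, planet-like
# perturbation and some measurement noise.
rng = np.random.default_rng(0)
kink = 0.3 * np.exp(-((x - 1.0) ** 2 + (y - 0.5) ** 2) / 0.02)
v_obs = v_kep + kink + rng.normal(0.0, 0.02, size=(n, n))

residual = np.abs(v_obs - v_kep)
threshold = residual.mean() + 5 * residual.std()
iy, ix = np.unravel_index(np.argmax(residual), residual.shape)

print(f"largest deviation near (x, y) = ({x[iy, ix]:.2f}, {y[iy, ix]:.2f})")
print(f"pixels above threshold: {(residual > threshold).sum()}")
```

    Here the deviation is planted by hand; in the study, the machine-learning models pointed to the region of real observations where such a deviation turned out to exist.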
    “This is an incredibly exciting proof of concept. We knew from our previous work that we could use machine learning to find known forming exoplanets,” said Cassandra Hall, assistant professor of computational astrophysics and principal investigator of the Exoplanet and Planet Formation Research Group at UGA. “Now, we know for sure that we can use it to make brand new discoveries.”
    The discovery highlights how machine learning can enhance scientists’ work, serving as an added tool that improves researchers’ accuracy and saves time in an endeavor as vast as investigating deep space.
    The models were able to detect a signal in data that people had already analyzed; they found something that previously had gone undetected.
    “This demonstrates that our models — and machine learning in general — have the ability to quickly and accurately identify important information that people can miss. This has the potential to dramatically speed up analysis and subsequent theoretical insights,” Terry said. “It only took about an hour to analyze that entire catalog and find strong evidence for a new planet in a specific spot, so we think there will be an important place for these types of techniques as our datasets get even larger.”

  • New programmable smart fabric responds to temperature and electricity

    A new smart material developed by researchers at the University of Waterloo is activated by both heat and electricity, making it the first ever to respond to two different stimuli.
    The unique design paves the way for a wide variety of potential applications, including clothing that warms up while you walk from the car to the office in winter and vehicle bumpers that return to their original shape after a collision.
    Inexpensively made with polymer nano-composite fibres from recycled plastic, the programmable fabric can change its colour and shape when stimuli are applied.
    “As a wearable material alone, it has almost infinite potential in AI, robotics and virtual reality games and experiences,” said Dr. Milad Kamkar, a chemical engineering professor at Waterloo. “Imagine feeling warmth or a physical trigger eliciting a more in-depth adventure in the virtual world.”
    The novel fabric design is a product of the happy union of soft and hard materials, featuring a combination of highly engineered polymer composites and stainless steel in a woven structure.
    Researchers created a device similar to a traditional loom to weave the smart fabric. The resulting process is extremely versatile, enabling design freedom and macro-scale control of the fabric’s properties.
    The fabric can also be activated by a lower voltage of electricity than previous systems, making it more energy-efficient and cost-effective. In addition, the lower voltage allows integration into smaller, more portable devices, making it suitable for use in biomedical devices and environmental sensors.
    “The idea of these intelligent materials was first bred and born from biomimicry science,” said Kamkar, director of the Multi-scale Materials Design (MMD) Centre at Waterloo.
    “Through the ability to sense and react to environmental stimuli such as temperature, this is proof of concept that our new material can interact with the environment to monitor ecosystems without damaging them.”
    The next step for researchers is to improve the fabric’s shape-memory performance for applications in the field of robotics. The aim is to construct a robot that can effectively carry and transfer weight to complete tasks.

  • Better superconductors with palladium

    It is one of the most exciting races in modern physics: How can we produce the best superconductors that remain superconducting even at the highest possible temperatures and ambient pressure? In recent years, a new era of superconductivity has begun with the discovery of nickelates. These superconductors are based on nickel, which is why many scientists speak of the “nickel age of superconductivity research.” In many respects, nickelates are similar to cuprates, which are based on copper and were discovered in the 1980s.
    But now a new class of materials is coming into play: in a collaboration between TU Wien and universities in Japan, it was possible to simulate the behaviour of various materials on the computer more precisely than before. There is a “Goldilocks zone” in which superconductivity works particularly well. And this zone is reached neither with nickel nor with copper, but with palladium. This could usher in a new “age of palladates” in superconductivity research. The results have now been published in the scientific journal Physical Review Letters.
    The search for higher transition temperatures
    At high temperatures, superconductors behave very similarly to other conducting materials. But when they are cooled below a certain threshold, they change dramatically: their electrical resistance disappears completely and they can suddenly conduct electricity without any loss. This limit, at which a material switches between a superconducting and a normally conducting state, is called the “critical temperature.”
    “We have now been able to calculate this ‘critical temperature’ for a whole range of materials. With our modelling on high-performance computers, we were able to predict the phase diagram of nickelate superconductivity with a high degree of accuracy, as experiments later showed,” says Prof. Karsten Held from the Institute of Solid State Physics at TU Wien.
    Many materials become superconducting only just above absolute zero (-273.15°C), while others retain their superconducting properties even at much higher temperatures. A superconductor that still remains superconducting at normal room temperature and normal atmospheric pressure would fundamentally revolutionise the way we generate, transport and use electricity. However, such a material has not yet been discovered. Nevertheless, high-temperature superconductors, including those from the cuprate class, play an important role in technology — for example, in the transmission of large currents or in the production of extremely strong magnetic fields.
    Copper? Nickel? Or Palladium?
    The search for the best possible superconducting materials is difficult: there are many different chemical elements that come into question. You can put them together in different structures, you can add tiny traces of other elements to optimise superconductivity. “To find suitable candidates, you have to understand on a quantum-physical level how the electrons interact with each other in the material,” says Prof. Karsten Held.
    This showed that there is an optimum for the interaction strength of the electrons. The interaction must be strong, but also not too strong. There is a “golden zone” in between that makes it possible to achieve the highest transition temperatures.
    Palladates as the optimal solution
    This golden zone of medium interaction can be reached neither with cuprates nor with nickelates — but one can hit the bull’s eye with a new type of material: so-called palladates. “Palladium is directly one row below nickel in the periodic table. The properties are similar, but the electrons there are on average somewhat further away from the atomic nucleus and from each other, so the electronic interaction is weaker,” says Karsten Held.
    The model calculations show how to achieve optimal transition temperatures for palladates. “The computational results are very promising,” says Karsten Held. “We hope that we can now use them to initiate experimental research. If we have a whole new, additional class of materials available with palladates to better understand superconductivity and to create even better superconductors, this could bring the entire research field forward.”

  • Cheaper method for making woven displays and smart fabrics — of any size or shape

    Researchers have developed next-generation smart textiles — incorporating LEDs, sensors, energy harvesting, and storage — that can be produced inexpensively, in any shape or size, using the same machines used to make the clothing we wear every day.
    The international team, led by the University of Cambridge, have previously demonstrated that woven displays can be made at large sizes, but these earlier examples were made using specialised manual laboratory equipment. Other smart textiles can be manufactured in specialised microelectronic fabrication facilities, but these are highly expensive and produce large volumes of waste.
    However, the team found that flexible displays and smart fabrics can be made much more cheaply, and more sustainably, by weaving electronic, optoelectronic, sensing and energy fibre components on the same industrial looms used to make conventional textiles. Their results, reported in the journal Science Advances, demonstrate how smart textiles could be an alternative to larger electronics in sectors including automotive, electronics, fashion and construction.
    Despite recent progress in the development of smart textiles, their functionality, dimensions and shapes have been limited by current manufacturing processes.
    “We could make these textiles in specialised microelectronics facilities, but these require billions of pounds of investment,” said Dr Sanghyo Lee from Cambridge’s Department of Engineering, the paper’s first author. “In addition, manufacturing smart textiles in this way is highly limited, since everything has to be made on the same rigid wafers used to make integrated circuits, so the maximum size we can get is about 30 centimetres in diameter.”
    “Smart textiles have also been limited by their lack of practicality,” said Dr Luigi Occhipinti, also from the Department of Engineering, who co-led the research. “You think of the sort of bending, stretching and folding that normal fabrics have to withstand, and it’s been a challenge to incorporate that same durability into smart textiles.”
    Last year, some of the same researchers showed that if the fibres used in smart textiles were coated with materials that can withstand stretching, they could be compatible with conventional weaving processes. Using this technique, they produced a 46-inch woven demonstrator display.

    Now, the researchers have shown that smart textiles can be made using automated processes, with no limits on their size or shape. Multiple types of fibre devices, including energy storage devices, light-emitting diodes and transistors, were fabricated, encapsulated, and mixed with conventional synthetic or natural fibres to build smart textiles by automated weaving. The fibre devices were interconnected by an automated laser welding method using electrically conductive adhesive.
    The processes were all optimised to minimise damage to the electronic components, which in turn made the smart textiles durable enough to withstand the stretching of an industrial weaving machine. The encapsulation method was designed to preserve the functionality of the fibre devices, and the mechanical forces and thermal energy involved were investigated systematically to enable the automated weaving and laser-based interconnection, respectively.
    The research team, working in partnership with textile manufacturers, were able to produce test patches of smart textiles of roughly 50×50 centimetres, although this can be scaled up to larger dimensions and produced in large volumes.
    “These companies have well-established manufacturing lines with high throughput fibre extruders and large weaving machines that can weave a metre square of textiles automatically,” said Lee. “So when we introduce the smart fibres to the process, the result is basically an electronic system that is manufactured exactly the same way other textiles are manufactured.”
    The researchers say it could be possible for large, flexible displays and monitors to be made on industrial looms, rather than in specialised electronics manufacturing facilities, which would make them far cheaper to produce. Further optimisation of the process is needed, however.
    “The flexibility of these textiles is absolutely amazing,” said Occhipinti. “Not just in terms of their mechanical flexibility, but the flexibility of the approach, and to deploy sustainable and eco-friendly electronics manufacturing platforms that contribute to the reduction of carbon emissions and enable real applications of smart textiles in buildings, car interiors and clothing. Our approach is quite unique in that way.”
    The research was supported in part by the European Union and UK Research and Innovation.