Engineers at Caltech, ETH Zurich, and Harvard are developing an artificial intelligence (AI) that will allow autonomous drones to use ocean currents to aid their navigation, rather than fighting their way through them.
“When we want robots to explore the deep ocean, especially in swarms, it’s almost impossible to control them with a joystick from 20,000 feet away at the surface. We also can’t feed them data about the local ocean currents they need to navigate because we can’t detect them from the surface. Instead, at a certain point we need ocean-borne drones to be able to make decisions about how to move for themselves,” says John O. Dabiri (MS ’03, PhD ’05), the Centennial Professor of Aeronautics and Mechanical Engineering and corresponding author of a paper about the research that was published by Nature Communications on December 8.
The AI’s performance was tested using computer simulations, but the team behind the effort has also developed a palm-sized robot that runs the algorithm on a tiny computer chip, which could power seaborne drones both on Earth and on other planets. The goal would be to create an autonomous system that monitors the condition of the planet’s oceans, for example by using the algorithm in combination with prosthetics the team previously developed to help jellyfish swim faster and on command. Fully mechanical robots running the algorithm could even explore oceans on other worlds, such as Enceladus or Europa.
In either scenario, drones would need to make decisions on their own about where to go and the most efficient way to get there. To do so, they will likely have only the data they can gather themselves: information about the water currents they are experiencing at that moment.
To tackle this challenge, the researchers turned to reinforcement learning (RL) networks. Unlike conventional neural networks, reinforcement learning networks do not train on a static data set; they train continuously as they collect experience. This scheme allows them to run on much smaller computers: for the purposes of this project, the team wrote software that can be installed and run on a Teensy, a 2.4-by-0.7-inch microcontroller that anyone can buy for less than $30 on Amazon and that uses only about half a watt of power.
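To make the idea of training "as experience is collected" concrete, here is a minimal sketch of an online tabular Q-learning update. It is not the team's method, and the state and action counts, hyperparameters, and helper functions (choose_action, update) are hypothetical, chosen only to suggest the kind of lightweight loop that could plausibly fit within a microcontroller's memory and power budget.

import random

N_STATES = 64        # hypothetical coarse binning of local flow readings
N_ACTIONS = 8        # hypothetical set of candidate swim headings
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # illustrative hyperparameters

# A 64 x 8 table of action values occupies roughly 2 KB as 32-bit floats,
# comfortably inside a small microcontroller's memory.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def choose_action(state):
    # Mostly pick the best-known action, occasionally explore a random one.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

def update(state, action, reward, next_state):
    # One-step temporal-difference update: the table is refined immediately
    # from each new experience, with no stored training set.
    target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (target - Q[state][action])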
Using a computer simulation in which flow past an obstacle in water created several vortices moving in opposite directions, the team taught the AI to navigate in such a way that it took advantage of low-velocity regions in the wake of the vortices to coast to the target location while using minimal power. To aid its navigation, the simulated swimmer had access only to information about the water currents at its immediate location, yet it soon learned how to exploit the vortices to coast toward the desired target. In a physical robot, the AI would similarly have access only to information gathered by an onboard gyroscope and accelerometer, both relatively small and low-cost sensors for a robotic platform.
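As a rough illustration of how such a training setup might be framed, the toy environment below gives the agent only the flow velocity at its own position and rewards it for closing distance to the target while paying a small cost per time step. The class name, the placeholder flow field, and the reward terms are all assumptions for illustration; the paper's actual simulation of flow past an obstacle is far more detailed and is not reproduced here.

import math

class VortexWakeEnv:
    # Hypothetical toy environment; not the simulation used in the paper.

    def __init__(self, target=(5.0, 0.0), swim_speed=0.3, dt=0.1):
        self.target = target
        self.swim_speed = swim_speed
        self.dt = dt
        self.pos = [0.0, 0.0]

    def local_flow(self, x, y):
        # Placeholder flow; a real study would simulate the wake behind an
        # obstacle, with vortices rotating in opposite directions.
        return 0.2 * math.sin(y), 0.2 * math.sin(x)

    def step(self, heading):
        # The swimmer senses only the current at its own location, analogous
        # to what an onboard gyroscope and accelerometer could provide.
        u, v = self.local_flow(*self.pos)
        self.pos[0] += (u + self.swim_speed * math.cos(heading)) * self.dt
        self.pos[1] += (v + self.swim_speed * math.sin(heading)) * self.dt
        dist = math.dist(self.pos, self.target)
        reward = -dist * self.dt - 0.01 * self.dt   # progress incentive plus a small energy cost
        done = dist < 0.2                           # reached the target region
        return (u, v), reward, done                 # observation is local flow only

In this toy framing, an agent trained by repeatedly calling step and an update rule like the sketch above would gradually settle on headings that let the surrounding flow do more of the work.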
This kind of navigation is analogous to the way eagles and hawks ride thermals in the air, extracting energy from air currents to maneuver to a desired location with the minimum energy expended. Surprisingly, the researchers discovered that their reinforcement learning algorithm could learn navigation strategies that are even more effective than those thought to be used by real fish in the ocean.
“We were initially just hoping the AI could compete with navigation strategies already found in real swimming animals, so we were surprised to see it learn even more effective methods by exploiting repeated trials on the computer,” says Dabiri.
The technology is still in its infancy: at present, the team would need to test the AI on every type of flow disturbance it could encounter on a mission in the ocean (swirling vortices versus streaming tidal currents, for example) to assess its effectiveness in the wild. However, by incorporating their knowledge of ocean-flow physics into the reinforcement learning strategy, the researchers aim to overcome this limitation. The current research demonstrates the potential of RL networks to address this challenge, particularly because they can operate on such small devices. To try this in the field, the team is placing the Teensy on a custom-built drone dubbed the “CARL-Bot” (Caltech Autonomous Reinforcement Learning Robot). The CARL-Bot will be dropped into a newly constructed two-story-tall water tank on Caltech’s campus and taught to navigate the ocean’s currents.
“Not only will the robot be learning, but we’ll be learning about ocean currents and how to navigate through them,” says Peter Gunnarson, a graduate student at Caltech and lead author of the Nature Communications paper.
Story Source:
Materials provided by California Institute of Technology. Original written by Robert Perkins.