More stories

  • GPT-3 informs and disinforms us better

    A recent study conducted by researchers at the University of Zurich delved into the capabilities of AI models, specifically focusing on OpenAI’s GPT-3, to determine their potential risks and benefits in generating and disseminating (dis)information. Led by postdoctoral researchers Giovanni Spitale and Federico Germani, alongside Nikola Biller-Andorno, director of the Institute of Biomedical Ethics and History of Medicine (IBME), University of Zurich, the study involving 697 participants sought to evaluate whether individuals could differentiate between disinformation and accurate information presented in the form of tweets. Furthermore, the researchers aimed to determine if participants could discern whether a tweet was written by a genuine Twitter user or generated by GPT-3, an advanced AI language model. The topics covered included climate change, vaccine safety, the COVID-19 pandemic, flat earth theory, and homeopathic treatments for cancer.
    AI-powered systems could generate large-scale disinformation campaigns
    On the one hand, GPT-3 demonstrated the ability to generate accurate information that was more easily comprehensible than tweets written by real Twitter users. On the other hand, the researchers discovered that the AI language model had an unsettling knack for producing highly persuasive disinformation. More concerning still, participants were unable to reliably differentiate between tweets created by GPT-3 and those written by real Twitter users. “Our study reveals the power of AI to both inform and mislead, raising critical questions about the future of information ecosystems,” says Federico Germani.
    These findings suggest that information campaigns created by GPT-3, based on well-structured prompts and evaluated by trained humans, could prove more effective than human-written ones in, for instance, a public health crisis that requires fast and clear communication to the public. The findings also raise significant concerns about AI perpetuating disinformation, particularly given how rapidly and widely misinformation and disinformation spread during a crisis or public health event. The study reveals that AI-powered systems could be exploited to generate large-scale disinformation campaigns on potentially any topic, jeopardizing not only public health but also the integrity of the information ecosystems vital for functioning democracies.
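    The study's central measure is whether people can tell AI-generated tweets from human-written ones. A minimal sketch of that kind of recognition score, with entirely invented judgment data (the study's real dataset is in the OSF repository linked below):

```python
# Hypothetical sketch: scoring how well participants distinguish
# AI-generated from human-written tweets. All judgments below are
# invented for illustration, not taken from the study.

# Each entry: (participant's guess, true origin of the tweet)
judgments = [
    ("ai", "ai"), ("human", "ai"), ("human", "human"),
    ("ai", "human"), ("human", "ai"), ("human", "human"),
]

correct = sum(guess == truth for guess, truth in judgments)
accuracy = correct / len(judgments)
# An accuracy near 50% on a two-way guess is chance level, i.e. the
# participants cannot reliably tell AI tweets from human ones.
print(f"recognition accuracy: {accuracy:.0%}")
```

    In the actual study this comparison was made across hundreds of participants and multiple topics; the sketch only illustrates the scoring logic.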
    Proactive regulation highly recommended
    As the impact of AI on information creation and evaluation becomes increasingly pronounced, the researchers call on policymakers to respond with stringent, evidence-based and ethically informed regulations to address the potential threats posed by these disruptive technologies and ensure the responsible use of AI in shaping our collective knowledge and well-being. “The findings underscore the critical importance of proactive regulation to mitigate the potential harm caused by AI-driven disinformation campaigns,” says Nikola Biller-Andorno. “Recognizing the risks associated with AI-generated disinformation is crucial for safeguarding public health and maintaining a robust and trustworthy information ecosystem in the digital age.”
    Transparent research using open science best practice
    The study adhered to open science best practices throughout the entire pipeline, from pre-registration to dissemination. Giovanni Spitale, who is also an UZH Open Science Ambassador, states: “Open science is vital for fostering transparency and accountability in research, allowing for scrutiny and replication. In the context of our study, it becomes even more crucial as it enables stakeholders to access and evaluate the data, code, and intermediate materials, enhancing the credibility of our findings and facilitating informed discussions on the risks and implications of AI-generated disinformation.” Interested parties can access these resources through the OSF repository: https://osf.io/9ntgf/.

  • Geologists are using artificial intelligence to predict landslides

    A new technique developed by UCLA geologists that uses artificial intelligence to better predict where and why landslides may occur could bolster efforts to protect lives and property in some of the world’s most disaster-prone areas.
    The new method, described in a paper published in the journal Communications Earth & Environment, improves the accuracy and interpretability of AI-based machine-learning techniques, requires far less computing power and is more broadly applicable than traditional predictive models.
    The approach would be particularly valuable in places like California, the researchers say, where drought, wildfires and earthquakes create the perfect recipe for landslide disasters and where the situation is expected to get worse as climate change brings stronger and wetter storms.
    Many factors influence where a landslide will occur, including the shape of the terrain, its slope and drainage areas, the material properties of soil and bedrock, and environmental conditions like climate, rainfall, hydrology and ground motion resulting from earthquakes. With so many variables, predicting when and where a chunk of earth is likely to lose its grip is as much an art as a science.
    Geologists have traditionally estimated an area’s landslide risk by incorporating these factors into physical and statistical models. With enough data, such models can achieve reasonably accurate predictions, but physical models are time- and resource-intensive and can’t be applied over broad areas, while statistical models give little insight into how they assess various risk factors to arrive at their predictions.
    Using artificial intelligence to predict landslides
    In recent years, researchers have trained AI machine-learning models known as deep neural networks, or DNNs, to predict landslides. When fed reams of landslide-related variables and historical landslide information, these large, interconnected networks of algorithms can very quickly process and “learn” from this data to make highly accurate predictions.

    Yet despite their advantages in processing time and learning power, as with statistical models, DNNs do not “show their work,” making it difficult for researchers to interpret their predictions and to know which causative factors to target in attempting to prevent possible landslides in the future.
    “DNNs will deliver a percentage likelihood of a landslide that may be accurate, but we are unable to figure out why and which specific variables were most important in causing the landslide,” said Kevin Shao, a doctoral student in Earth, planetary and space sciences and co-first author of the journal paper.
    The problem, said co-first author Khalid Youssef, a former student of biomedical engineering and postdoctoral researcher at UCLA, is that the various network layers of DNNs constantly feed into one another during the learning process, and untangling their analysis is impossible. The UCLA researchers’ new method aimed to address that.
    “We sought to enable a clear separation of the results from the different data inputs, which would make the analysis far more useful in determining which factors are the most important contributors to natural disasters,” he said.
    Youssef and Shao teamed with co-corresponding authors Seulgi Moon, a UCLA associate professor of Earth, planetary and space sciences, and Louis Bouchard, a UCLA professor of chemistry and bioengineering, to develop an approach that could decouple the analytic power of DNNs from their complex adaptive nature in order to deliver more actionable results.

    Their method uses a type of AI called a superposable neural network, or SNN, in which the different layers of the network run alongside each other — retaining the ability to assess the complex relationships between data inputs and output results — but only converging at the very end to yield the prediction.
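    The interpretability payoff of that architecture can be sketched with a toy additive model: each input variable gets its own tiny subnetwork, and the branches combine only by summation at the end, so each variable's contribution to the prediction can be read off directly. This is an illustration in the spirit of the description above, not the authors' code, and all weights are invented:

```python
# Illustrative sketch (invented weights, not the published SNN): an
# additive model where each input runs through its own small branch
# and the branches converge only at the final sum, so per-variable
# contributions are directly inspectable.
import math

def branch(x, w1, b1, w2):
    # one-hidden-unit subnetwork: tanh nonlinearity, then an output weight
    return w2 * math.tanh(w1 * x + b1)

# hypothetical trained weights for three landslide-related inputs
weights = {
    "slope":     (1.2, -0.3, 0.9),
    "rainfall":  (0.8,  0.1, 0.7),
    "lithology": (0.5,  0.0, 0.2),
}

def predict(inputs):
    contributions = {k: branch(inputs[k], *weights[k]) for k in weights}
    score = sum(contributions.values())      # branches converge only here
    prob = 1 / (1 + math.exp(-score))        # susceptibility in (0, 1)
    return prob, contributions

prob, contribs = predict({"slope": 0.9, "rainfall": 0.6, "lithology": 0.2})
# Unlike a dense DNN, `contribs` shows which variable drove the prediction.
```

    A dense DNN mixes all inputs in every layer, which is what makes its predictions hard to attribute; keeping the branches separate until the end is the design choice that restores that attribution.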
    The researchers fed the SNN data about 15 geospatial and climatic variables relevant to the eastern Himalaya mountains. The region was selected because the majority of human losses due to landslides occur in Asia, with a substantial portion in the Himalayas. The SNN model was able to predict landslide susceptibility for Himalayan areas with an accuracy rivaling that of DNNs, but most importantly, the researchers could tease apart the variables to see which ones played bigger roles in producing the results.
    “Similar to how autopsies are required to determine the cause of death, identifying the exact trigger for a landslide will always require field measurements and historical records of soil, hydrologic and climate conditions, such as rainfall amount and intensity, which can be hard to obtain in remote places like the Himalayas,” Moon said. “Nonetheless, our new AI prediction model can identify key variables and quantify their contributions to landslide susceptibility.”
    The researchers’ new AI program also requires far fewer computer resources than DNNs and can run effectively with relatively little computing power.
    “The SNN is so small it can run on an Apple Watch, as opposed to DNNs, which require powerful computer servers to train,” Bouchard said.
    The team plans to extend their work to other landslide-prone regions of the world. In California, for example, where landslide risk is exacerbated by frequent wildfires and earthquakes, and in similar areas, the new system may help develop early warning systems that account for a multitude of signals and predict a range of other surface hazards, including floods.

  • Fiber optic smart pants offer a low-cost way to monitor movements

    With an aging global population comes a need for new sensor technologies that can help clinicians and caregivers remotely monitor a person’s health. New smart pants based on fiber optic sensors could help by offering a nonintrusive way to track a person’s movements and issue alerts if there are signs of distress.
    “Our polymer optical fiber smart pants can be used to detect activities such as sitting, squatting, walking or kicking without inhibiting natural movements,” said research team leader Arnaldo Leal-Junior from the Federal University of Espirito Santo in Brazil. “This approach avoids the privacy issues that come with image-based systems and could be useful for monitoring aging patients at home or measuring parameters such as range of motion in rehabilitation clinics.”
    In the Optica Publishing Group journal Biomedical Optics Express, the researchers describe the new smart pants, which feature transparent optical fibers directly integrated into the textile. They also developed a portable signal acquisition unit that can be placed inside the pants pocket.
    “This research shows that it is possible to develop low-cost wearable sensing systems using optical devices,” said Leal-Junior. “We also demonstrate that new machine learning algorithms can be used to extend the sensing capabilities of smart textiles and possibly enable the measurement of new parameters.”
    Creating fiber optic pants
    The research is part of a larger project focused on the development of photonic textiles for low-cost wearable sensors. Although devices like smartwatches can tell if a person is moving, the researchers wanted to develop a technology that could detect specific types of activity without hindering movement in any way. They did this by incorporating intensity variation polymer optical fiber sensors directly into fabric that was then used to create pants.

    The sensors were based on polymethyl methacrylate optical fibers 1 mm in diameter. The researchers created sensitive areas in the fibers by removing small sections of the outer cladding to expose the fiber core. When the fiber bends due to movement, the optical power traveling through it changes, and this change can be used to identify what type of deformation was applied to the sensitive area of the fiber.
    By creating these sensitive fiber areas in various locations, the researchers created a multiplexed sensor system with 30 measurement points on each leg. They also developed a new machine learning algorithm to classify different types of activities and to classify gait parameters based on the sensor data.
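    The sensing principle lends itself to a simple sketch: each activity bends different fiber sections, producing a distinctive pattern of power attenuation across the measurement points, which a classifier can then match. The readings and activity templates below are invented for illustration (the paper used a machine learning algorithm, not this nearest-template rule):

```python
# Hypothetical sketch of the sensing principle: bending a sensitive
# fiber section attenuates the transmitted optical power, and the
# pattern across many measurement points identifies the activity.
# All numbers below are invented for illustration.

# Normalized optical power at 5 of the 30 points per leg (1.0 = straight fiber)
templates = {
    "sitting":   [0.95, 0.60, 0.55, 0.90, 0.92],
    "squatting": [0.70, 0.40, 0.35, 0.50, 0.65],
    "walking":   [0.85, 0.80, 0.75, 0.82, 0.88],
}

def classify(reading):
    # nearest-template match by sum of squared differences
    def dist(template):
        return sum((r - v) ** 2 for r, v in zip(reading, template))
    return min(templates, key=lambda name: dist(templates[name]))

activity = classify([0.72, 0.42, 0.33, 0.52, 0.66])  # near the squat pattern
```

    With 30 points per leg the patterns are far richer than this five-point toy, which is what lets the real system separate seven activities.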
    Classifying activities
    To test their prototype, the researchers had volunteers wear the smart pants and perform specific activities: slow walking, fast walking, squatting, sitting on a chair, sitting on the floor, front kicking and back kicking. The sensing approach achieved 100% accuracy in classifying these activities.
    “Fiber optic sensors have several advantages, including the fact that they are immune to electric or magnetic interference and can be easily integrated into different clothing accessories due to their compactness and flexibility,” said Leal-Junior. “Basing the device on a multiplexed optical power variation sensor also makes the sensing approach low-cost and highly reliable.”
    The researchers are now working to connect the signal acquisition unit to the cloud, which would enable the data to be accessed remotely. They also plan to test the smart textile in home settings.
    This work was performed in LabSensors/LabTel in the scope of an assistive technologies framework funded by the Brazilian agencies FINEP and CNPq.

  • ‘Electronic skin’ from bio-friendly materials can track human vital signs with ultrahigh precision

    Queen Mary University and University of Sussex researchers have used materials inspired by molecular gastronomy to create smart wearables that surpassed similar devices in terms of strain sensitivity. They integrated graphene into seaweed to create nanocomposite microcapsules for highly tunable and sustainable epidermal electronics. When assembled into networks, the tiny capsules can record muscular, breathing, pulse, and blood pressure measurements in real-time with ultrahigh precision.
    Currently much of the research on nanocomposite-based sensors is related to non-sustainable materials. This means that these devices contribute to plastic waste when they are no longer in use. A new study, published on 28 June in Advanced Functional Materials, shows for the first time that it is possible to combine molecular gastronomy concepts with biodegradable materials to create such devices that are not only environmentally friendly, but also have the potential to outperform the non-sustainable ones.
    Scientists used seaweed and salt, two very commonly used materials in the restaurant industry, to create graphene capsules made up of a solid seaweed/graphene gel layer surrounding a liquid graphene ink core. This technique is similar to how Michelin star restaurants serve capsules with a solid seaweed/raspberry jam layer surrounding a liquid jam core.
    Unlike the molecular gastronomy capsules though, the graphene capsules are very sensitive to pressure; so, when squeezed or compressed, their electrical properties change dramatically. This means that they can be utilised as highly efficient strain sensors and can facilitate the creation of smart wearable skin-on devices for high precision, real-time biomechanical and vital signs measurements.
    Dr Dimitrios Papageorgiou, Lecturer in Materials Science at Queen Mary University of London, said: “By introducing a ground-breaking fusion of culinary artistry and cutting-edge nanotechnology, we harnessed the extraordinary properties of newly-created seaweed-graphene microcapsules that redefine the possibilities of wearable electronics. Our discoveries offer a powerful framework for scientists to reinvent nanocomposite wearable technologies for high precision health diagnostics, while our commitment to recyclable and biodegradable materials is fully aligned with environmentally conscious innovation.”
    This research can now be used as a blueprint by other labs to understand and manipulate the strain sensing properties of similar materials, pushing the concept of nano-based wearable technologies to new heights.
    The environmental impact of plastic waste has had a profound effect on our livelihoods and there is a need for future plastic-based epidermal electronics to trend towards more sustainable approaches. The fact that these capsules are made using recyclable and biodegradable materials could impact the way we think about wearable sensing devices and the effect of their presence.
    Dr Papageorgiou said: “We are also very proud of the collaborative effort between Dr Conor Boland’s group from University of Sussex and my group from Queen Mary University of London that fuelled this ground-breaking research. This partnership exemplifies the power of scientific collaboration, bringing together diverse expertise to push the boundaries of innovation.”

  • Research breakthrough could be significant for quantum computing future

    Scientists using one of the world’s most powerful quantum microscopes have made a discovery that could have significant consequences for the future of computing.
    Researchers at the Macroscopic Quantum Matter Group laboratory in University College Cork (UCC) have discovered a spatially modulating superconducting state in a new and unusual superconductor Uranium Ditelluride (UTe2). This new superconductor may provide a solution to one of quantum computing’s greatest challenges.
    Their findings have been published in the journal Nature.
    Lead author Joe Carroll, a PhD researcher working with UCC Prof. of Quantum Physics Séamus Davis, explains the subject of the paper.
    “Superconductors are amazing materials which have many strange and unusual properties. Most famously, they allow electricity to flow with zero resistance. That is, if you pass a current through them they don’t start to heat up; in fact, they don’t dissipate any energy despite carrying a huge current. They can do this because instead of individual electrons moving through the metal we have pairs of electrons which bind together. These pairs of electrons together form a macroscopic quantum mechanical fluid.”
    “What our team found was that some of the electron pairs form a new crystal structure embedded in this background fluid. These types of states were first discovered by our group in 2016 and are now called Electron Pair-Density Waves. These Pair Density Waves are a new form of superconducting matter the properties of which we are still discovering.”
    “What is particularly exciting for us and the wider community is that UTe2 appears to be a new type of superconductor. Physicists have been searching for a material like it for nearly 40 years. The pairs of electrons appear to have intrinsic angular momentum. If this is true, then what we have detected is the first Pair-Density Wave composed of these exotic pairs of electrons.”

    When asked about the practical implications of this work Mr. Carroll explained:
    “There are indications that UTe2 is a special type of superconductor that could have huge consequences for quantum computing.”
    “Typical, classical, computers use bits to store and manipulate information. Quantum computers rely on quantum bits or qubits to do the same. The problem facing existing quantum computers is that each qubit must be in a superposition with two different energies — just as Schrödinger’s cat could be called both ‘dead’ and ‘alive’. This quantum state is very easily destroyed by collapsing into the lowest energy state — ‘dead’ — thereby cutting off any useful computation.
    “This places huge limits on the application of quantum computers. However, since its discovery five years ago there has been a huge amount of research on UTe2, with evidence pointing to it being a superconductor which may be used as a basis for topological quantum computing. In such materials there is no limit on the lifetime of the qubit during computation, opening up many new ways for more stable and useful quantum computers. In fact, Microsoft have already invested billions of dollars into topological quantum computing, so this is a well-established theoretical science already,” he said.
    “What the community has been searching for is a relevant topological superconductor; UTe2 appears to be that.”
    “What we’ve discovered then provides another piece to the puzzle of UTe2. To make applications using materials like this we must understand their fundamental superconducting properties. All of modern science moves step by step. We are delighted to have contributed to the understanding of a material which could bring us closer to much more practical quantum computers.”
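    The superposition-and-collapse picture in the quote above can be put in numbers. This is a toy illustration of a single qubit under the Born rule, not a model of UTe2 or of topological qubits:

```python
# Toy illustration of the qubit picture described above: a qubit in an
# equal superposition of |0> ("alive") and |1> ("dead") collapses to one
# outcome on measurement, with probabilities given by |amplitude|^2.
import math
import random

alpha = 1 / math.sqrt(2)   # amplitude of |0>
beta = 1 / math.sqrt(2)    # amplitude of |1>
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2   # Born rule: 0.5 each here

def measure(rng=random.random):
    # collapse: a single random draw picks the outcome and the
    # superposition is destroyed, ending any useful computation
    return 0 if rng() < p0 else 1
```

    Topological qubits aim to protect exactly this fragile superposition from uncontrolled collapse by encoding it non-locally in the material.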
    Congratulating the research team at the Macroscopic Quantum Matter Group Laboratory in University College Cork, Professor John F. Cryan, Vice President for Research and Innovation, said:
    “This important discovery will have significant consequences for the future of quantum computing. In the coming weeks, the University will launch UCC Futures — Future Quantum and Photonics and research led by Professor Seamus Davis and the Macroscopic Quantum Matter Group, with the use of one of the world’s most powerful microscopes, will play a crucial role in this exciting initiative.”

  • Boy fly meets girl fly meets AI: Training an AI to recognize fly mating identifies a gene for mating positions

    A research group at the Graduate School of Science, Nagoya University in Japan has used artificial intelligence to determine that Piezo, a channel that receives mechanical stimuli, plays a role in controlling the mating posture of male fruit flies (Drosophila melanogaster). Inhibition of Piezo led the flies to adopt an ineffective mating posture that decreased their reproductive performance. Their findings were reported in iScience.
    Most previous studies of animal mating have been behavioral, limiting our understanding of this essential process. Since many animals adopt a fixed posture during copulation, maintaining an effective mating position is vital for reproductive success. In fruit flies, the male mounts the female and maintains this posture at least until he transfers sufficient sperm to fertilize her, which occurs about 8 minutes after copulation begins. The Nagoya University research group realized that some factor must be involved in maintaining this copulation posture.
    A likely contender is Piezo. Piezo is a family of transmembrane proteins found in bristle cells, the sensitive cells in male genitals. Piezo is activated when a mechanical force is applied to a cell membrane, allowing ions to flow through the channel and generate an electrical signal. This signal triggers cellular responses, including the release of neurotransmitters in neurons and the contraction of muscle cells. Such feedback helps a fly maintain his mating position.
    After identifying that the piezo gene is involved in the mating of fruit flies, Professor Azusa Kamikouchi (she/her), Assistant Professor Ryoya Tanaka (he/him), and student Hayato M. Yamanouchi (he/him) used optogenetics to further explore the neural mechanism of this phenomenon. This technique combines genetic engineering and optical science to create genetically modified neurons that can be inactivated with light of specific wavelengths. When the light was turned on during mating, these neurons were silenced, allowing the researchers to manipulate the activity of piezo-expressing neurons.
    “This step proved to be a big challenge for us,” Kamikouchi said. “Using optogenetics, specific neurons are silenced only when exposed to photostimulation. However, our interest was silencing neural activity during copulation. Therefore, we had to make sure that the light was only turned on during mating. However, if the experimenter manually turned the photostimulation on in response to the animal’s copulation, they needed to observe the animal throughout the experiment. Waiting around for fruit flies to mate is incredibly time-consuming.”
    The observation problem led the group to establish an experimental deep learning system that could recognize copulation. By training the AI to recognize when sexual intercourse was occurring, they could automatically control photostimulation. This allowed them to discover that when piezo-expressing neurons were inhibited, males adopted a wonky, largely ineffective mating posture. As one might expect, the males that showed difficulty in adopting an appropriate sexual position had fewer offspring. They concluded that a key role of the piezo gene was helping the male shift his axis in response to the female for maximum mating success.
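    The closed-loop idea, a detector watching the video stream and gating the light so silencing happens only during copulation, can be sketched as follows. The detector here is a trivial stand-in for the paper's deep learning model, and all frame data are invented:

```python
# Hypothetical sketch of the closed-loop control described above: a
# classifier watches each video frame and the optogenetic light is
# switched on only while copulation is detected. The detector below is
# a stand-in for the paper's trained deep-learning model.

def light_controller(frames, is_copulating):
    """Yield (frame_id, light_on) decisions for a stream of frames."""
    for frame in frames:
        yield frame, is_copulating(frame)   # light on only during mating

# toy stand-in detector: pretend frames 3-6 contain copulation
detections = dict(enumerate([0, 0, 0, 1, 1, 1, 1, 0]))
schedule = list(light_controller(range(8), lambda f: bool(detections[f])))
```

    The point of automating the trigger is exactly what the quote describes: no experimenter has to watch the flies and flip the light by hand.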
    “Piezo proteins have been implicated in a variety of physiological processes, including touch sensation, hearing, blood pressure regulation, and bladder function,” said Kamikouchi. “Now our findings suggest that reproduction can be added to the list. Since mating is an important behavior for reproduction that is widely conserved in animals, understanding its control mechanism will lead to a greater understanding of the reproductive system of animals in general.”
    Kamikouchi is enthusiastic about the use of AI in such research. “With the recent development of informatics, experimental systems and analysis methods have advanced dramatically,” she concludes. “In this research, we have succeeded in creating a device that automatically detects mating using machine learning-based real-time analysis and controls photostimulation necessary for optogenetics. To investigate the neural mechanisms that control animal behavior, it is important to conduct experiments in which neural activity is manipulated only when an individual exhibits a specific behavior. The method established in this study can be applied not only to the study of mating in fruit flies but also to various behaviors in other animals. It should make a significant contribution to the promotion of neurobiological research.”

  • How the brain processes numbers — New procedure improves measurement of human brain activity

    Measuring human brain activity down to the cellular level has so far been possible only to a limited extent. A new approach developed by researchers at the Technical University of Munich (TUM) makes this much easier. The method relies on microelectrodes, along with the support of brain tumor patients who participate in studies while undergoing “awake” brain surgery. Using this approach, the team was able to identify how our brain processes numbers.
    We use numbers every day. It happens in a very concrete way when we count objects. And it happens abstractly, for example when we see the symbol “8” or do complex calculations.
    In a study published in the journal Cell Reports, a team of researchers and clinicians working with Simon Jacob, Professor of Translational Neurotechnology at the Department of Neurosurgery at TUM’s university hospital Klinikum rechts der Isar, was able to show how the brain processes numbers. The researchers found that individual neurons in the brains of participants were specialized in handling specific numbers. Each one of these neurons was particularly active when its “preferred” number of elements in a dot pattern was presented to the patient. To a somewhat lesser degree this was also the case when the subjects processed number symbols.
    “We already knew that animals processed numbers of objects in this way,” says Prof. Jacob. “But until now, it was not possible to demonstrate conclusively how it works in humans. This has brought us a step closer to unravelling the mechanisms of cognitive functions and developing solutions when things go wrong with these brain functions, for example.”
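    The "preferred number" behavior described above is a tuning curve: a neuron fires most for its preferred numerosity and progressively less for neighboring numbers. A sketch with invented parameters (the qualitative bell shape, not the study's fitted values):

```python
# Illustrative sketch (invented parameters): a "number neuron" with a
# bell-shaped tuning curve over numerosity, the qualitative pattern
# reported for the recorded human neurons.
import math

def firing_rate(n, preferred, peak=20.0, width=1.5):
    # Gaussian tuning on the number of elements n, in hypothetical spikes/s
    return peak * math.exp(-((n - preferred) ** 2) / (2 * width ** 2))

# a neuron whose preferred numerosity is 4, probed with 1-8 dots
rates = {n: firing_rate(n, preferred=4) for n in range(1, 9)}
best = max(rates, key=rates.get)   # the neuron responds most to 4 elements
```

    A population of such neurons, each preferring a different number, is what lets the quantity of a dot pattern be read out from the recordings.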
    Recording individual neurons is a challenge
    To get to this result, Prof. Jacob and his team first had to solve a fundamental problem. “The brain functions by means of electrical impulses,” says Simon Jacob. “So it is by detecting these signals directly that we can learn the most about cognition and perception.”
    There are, however, few opportunities for direct measurements of human brain activity. Neurons cannot be individually recorded through the skull. Some medical teams surgically implant electrodes in epilepsy patients. However, these procedures do not reach the brain region believed to be responsible for processing numbers.

    Innovative advancement of established approaches
    Simon Jacob and an interdisciplinary team therefore developed an approach that adapts established technologies and opens up entirely new possibilities in neuroscience. At the heart of the procedure are microelectrode arrays that have undergone extensive testing in animal studies.
    To ensure that the electrodes would produce reliable data in awake surgeries on the human brain, the researchers had to reconfigure them in close collaboration with the manufacturer. The trick was to increase the distance between the needle-like sensors used to record the electrical activities of a cell. “In theory, tightly packed electrodes will produce more data,” says Simon Jacob. “But in practice the large number of contacts stuns the implanted brain region, so that no usable data are recorded.”
    Patients support research
    The development of the procedure was possible only because patients with brain tumors agreed to support the research team. While undergoing brain surgery, they permitted sensors to be implanted and performed test tasks for the researchers. According to Simon Jacob, the experimental procedures did not negatively affect the work of the surgical team.
    A greater number of medical centers can conduct studies
    “Our procedure has two key advantages,” says Simon Jacob. First, such tumor surgeries provide access to a much larger area of the brain. “And second, with the electrodes we used, which have been standardized and tested in years of animal trials, many more medical centers will be able to measure neuronal activity in the future,” says Jacob. While epilepsy operations are performed only at a small number of centers and on relatively few patients, he explains, many more university hospitals perform awake operations on patients with brain tumors. “With a significantly larger number of studies with standardized methods and sensors, we can learn a lot more in the coming years about how the human brain functions,” says Simon Jacob.

  • Emulating how krill swim to build a robotic platform for ocean navigation

    Picture a network of interconnected, autonomous robots working together in a coordinated dance to navigate the pitch-black surroundings of the ocean while carrying out scientific surveys or search-and-rescue missions.
    In a new study published in Scientific Reports, a team led by Brown University researchers has presented important first steps in building these types of underwater navigation robots. In the study, the researchers outline the design of a small robotic platform called Pleobot that can serve as both a tool to help researchers understand the krill-like swimming method and as a foundation for building small, highly maneuverable underwater robots.
    Pleobot is currently made of three articulated sections that replicate the krill-like swimming technique called metachronal swimming. To design Pleobot, the researchers took inspiration from krill, which are remarkable aquatic athletes and display mastery in swimming, accelerating, braking and turning. In the study, they demonstrate Pleobot’s ability to emulate the legs of swimming krill and provide new insights into the fluid-structure interactions needed to sustain steady forward swimming in krill.
    According to the study, Pleobot has the potential to allow the scientific community to understand how to take advantage of 100 million years of evolution to engineer better robots for ocean navigation.
    “Experiments with organisms are challenging and unpredictable,” said Sara Oliveira Santos, a Ph.D. candidate at Brown’s School of Engineering and lead author of the new study. “Pleobot allows us unparalleled resolution and control to investigate all the aspects of krill-like swimming that help it excel at maneuvering underwater. Our goal was to design a comprehensive tool to understand krill-like swimming, which meant including all the details that make krill such athletic swimmers.”
    The effort is a collaboration between Brown researchers in the lab of Assistant Professor of Engineering Monica Martinez Wilhelmus and scientists in the lab of Francisco Cuenca-Jimenez at the Universidad Nacional Autónoma de México.

    A major aim of the project is to understand how metachronal swimmers, like krill, manage to function in complex marine environments and perform massive vertical migrations of over 1,000 meters — equivalent to stacking three Empire State Buildings — twice daily.
    “We have snapshots of the mechanisms they use to swim efficiently, but we do not have comprehensive data,” said Nils Tack, a postdoctoral associate in the Wilhelmus lab. “We built and programmed a robot that precisely emulates the essential movements of the legs to produce specific motions and change the shape of the appendages. This allows us to study different configurations to take measurements and make comparisons that are otherwise unobtainable with live animals.”
    The metachronal swimming technique, in which krill deploy their swimming legs sequentially in a back-to-front wave-like motion, gives them the remarkable maneuverability they frequently display. The researchers believe that in the future, deployable swarm systems could be used to map Earth's oceans, cover large areas in search-and-recovery missions, or be sent to moons in the solar system, such as Europa, to explore their oceans.
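    The back-to-front wave can be pictured as a row of oscillators with a fixed phase lag between neighbors: the rearmost leg begins its power stroke first, and each leg ahead of it follows slightly later. A minimal sketch in Python (the leg count, frequency, phase lag and amplitude below are illustrative assumptions, not parameters of the actual Pleobot design):

    ```python
    import math

    def leg_angles(t, n_legs=5, freq=2.0, phase_lag=0.25, amplitude=math.pi / 6):
        """Stroke angle (radians) of each leg at time t for a metachronal gait.

        Legs are indexed from the rear (0) to the front (n_legs - 1).
        Subtracting i * phase_lag delays each leg relative to the one
        behind it, so the stroke wave travels from back to front.
        """
        return [
            amplitude * math.sin(2 * math.pi * (freq * t - i * phase_lag))
            for i in range(n_legs)
        ]

    # Sample the gait over one beat cycle (period = 1 / freq = 0.5 s).
    for t in (0.0, 0.125, 0.25, 0.375):
        print([round(a, 3) for a in leg_angles(t)])
    ```

    Each printed row shifts the same waveform one leg forward, which is the signature of a metachronal wave; the real robot adds the fluid-structure coupling that a kinematic toy model like this leaves out.
    
    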
    “Krill aggregations are an excellent example of swarms in nature: they are composed of organisms with a streamlined body, traveling up to one kilometer each way, with excellent underwater maneuverability,” Wilhelmus said. “This study is the starting point of our long-term research aim of developing the next generation of autonomous underwater sensing vehicles. Being able to understand fluid-structure interactions at the appendage level will allow us to make informed decisions about future designs.”
    The researchers actively control the two leg segments, while Pleobot's biramous fins open and close passively; this is believed to be the first platform to replicate that fin motion. The construction of the robotic platform was a multi-year project involving a multidisciplinary team in fluid mechanics, biology and mechatronics.

    The researchers built their model at 10 times the scale of krill, which are usually about the size of a paperclip. The platform is primarily made of 3D printable parts and the design is open-access, allowing other teams to use Pleobot to continue answering questions on metachronal swimming not just for krill but for other organisms like lobsters.
    In the published study, the group reveals the answer to one of the many unknown mechanisms of krill swimming: how they generate lift so as not to sink while swimming forward. Because krill are slightly denser than water, they begin to sink whenever they stop swimming; even during steady forward swimming they must therefore generate some lift to hold their depth in the water column, said Oliveira Santos.
    “We were able to uncover that mechanism by using the robot,” said Yunxing Su, a postdoctoral associate in the lab. “We identified an important effect of a low-pressure region at the back side of the swimming legs that contributes to the lift force enhancement during the power stroke of the moving legs.”
    In the coming years, the researchers hope to build on this initial success and further build and test the designs presented in the article. The team is currently working to integrate morphological characteristics of shrimp into the robotic platform, such as flexibility and bristles around the appendages.
    The work was partially funded by a NASA Rhode Island EPSCoR Seed Grant.