More stories

  • New AI tool provides much-needed help to protein scientists across the world

    Using artificial intelligence, UCPH researchers have solved a problem that until now has been the stumbling block for important protein research into the dynamics behind diseases such as cancer, Alzheimer’s and Parkinson’s, as well as in the development of sustainable chemistry and new gene-editing technologies.
    It has always been a time-consuming and challenging task to analyse the huge datasets that researchers collect as they use microscopy and the smFRET technique to see how proteins move and interact with their surroundings. At the same time, the task requires a high level of expertise. Hence the proliferation of stuffed servers and hard drives. Now researchers at the Department of Chemistry, Nano-Science Center, Novo Nordisk Foundation Center for Protein Research and the Niels Bohr Institute, University of Copenhagen, have developed a machine learning algorithm to do the heavy lifting.
    “We used to sort data until we went loopy. Now our data is analysed at the touch of a button. And the algorithm does it at least as well as or better than we can. This frees up resources for us to collect more data than ever before and get faster results,” explains Simon Bo Jensen, a biophysicist and PhD student at the Department of Chemistry and the Nano-Science Center.
    The algorithm has learned to recognize protein movement patterns, allowing it to classify data sets in seconds — a process that typically takes experts several days to accomplish.
    “Until now, we sat with loads of raw data in the form of thousands of patterns. We used to check through it manually, one at a time. In doing so, we became the bottleneck of our own research. Even for experts, conducting consistent work and reaching the same conclusions time and time again is difficult. After all, we’re humans who tire and are prone to error,” says Simon Bo Jensen.
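    To make the idea concrete, the kind of classification described here, assigning each measured smFRET time trace to a category of dynamic behaviour, can be sketched in a few lines of code. The network architecture, trace length and class labels below are illustrative assumptions, not the UCPH team’s actual tool.

```python
# Hypothetical sketch: classify smFRET time traces into categories of dynamic behaviour.
# The real UCPH tool is a deep-learning model not reproduced here; the trace length,
# class labels and network architecture below are illustrative assumptions.
import torch
import torch.nn as nn

N_CLASSES = 4      # e.g. static, dynamic, aggregated, noisy (assumed labels)
TRACE_LEN = 500    # assumed number of frames per FRET trace

class TraceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, N_CLASSES),
        )

    def forward(self, x):  # x has shape (batch, 1, TRACE_LEN)
        return self.net(x)

model = TraceClassifier()
traces = torch.rand(8, 1, TRACE_LEN)   # stand-in for measured FRET efficiency traces
scores = model(traces)                 # one score per class for each trace
print(scores.argmax(dim=1))            # predicted class for each trace
```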
    Just a second’s work for the algorithm
    The UCPH researchers’ studies of the relationship between protein movements and functions are internationally recognized and essential for understanding how the human body functions. For example, diseases including cancer, Alzheimer’s and Parkinson’s are caused by proteins clumping up or changing their behaviour. The gene-editing technology CRISPR, which won the Nobel Prize in Chemistry this year, also relies on the ability of proteins to cut and splice specific DNA sequences. When UCPH researchers like Guillermo Montoya and Nikos Hatzakis study how these processes take place, they make use of microscopy data.

    “Before we can treat serious diseases or take full advantage of CRISPR, we need to understand how proteins, the smallest building blocks, work. This is where protein movement and dynamics come into play. And this is where our tool is of tremendous help,” says Guillermo Montoya, Professor at the Novo Nordisk Foundation Center for Protein Research.
    Attention from around the world
    It appears that protein researchers from around the world have been missing just such a tool. Several international research groups have already come forward and expressed an interest in using the algorithm.
    “This AI tool is a huge bonus for the field as a whole because it provides common standards, ones that weren’t there before, for when researchers across the world need to compare data. Previously, much of the analysis was based on subjective opinions about which patterns were useful. Those can vary from research group to research group. Now, we are equipped with a tool that can ensure we all reach the same conclusions,” explains research director Nikos Hatzakis, Associate Professor at the Department of Chemistry and Affiliate Associate Professor at the Novo Nordisk Foundation Center for Protein Research.
    He adds that the tool offers a different perspective as well:
    “While analysing the choreography of protein movement remains a niche, it has gained more and more ground as the advanced microscopes needed to do so have become cheaper. Still, analysing data requires a high level of expertise. Our tool makes the method accessible to a greater number of researchers in biology and biophysics, even those without specific expertise, whether it’s research into the coronavirus or the development of new drugs or green technologies.”

  • Students develop tool to predict the carbon footprint of algorithms

    On a daily basis, and perhaps without realizing it, most of us are in close contact with advanced AI methods known as deep learning. Deep learning algorithms churn whenever we use Siri or Alexa, when Netflix suggests movies and tv shows based upon our viewing histories, or when we communicate with a website’s customer service chatbot.
    However, the rapidly evolving technology, one that has otherwise been expected to serve as an effective weapon against climate change, has a downside that many people are unaware of — sky-high energy consumption. Artificial intelligence, and particularly the subfield of deep learning, appears likely to become a significant climate culprit should industry trends continue. In only six years — from 2012 to 2018 — the compute needed for deep learning grew 300,000%. Yet the energy consumption and carbon footprint associated with developing algorithms are rarely measured, despite numerous studies that clearly demonstrate the growing problem.
    In response to the problem, two students at the University of Copenhagen’s Department of Computer Science, Lasse F. Wolff Anthony and Benjamin Kanding, together with Assistant Professor Raghavendra Selvan, have developed a software programme they call Carbontracker. The programme can calculate and predict the energy consumption and CO2 emissions of training deep learning models.
    “Developments in this field are going insanely fast and deep learning models are constantly becoming larger in scale and more advanced. Right now, there is exponential growth. And that means an increasing energy consumption that most people seem not to think about,” according to Lasse F. Wolff Anthony.
    One training session = the annual energy consumption of 126 Danish homes
    Deep learning training is the process during which the mathematical model learns to recognize patterns in large datasets. It’s an energy-intensive process that takes place on specialized, power-intensive hardware running 24 hours a day.

    “As datasets grow larger by the day, the problems that algorithms need to solve become more and more complex,” states Benjamin Kanding.
    One of the biggest deep learning models developed thus far is the advanced language model known as GPT-3. In a single training session, it is estimated to use the equivalent of a year’s energy consumption of 126 Danish homes, and emit the same amount of CO2 as 700,000 kilometres of driving.
    “Within a few years, there will probably be several models that are many times larger,” says Lasse F. Wolff Anthony.
    Room for improvement
    “Should the trend continue, artificial intelligence could end up being a significant contributor to climate change. Jamming the brakes on technological development is not the point. These developments offer fantastic opportunities for helping our climate. Instead, it is about becoming aware of the problem and thinking: How might we improve?” explains Benjamin Kanding.
    The idea behind Carbontracker, which is a free programme, is to provide the field with a foundation for reducing the climate impact of models. Among other things, the programme gathers information on how much CO2 is emitted in producing energy in whichever region the deep learning training is taking place. Doing so makes it possible to convert energy consumption into CO2 emission predictions.
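    For readers who train models themselves, the project’s documentation describes usage along the following lines; the exact import path and constructor arguments may vary between Carbontracker versions, so treat this as an illustrative sketch rather than a definitive recipe.

```python
# Illustrative Carbontracker usage, following the project's documented pattern.
# Import path and constructor arguments may differ between versions.
from carbontracker.tracker import CarbonTracker

max_epochs = 10
tracker = CarbonTracker(epochs=max_epochs)

for epoch in range(max_epochs):
    tracker.epoch_start()
    # ... one epoch of deep learning training goes here ...
    tracker.epoch_end()

tracker.stop()  # report measured energy use and predicted CO2 emissions
```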
    Among their recommendations, the two computer science students suggest that deep learning practitioners look at when their model trainings take place, as power is not equally green over a 24-hour period, as well as what type of hardware and algorithms they deploy.
    “It is possible to reduce the climate impact significantly. For example, it matters whether one opts to train their model in Estonia or in Sweden, where the carbon footprint of a model training can be reduced by a factor of more than 60 thanks to greener energy supplies. Algorithms also vary greatly in their energy efficiency. Some require less compute, and thereby less energy, to achieve similar results. If one can tune these types of parameters, things can change considerably,” concludes Lasse F. Wolff Anthony.
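    The underlying arithmetic is simple: emissions are energy use multiplied by the carbon intensity of the local grid. The figures below are rough, assumed values chosen only to illustrate the scale of the regional difference; Carbontracker itself fetches real regional data.

```python
# Rough conversion from energy use to CO2: emissions = energy * grid carbon intensity.
# The intensities below are assumed round numbers, not official figures.
energy_kwh = 1000.0                       # assumed energy for one training run
intensity_kg_per_kwh = {
    "high-carbon grid": 0.80,             # assumed
    "low-carbon grid": 0.013,             # assumed
}
for grid, kg_per_kwh in intensity_kg_per_kwh.items():
    print(f"{grid}: {energy_kwh * kg_per_kwh:.1f} kg CO2")
```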

  • COVID-19 'super-spreading' events play outsized role in overall disease transmission

    There have been many documented cases of Covid-19 “super-spreading” events, in which one person infected with the SARS-CoV-2 virus infects many other people. But how much of a role do these events play in the overall spread of the disease? A new study from MIT suggests that they have a much larger impact than expected.
    The study of about 60 super-spreading events shows that events where one person infects more than six other people are much more common than would be expected if the range of transmission rates followed statistical distributions commonly used in epidemiology.
    Based on their findings, the researchers also developed a mathematical model of Covid-19 transmission, which they used to show that limiting gatherings to 10 or fewer people could significantly reduce the number of super-spreading events and lower the overall number of infections.
    “Super-spreading events are likely more important than most of us had initially realized. Even though they are extreme events, they are probable and thus are likely occurring at a higher frequency than we thought. If we can control the super-spreading events, we have a much greater chance of getting this pandemic under control,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering and the senior author of the new study.
    MIT postdoc Felix Wong is the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.
    Extreme events
    For the SARS-CoV-2 virus, the “basic reproduction number” is around 3, meaning that on average, each person infected with the virus will spread it to about three other people. However, this number varies widely from person to person. Some individuals don’t spread the disease to anyone else, while “super-spreaders” can infect dozens of people. Wong and Collins set out to analyze the statistics of these super-spreading events.

    “We figured that an analysis that’s rooted in looking at super-spreading events and how they happened in the past can inform how we should propose strategies of dealing with, and better controlling, the outbreak,” Wong says.
    The researchers defined super-spreaders as individuals who passed the virus to more than six other people. Using this definition, they identified 45 super-spreading events from the current SARS-CoV-2 pandemic and 15 additional events from the 2003 SARS-CoV outbreak, all documented in scientific journal articles. During most of these events, between 10 and 55 people were infected, but two of them, both from the 2003 outbreak, involved more than 100 people.
    Given commonly used statistical distributions in which the typical patient infects three others, events in which the disease spreads to dozens of people would be considered very unlikely. For instance, a normal distribution would resemble a bell curve with a peak around three and a rapidly tapering tail in both directions. In this scenario, the probability of an extreme event declines exponentially as the number of infections moves farther from the average of three.
    However, the MIT team found that this was not the case for coronavirus super-spreading events. To perform their analysis, the researchers used mathematical tools from the field of extreme value theory, which is used to quantify the risk of so-called “fat-tail” events. Extreme value theory is used to model situations in which extreme events form a large tail instead of a tapering tail. This theory is often applied in fields such as finance and insurance to model the risk of extreme events, and it is also used to model the frequency of catastrophic weather events such as tornadoes.
    Using these mathematical tools, the researchers found that the distribution of coronavirus transmissions has a large tail, implying that even though super-spreading events are extreme, they are still likely to occur.

    “This means that the probability of extreme events decays more slowly than one would have expected,” Wong says. “These really large super-spreading events, with between 10 and 100 people infected, are much more common than we had anticipated.”
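    The contrast can be illustrated with a small calculation. The distributions and parameters below are assumptions chosen for illustration, not the fits from the MIT paper, but they show how differently a thin-tailed and a fat-tailed model treat the same extreme event.

```python
# Illustrative tail comparison, not the paper's fitted model: probability that one
# infected person passes the virus to more than 50 others under a thin-tailed versus
# a fat-tailed model of secondary infections. Parameters are assumptions.
from scipy import stats

mean_secondary = 3    # roughly the basic reproduction number cited above
threshold = 50        # scale of the larger documented super-spreading events

thin_tail = stats.poisson.sf(threshold, mean_secondary)   # thin-tailed (Poisson) model
fat_tail = stats.pareto.sf(threshold, b=1.5)              # assumed fat-tailed model

print(f"thin-tailed P(more than {threshold} secondary cases): {thin_tail:.1e}")
print(f"fat-tailed  P(more than {threshold} secondary cases): {fat_tail:.1e}")
```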
    Stopping the spread
    Many factors may contribute to making someone a super-spreader, including their viral load and other biological factors. The researchers did not address those in this study, but they did model the role of connectivity, defined as the number of people that an infected person comes into contact with.
    To study the effects of connectivity, the researchers created and compared two mathematical network models of disease transmission. In each model, the average number of contacts per person was 10. However, they designed one model to have an exponentially declining distribution of contacts, while the other model had a fat tail in which some people had many contacts. In that model, many more people became infected through super-spreader events. Transmission stopped, however, when people with more than 10 contacts were taken out of the network and assumed to be unable to catch the virus.
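    A toy re-creation of that comparison might look as follows. The population size, transmission probability, degree distributions and spread rule are all assumptions for illustration and are far cruder than the models used in the study.

```python
# Toy version of the comparison described above: two contact networks with the same
# mean number of contacts (about 10), one exponential-tailed and one fat-tailed, and a
# crude probabilistic spread over each. All parameters are assumptions for illustration.
import random
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)
N = 5000           # assumed population size
P_TRANSMIT = 0.05  # assumed per-contact transmission probability

def build_network(raw_degrees):
    degrees = [max(1, int(round(d))) for d in raw_degrees]
    if sum(degrees) % 2:  # the configuration model needs an even total degree
        degrees[0] += 1
    return nx.Graph(nx.configuration_model(degrees, seed=0))

def outbreak_size(g, seed_node=0):
    infected, frontier = {seed_node}, [seed_node]
    while frontier:
        newly_infected = []
        for u in frontier:
            for v in g.neighbors(u):
                if v not in infected and random.random() < P_TRANSMIT:
                    infected.add(v)
                    newly_infected.append(v)
        frontier = newly_infected
    return len(infected)

exponential_net = build_network(rng.exponential(scale=10, size=N))
fat_tailed_net = build_network((rng.pareto(1.5, size=N) + 1) * 10 / 3)  # mean about 10

print("outbreak on exponential-tailed network:", outbreak_size(exponential_net))
print("outbreak on fat-tailed network:", outbreak_size(fat_tailed_net))
```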
    The findings suggest that preventing super-spreading events could have a significant impact on the overall transmission of Covid-19, the researchers say.
    “It gives us a handle as to how we could control the ongoing pandemic, which is by identifying strategies that target super-spreaders,” Wong says. “One way to do that would be to, for instance, prevent anyone from interacting with over 10 people at a large gathering.”
    The researchers now hope to study how biological factors might also contribute to super-spreading.
    The research was funded by the James S. McDonnell Foundation.

  • Secrets behind 'Game of Thrones' unveiled by data science and network theory

    What are the secrets behind one of the most successful fantasy series of all time? How has a story as complex as “Game of Thrones” enthralled the world and how does it compare to other narratives?
    Researchers from five universities across the UK and Ireland came together to unravel “A Song of Ice and Fire,” the books on which the TV series is based.
    In a paper that has just been published in the Proceedings of the National Academy of Sciences, a team of physicists, mathematicians and psychologists from Coventry, Warwick, Limerick, Cambridge and Oxford universities have used data science and network theory to analyse the acclaimed book series by George R.R. Martin.
    The study shows the way the interactions between the characters are arranged is similar to how humans maintain relationships and interact in the real world. Moreover, although important characters are famously killed off at random as the story is told, the underlying chronology is not at all so unpredictable.
    The team found that, despite over 2,000 named characters in “A Song of Ice and Fire” and over 41,000 interactions between them, at chapter-by-chapter level these numbers average out to match what we can handle in real life. Even the most predominant characters — those who tell the story — average out to have only 150 others to keep track of. This is the same number that the average human brain has evolved to deal with.
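    The kind of analysis involved, building a character-interaction network and asking how many acquaintances each character has to keep track of, can be sketched briefly. The edge list below is made up for illustration and is not data from the study.

```python
# Minimal sketch of a character-interaction network analysis; the edge list is invented,
# not data from the study.
import networkx as nx

interactions = [("Jon", "Arya"), ("Jon", "Sansa"), ("Arya", "Sansa"),
                ("Tyrion", "Jaime"), ("Tyrion", "Jon")]  # hypothetical character pairs

g = nx.Graph()
g.add_edges_from(interactions)

degrees = dict(g.degree())
print("characters:", g.number_of_nodes(), "interactions:", g.number_of_edges())
print("average acquaintances per character:", sum(degrees.values()) / len(degrees))
```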
    While matching mathematical motifs might have been expected to lead to a rather narrow script, the author, George R. R. Martin, keeps the tale bubbling by making deaths appear random as the story unfolds. But, as the team show, when the chronological sequence is reconstructed the deaths are not random at all: rather, they reflect how common events are spread out for non-violent human activities in the real world.

    ‘Game of Thrones’ has invited all sorts of comparisons to history and myth, and the marriage of science and humanities in this paper opens new avenues for comparative literary studies. It shows, for example, that the story is more akin to the Icelandic sagas than to mythological stories such as England’s Beowulf or Ireland’s Táin Bó Cúailnge. The trick in Game of Thrones, it seems, is to mix realism and unpredictability in a cognitively engaging manner.
    Thomas Gessey-Jones, from the University of Cambridge, commented: “The methods developed in the paper excitingly allow us to test in a quantitative manner many of the observations made by readers of the series, such as the books’ famous habit of seemingly killing off characters at random.”
    Professor Colm Connaughton, from the University of Warwick, observed: “People largely make sense of the world through narratives, but we have no scientific understanding of what makes complex narratives relatable and comprehensible. The ideas underpinning this paper are steps towards answering this question.”
    Professor Ralph Kenna, from Coventry University, said: “This kind of study opens up exciting new possibilities for examining the structure and design of epics in all sorts of contexts; impact of related work includes outcry over misappropriation of mythology in Ireland and flaws in the processes that led to it.”
    Professor Robin Dunbar, from the University of Oxford, observed: “This study offers convincing evidence that good writers work very carefully within the psychological limits of the reader.”
    Dr Pádraig MacCarron, from the University of Limerick, commented: “These books are known for unexpected twists, often in terms of the death of a major character. It is interesting to see how the author arranges the chapters in an order that makes this appear even more random than it would be if told chronologically.”
    Dr Joseph Yose, from Coventry University, said: “I am excited to see the use of network analysis grow in the future, and hopefully, combined with machine learning, we will be able to predict what an upcoming series may look like.”

  • Printing plastic webs to protect the cellphone screens of the future

    Follow the unbreakable bouncing phone! A Polytechnique Montréal team recently demonstrated that a fabric designed using additive manufacturing absorbs up to 96% of impact energy — all without breaking. Cell Reports Physical Science journal recently published an article with details about this innovation, which paves the way for the creation of unbreakable plastic coverings.
    The concept and accompanying research revealed in the article is relatively simple. Professors Frédérick Gosselin and Daniel Therriault from Polytechnique Montréal’s Department of Mechanical Engineering, along with doctoral student Shibo Zou, wanted to demonstrate how plastic webbing could be incorporated into a glass pane to prevent it from shattering on impact.
    It seems a simple enough concept, but further reflection reveals that there’s nothing simple about this plastic web.
    The researchers’ design was inspired by spider webs and their amazing properties. “A spider web can resist the impact of an insect colliding with it, due to its capacity to deform via sacrificial links at the molecular level, within silk proteins themselves,” Professor Gosselin explains. “We were inspired by this property in our approach.”
    Biomimicry via 3D printing
    Researchers used polycarbonate to achieve their results; when heated, polycarbonate becomes viscous like honey. Using a 3D printer, Professor Gosselin’s team harnessed this property to “weave” a series of fibres less than 2 mm thick, then repeated the process by printing a new series of fibres perpendicularly, moving fast, before the entire web solidified.
    It turns out that the magic is in the process itself — that’s where the final product acquires its key properties.
    As it’s slowly extruded by the 3D printer to form a fibre, the molten plastic creates circles that ultimately form a series of loops. “Once hardened, these loops turn into sacrificial links that give the fibre additional strength. When impact occurs, those sacrificial links absorb energy and break to maintain the fibre’s overall integrity — similar to silk proteins,” researcher Gosselin explains.
    In an article published in 2015, Professor Gosselin’s team demonstrated the principles behind the manufacturing of these fibres. The latest Cell Reports Physical Science article reveals how these fibres behave when intertwined to take the shape of a web.
    Study lead author Shibo Zou used the opportunity to illustrate how such a web could behave when located inside a protective screen. After embedding a series of webs in transparent resin plates, he conducted impact tests. The result? The plastic wafers dispersed up to 96% of impact energy without breaking. Instead of cracking, they deformed in certain places, preserving the wafers’ overall integrity.
    According to Professor Gosselin, this nature-inspired innovation could lead to the manufacture of a new type of bullet-proof glass, or to the production of more durable plastic protective smartphone screens. “It could also be used in aeronautics as a protective coating for aircraft engines,” Professor Gosselin notes. In the meantime, he certainly intends to explore the possibilities that this approach may open for him.

    Story Source:
    Materials provided by Polytechnique Montréal. Note: Content may be edited for style and length.

  • Machine learning predicts anti-cancer drug efficacy

    With the advent of pharmacogenomics, machine learning research is well underway to predict each patient’s individual drug response using algorithms trained on previously collected drug-response data. Supplying high-quality training data that reflects a person’s drug response as closely as possible is the starting point for improving the accuracy of the predictions. Previously, preclinical studies of animal models were used, as such data were relatively easier to obtain than human clinical data.
    In light of this, a research team led by Professor Sanguk Kim in the Department of Life Sciences at POSTECH is drawing attention by successfully increasing the accuracy of anti-cancer drug response predictions by using data closest to a real person’s response. The team developed this machine learning technique through algorithms that learn the transcriptome information from artificial organoids derived from actual patients instead of animal models. These research findings were published in the international journal Nature Communications on October 30.
    Even patients with the same cancer react differently to anti-cancer drugs, so customized treatment is considered paramount in treatment development. However, predictions to date have been based on the genetic information of cancer cells, which limits their accuracy. Because of unnecessary biomarker information, machine learning tended to learn from false signals.
    To increase the predictive accuracy, the research team introduced machine learning algorithms that draw on the protein interaction network surrounding the drug’s target proteins, as well as the transcriptomes of the individual proteins directly related to those targets. The approach guides the model to learn from the transcript levels of proteins that are functionally close to the target protein. In this way, the model learns only from selected biomarkers rather than from the false biomarkers that conventional machine learning picked up, which increases the accuracy.
    In addition, data from patient-derived organoids — not animal models — were used to narrow the discrepancy with responses in actual patients. With this method, the predicted responses of colorectal cancer patients treated with 5-fluorouracil and bladder cancer patients treated with cisplatin proved comparable to actual clinical results.
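    A highly simplified sketch of that idea, keeping only genes that sit close to a drug’s target in a protein-protein interaction (PPI) network before fitting a response model on organoid transcriptomes, might look as follows. The toy network, gene names, data and one-hop neighbourhood rule are all assumptions; the published method is considerably more involved.

```python
# Hypothetical sketch: restrict features to genes near the drug target in a PPI network,
# then fit a drug-response model on organoid transcriptomes. Network, gene names and
# data are invented; the published method is more involved than this one-hop rule.
import networkx as nx
import numpy as np
from sklearn.linear_model import Ridge

ppi = nx.Graph([("TYMS", "DHFR"), ("TYMS", "TK1"), ("DHFR", "MTHFR"), ("GAPDH", "ACTB")])
drug_target = "TYMS"  # assumed target, e.g. for 5-fluorouracil

# Keep the target plus its direct network neighbours as candidate biomarkers
biomarkers = [drug_target] + list(ppi.neighbors(drug_target))

genes = list(ppi.nodes)
rng = np.random.default_rng(0)
expression = rng.random((20, len(genes)))  # toy organoid transcriptomes (20 organoids)
response = rng.random(20)                  # toy measured drug responses

cols = [genes.index(gene) for gene in biomarkers]
model = Ridge().fit(expression[:, cols], response)
print("selected biomarkers:", biomarkers)
print("fitted coefficients:", model.coef_)
```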

    Story Source:
    Materials provided by Pohang University of Science & Technology (POSTECH). Note: Content may be edited for style and length.

  • Cockroaches and lizards inspire new robot

    A new high-speed amphibious robot inspired by the movements of cockroaches and lizards, developed by Ben-Gurion University of the Negev (BGU) researchers, swims and runs on top of water at high speeds and crawls on difficult terrain.
    The mechanical design of the AmphiSTAR robot and its control system were presented virtually last week at the IROS (International Conference on Intelligent Robots and Systems) by Dr. David Zarrouk, director, Bioinspired and Medical Robotics Laboratory in BGU’s Department of Mechanical Engineering, and graduate student Avi Cohen.
    “The AmphiSTAR uses a sprawling mechanism inspired by cockroaches, and it is designed to run on water at high speeds like the basilisk lizard,” says Zarrouk. “We envision that AmphiSTAR can be used for agricultural, search and rescue and excavation applications, where both crawling and swimming are required.”
    The palm-size AmphiSTAR, part of the family of STAR robots developed at the lab, is a wheeled robot fitted with four propellers underneath, whose axes can be tilted using the sprawl mechanism. The propellers act as wheels over ground and as fins that propel the robot over water, allowing it to swim and run on the surface at speeds of up to 1.5 m/s. Two air tanks enable it to float and to transition smoothly from high-speed hovering on the water surface to lower-speed swimming, and from crawling to swimming and vice versa.
    The experimental robot can crawl over gravel, grass and concrete as fast as the original STAR robot and can attain speeds of 3.6 m/s (3.3 mph).
    “Our future research will focus on the scalability of the robot and on underwater swimming,” Zarrouk says.
    Video: https://www.youtube.com/watch?v=qXgPQ7_yld0&t=0s
    This study was supported in part by the BGU Helmsley Charitable Trust through the Agricultural, Biological and Cognitive Robotics Initiative, and by the Marcus Endowment Fund, both at Ben-Gurion University of the Negev. The Marcus legacy gift, of over $480 million, was donated in 2016 to American Associates, Ben-Gurion University of the Negev by Dr. Howard and Lottie Marcus. The donation is the largest gift given to any Israeli university and is believed to be the largest gift to any Israeli institution.

    Story Source:
    Materials provided by American Associates, Ben-Gurion University of the Negev. Note: Content may be edited for style and length.

  • An underwater navigation system powered by sound

    GPS isn’t waterproof. The navigation system depends on radio waves, which break down rapidly in liquids, including seawater. To track undersea objects like drones or whales, researchers rely on acoustic signaling. But devices that generate and send sound usually require batteries — bulky, short-lived batteries that need regular changing. Could we do without them?
    MIT researchers think so. They’ve built a battery-free pinpointing system dubbed Underwater Backscatter Localization (UBL). Rather than emitting its own acoustic signals, UBL reflects modulated signals from its environment. That provides researchers with positioning information, at net-zero energy. Though the technology is still developing, UBL could someday become a key tool for marine conservationists, climate scientists, and the U.S. Navy.
    These advances are described in a paper being presented this week at the Association for Computing Machinery’s Hot Topics in Networks workshop, by members of the Media Lab’s Signal Kinetics group. Research Scientist Reza Ghaffarivardavagh led the paper, along with co-authors Sayed Saad Afzal, Osvy Rodriguez, and Fadel Adib, who leads the group and is the Doherty Chair of Ocean Utilization as well as an associate professor in the MIT Media Lab and the MIT Department of Electrical Engineering and Computer Science.
    “Power-hungry”
    It’s nearly impossible to escape GPS’ grasp on modern life. The technology, which relies on satellite-transmitted radio signals, is used in shipping, navigation, targeted advertising, and more. Since its introduction in the 1970s and ’80s, GPS has changed the world. But it hasn’t changed the ocean. If you had to hide from GPS, your best bet would be underwater.
    Because radio waves quickly deteriorate as they move through water, subsea communications often depend on acoustic signals instead. Sound waves travel faster and further underwater than through air, making them an efficient way to send data. But there’s a drawback.

    “Sound is power-hungry,” says Adib. For tracking devices that produce acoustic signals, “their batteries can drain very quickly.” That makes it hard to precisely track objects or animals over long time spans — changing a battery is no simple task when it’s attached to a migrating whale. So, the team sought a battery-free way to use sound.
    Good vibrations
    Adib’s group turned to a unique resource they’d previously used for low-power acoustic signaling: piezoelectric materials. These materials generate their own electric charge in response to mechanical stress, like getting pinged by vibrating soundwaves. Piezoelectric sensors can then use that charge to selectively reflect some soundwaves back into their environment. A receiver translates that sequence of reflections, called backscatter, into a pattern of 1s (for soundwaves reflected) and 0s (for soundwaves not reflected). The resulting binary code can carry information about ocean temperature or salinity.
    In principle, the same technology could provide location information. An observation unit could emit a soundwave, then clock how long it takes that soundwave to reflect off the piezoelectric sensor and return to the observation unit. The elapsed time could be used to calculate the distance between the observer and the piezoelectric sensor. But in practice, timing such backscatter is complicated, because the ocean can be an echo chamber.
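    The basic ranging idea can be written down in one line: distance is the speed of sound in water times the round-trip time, divided by two. The snippet below is a back-of-envelope illustration of that relationship, not the UBL implementation.

```python
# Back-of-envelope ranging from a backscattered acoustic reply (not the UBL system):
# distance = speed of sound * round-trip time / 2.
SPEED_OF_SOUND_WATER = 1500.0  # m/s, approximate value for seawater

def distance_from_echo(round_trip_seconds: float) -> float:
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2

print(distance_from_echo(0.02))  # a 20 ms round trip corresponds to about 15 m
```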
    The sound waves don’t just travel directly between the observation unit and sensor. They also careen between the surface and seabed, returning to the unit at different times. “You start running into all of these reflections,” says Adib. “That makes it complicated to compute the location.” Accounting for reflections is an even greater challenge in shallow water — the short distance between seabed and surface means the confounding rebound signals are stronger.

    The researchers overcame the reflection issue with “frequency hopping.” Rather than sending acoustic signals at a single frequency, the observation unit sends a sequence of signals across a range of frequencies. Each frequency has a different wavelength, so the reflected sound waves return to the observation unit at different phases. By combining information about timing and phase, the observer can pinpoint the distance to the tracking device. Frequency hopping was successful in the researchers’ deep-water simulations, but they needed an additional safeguard to cut through the reverberating noise of shallow water.
    Where echoes run rampant between the surface and seabed, the researchers had to slow the flow of information. They reduced the bitrate, essentially waiting longer between each signal sent out by the observation unit. That allowed the echoes of each bit to die down before potentially interfering with the next bit. Whereas a bitrate of 2,000 bits/second sufficed in simulations of deep water, the researchers had to dial it down to 100 bits/second in shallow water to obtain a clear signal reflection from the tracker. But a slow bitrate didn’t solve everything.
    To track moving objects, the researchers actually had to boost the bitrate. One thousand bits/second was too slow to pinpoint a simulated object moving through deep water at 30 centimeters/second. “By the time you get enough information to localize the object, it has already moved from its position,” explains Afzal. At a speedy 10,000 bits/second, they were able to track the object through deep water.
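    The trade-off can be seen with rough arithmetic: the slower the bitrate, the farther a moving target drifts while one localization message is being received. The number of bits per position fix below is an assumed value for illustration.

```python
# Rough arithmetic behind the bitrate trade-off: how far a target moving at 0.3 m/s
# drifts while one localization message is received. Bits per fix is an assumed value.
SPEED_M_PER_S = 0.30   # the simulated object's speed from the article
BITS_PER_FIX = 100     # assumed message length for one position fix

for bitrate in (100, 1_000, 10_000):  # bits per second
    drift = SPEED_M_PER_S * BITS_PER_FIX / bitrate
    print(f"{bitrate:>6} bit/s: target drifts {drift * 100:.1f} cm per fix")
```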
    Efficient exploration
    Adib’s team is working to improve the UBL technology, in part by solving challenges like the conflict between low bitrate required in shallow water and the high bitrate needed to track movement. They’re working out the kinks through tests in the Charles River. “We did most of the experiments last winter,” says Rodriguez. That included some days with ice on the river. “It was not very pleasant.”
    Conditions aside, the tests provided a proof-of-concept in a challenging shallow-water environment. UBL estimated the distance between a transmitter and a backscatter node at various distances up to nearly half a meter. The team is working to increase UBL’s range in the field, and they hope to test the system with their collaborators at the Woods Hole Oceanographic Institution on Cape Cod.
    They hope UBL can help fuel a boom in ocean exploration. Ghaffarivardavagh notes that scientists have better maps of the moon’s surface than of the ocean floor. “Why can’t we send out unmanned underwater vehicles on a mission to explore the ocean? The answer is: We will lose them,” he says.
    UBL could one day help autonomous vehicles stay found underwater, without spending precious battery power. The technology could also help subsea robots work more precisely, and provide information about climate change impacts in the ocean. “There are so many applications,” says Adib. “We’re hoping to understand the ocean at scale. It’s a long-term vision, but that’s what we’re working toward and what we’re excited about.”
    This work was supported, in part, by the Office of Naval Research.