More stories

  • Team develops new ‘attacker’ device to improve autonomous car safety

    Modern cars and autonomous vehicles use millimeter wave (mmWave) radio frequencies to enable self-driving or assisted driving features that ensure the safety of passengers and pedestrians. This connectivity, however, can also expose them to potential cyberattacks.
    To help improve the safety and security of autonomous vehicles, researchers from the lab of Dinesh Bharadia, an affiliate of the UC San Diego Qualcomm Institute (QI) and faculty member in the university’s Jacobs School of Engineering Department of Electrical and Computer Engineering, and colleagues from Northeastern University devised a novel algorithm designed to mimic an attacking device. The algorithm, described in the paper “mmSpoof: Resilient Spoofing of Automotive Millimeter-wave Radars using Reflect Array,” lets researchers identify areas for improvement in autonomous vehicle security.
    “The invention of autonomous systems, like self-driving cars, was to enable the safety of humanity and prevent loss of life,” said Bharadia. “Such autonomous systems use sensors and sensing to deliver autonomy. Therefore, safety and security rely on achieving high-fidelity sensing information from sensors. Our team exposed a radar sensor vulnerability and developed a solution that autonomous cars should strongly consider.”
    Defending Against Cyberattacks
    Autonomous cars detect obstacles and other potential hazards by sending out radio waves and recording their reflections as they bounce off surrounding objects. By measuring the time it takes for the signal to return, as well as changes in its frequency, the car can detect the distance and speed of other vehicles on the road.
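    The ranging-and-Doppler arithmetic described above can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation, not code from the study; the 77 GHz carrier and the example delays are assumed values typical of automotive mmWave radar.

```python
# Back-of-the-envelope radar math: range from round-trip delay,
# radial speed from Doppler shift. All numbers are illustrative.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # assumed automotive mmWave carrier frequency, Hz

def target_range(round_trip_s):
    """Distance to a reflector, from the echo's round-trip time."""
    return C * round_trip_s / 2.0

def radial_speed(doppler_shift_hz):
    """Relative (radial) speed, from the echo's Doppler shift."""
    return doppler_shift_hz * C / (2.0 * F_CARRIER)

# An echo returning after 400 ns puts the target 60 m away.
print(target_range(400e-9))            # 60.0
# A ~5.13 kHz Doppler shift corresponds to ~10 m/s closing speed.
print(round(radial_speed(5133.0), 2))  # 10.0
# Re-radiating the signal with 100 ns of extra delay makes a phantom
# obstacle appear 15 m farther out, which is the lever a spoofer pulls.
print(target_range(500e-9))            # 75.0
```

    The last line shows why manipulating the echo is so effective: a small timing change translates directly into a large, believable range error.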
    Like any wireless system, however, autonomous cars run the risk of cyberattacks. Attackers driving ahead of an autonomous unit can engage in “spoofing,” an activity that involves interfering with the vehicle’s return signal to trick it into registering an obstacle in its path. The vehicle may then brake suddenly, increasing the risk of an accident.

    To address this potential chink in autonomous cars’ armor, lead author Vennam and colleagues designed their algorithm to mimic a spoofing attack. Previous attempts to build an attacking device for testing cars’ resistance have had limited feasibility, either assuming the attacker can synchronize with the victim’s radar signal to launch an assault, or assuming the two cars are physically connected by a cable.
    In its new paper, presented by Vennam at the IEEE Symposium on Security and Privacy in San Francisco on May 22, the team describes a new technique that turns the victim vehicle’s radar against itself. By subtly changing the received signal’s parameters at light speed before reflecting it back, an attacker can disguise the sabotage and make it much harder for the vehicle to filter out malicious signals. All of this can be done on the go and in real time, without knowing anything about the victim’s radar.
    “Automotive vehicles heavily rely on mmWave radars to enable real-time situational awareness and advanced features to promote safe driving,” said Vennam. “Securing these radars is of paramount importance. We — mmSpoof — uncovered a serious security issue with mmWave radars and demonstrated a robust attack. What’s alarming is that anyone can build the prototype using off-the-shelf hardware components.”
    To counter this type of attack, Vennam suggests, researchers seeking to improve the safety of autonomous vehicles can use a high-resolution radar capable of capturing multiple reflections from a car to accurately identify the true reflection. Researchers might also build redundancy into their defenses by incorporating cameras and light detection and ranging (LiDAR), which measures surroundings by recording the time it takes a laser pulse to hit an object and return.
    Alternatively, the team presents mmSpoof as a means of preventing dangerous tailgating. By placing an mmSpoof device on the back of their car, drivers can trick a tailgating vehicle into registering a decelerating car ahead and applying its brakes.
    In addition to Vennam and Bharadia, “mmSpoof: Resilient Spoofing of Automotive Millimeter-wave Radars using Reflect Array” was authored by Ish Kumar Jain, Kshitiz Bansal, Joshua Orozco and Puja Shukla of the UC San Diego Wireless Communication, Sensing and Networking Group and Jacobs School of Engineering, and Aanjhan Ranganathan of Northeastern University.
    The research was partially supported by grants from the National Science Foundation.

  • Making the structure of ‘fire ice’ with nanoparticles

    Cage structures made with nanoparticles could be a route toward making organized nanostructures with mixed materials, and researchers at the University of Michigan have shown how to achieve this through computer simulations.
    The finding could open new avenues for photonic materials that manipulate light in ways that natural crystals can’t. It also showcased an unusual effect that the team is calling entropy compartmentalization.
    “We are developing new ways to structure matter across scales, discovering the possibilities and what forces we can use,” said Sharon Glotzer, the Anthony C. Lembke Department Chair of Chemical Engineering, who led the study published today in Nature Chemistry. “Entropic forces can stabilize even more complex crystals than we thought.”
    While entropy is often explained as disorder in a system, it more accurately reflects the system’s tendency to maximize its possible states. Often, this ends up as disorder in the colloquial sense. Oxygen molecules don’t huddle together in a corner — they spread out to fill a room. But if you put them in the right size box, they will naturally order themselves into a recognizable structure.
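    The “right size box” intuition can be made concrete by counting states. In this toy sketch (not from the paper), N gas molecules are distributed between the two halves of a room, and the even spread wins overwhelmingly.

```python
# Entropy as counting states: the number of ways to place k of N
# molecules in the left half of a room is C(N, k), which is maximized
# at an even spread. Toy illustration only.
from math import comb

N = 100
splits = {k: comb(N, k) for k in (0, 25, 50, 75, 100)}
most_likely = max(splits, key=splits.get)
print(most_likely)   # 50: the even split has the most microstates
print(splits[0])     # 1: only one way to huddle everyone in one half
```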
    Nanoparticles do the same thing. Previously, Glotzer’s team had shown that bipyramid particles — like two short, three-sided pyramids stuck together at their bases — will form structures resembling that of fire ice if you put them into a sufficiently small box. Fire ice is made of water molecules that form cages around methane, and it can burn and melt at the same time. This substance is found in abundance under the ocean floor and is an example of a clathrate. Clathrate structures are under investigation for a range of applications, such as trapping and removing carbon dioxide from the atmosphere.
    Unlike water clathrates, earlier nanoparticle clathrate structures had no gaps to fill with other materials that might provide new and interesting possibilities for altering the structure’s properties. The team wanted to change that.

    “This time, we investigated what happens if we change the shape of the particle. We reasoned that if we truncate the particle a little, it would create space in the cage made by the bipyramid particles,” said Sangmin Lee, a recent doctoral graduate in chemical engineering and first author of the paper.
    He took the three central corners off each bipyramid and discovered the sweet spot where spaces appeared in the structure but the sides of the pyramids were still intact enough that they didn’t start organizing in a different way. The spaces filled in with more truncated bipyramids when they were the only particle in the system. When a second shape was added, that shape became the trapped guest particle.
    Glotzer has ideas for how to create selectively sticky sides that would enable different materials to act as cage and guest particles, but in this case, there was no glue holding the bipyramids together. Instead, the structure was completely stabilized by entropy.
    “What’s really fascinating, looking at the simulations, is that the host network is almost frozen. The host particles move, but they all move together like a single, rigid object, which is exactly what happens with water clathrates,” Glotzer said. “But the guest particles are spinning around like crazy — like the system dumped all the entropy into the guest particles.”
    This was the system with the most degrees of freedom that the truncated bipyramids could build in a limited space, but nearly all the freedom belonged to the guest particles. Methane in water clathrates rotates too, the researchers say. What’s more, when they removed the guest particles, the structure threw bipyramids that had been part of the networked cage structure into the cage interiors — it was more important to have spinning particles available to maximize the entropy than to have complete cages.
    “Entropy compartmentalization. Isn’t that cool? I bet that happens in other systems too — not just clathrates,” Glotzer said.
    Thi Vo, a former postdoctoral researcher in chemical engineering at U-M and now an assistant professor of chemical and biomolecular engineering at the Johns Hopkins University, contributed to the study.
    This study was funded by the Department of Energy and Office of Naval Research, with computing resources provided by the National Science Foundation’s Extreme Science and Engineering Discovery Environment and the University of Michigan.
    Glotzer is also the John Werner Cahn Distinguished University Professor of Engineering, the Stuart W. Churchill Collegiate Professor of Chemical Engineering, and a professor of materials science and engineering, macromolecular science and engineering, and physics.

  • Stretchable knee wearable offers insight into improving e-textiles for healthcare

    Mobility limitation is an initial stage of human mobility disability and an early sign of functional decline. It can manifest as muscle weakness, loss of balance, unsteady gait, and joint pain. Long-term and continuous monitoring of joint motion may potentially prevent or delay decline by allowing the early diagnosis, prognosis, and management of mobility-related conditions.
    This long-term and continuous monitoring is made possible by analysis systems that are either non-wearable or wearable. Non-wearable systems are reliable, but require a laboratory environment and trained individuals and are therefore impractical for daily use. On the other hand, wearable systems are portable, cheaper, and much easier to use. Unfortunately, typical wearable sensors tend to be inflexible and bulky.
    Relatively new players in the wearable systems field are wearables made from conductive fabric (CF), which are soft, lightweight, malleable, and non-invasive. These sensors are comfortable and suitable for long-term monitoring. However, most CF-based wearables become error-prone if displaced from their intended location, and they rely on external components that restrict the sensors’ sensitivity and working range.
    To overcome these limitations, a research team created a wearable with a high degree of functional and design freedom. Associate Professor Low Hong Yee and her colleagues from the Singapore University of Technology and Design (SUTD) collaborated with Dr Tan Ngiap Chuan of SingHealth Polyclinics and published their research paper, ‘All knitted and integrated soft wearable of high stretchability and sensitivity for continuous monitoring of human joint motion’ in Advanced Healthcare Materials.
    According to Associate Professor Low, their key considerations when designing the wearable were sensor data accuracy and reliability and for the sensor to rely on as few external components as possible. The result was a highly stretchable, fully functional sensing circuit made from a single fabric. Because the knee joint is important for lower limb mobility, the wearable was designed for the knee.
    To develop this single-fabric circuit, the team mechanically coupled an electrically conductive yarn with a dielectric yarn of high elasticity in various stitch patterns. Dimensions were customised according to the subject’s leg. The functional components — sensors, interconnects, and resistors — formed a stretchable circuit on the fully knitted wearable that allowed real-time data to be obtained.

    However, putting together sensors, interconnects, and resistors in a single stretchable knit is difficult. Associate Professor Low mentioned that “the synergy of yarns with different electrical and mechanical properties to achieve high signal sensitivity and high stretchability” was challenging, as the desired properties for each component were vastly different.
    Sensors need to produce a large change in resistance for enhanced sensitivity, while interconnects and resistors need fixed resistances of the lowest and highest values, respectively. As such, the researchers optimised yarn composition and stitch type for each component before connecting the functional circuit to a circuit board contained in a pocket of the wearable, allowing for wireless transmission of real-time data.
    With a soft knee wearable developed, its components functional, and data transmission possible, it was time to test the performance of the wearable. The team assessed the wearable through extension-flexion, walking, jogging, and staircase activities. Subjects wore the knee wearable together with reflective markers that were detected by a motion capture system, allowing the comparison between sensor data and actual joint movement.
    The sensor response time was less than 90 milliseconds for a step input, which is fast enough to monitor the human movements included in the study. Additionally, the smallest change in joint angle that the sensors could detect was 0.12 degrees. The sensor data showed strong correlation with joint movement data acquired from the motion capture system, demonstrating reliability of the sensor data.
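    The correlation check described above can be sketched as follows. The knee-angle traces here are synthetic stand-ins, not the study’s measurements; the point is only to show the comparison between sensor output and motion-capture ground truth.

```python
# Validating a wearable sensor against motion capture: compute the
# Pearson correlation between the two joint-angle time series.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic knee angles (degrees) over one flexion-extension cycle,
# and a slightly noisy sensor reading of the same motion.
mocap  = [10, 25, 40, 55, 70, 55, 40, 25, 10]
sensor = [11, 24, 42, 54, 71, 56, 39, 26, 9]
print(round(pearson_r(mocap, sensor), 3))  # close to 1 for a reliable sensor
```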
    The potential impact of such a device in the medical field is huge. Long-term continuous monitoring of joint motion is important for tracking mobility-related conditions. Often, people ignore early signs of mobility decline because they do not seem serious enough to warrant seeking help. Wearable technology solves this problem by assessing a user’s mobility directly, in real time.
    Embedding a user-friendly sensor circuit into a soft and comfortable fabric may increase the public’s adoption of wearable technology, especially among athletes and the elderly. Data can be gathered in real-time and translated into indicators that can detect mobility decline. When signs of mobility decline are found, preventive care, prognosis, and management of the healthcare condition can be given.
    Building on this work, the team intends to study the effect of sweat and humidity on sensor signals and to extend the research to include subjects from both healthy and unhealthy populations in the future. “We have started working on extending the wearable to special user groups and to monitor other body joints, such as the shoulder,” stated Associate Professor Low. “We’re also looking at securing an incubation fund to explore the commercialisation potential of the wearable.”
    Video: https://youtu.be/KPlSPtDVs2k

  • Quantum scientists accurately measure power levels one trillion times lower than usual

    Scientists in Finland have developed a nanodevice that can measure the absolute power of microwave radiation down to the femtowatt level at ultra-low temperatures — a scale one trillion times lower than routinely used in verifiable power measurements. The device has the potential to significantly advance microwave measurements in quantum technology.
    Measuring extremely low power
    Quantum science takes place mostly at ultra-low temperatures using devices called dilution refrigerators. The experiments also have to be done at tiny energy levels — down to the energy level of single photons or even less. Researchers have to measure these extremely low energy levels as accurately as possible, which means also accounting for heat — a persistent problem for quantum devices.
    To measure heat in quantum experiments, scientists use a special type of thermometer called a bolometer. An exceptionally accurate bolometer was recently developed at Aalto University by a team led by Mikko Möttönen, associate professor of quantum technology at Aalto and VTT, but the device had more uncertainty than the team had hoped. Although it let them observe relative power levels, it couldn’t determine the absolute amount of energy very accurately.
    In the new study, Möttönen’s team worked with researchers at the quantum-technology companies Bluefors and IQM, and VTT Technical Research Centre of Finland to improve the bolometer.
    ‘We added a heater to the bolometer, so we can apply a known heater current and measure the voltage. Since we know the precise amount of power we’re putting into the heater, we can calibrate the power of the input radiation against the heater power. The result is a self-calibrating bolometer working at low temperatures, which allows us to accurately measure absolute powers at cryogenic temperatures,’ Möttönen says.
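    The self-calibration loop Möttönen describes can be sketched as: drive the heater with known currents and voltages, record the bolometer’s response at each known power, fit the response curve, then invert it to convert any later response into absolute watts. The linear model and every number below are illustrative assumptions, not the device’s actual characteristics.

```python
# Self-calibrating bolometer sketch: calibrate response against known
# heater powers, then invert the fit to read absolute input power.

def heater_power(current_a, voltage_v):
    """Electrical power dissipated in the on-chip heater (watts)."""
    return current_a * voltage_v

def fit_linear(powers, responses):
    """Least-squares line response = a * power + b over calibration points."""
    n = len(powers)
    mp, mr = sum(powers) / n, sum(responses) / n
    a = (sum((p - mp) * (r - mr) for p, r in zip(powers, responses))
         / sum((p - mp) ** 2 for p in powers))
    return a, mr - a * mp

# Calibration: known femtowatt-scale heater powers and measured responses.
cal_p = [1e-15, 2e-15, 3e-15, 4e-15]   # watts
cal_r = [0.10, 0.20, 0.30, 0.40]       # arbitrary response units
a, b = fit_linear(cal_p, cal_r)

def absolute_power(response):
    """Invert the calibration to get absolute input power from a response."""
    return (response - b) / a

print(absolute_power(0.25))  # about 2.5e-15 W for a mid-scale response
```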

    According to Russell Lake, director of quantum applications at Bluefors, the new bolometer is a significant step forward in measuring microwave power.
    ‘Commercial power sensors typically measure power at the scale of one milliwatt. This bolometer does that accurately and reliably at one femtowatt or below. That’s a trillion times less power than used in typical power calibrations.’
    Covering both deep and wide scales
    Möttönen explains that the new bolometer could improve the performance of quantum computers. ‘For accurate results, the measurement lines used to control qubits should be at very low temperatures, void of any thermal photons and excess radiation. Now with this bolometer, we can actually measure that radiation temperature without interference from the qubit circuitry,’ he says.
    The bolometer also covers a very broad range of frequencies.

    ‘The sensor is broadband, which means it can measure the power absorbed at various frequencies. This is not a given in quantum technology, as sensors are usually limited to a very narrow band,’ says Jean-Philippe Girard, a scientist at Bluefors who also previously worked on the device at Aalto.
    The team says the bolometer provides a major boost to quantum technology fields.
    ‘Measuring microwaves happens in wireless communications, radar technology, and many other fields. They have their ways of performing accurate measurements, but there was no way to do the same when measuring very weak microwave signals for quantum technology. The bolometer is an advanced diagnostic instrument that has been missing from the quantum technology toolbox until now,’ Lake says.
    The work is a result of seamless collaboration between Aalto University and Bluefors, a perfect example of academy and industry complementing each other’s strengths. The device was developed at Aalto’s Quantum Computing and Devices (QCD) group, which is part of the Academy of Finland Centre of Excellence in Quantum Technology (QTF). They used Micronova cleanrooms that belong to the national research infrastructure OtaNano. Since the first experiments at Aalto, Bluefors has also successfully tested these devices in their own industrial facilities.
    ‘That shows that this is not just a lucky break in a university lab, but something that both the industrial and the academic professionals working in quantum technology can benefit from,’ Möttönen says.

  • The metaverse can lead to better science

    In 2021, Facebook made “metaverse” the buzziest word on the web, rebranding itself as Meta and announcing a plan to build “a set of interconnected digital spaces that lets you do things you can’t do in the physical world.” Since then, the metaverse has been called many different things. Some say it is the “future of the internet.” Others call it “an amorphous concept that no one really wants.”
    For Diego Gómez-Zará, an assistant professor in the University of Notre Dame’s Department of Computer Science and Engineering, the metaverse is something else: a tool for better science.
    In “The Promise and Pitfalls of the Metaverse for Science,” published in Nature Human Behaviour, Gómez-Zará argues that scientists should take advantage of the metaverse for research while also guarding against the potential hazards that come with working in virtual reality.
    Virtual environments, real benefits
    Along with co-authors Peter Schiffer (Department of Applied Physics and Department of Physics, Yale University) and Dashun Wang (McCormick School of Engineering, Northwestern University), Gómez-Zará defines the metaverse as a virtual space where users can interact in a three-dimensional environment and take actions that affect the world outside.
    The researchers say the metaverse stands to benefit science in four main ways.

    First, it could remove barriers and make science more accessible. To understand these opportunities, Gómez-Zará says, we need not speculate about the distant future. Instead, we can point to ways researchers have already begun using virtual environments in their work.
    At the University College London School of Pharmacy, for example, scientists have made a digital replica of their lab that can be visited in virtual reality. This digital replica allows scientists at various points around the world to meet, collaborate and make decisions together about how to move a research project forward.
    Similarly, a virtual laboratory training developed by the Centers for Disease Control and Prevention teaches young scientists in many different locations to identify the parts of a lab and even conduct emergency procedures.
    This example shows a second benefit: improving teaching and learning.
    Gómez-Zará explains, “For someone training to become a surgeon, it is very hard to perform a procedure for the first time without any mistakes. And if you are working with a real patient, a mistake can be very harmful. Experiential learning in a virtual environment can help you try something and make mistakes along the way without harmful consequences, and the freedom from harmful consequences can improve research in other fields as well.”
    Gómez-Zará is also working with a team at Notre Dame’s Virtual Reality Lab to understand a third potential benefit, one related to the social side of science. The research team studies the effects of online environments on a team’s work processes. They find that virtual environments can help teams collaborate more effectively than videoconferencing.

    “Since the pandemic, we have all become comfortable videoconferencing,” says Gómez-Zará. “But that doesn’t mean getting on a video call is the most effective tool for every task. Especially for intense social activities like team building and innovation, virtual reality is a much closer replica of what we would have offline and could prove much more effective.”
    Gómez-Zará says the metaverse could also be used to create wholly new experimental environments.
    “If you can get data and images from somewhere, you can create a virtual replica of that place in virtual reality,” Gómez-Zará explains. For example, he says, we have images of Mars captured by satellites and robots. “These could be used to create a virtual reality version of the environment where scientists can experience what it is like there. Eventually they could even interact with the environment from a distance.”
    Potential pitfalls
    Gómez-Zará emphasizes that realizing the full benefits of the metaverse will also require us to avoid several pitfalls associated with it.
    There are still barriers to using virtual reality. Virtual reality goggles and related equipment, while becoming more affordable, still require a significant investment.
    This issue relates to a larger one: Who owns the metaverse? Currently, a few technology companies control the metaverse, but Gómez-Zará notes that there have been calls for agencies and others who support research to invest in building an open, public metaverse. In the meantime, he says, it is important for researchers to think through questions of ownership and privacy any time they work in the metaverse.
    His overall message, though, is a hopeful one. “We still tend to associate the metaverse with entertainment and casual socialization. This makes it all too easy to dismiss,” he says. “But look at how quickly we have all adapted to technologies we used rarely before the pandemic. It could be the same way with the metaverse. We need the research community exploring it. That is the best way to plan for the risks while also recognizing all of the possibilities.”

  • Scientists propose revolution in complex systems modelling with quantum technologies

    Scientists have made a significant advance in quantum technologies that could transform the modelling of complex systems, offering an accurate and effective approach that requires far less memory.
    Complex systems play a vital role in our daily lives, whether that be predicting traffic patterns, weather forecasts, or understanding financial markets. However, accurately predicting these behaviours and making informed decisions relies on storing and tracking vast information from events in the distant past — a process which presents huge challenges.
    Current models using artificial intelligence see their memory requirements increase by more than a hundredfold every two years and can often involve optimisation over billions — or even trillions — of parameters. Such immense amounts of information create a bottleneck, forcing a trade-off between memory cost and predictive accuracy.
    A collaborative team of researchers from The University of Manchester, the University of Science and Technology of China (USTC), the Centre for Quantum Technologies (CQT) at the National University of Singapore and Nanyang Technological University (NTU) propose that quantum technologies could provide a way to mitigate this trade-off.
    The team have successfully implemented quantum models that can simulate a family of complex processes with only a single qubit of memory — the basic unit of quantum information — offering substantially reduced memory requirements.
    Unlike classical models that rely on increasing memory capacity as more data from past events are added, these quantum models will only ever need one qubit of memory.

    The development, published in the journal Nature Communications, represents a significant advancement in the application of quantum technologies in complex system modelling.
    Dr Thomas Elliott, project leader and Dame Kathleen Ollerenshaw Fellow at The University of Manchester, said: “Many proposals for quantum advantage focus on using quantum computers to calculate things faster. We take a complementary approach and instead look at how quantum computers can help us reduce the size of the memory we require for our calculations.
    “One of the benefits of this approach is that by using as few qubits as possible for the memory, we get closer to what is practical with near-future quantum technologies. Moreover, we can use any extra qubits we free up to help mitigate against errors in our quantum simulators.”
    The project builds on an earlier theoretical proposal by Dr Elliott and the Singapore team. To test the feasibility of the approach, they joined forces with USTC, who used a photon-based quantum simulator to implement the proposed quantum models.
    The team achieved higher accuracy than is possible with any classical simulator equipped with the same amount of memory. The approach can be adapted to simulate other complex processes with different behaviours.
    Dr Wu Kang-Da, post-doctoral researcher at USTC and joint first author of the research, said: “Quantum photonics represents one of the least error-prone architectures that have been proposed for quantum computing, particularly at smaller scales. Moreover, because we are configuring our quantum simulator to model a particular process, we are able to finely tune our optical components and achieve smaller errors than are typical of current universal quantum computers.”
    Dr Chengran Yang, Research Fellow at CQT and also joint first author of the research, added: “This is the first realisation of a quantum stochastic simulator where the propagation of information through the memory over time is conclusively demonstrated, together with proof of greater accuracy than possible with any classical simulator of the same memory size.”
    Beyond the immediate results, the scientists say that the research presents opportunities for further investigation, such as exploring the benefits of reduced heat dissipation in quantum modelling compared to classical models. Their work could also find potential applications in financial modelling, signal analysis and quantum-enhanced neural networks.
    Next steps include plans to explore these connections, and to scale their work to higher-dimensional quantum memories.

  • Medical ‘microrobots’ could one day treat bladder disease, other human illnesses

    A team of engineers at the University of Colorado Boulder has designed a new class of tiny, self-propelled robots that can zip through liquid at incredible speeds — and may one day even deliver prescription drugs to hard-to-reach places inside the human body.
    The researchers describe their mini healthcare providers in a paper published last month in the journal Small.
    “Imagine if microrobots could perform certain tasks in the body, such as non-invasive surgeries,” said Jin Lee, lead author of the study and a postdoctoral researcher in the Department of Chemical and Biological Engineering. “Instead of cutting into the patient, we can simply introduce the robots to the body through a pill or an injection, and they would perform the procedure themselves.”
    Lee and his colleagues aren’t there yet, but the new research is a big step forward for tiny robots.
    The group’s microrobots are really small. Each one measures only 20 micrometers wide, several times smaller than the width of a human hair. They’re also really fast, capable of traveling at speeds of about 3 millimeters per second, or roughly 9,000 times their own length per minute. That’s many times faster than a cheetah in relative terms.
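    The relative-speed figure checks out with simple arithmetic, using the body length and speed quoted in the paragraph above:

```python
# 20-micrometer robots travelling at ~3 mm/s cover roughly
# 9,000 body lengths per minute.
body_length_m = 20e-6    # 20 micrometers
speed_m_per_s = 3e-3     # ~3 millimeters per second

lengths_per_second = speed_m_per_s / body_length_m   # 150 lengths/s
lengths_per_minute = lengths_per_second * 60
print(round(lengths_per_minute))  # 9000
```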
    They have a lot of potential, too. In the new study, the group deployed fleets of these machines to transport doses of dexamethasone, a common steroid medication, to the bladders of lab mice. The results suggest that microrobots may be a useful tool for treating bladder diseases and other illnesses in people.

    “Microscale robots have garnered a lot of excitement in scientific circles, but what makes them interesting to us is that we can design them to perform useful tasks in the body,” said C. Wyatt Shields, a co-author of the new study and assistant professor of chemical and biological engineering.
    Fantastic Voyage
    If that sounds like something ripped from science fiction, that’s because it is. In the classic film Fantastic Voyage, a group of adventurers travels via a shrunken-down submarine into the body of a man in a coma.
    “The movie was released in 1966. Today, we are living in an era of micrometer- and nanometer-scale robots,” Lee said.
    He imagines that, just like in the movie, microrobots could one day swirl through a person’s bloodstream, seeking out targeted areas to treat various ailments.

    The team makes its microrobots out of materials called biocompatible polymers using a technology similar to 3D printing. The machines look a bit like small rockets and come complete with three tiny fins. They also include a little something extra: Each of the robots carries a small bubble of trapped air, similar to what happens when you dunk a glass upside-down in water. If you expose the machines to an acoustic field, like the kind used in ultrasound, the bubbles will begin to vibrate wildly, pushing water away and shooting the robots forward.
    Other CU Boulder co-authors of the new study include Nick Bottenus, assistant professor of mechanical engineering; Ankur Gupta, assistant professor of chemical and biological engineering; and engineering graduate students Ritu Raj, Cooper Thome, Nicole Day and Payton Martinez.
    To take their microrobots for a test drive, the researchers set their sights on a common problem for humans: bladder disease.
    Bringing relief
    Interstitial cystitis, also known as painful bladder syndrome, affects millions of Americans and, as its name suggests, can cause severe pelvic pain. Treating the disease can be equally uncomfortable: often, patients have to come into a clinic several times over a period of weeks so that a doctor can inject a harsh solution of dexamethasone into the bladder through a catheter.
    Lee believes that microrobots may be able to provide some relief.
    In laboratory experiments, the researchers fabricated schools of microrobots encapsulating high concentrations of dexamethasone. They then introduced thousands of those bots into the bladders of lab mice. The result was a real-life Fantastic Voyage: The microrobots dispersed through the organs before sticking onto the bladder walls, which would likely make them difficult to pee out.
    Once there, the machines slowly released their dexamethasone over the course of about two days. Such a steady flow of medicine could allow patients to receive more of the drug over a longer span of time, improving outcomes, Lee said.
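    The article gives the release window (about two days) but not the kinetics. As a toy illustration only, a first-order release profile, a common simplifying assumption for sustained-release carriers rather than a model from the study, would look like this:

```python
import math

def fraction_released(t_hours, k=3 / 48):
    """Hypothetical first-order release: fraction of payload released
    after t hours, with k tuned so ~95% is released by 48 hours."""
    return 1 - math.exp(-k * t_hours)

for t in (6, 24, 48):
    print(f"{t:>2} h: {fraction_released(t):.0%} released")
```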
    He added that the team has a lot of work to do before microrobots can travel through real human bodies. For a start, the group wants to make the machines fully biodegradable so that they would eventually dissolve in the body.
    “If we can make these particles work in the bladder,” Lee said, “then we can achieve a more sustained drug release, and maybe patients wouldn’t have to come into the clinic as often.”

    New method predicts extreme events more accurately

    With extreme weather events becoming more frequent in our warming climate, accurate predictions are increasingly critical for all of us, from farmers to city-dwellers to businesses around the world. To date, climate models have failed to accurately predict precipitation intensity, particularly extremes. While precipitation in nature varies widely, with many extremes, climate models predict a smaller variance, with a bias toward light rain.
    Missing piece in current algorithms: cloud organization
    Researchers have been working to develop algorithms that improve prediction accuracy, but, as Columbia Engineering climate scientists report, traditional climate model parameterizations have been missing a piece of information: a way to describe cloud structure and organization at scales so fine that they are not captured on the computational grid being used. These organization measurements affect predictions of both precipitation intensity and its stochasticity, the variability of random fluctuations in precipitation intensity. Up to now, there has been no effective, accurate way to measure cloud structure and quantify its impact.
    A new study from a team led by Pierre Gentine, director of the Learning the Earth with Artificial Intelligence and Physics (LEAP) Center, used global storm-resolving simulations and machine learning to create an algorithm that can deal separately with two different scales of cloud organization: those resolved by a climate model, and those that cannot be resolved as they are too small. This new approach addresses the missing piece of information in traditional climate model parameterizations and provides a way to predict precipitation intensity and variability more precisely.
    “Our findings are especially exciting because, for many years, the scientific community has debated whether to include cloud organization in climate models,” said Gentine, Maurice Ewing and J. Lamar Worzel Professor of Geophysics in the Departments of Earth and Environmental Engineering and Earth and Environmental Sciences and a member of the Data Science Institute. “Our work provides an answer to the debate and a novel solution for including organization, showing that including this information can significantly improve our prediction of precipitation intensity and variability.”
    Using AI to design neural network algorithm
    Sarah Shamekh, a PhD student working with Gentine, developed a neural network algorithm that learns the relevant information about the role of fine-scale cloud organization (unresolved scales) on precipitation. Because Shamekh did not define a metric or formula in advance, the model learns implicitly — on its own — how to measure the clustering of clouds, a metric of organization, and then uses this metric to improve the prediction of precipitation. Shamekh trained the algorithm on a high-resolution moisture field, encoding the degree of small-scale organization.
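    For contrast with that implicitly learned measure, here is a sketch of the kind of hand-defined organization metric the paper's approach avoids having to specify: a lag-1 spatial autocorrelation over a one-dimensional moisture transect, which is high when moist cells cluster together. This is purely illustrative and is not the metric the network learned:

```python
import statistics

def organization_metric(field):
    """Lag-1 autocorrelation of a 1-D moisture transect: near 1 for
    clustered (organized) moisture, negative for scattered moisture."""
    mean = statistics.fmean(field)
    var = sum((x - mean) ** 2 for x in field)
    cov = sum((a - mean) * (b - mean) for a, b in zip(field, field[1:]))
    return cov / var

clustered = [1.0] * 8 + [0.0] * 8   # moist cells grouped together
scattered = [1.0, 0.0] * 8          # moist cells evenly dispersed
print(organization_metric(clustered))  # high: strongly organized
print(organization_metric(scattered))  # negative: no clustering
```

    A network trained end-to-end can discover a richer, task-tuned version of such a measure without anyone writing the formula down in advance.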
    “We discovered that our organization metric explains precipitation variability almost entirely and could replace a stochastic parameterization in climate models,” said Shamekh, lead author of the study, published May 8, 2023, by PNAS. “Including this information significantly improved precipitation prediction at the scale relevant to climate models, accurately predicting precipitation extremes and spatial variability.”
    Machine-learning algorithm will improve future projections
    The researchers are now using their machine-learning approach, which implicitly learns the sub-grid cloud organization metric, in climate models. This should significantly improve the prediction of precipitation intensity and variability, including extreme precipitation events, and enable scientists to better project future changes in the water cycle and extreme weather patterns in a warming climate.
    Future work
    This research also opens up new avenues for investigation, such as exploring whether precipitation creates a memory in the climate system, where the atmosphere retains information about recent weather conditions that in turn influences atmospheric conditions later on. The approach could also have wide-ranging applications beyond precipitation modeling, including better modeling of ice sheets and the ocean surface.