More stories

  • Can jack-of-all-trades AI reshape medicine?

    The vast majority of AI models used in medicine today are “narrow specialists,” trained to perform one or two tasks, such as scanning mammograms for signs of breast cancer or detecting lung disease on chest X-rays.
    But the everyday practice of medicine involves an endless array of clinical scenarios, symptom presentations, possible diagnoses, and treatment conundrums. So, if AI is to deliver on its promise to reshape clinical care, it must reflect the complexity of medicine and do so with high fidelity, says Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS.
    Enter generalist medical AI, a more evolved form of machine learning capable of performing complex tasks in a wide range of scenarios.
    Akin to general medicine physicians, Rajpurkar explained, generalist medical AI models can integrate multiple data types — such as MRI scans, X-rays, blood test results, medical texts, and genomic testing — to perform a range of tasks, from making complex diagnostic calls to supporting clinical decisions to choosing optimal treatment. And they can be deployed in a variety of settings, from the exam room to the hospital ward to the outpatient GI procedure suite to the cardiac operating room.
    While the earliest versions of generalist medical AI have started to emerge, its true potential and depth of capabilities have yet to materialize.
    “The rapidly evolving capabilities in the field of AI have completely redefined what we can do in the field of medical AI,” writes Rajpurkar in a newly published perspective in Nature, on which he is co-senior author with Eric Topol of the Scripps Research Institute and colleagues from Stanford University, Yale University, and the University of Toronto.

    Generalist medical AI is on the cusp of transforming clinical medicine as we know it, but with this opportunity come serious challenges, the authors say.
    In the article, the authors discuss the defining features of generalist medical AI, identify various clinical scenarios where these models can be used, and chart the road forward for their design, development, and deployment.
    Features of generalist medical AI
    Key characteristics that render generalist medical AI models superior to conventional models are their adaptability, their versatility, and their ability to apply existing knowledge to new contexts.
    For example, a traditional AI model trained to spot brain tumors on a brain MRI will look at a lesion on an image to determine whether it’s a tumor. It can provide no information beyond that. By contrast, a generalist model would look at a lesion and determine what type of lesion it is — a tumor, a cyst, an infection, or something else. It may recommend further testing and, depending on the diagnosis, suggest treatment options.

    “Compared with current models, generalist medical AI will be able to perform more sophisticated reasoning and integrate multiple data types, which lets it build a more detailed picture of a patient’s case,” said study co-first author Oishi Banerjee, a research associate in the Rajpurkar lab, which is already working on designing such models.
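    To make the contrast with narrow models concrete, here is a minimal, purely hypothetical sketch of what an instruction-driven, multimodal query to a generalist model could look like. The data class, function name, and mock output are illustrative assumptions for this article, not the authors' system or any existing API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PatientCase:
    imaging: Optional[bytes] = None           # e.g. MRI or X-ray pixel data
    notes: str = ""                           # free-text clinical history
    labs: dict = field(default_factory=dict)  # lab name -> value

def query_generalist_model(case: PatientCase, instruction: str) -> str:
    """Stand-in for a multimodal model call: a real system would encode each
    available modality and condition its answer on the plain-language task."""
    modalities = [name for name, value in
                  (("imaging", case.imaging), ("notes", case.notes), ("labs", case.labs))
                  if value]
    return f"[mock answer to '{instruction}' using: {', '.join(modalities) or 'no data'}]"

case = PatientCase(notes="55-year-old with new headache", labs={"WBC": 11.2})
print(query_generalist_model(case, "Characterize the lesion and suggest next steps"))
```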
    According to the authors, generalist models will be able to:
    Adapt easily to new tasks without the need for formal retraining. They will perform the task by simply having it explained to them in plain English or another language.
    Analyze various types of data — images, medical text, lab results, genetic sequencing, patient histories, or any combination thereof — and generate a decision. In contrast, conventional AI models are limited to using predefined data types — text only, image only — and only in certain combinations.
    Apply medical knowledge to reason through previously unseen tasks and use medically accurate language to explain their reasoning.
    Clinical scenarios for use of generalist medical AI
    The researchers outline many areas in which generalist medical AI models would offer comprehensive solutions.
    Some of them are:
    Radiology reports. Generalist medical AI would act as a versatile digital radiology assistant to reduce workload and minimize rote work. These models could draft radiology reports that describe both abnormalities and relevant normal findings, while also taking into account the patient’s history. They could also combine text narrative with visualization to highlight the areas on an image described by the text, and compare previous and current findings on a patient’s images to illuminate telltale changes suggestive of disease progression.
    Real-time surgery assistance. If an operating team hits a roadblock during a procedure — such as failure to find a mass in an organ — the surgeon could ask the model to review the last 15 minutes of the procedure to look for any misses or oversights. If a surgeon encounters an ultra-rare anatomic feature during surgery, the model could rapidly access all published work on the procedure to offer insight in real time.
    Decision support at the patient bedside. Generalist models would offer alerts and treatment recommendations for hospitalized patients by continuously monitoring their vital signs and other parameters, including the patient’s records. The models would be able to anticipate looming emergencies before they occur. For example, a model might alert the clinical team when a patient is on the brink of going into circulatory shock and immediately suggest steps to avert it.
    Ahead, promise and peril
    Generalist medical AI models have the potential to transform health care, the authors say. They can alleviate clinician burnout, reduce clinical errors, and expedite and improve clinical decision-making.
    Yet, these models come with unique challenges. Their strongest features — extreme versatility and adaptability — also pose the greatest risks, the researchers caution, because they will require the collection of vast and diverse data.
    Some critical pitfalls include:
    Need for extensive, ongoing training. To ensure the models can switch data modalities quickly and adapt in real time depending on the context and type of question asked, they will need to undergo extensive training on diverse data from multiple complementary sources and modalities. That training would have to be repeated periodically to keep up with new information. For instance, in the case of new SARS-CoV-2 variants, a model must be able to quickly retrieve key features on X-ray images of pneumonia caused by an older variant to contrast with lung changes associated with a new variant.
    Validation. Generalist models will be uniquely difficult to validate because of the versatility and complexity of the tasks they will be asked to perform. A model needs to be tested on the wide range of cases it might encounter to ensure proper performance. What this boils down to, Rajpurkar said, is defining the conditions under which the models perform and the conditions under which they fail.
    Verification. Compared with conventional models, generalist medical AI will handle much more data, more varied types of data, and data of greater complexity, making it that much more difficult for clinicians to determine whether a model’s decision is accurate. For instance, a conventional model would look at an imaging study or a whole-slide image when classifying a patient’s tumor, and a single radiologist or pathologist could verify whether the model was correct. By comparison, a generalist model could analyze pathology slides, CT scans, and medical literature, among many other variables, to classify and stage the disease and make a treatment recommendation. Such a complex decision would require verification by a multidisciplinary panel of radiologists, pathologists, and oncologists to assess the model’s accuracy. The researchers note that designers could ease this verification by incorporating explanations, such as clickable links to supporting passages in the literature, that let clinicians efficiently check the model’s predictions. Another important feature would be building models that quantify their level of uncertainty.
    Biases. It is no secret that medical AI models can perpetuate biases, which they can acquire during training when exposed to limited datasets obtained from non-diverse populations. Such risks will be magnified in generalist medical AI because of the unprecedented scale and complexity of the datasets needed to train it. To minimize this risk, the researchers recommend, generalist medical AI models must be thoroughly validated to ensure that they do not underperform on particular populations, such as minority groups. They will also need to undergo continuous auditing and regulation after deployment.
    “These are serious but not insurmountable hurdles,” Rajpurkar said. “Having a clear-eyed understanding of all the challenges early on will help ensure that generalist medical AI delivers on its tremendous promise to change the practice of medicine for the better.”

  • Tunneling electrons

    Physicists from Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), the University of Rostock and the University of Konstanz have shown that by superimposing two laser fields of different strengths and frequencies, the electron emission of metals can be measured and controlled with a precision of a few attoseconds. The findings could lead to new quantum-mechanical insights and enable electronic circuits that are a million times faster than today’s. The researchers have published their findings in the journal Nature.
    Light is capable of releasing electrons from metal surfaces. This observation was already made in the first half of the 19th century by Alexandre Edmond Becquerel and later confirmed in various experiments, among others by Heinrich Hertz and Wilhelm Hallwachs. Since the photoelectric effect could not be reconciled with the light wave theory, Albert Einstein came to the conclusion that light must consist not only of waves, but also of particles. He laid the foundation for quantum mechanics.
    Strong laser light allows electrons to tunnel
    With the development of laser technology, research into the photoelectric effect has gained new impetus. “Today, we can produce extremely strong and ultrashort laser pulses in a wide variety of spectral colors,” explains Prof. Dr. Peter Hommelhoff, Chair for Laser Physics at the Department of Physics at FAU. “This inspired us to capture and control the duration and intensity of the electron release from metals with greater accuracy.” So far, scientists have only been able to determine laser-induced electron dynamics precisely in gases — with an accuracy of a few attoseconds. Until now, quantum dynamics and emission time windows had not been measured on solids.
    This is exactly what the researchers at FAU, the University of Rostock and the University of Konstanz have now succeeded in doing for the first time. They used a special strategy: instead of just a strong laser pulse, which releases the electrons from a sharp tungsten tip, they also used a second, weaker laser with twice the frequency. “In principle, you have to know that with very strong laser light, the individual photons are no longer responsible for the release of the electrons, but rather the electric field of the laser,” explains Dr. Philip Dienstbier, a research associate at Peter Hommelhoff’s chair and lead author of the study. “The electrons then tunnel through the metal interface into the vacuum.” By deliberately superimposing the two light waves, the physicists can control the shape and strength of the laser field — and thus also the emission of the electrons.
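    As a rough numerical illustration of that idea (illustrative amplitudes and a generic exponential field-emission law, not the authors' model), the sketch below superimposes a strong field at frequency ω with a weaker one at 2ω and shows how the relative phase between the two colors changes a simple tunneling-emission weight over one optical cycle.

```python
import numpy as np

w = 2 * np.pi * 375e12        # fundamental angular frequency (~800 nm light), rad/s
E1, E2 = 1.0, 0.2             # relative amplitudes of the strong and weak fields
t = np.linspace(0.0, 2 * np.pi / w, 2000)   # one optical cycle

def emission_weight(phase):
    """Toy strong-field tunneling weight ~ exp(-const/|E(t)|), summed over one
    cycle of the combined two-color field (a stand-in for the full theory)."""
    field = E1 * np.cos(w * t) + E2 * np.cos(2 * w * t + phase)
    return np.sum(np.exp(-4.0 / (np.abs(field) + 1e-9)))

# The emission is dominated by the instants of highest field strength, so
# shifting the two-color phase reshapes the field peaks and the electron yield.
for phase in (0.0, np.pi / 4, np.pi / 2):
    print(f"relative phase {phase:4.2f} rad -> emission weight {emission_weight(phase):.3e}")
```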
    Circuits a million times faster
    In the experiment, the researchers were able to determine the duration of the electron emission to 30 attoseconds — thirty billionths of a billionth of a second. This ultra-precise limitation of the emission time window could advance basic and application-related research in equal measure. “The phase shift of the two laser pulses allows us to gain deeper insights into the tunneling process and the subsequent movement of the electron in the laser field,” says Philip Dienstbier. “This enables new quantum-mechanical insights into both the emission from the solid and the light fields used.”
    The most important field of application is light-field-driven electronics: with the proposed two-color method, the laser light can be modulated in such a way that a precisely defined sequence of electron pulses, and thus of electrical signals, can be generated. Dienstbier: “In the foreseeable future, it will be possible to integrate the components of our test setup — light sources, metal tip, electron detector — into a microchip.” Complex circuits with bandwidths up to the petahertz range would then be conceivable — almost a million times faster than current electronics.

  • ‘Smart’ tech is coming to a city near you

    If you own an internet-connected “smart” device, chances are it knows a lot about your home life.
    If you raid the pantry at 2 a.m. for a snack, your smart lights can tell. That’s because they track every time they’re switched on and off.
    Your Roomba knows the size and layout of your home and sends it to the cloud. Smart speakers eavesdrop on your every word, listening for voice commands.
    But the data-driven smart tech trend also extends far beyond our kitchens and living rooms. Over the past 20 years, city governments have been partnering with tech companies to collect real-time data on daily life in our cities, too.
    In urban areas worldwide, sidewalks, streetlights and buildings are equipped with sensors that log foot traffic, driving and parking patterns, and can even detect and pinpoint where gunshots may have been fired.
    In Singapore, for example, thousands of sensors and cameras installed across the city track everything from crowd density and traffic congestion to smoking where it’s not allowed.

    Copenhagen uses smart air quality sensors to monitor and map pollution levels.
    A 2016 report from the National League of Cities estimates that 66% of American cities had already invested in some type of ‘smart city’ technology, from intelligent meters that collect and share data on residents’ energy or water usage to sensor-laden street lights that can detect illegally parked cars.
    Proponents say the data collected will make cities cleaner, safer, more efficient. But many Americans worry that the benefits and harms of smart city tech may not be evenly felt across communities, says Pardis Emami-Naeini, assistant professor of computer science and director of the InSPIre Lab at Duke University.
    That’s one of the key takeaways of a survey Emami-Naeini and colleagues presented April 25 at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2023) in Hamburg, Germany.
    Nearly 350 people from across the United States participated in the survey. In addition, the researchers conducted qualitative interviews with 21 people aged 24 to 71 from underserved neighborhoods in Seattle that have been prioritized for smart city projects over the next 10 to 15 years.

    The study explored public attitudes on a variety of smart city technologies currently in use, from air quality sensors to surveillance cameras.
    While public awareness of smart cities was limited — most of the study respondents had never even heard of the term — researchers found that Americans have concerns about the ethical implications of the data being collected, particularly from marginalized communities.
    One of the technologies participants had significant concerns about was gunshot detection, which uses software and microphones placed around a neighborhood to detect gunfire and pinpoint its location, rather than relying solely on 911 calls to police.
    The technology is used in more than 135 cities across the U.S., including Chicago, Sacramento, Philadelphia and Durham.
    Though respondents acknowledged the potential benefits to public safety, they worried that the tech could contribute to racial disparities in policing, particularly when disproportionately installed in Black and brown neighborhoods.
    Some said the mere existence of smart city tech such as gunshot detectors or security cameras in their neighborhood could contribute to negative perceptions of safety that deter future home buyers and businesses.
    Even collecting and sharing seemingly innocuous data such as air quality raised concerns for some respondents, who worried it could potentially drive up insurance rates in poorer neighborhoods exposed to higher levels of pollution.
    In both interviews and surveys, people with lower incomes expressed more concern about the ethical implications of smart city tech than those with higher income levels.
    Emami-Naeini has spent several years studying the privacy concerns raised by smart devices and appliances in the home. But when she started asking people how they felt about the risks posed by smart tech in cities, she noticed a shift. Even when people weren’t concerned about the impacts of particular types of data collection on a personal level, she says they were still concerned about potential harms for the larger community.
    “They were concerned about how their neighborhoods would be perceived,” Emami-Naeini says. “They thought that it would widen disparities that they already see in marginalized neighborhoods.”
    Lack of attention to such concerns can hamstring smart city efforts, Emami-Naeini says.
    A proposed high-tech development in Toronto, for example, was cancelled after citizens and civic leaders raised concerns about what would happen with the data collected by the neighborhood’s sensors and devices, and how much of the city the tech company wanted to control.
    In 2017, San Diego launched a $30 million project to cover half the city with smart streetlights in an attempt to reduce traffic congestion, but faced backlash after it surfaced that police had been quietly using the footage to solve crimes.
    “It’s not just a waste of resources — it damages people’s trust,” Emami-Naeini says.
    Worldwide, spending on smart cities initiatives is expected to reach $203 billion by 2024. But amid the enthusiasm, Emami-Naeini says, a key component has been neglected: the needs and views of city residents.
    “There’s a lack of user-centered research on this topic, especially from a privacy and ethics perspective,” Emami-Naeini says.
    To make sure the ‘smart cities’ of the future are designed with residents firmly in mind, “transparency and communication are really important.”
    Her team’s findings indicate that people want to know things like where sensors are located, what kinds of data they collect and how often, how the data will be used, who has access, whether they have the ability to opt in or opt out, and who to contact if something goes wrong.
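    One way to picture that kind of transparency is a per-sensor disclosure record covering exactly those questions. The schema below is a hypothetical sketch for illustration, not a standard or anything proposed in the study.

```python
from dataclasses import dataclass

# Hypothetical "sensor disclosure" record: one entry per deployed smart-city sensor,
# capturing what residents said they want to know about data collection.
@dataclass
class SensorDisclosure:
    location: str                 # e.g. intersection or block where the sensor sits
    data_collected: list[str]     # kinds of data logged
    collection_interval_s: int    # how often data is collected, seconds
    purposes: list[str]           # stated uses of the data
    data_recipients: list[str]    # who has access to it
    opt_out_available: bool       # whether residents can opt out
    contact: str                  # who to contact if something goes wrong

example = SensorDisclosure(
    location="Main St & 5th Ave",
    data_collected=["pedestrian counts"],
    collection_interval_s=300,
    purposes=["traffic planning"],
    data_recipients=["city transportation dept."],
    opt_out_available=False,
    contact="privacy@example.gov",   # illustrative address
)
print(example)
```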
    The researchers hope the insights generated from their research will help inform the design of smart city initiatives and keep people front and center in all stages of a project, from brainstorming to deployment.
    “Communities that come together can actually change the fate of these projects,” Emami-Naeini says. “I think it’s really important to make sure that people’s voices are being heard, proactively and not reactively.”
    This work was supported by the U.S. National Science Foundation (CNS-1565252 and CNS-2114230), the University of Washington Tech Policy Lab (which receives support from the William and Flora Hewlett Foundation, the John D. and Catherine T. MacArthur Foundation, Microsoft, and the Pierre and Pamela Omidyar Fund at the Silicon Valley Community Foundation), and gifts from Google and Woven Planet.

  • How a horse whisperer can help engineers build better robots

    Humans and horses have enjoyed a strong working relationship for nearly 10,000 years — a partnership that transformed how food was produced, people were transported and even how wars were fought and won. Today, we look to horses for companionship, recreation and as teammates in competitive activities like racing, dressage and showing.
    Can these age-old interactions between people and their horses teach us something about building robots designed to improve our lives? Researchers with the University of Florida say yes.
    “There are no fundamental guiding principles for how to build an effective working relationship between robots and humans,” said Eakta Jain, an associate professor of computer and information science and engineering at UF’s Herbert Wertheim College of Engineering. “As we work to improve how humans interact with autonomous vehicles and other forms of AI, it occurred to me that we’ve done this before with horses. This relationship has existed for millennia but was never leveraged to provide insights for human-robot interaction.”
    Jain, who did her doctoral work at the Robotics Institute at Carnegie Mellon University, conducted a year of field work observing the special interactions among horses and humans at the UF Horse Teaching Unit in Gainesville, Florida. She will present her findings today at the ACM Conference on Human Factors in Computing Systems in Hamburg, Germany.
    Like horses did thousands of years before, robots are entering our lives and workplaces as companions and teammates. They vacuum our floors, help educate and entertain our children, and studies are showing that social robots can be effective therapy tools to help improve mental and physical health. Increasingly, robots are found in factories and warehouses, working collaboratively with human workers and sometimes even called co-bots.
    As a member of the UF Transportation Institute, Jain was leading the human factor subgroup that examines how humans should interact with autonomous vehicles, or AVs.

    “For the first time, cars and trucks can observe nearby vehicles and keep an appropriate distance from them as well as monitor the driver for signs of fatigue and attentiveness,” Jain said. “However, the horse has had these capabilities for a long time. I thought why not learn from our partnership with horses for transportation to help solve the problem of natural interaction between humans and AVs.”
    Looking at our history with animals to help shape our future with robots is not a new concept, though most studies have been inspired by the relationship humans have with dogs. Jain and her colleagues in the College of Engineering and UF Equine Sciences are the first to bring together engineering and robotics researchers with horse experts and trainers to conduct on-the-ground field studies with the animals.
    The multidisciplinary collaboration involved expertise in engineering, animal sciences and qualitative research methodologies, Jain explained. She first reached out to Joel McQuagge of UF’s equine behavior and management program, who oversees the UF Horse Teaching Unit. He hadn’t thought about this connection between horses and robots, but he provided Jain with full access, and she spent months observing classes. She interviewed and observed horse experts, including thoroughbred trainers and devoted horse owners. Christina Gardner-McCune, an associate professor in UF’s department of computer and information science and engineering, provided expertise in qualitative data analysis.
    Data collected through observations and thematic analyses resulted in findings that can be applied by human-robot interaction researchers and robot designers.
    “Some of the findings are concrete and easy to visualize, while others are more abstract,” she says. “For example, we learned that a horse speaks with its body. You can see its ears pointing to where something caught its attention. We could build in similar types of nonverbal expressions in our robots, like ears that point when there is a knock on the door or something visual in the car when there’s a pedestrian on that side of the street.”
    A more abstract and groundbreaking finding is the notion of respect. When a trainer first works with a horse, he looks for signs of respect from the horse for its human partner.

    “We don’t typically think about respect in the context of human-robot interactions,” Jain says. “What ways can a robot show you that it respects you? Can we design behaviors similar to what the horse uses? Will that make the human more willing to work with the robot?”
    Jain, originally from New Delhi, says she grew up with robots the way people grow up with animals. Her father is an engineer who made educational and industrial robots, and her mother was a computer science teacher who ran her school’s robotics club.
    “Robots were the subject of many dinner table conversations,” she says, “so I was exposed to human-robot interactions early.”
    However, during her yearlong study of the human-horse relationship, she learned how to ride a horse and says she hopes to one day own a horse.
    “At first, I thought I could learn by observing and talking to people,” she says. “There is no substitute for doing, though. I had to feel for myself how the horse-human partnership works. From the first time I got on a horse, I fell in love with them.”

  • Jellyfish-like robots could one day clean up the world’s oceans

    Most of the world is covered in oceans, which are unfortunately highly polluted. One strategy to combat the mounds of waste found in these very sensitive ecosystems — especially around coral reefs — is to deploy robots to handle the cleanup. However, existing underwater robots are mostly bulky, with rigid bodies unable to explore and sample in complex, unstructured environments, and noisy due to electrical motors or hydraulic pumps. For a more suitable design, scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) in Stuttgart looked to nature for inspiration. They developed a jellyfish-inspired, versatile, energy-efficient and nearly noise-free robot the size of a hand. Jellyfish-Bot is a collaboration between the Physical Intelligence and Robotic Materials departments at MPI-IS, and the paper "A Versatile Jellyfish-like Robotic Platform for Effective Underwater Propulsion and Manipulation" was published in Science Advances.
    To build the robot, the team used electrohydraulic actuators through which electricity flows. The actuators serve as artificial muscles which power the robot. Surrounding these muscles are air cushions as well as soft and rigid components which stabilize the robot and make it waterproof. This way, the high voltage running through the actuators cannot contact the surrounding water. A power supply periodically provides electricity through thin wires, causing the muscles to contract and expand. This allows the robot to swim gracefully and to create swirls underneath its body.
    “When a jellyfish swims upwards, it can trap objects along its path as it creates currents around its body. In this way, it can also collect nutrients. Our robot, too, circulates the water around it. This function is useful in collecting objects such as waste particles. It can then transport the litter to the surface, where it can later be recycled. It is also able to collect fragile biological samples such as fish eggs. Meanwhile, there is no negative impact on the surrounding environment. The interaction with aquatic species is gentle and nearly noise-free,” Tianlu Wang explains. He is a postdoc in the Physical Intelligence Department at MPI-IS and first author of the publication.
    His co-author Hyeong-Joon Joo from the Robotic Materials Department continues: “70% of marine litter is estimated to sink to the seabed. Plastics make up more than 60% of this litter, taking hundreds of years to degrade. Therefore, we saw an urgent need to develop a robot to manipulate objects such as litter and transport it upwards. We hope that underwater robots could one day assist in cleaning up our oceans.”
    Jellyfish-Bots are capable of moving and trapping objects without physical contact, operating either alone or with several in combination. Each robot works faster than other comparable inventions, reaching a speed of up to 6.1 cm/s. Moreover, Jellyfish-Bot only requires a low input power of around 100 mW. And it is safe for humans and fish should the polymer material insulating the robot one day be torn apart. Meanwhile, the noise from the robot cannot be distinguished from background levels. In this way Jellyfish-Bot interacts gently with its environment without disturbing it — much like its natural counterpart.
    The robot consists of several layers: some stiffen the robot, others serve to keep it afloat or insulate it. A further polymer layer functions as a floating skin. Electrically powered artificial muscles known as HASELs are embedded in the middle of the different layers. HASELs are plastic pouches filled with a liquid dielectric and partially covered by electrodes. Applying a high voltage across an electrode charges it positively, while the surrounding water is charged negatively. This generates a force between the positively charged electrode and the negatively charged water that pushes the oil inside the pouches back and forth, causing the pouches to contract and relax — resembling a real muscle. HASELs can sustain the high electrical stresses generated by the charged electrodes and are protected against water by an insulating layer. This is important, as HASEL muscles had never before been used to build an underwater robot.
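    As a rough order-of-magnitude sketch of that electrostatic driving force (the voltage, layer thickness, and permittivity below are illustrative assumptions, not values from the paper), the pressure squeezing the liquid dielectric can be estimated from the Maxwell stress p = ½·ε0·εr·(V/d)² for a simple parallel-plate approximation of the electrode/liquid stack:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def maxwell_pressure(voltage_v: float, gap_m: float, eps_r: float) -> float:
    """Electrostatic (Maxwell) pressure on the dielectric, treating the
    electrode/liquid stack as a simple parallel-plate capacitor."""
    e_field = voltage_v / gap_m          # electric field, V/m
    return 0.5 * EPS0 * eps_r * e_field ** 2

# Illustrative numbers: a few kilovolts across a ~50 micrometre dielectric layer.
print(f"~{maxwell_pressure(6e3, 50e-6, 3.0) / 1e3:.0f} kPa of actuation pressure")
```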
    The first step was to develop Jellyfish-Bot with a single electrode driving its six fingers, or arms. In the second step, the team divided that single electrode into separate groups that could be actuated independently.
    “We achieved grasping objects by making four of the arms function as a propeller, and the other two as a gripper. Or we actuated only a subset of the arms, in order to steer the robot in different directions. We also looked into how we can operate a collective of several robots. For instance, we took two robots and let them pick up a mask, which is very difficult for a single robot alone. Two robots can also cooperate in carrying heavy loads. However, at this point, our Jellyfish-Bot needs a wire. This is a drawback if we really want to use it one day in the ocean,” Hyeong-Joon Joo says.
    Perhaps wires powering robots will soon be a thing of the past. “We aim to develop wireless robots. Luckily, we have achieved the first step towards this goal. We have incorporated all the functional modules like the battery and wireless communication parts so as to enable future wireless manipulation,” Tianlu Wang continues. The team attached a buoyancy unit at the top of the robot and a battery and microcontroller to the bottom. They then took their invention for a swim in the pond of the Max Planck Stuttgart campus, and could successfully steer it along. So far, however, they could not direct the wireless robot to change course and swim the other way.

  • Creating a tsunami early warning system using artificial intelligence

    Tsunamis are incredibly destructive waves that can destroy coastal infrastructure and cause loss of life. Early warnings for such natural disasters are difficult because the risk of a tsunami is highly dependent on the features of the underwater earthquake that triggers it.
    In Physics of Fluids, published by AIP Publishing, researchers from the University of California, Los Angeles, and Cardiff University in the U.K. describe an early warning system that combines state-of-the-art acoustic technology with artificial intelligence to immediately classify earthquakes and determine potential tsunami risk.
    Underwater earthquakes can trigger tsunamis if a large amount of water is displaced, so determining the type of earthquake is critical to assessing the tsunami risk.
    “Tectonic events with a strong vertical slip element are more likely to raise or lower the water column compared to horizontal slip elements,” said co-author Bernabe Gomez. “Thus, knowing the slip type at the early stages of the assessment can reduce false alarms and enhance the reliability of the warning systems through independent cross-validation.”
    In these cases, time is of the essence, and relying on deep ocean wave buoys to measure water levels often leaves insufficient evacuation time. Instead, the researchers propose measuring the acoustic radiation (sound) produced by the earthquake, which carries information about the tectonic event and travels significantly faster than tsunami waves. Underwater microphones, called hydrophones, record the acoustic waves and monitor tectonic activity in real time.
    “Acoustic radiation travels through the water column much faster than tsunami waves. It carries information about the originating source and its pressure field can be recorded at distant locations, even thousands of kilometers away from the source. The derivation of analytical solutions for the pressure field is a key factor in the real-time analysis,” co-author Usama Kadri said.
    The computational model triangulates the source of the earthquake from the hydrophone recordings, and AI algorithms classify its slip type and magnitude. It then calculates important properties, such as effective length and width, uplift speed, and duration, which dictate the size of the tsunami.
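    As an illustration of the triangulation step only (synthetic hydrophone positions and arrival times; the slip classification and analytical pressure-field models are beyond this sketch), a source location and origin time can be recovered by least-squares fitting of acoustic arrival times. The sketch exploits the speed gap the article describes: sound in seawater travels at roughly 1,500 m/s, while a tsunami over a 4,000 m deep ocean moves at about √(g·h) ≈ 200 m/s.

```python
import numpy as np
from scipy.optimize import least_squares

C_SOUND = 1500.0                      # approx. speed of sound in seawater, m/s
hydrophones = np.array([              # synthetic (x, y) positions, metres
    [0.0, 0.0],
    [200_000.0, 0.0],
    [0.0, 250_000.0],
    [180_000.0, 220_000.0],
])

def residuals(params, phones, arrivals):
    """Difference between observed arrival times and those predicted for a
    candidate source position (x, y) and origin time t0."""
    x, y, t0 = params
    dist = np.hypot(phones[:, 0] - x, phones[:, 1] - y)
    return t0 + dist / C_SOUND - arrivals

# Synthetic "measured" arrivals for a source at (120 km, 90 km), origin time 0 s.
true_src = np.array([120_000.0, 90_000.0])
arrivals = np.hypot(*(hydrophones - true_src).T) / C_SOUND

fit = least_squares(residuals, x0=[50_000.0, 50_000.0, 0.0],
                    args=(hydrophones, arrivals))
print("estimated source (m):", fit.x[:2].round(1), "origin time (s):", round(fit.x[2], 3))
```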
    The authors tested their model with available hydrophone data and found that it almost instantaneously and successfully described the earthquake parameters with low computational demand. They are improving the model by factoring in more information to increase the accuracy of the tsunami characterization.
    Their work predicting tsunami risk is part of a larger project to enhance hazard warning systems. The tsunami classification is a back-end component of software that can improve the safety of offshore platforms and ships.

  • Scientists have full state of a quantum liquid down cold

    A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers an intricate understanding of a complex phenomenon that is foundational to quantum computing — a method that can perform certain calculations significantly more efficiently than conventional computing.
    “In an era of quantum computing it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which appears in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory — similar to those that describe the fundamental particles in our universe.”
    Sels adds that the breakthrough offers promise for technological advancement.
    “Quantum computing relies on the ability to generate entanglement between different subsystems, and that’s exactly what we can probe with our method,” he notes. “The ability to do such precise characterization could also lead to better quantum sensors — another application area of quantum technologies.”
    The research team, which included scientists from Vienna University of Technology, ETH Zurich, Free University of Berlin, and the Max-Planck Institute of Quantum Optics, performed a tomography of a quantum system — the reconstruction of a specific quantum state with the aim of seeking experimental evidence of a theory.
    The studied quantum system consisted of ultracold atoms — slow-moving atoms whose motion is easier to analyze because of their near-zero temperature — trapped on an atom chip.
    In their work, the scientists created two “copies” of this quantum system — cigar-shaped clouds of atoms that evolve over time without influencing each other. At different stages of this process, the team performed a series of experiments that revealed the two copies’ correlations.
    “By constructing an entire history of these correlations, we can infer what is the initial quantum state of the system and extract its properties,” explains Sels. “Initially, we have a very strongly coupled quantum liquid, which we split into two so that it evolves as two independent liquids, and then we recombine it to reveal the ripples that are in the liquid.
    “It’s like watching the ripples in a pond after throwing a rock in it and inferring the properties of the rock, such as its size, shape, and weight.”
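    The idea of reconstructing a state from the history of its correlations can be illustrated with a toy calculation (synthetic one-dimensional data, not the experiment): repeated noisy snapshots of a fluctuating field are enough to estimate its two-point correlation function.

```python
import numpy as np

rng = np.random.default_rng(0)
n_shots, n_points = 500, 64
z = np.linspace(0.0, 1.0, n_points)

# Toy fluctuating 1D field: a few random long-wavelength modes per "shot",
# plus measurement noise -- a crude stand-in for repeated interference images.
shots = np.zeros((n_shots, n_points))
for k in range(1, 4):
    shots += rng.normal(size=(n_shots, 1)) / k * np.sin(np.pi * k * z)
shots += 0.1 * rng.normal(size=shots.shape)

# Two-point correlation C(z, z') = <phi(z) phi(z')>, estimated over all shots.
C = shots.T @ shots / n_shots
print("estimated correlation between z=0.25 and z=0.75:",
      round(C[n_points // 4, 3 * n_points // 4], 3))
```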
    This research was supported by grants from the Air Force Office of Scientific Research (FA9550-21-1-0236) and the U.S. Army Research Office (W911NF-20-1-0163) as well as the Austrian Science Fund (FWF) and the German Research Foundation (DFG).

  • Researchers use AI to discover new planet outside solar system

    A University of Georgia research team has confirmed evidence of a previously unknown planet outside of our solar system, and they used machine learning tools to detect it.
    A recent study by the team showed that machine learning can correctly determine whether an exoplanet is present by looking at protoplanetary disks, the disks of gas around newly formed stars.
    The newly published findings represent a first step toward using machine learning to identify previously overlooked exoplanets.
    “We confirmed the planet using traditional techniques, but our models directed us to run those simulations and showed us exactly where the planet might be,” said Jason Terry, doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study.
    “When we applied our models to a set of older observations, they identified a disk that wasn’t known to have a planet despite having already been analyzed. Like previous discoveries, we ran simulations of the disk and found that a planet could re-create the observation.”
    According to Terry, the models suggested a planet’s presence through several images that strongly highlighted a particular region of the disk, which turned out to have the characteristic sign of a planet — an unusual deviation in the velocity of the gas near the planet.
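    A schematic of that kinematic signature (synthetic numbers only; the study works on simulated and observed disk data, not this toy): gas in a smooth disk should orbit at roughly the Keplerian speed √(GM/r), so a localized departure from that curve is the kind of anomaly that flags a possible planet.

```python
import numpy as np

G = 6.674e-11              # gravitational constant, SI units
M_STAR = 2.0e30            # illustrative ~1 solar-mass star, kg
AU = 1.496e11              # astronomical unit, m

def keplerian_speed(r_m):
    """Circular orbital speed expected for gas at radius r around the star."""
    return np.sqrt(G * M_STAR / r_m)

radii = np.linspace(10, 100, 200) * AU
v_obs = keplerian_speed(radii)
v_obs[88:98] *= 1.05       # inject a ~5% local deviation near 50 au as a stand-in
                           # for a planet's perturbation of the gas velocity

residual = v_obs / keplerian_speed(radii) - 1.0
flagged = radii[np.abs(residual) > 0.02] / AU
print("radii (au) with >2% velocity deviation:", flagged.round(1))
```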
    “This is an incredibly exciting proof of concept. We knew from our previous work that we could use machine learning to find known forming exoplanets,” said Cassandra Hall, assistant professor of computational astrophysics and principal investigator of the Exoplanet and Planet Formation Research Group at UGA. “Now, we know for sure that we can use it to make brand new discoveries.”
    The discovery highlights how machine learning can enhance scientists’ work, serving as an added tool to improve researchers’ accuracy and use their time more efficiently in an endeavor as vast as investigating deep space.
    The models were able to detect a signal in data that people had already analyzed; they found something that previously had gone undetected.
    “This demonstrates that our models — and machine learning in general — have the ability to quickly and accurately identify important information that people can miss. This has the potential to dramatically speed up analysis and subsequent theoretical insights,” Terry said. “It only took about an hour to analyze that entire catalog and find strong evidence for a new planet in a specific spot, so we think there will be an important place for these types of techniques as our datasets get even larger.”