More stories


    Swarming cicadas, stock traders, and the wisdom of the crowd

    Pick almost any location in the eastern United States — say, Columbus, Ohio. Every 13 or 17 years, as the soil warms in springtime, vast swarms of cicadas emerge from their underground burrows, sing their deafening song, take flight and mate, producing offspring for the next cycle.
    This noisy phenomenon repeats all over the eastern and southeastern US as 17 distinct broods emerge in staggered years. In spring 2024, billions of cicadas are expected as two different broods — one that appears every 13 years and another that appears every 17 years — emerge simultaneously.
    Previous research has suggested that cicadas emerge once the soil temperature reaches 18°C, but even within a small geographical area, differences in sun exposure, foliage cover or humidity can lead to variations in temperature.
    Now, in a paper published in the journal Physical Review E, researchers from the University of Cambridge have discovered how such synchronous cicada swarms can emerge despite these temperature differences.
    The researchers developed a mathematical model for decision-making in an environment with variations in temperature and found that communication between cicada nymphs allows the group to come to a consensus about the local average temperature that then leads to large-scale swarms. The model is closely related to one that has been used to describe ‘avalanches’ in decision-making like those among stock market traders, leading to crashes.
    Mathematicians have been captivated by the appearance of 17- and 13-year cycles in various species of cicadas, and have previously developed mathematical models that showed how the appearance of such large prime numbers is a consequence of evolutionary pressures to avoid predation. However, the mechanism by which swarms emerge coherently in a given year has not been understood.
    In developing their model, the Cambridge team was inspired by previous research on decision-making that represents each member of a group by a ‘spin’ like that in a magnet, but instead of pointing up or down, the two states represent the decision to ‘remain’ or ‘emerge’.

    The local temperature experienced by the cicadas is then like a magnetic field that tends to align the spins and varies slowly from place to place on the scale of hundreds of metres, from sunny hilltops to shaded valleys in a forest. Communication between nearby nymphs is represented by an interaction between the spins that leads to local agreement of neighbours.
    The researchers showed that in the presence of such interactions the swarms are large and space-filling, involving every member of the population in a range of local temperature environments, unlike the case without communication in which every nymph is on its own, responding to every subtle variation in microclimate.
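    The spin picture above can be sketched as a toy simulation. The lattice, parameters and update rule below are illustrative assumptions, not taken from the paper: each site is a nymph whose spin is ‘remain’ (−1) or ‘emerge’ (+1), a smooth spatial field stands in for microclimate temperature, and a coupling term stands in for communication between neighbours.

```python
import numpy as np

def simulate_emergence(coupling, n=16, steps=20000, seed=0):
    """Toy 'remain/emerge' spin model on an n x n lattice.

    Each site is a nymph: spin -1 means 'remain', +1 means 'emerge'.
    h is a slowly varying stand-in for microclimate temperature relative
    to the emergence threshold; `coupling` stands in for communication
    between neighbouring nymphs. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))
    # Smooth 'temperature' gradient straddling the threshold (h = 0).
    h = np.tile(np.linspace(-0.5, 0.5, n), (n, 1))
    beta = 2.0  # inverse noise level (illustrative)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        nbr = s[(i + 1) % n, j] + s[(i - 1) % n, j] \
            + s[i, (j + 1) % n] + s[i, (j - 1) % n]
        local = h[i, j] + coupling * nbr
        # Glauber update: flip toward the local field, with thermal noise.
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local))
        s[i, j] = 1 if rng.random() < p_up else -1
    return s

# Without communication each nymph tracks only its own microclimate;
# with communication, neighbours reach local agreement.
uncoupled = simulate_emergence(coupling=0.0)
coupled = simulate_emergence(coupling=1.0)
```

    Measuring how often neighbouring sites agree in the two runs reproduces the qualitative contrast described above: the coupled population makes a spatially coherent decision, while the uncoupled one fragments along every microclimate variation.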
    The research was carried out by Professor Raymond E Goldstein, the Alan Turing Professor of Complex Physical Systems in the Department of Applied Mathematics and Theoretical Physics (DAMTP), Professor Robert L Jack of DAMTP and the Yusuf Hamied Department of Chemistry, and Dr Adriana I Pesci, a Senior Research Associate in DAMTP.
    “As an applied mathematician, there is nothing more interesting than finding a model capable of explaining the behaviour of living beings, even in the simplest of cases,” said Pesci.
    The researchers say that while their model does not require any particular means of communication between underground nymphs, acoustical signalling is a likely candidate, given the ear-splitting sounds that the swarms make once they emerge from underground.
    The researchers hope that their conjecture regarding the role of communication will stimulate field research to test the hypothesis.
    “If our conjecture that communication between nymphs plays a role in swarm emergence is confirmed, it would provide a striking example of how Darwinian evolution can act for the benefit of the group, not just the individual,” said Goldstein.
    This work was supported in part by the Complex Physical Systems Fund.


    Engineers develop hack to make automotive radar ‘hallucinate’

    A black sedan cruises silently down a quiet suburban road, the driver humming Christmas carols while the car’s autopilot handles the driving. Suddenly, red flashing lights and audible warnings blare to life, snapping the driver from their peaceful reprieve. They look at the dashboard screen and see the outline of a car speeding toward them for a head-on collision, yet the headlights reveal nothing ahead through the windshield.
    Despite the incongruity, the car’s autopilot grabs control and swerves into a ditch. Exasperated, the driver looks around the vicinity, finding no other vehicles as the incoming danger disappears from the screen. Moments later, the real threat emerges — a group of hijackers jogging toward the immobilized vehicle.
    This scene seems destined to become a common plot point in Hollywood films for decades to come. But due to the complexities of modern automotive detection systems, it remains firmly in the realm of science fiction. At least for the moment.
    Engineers at Duke University, led by Miroslav Pajic, the Dickinson Family Associate Professor of Electrical and Computer Engineering, and Tingjun Chen, assistant professor of electrical and computer engineering, have now demonstrated a system they’ve dubbed “MadRadar” for fooling automotive radar sensors into believing almost anything is possible.
    The technology can hide the approach of an existing car, create a phantom car where none exists or even trick the radar into thinking a real car has quickly deviated from its actual course. And it can achieve this feat in the blink of an eye without having any prior knowledge about the specific settings of the victim’s radar, making it the most troublesome threat to radar security to date.
    The researchers say MadRadar shows that manufacturers should immediately begin taking steps to better safeguard their products.
    The research will be presented at the 2024 Network and Distributed System Security Symposium, taking place February 26 to March 1 in San Diego, California.

    “Without knowing much about the targeted car’s radar system, we can make a fake vehicle appear out of nowhere or make an actual vehicle disappear in real-world experiments,” Pajic said. “We’re not building these systems to hurt anyone, we’re demonstrating the existing problems with current radar systems to show that we need to fundamentally change how we design them.”
    In modern cars that feature assistive and autonomous driving systems, radar is typically used to detect moving vehicles in front of and around the vehicle. It also helps to augment visual and laser-based systems to detect vehicles moving in front of or behind the car.
    Because there are now so many different cars using radar on a typical highway, it is unlikely that any two vehicles will have the exact same operating parameters, even if they share a make and model. For example, they might use slightly different operating frequencies or take measurements at slightly different intervals. Because of this, previous demonstrations of radar-spoofing systems have needed to know the specific parameters being used.
    “Think of it like trying to stop someone from listening to the radio,” explained Pajic. “To block the signal or to hijack it with your own broadcast, you’d need to know what station they were listening to first.”
    In the MadRadar demonstration, the team from Duke showed off the capabilities of a radar-spoofing system they’ve built that can accurately detect a car’s radar parameters in less than a quarter of a second. Once they’ve been discovered, the system can send out its own radar signals to fool the target’s radar.
    In one demonstration, MadRadar sends signals to the target car to make it perceive another car where none actually exists. This involves modifying the signal’s characteristics based on time and velocity in such a way that it mimics what a real contact would look like.

    In a second and much more complicated example, it fools the target’s radar into thinking the opposite — that there is no passing car when one actually does exist. It achieves this by delicately adding masking signals around the car’s true location to create a sort of bright spot that confuses the radar system.
    “You have to be judicious about adding signals to the radar system, because if you simply flooded the entire field of vision, it’d immediately know something was wrong,” said David Hunt, a PhD student working in Pajic’s lab.
    In a third kind of attack, the researchers mix the two approaches to make it seem as though an existing car has suddenly changed course. The researchers recommend that carmakers try randomizing a radar system’s operating parameters over time and adding safeguards to the processing algorithms to spot similar attacks.
    “Imagine adaptive cruise control, which uses radar, believing that the car in front of me was speeding up, causing your own car to speed up, when in reality it wasn’t changing speed at all,” said Pajic. “If this were done at night, by the time your car’s cameras figured it out you’d be in trouble.”
    Each of these attack demonstrations, the researchers emphasize, was carried out on real-world radar systems in actual cars moving at roadway speeds. It’s an impressive feat, given that if the spoofing radar signals are even a microsecond off the mark, the fake datapoint would be misplaced by the length of a football field.
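    The football-field figure can be sanity-checked with a short calculation; the factor of two comes from the round trip of the radar pulse.

```python
# Back-of-the-envelope check on the timing claim: a radar echo travels
# out and back, so a timing error dt shifts the apparent range by c*dt/2.
C = 299_792_458.0  # speed of light in m/s

def range_error_m(timing_error_s: float) -> float:
    return C * timing_error_s / 2.0

# A one-microsecond error displaces the fake datapoint by ~150 m,
# roughly the length of a football field including end zones.
offset = range_error_m(1e-6)
```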
    “These lessons go far beyond radar systems in cars as well,” Pajic said. “If you want to build drones that can explore dark environments, like in search and rescue or reconnaissance operations, that don’t cost thousands of dollars, radar is the way to go.”
    This research was supported by the Office of Naval Research (N00014-23-1-2206, N00014-20-1-2745), the Air Force Office of Scientific Research (FA9550-19-1-0169), the National Science Foundation (CNS-1652544, CNS-2211944), and the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks (Athena) (CNS-2112562).


    Scientists make breakthrough in quantum materials research

    Researchers at the University of California, Irvine and Los Alamos National Laboratory, publishing in the latest issue of Nature Communications, describe the discovery of a new method that transforms everyday materials like glass into materials scientists can use to make quantum computers.
    “The materials we made are substances that exhibit unique electrical or quantum properties because of their specific atomic shapes or structures,” said Luis A. Jauregui, professor of physics & astronomy at UCI and lead author of the new paper. “Imagine if we could transform glass, typically considered an insulating material, and convert it into efficient conductors akin to copper. That’s what we’ve done.”
    Conventional computers use silicon as a conductor, but silicon has limits. Quantum computers stand to help bypass these limits, and methods like those described in the new study will help quantum computers become an everyday reality.
    “This experiment is based on the unique capabilities that we have at UCI for growing high-quality quantum materials. How can we transform these materials that are poor conductors into good conductors?” said Jauregui, who’s also a member of UCI’s Eddleman Quantum Institute. “That’s what we’ve done in this paper. We’ve been applying new techniques to these materials, and we’ve transformed them to being good conductors.”
    The key, Jauregui explained, was applying the right kind of strain to materials at the atomic scale. To do this, the team designed a special apparatus called a “bending station” at the machine shop in the UCI School of Physical Sciences that allowed them to apply large strain to change the atomic structure of a material called hafnium pentatelluride from a “trivial” material into a material fit for a quantum computer.
    “To create such materials, we need to ‘poke holes’ in the atomic structure,” said Jauregui. “Strain allows us to do that.”
    “You can also turn the atomic structure change on or off by controlling the strain, which is useful if you want to create an on-off switch for the material in a quantum computer in the future,” said Jinyu Liu, who is the first author of the paper and a postdoctoral scholar working with Jauregui.

    “I am pleased by the way theoretical simulations offer profound insights into experimental observations, thereby accelerating the discovery of methods for controlling the quantum states of novel materials,” said co-author Ruqian Wu, professor of physics and Associate Director of the UCI Center for Complex and Active Materials — a National Science Foundation Materials Research Science and Engineering Center (MRSEC). “This underscores the success of collaborative efforts involving diverse expertise in frontier research.”
    “I’m excited that our team was able to show that these elusive and much-sought-after material states can be made,” said Michael Pettes, study co-author and scientist with the Center for Integrated Nanotechnologies at Los Alamos National Laboratory. “This is promising for the development of quantum devices, and the methodology we demonstrate is compatible for experimentation on other quantum materials as well.”
    Right now, quantum computers only exist in a few places, such as in the offices of companies like IBM, Google and Rigetti. “Google, IBM and many other companies are looking for effective quantum computers that we can use in our daily lives,” said Jauregui. “Our hope is that this new research helps make the promise of quantum computers more of a reality.”
    Funding came from the UCI-MRSEC, an NSF CAREER grant to Jauregui, and Los Alamos National Laboratory’s Laboratory Directed Research and Development program funds.


    Paper calls for patient-first regulation of AI in healthcare

    Ever wonder if the latest and greatest artificial intelligence (AI) tool you read about in the morning paper is going to save your life? A new study published in JAMA, led by John W. Ayers, Ph.D., of the Qualcomm Institute within the University of California San Diego, finds that question can be difficult to answer, since AI products in healthcare do not universally undergo any externally evaluated approval process assessing how they might benefit patient outcomes before coming to market.
    The research team evaluated the recent White House Executive Order that instructed the Department of Health and Human Services to develop new AI-specific regulatory strategies addressing equity, safety, privacy, and quality for AI in healthcare before April 27, 2024. However, team members were surprised to find the order did not once mention patient outcomes, the standard metric by which healthcare products are judged before being allowed to access the healthcare marketplace.
    “The goal of medicine is to save lives,” said Davey Smith, M.D., head of the Division of Infectious Disease and Global Public Health at UC San Diego School of Medicine, co-director of the university’s Altman Clinical and Translational Research Institute, and study senior author. “AI tools should prove clinically significant improvements in patient outcomes before they are widely adopted.”
    According to the team, AI-powered early warning systems for sepsis, a life-threatening acute illness among hospitalized patients that affects 1.7 million Americans each year, demonstrate the consequences of inadequate prioritization of patient outcomes in regulations. A third-party evaluation of the most widely adopted AI sepsis prediction model revealed that 67% of patients who developed sepsis were not identified by the system. Would hospital administrators have chosen this sepsis prediction system if trials assessing patient outcomes data were mandated, the team wondered, considering the array of available early warning systems for sepsis?
    “We are calling for a revision to the White House Executive Order that prioritizes patient outcomes when regulating AI products,” added John W. Ayers, Ph.D., who is deputy director of informatics in Altman Clinical and Translational Research Institute in addition to his Qualcomm Institute affiliation. “Similar to pharmaceutical products, AI tools that impact patient care should be evaluated by federal agencies for how they improve patients’ feeling, function, and survival.”
    The team points to its 2023 study in JAMA Internal Medicine on using AI-powered chatbots to respond to patient messages as an example of what patient outcome-centric regulations can achieve. “A study comparing standard care versus standard care enhanced by AI conversational agents found differences in downstream care utilization in some patient populations, such as heart failure patients,” said Nimit Desai, B.S., who is a research affiliate at the Qualcomm Institute, UC San Diego School of Medicine student, and study coauthor. “But studies like this don’t just happen unless regulators appropriately incentivize them. With a patient outcomes-centric approach, AI for patient messaging and all other clinical applications can truly enhance people’s lives.”
    The team recognizes that its proposed regulatory strategy can be a significant lift for AI and healthcare industry partners and may not be necessary for every flavor of AI use case in healthcare. However, the researchers say, excluding patient outcomes-centric rules in the White House Executive Order is a serious omission.


    Bringing together real-world sensors and VR to improve building maintenance

    A new system that brings together real-world sensing and virtual reality would make it easier for building maintenance personnel to identify and fix issues in commercial buildings that are in operation. The system was developed by computer scientists at the University of California San Diego and Carnegie Mellon University.
    The system, dubbed BRICK, consists of a handheld device equipped with a suite of sensors to monitor temperature, CO2 and airflow. It also includes a virtual reality environment that has access to the sensor data and metadata for a specific building and is connected to the building’s electronic control system.
    When an issue is reported in a specific location, a building manager can go on-site with the device and quickly scan the space with the Lidar tool on their smartphone, creating a virtual reality version of the space. The scanning can also occur ahead of time. Once they open this mixed reality recreation of the space on a smartphone or laptop, building managers can see the locations of sensors, along with the data gathered from the handheld device, overlaid onto that mixed reality environment.
    The goal is to allow building managers to quickly identify issues by inspecting hardware and gathering and logging relevant data.
    “Modern buildings are complex arrangements of multiple systems from climate control, lighting and security to occupant management. BRICK enables their efficient operation, much like a modern computer system,” said Rajesh K. Gupta, one of the paper’s senior authors, director of the UC San Diego Halicioglu Data Science Institute and a professor in the UC San Diego Department of Computer Science and Engineering.
    Currently, when building managers receive reports of a problem, they first have to consult the building management database for that specific location. But the system doesn’t tell them where the sensors and hardware are located exactly in that space. So managers have to go to the location, gather more data with cumbersome sensors, then compare that data against the information in the building management system and try to deduce what the issue is. It’s also difficult to log the data gathered at various spatial locations in a precise way.
    By contrast, with BRICK, the building manager can directly go to the location equipped with a handheld device and a laptop or smartphone. They will immediately have access on location to all the building management system data, the location of the sensors and the data from the handheld device all overlapping in one mixed reality environment. Using this system, the operators can also detect faults in the building equipment from stuck air-control valves to poorly operating handling systems.

    In the future, researchers hope to find CO2, temperature and airflow sensors that can directly connect to a smartphone, to enable occupants to take part in managing local environments as well as to simplify building operations.
    A team at Carnegie Mellon built the handheld device. Xiaohan Fu, a computer science Ph.D. student in Gupta’s research group, built the backend and VR components, which build on the group’s earlier work on the BRICK metadata schema, now adopted by many commercial vendors.
    Ensuring that the location used in the VR environment was accurate was a major challenge. GPS is only accurate to a radius of about a meter; in this case, the system needs to be accurate to within a few inches. The researchers’ solution was to post a few AprilTags, visual markers similar to QR codes, in every room; the handheld device’s camera reads them and recalibrates the system to the correct location.
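    The recalibration step can be sketched as a change of coordinate frames: a tag whose pose in the building frame is known, once seen by the camera, pins down the device’s pose. The function names below are hypothetical, and the example assumes the tag detector returns the tag’s pose in camera coordinates.

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def device_pose_in_building(tag_in_building: np.ndarray,
                            tag_in_camera: np.ndarray) -> np.ndarray:
    """Recalibration: camera pose in the building frame.

    building<-camera = (building<-tag) @ inverse(camera<-tag)
    """
    return tag_in_building @ np.linalg.inv(tag_in_camera)

# Example: a tag fixed 5 m along x in the room, seen 2 m straight ahead
# of the camera (identity rotations for simplicity).
tag_in_building = pose_matrix(np.eye(3), np.array([5.0, 0.0, 0.0]))
tag_in_camera = pose_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))
camera_in_building = device_pose_in_building(tag_in_building, tag_in_camera)
```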
    “It’s an intricate system,” Fu said. “The mixed reality itself is not easy to build. From a software standpoint, connecting the building management system, where hardware, sensors and actuators are controlled, was a complex task that requires safety and security guarantees in a commercial environment. Our system architecture enables us to do it in an interactive and programmable way.”
    The team presented the work at the BuildSys 2023 conference, held Nov. 15 and 16 in Istanbul, Turkey.
    The work was sponsored by the CONIX Research Center, one of the six centers in JUMP, a Semiconductor Research Corporation program sponsored by DARPA.


    Machine learning guides carbon nanotechnology

    Carbon nanostructures could become easier to design and synthesize thanks to a machine learning method that predicts how they grow on metal surfaces. The new approach, developed by researchers at Japan’s Tohoku University and China’s Shanghai Jiao Tong University, will make it easier to exploit the unique chemical versatility of carbon nanotechnology. The method was published in the journal Nature Communications.
    The growth of carbon nanostructures on a variety of surfaces, including as atomically thin films, has been widely studied, but little is known about the dynamics and atomic-level factors governing the quality of the resulting materials. “Our work addresses a crucial challenge for realizing the potential of carbon nanostructures in electronics or energy processing devices,” says Hao Li of the Tohoku University team.
    The wide range of possible surfaces and the sensitivity of the process to several variables make direct experimental investigation challenging. The researchers therefore turned to machine learning simulations as a more effective way to explore these systems.
    With machine learning, various theoretical models can be combined with data from chemistry experiments to predict the dynamics of carbon crystalline growth and determine how it can be controlled to achieve specific results. The simulation program explores strategies and identifies which ones work and which don’t, without the need for humans to guide every step of the process.
    The researchers tested this approach by investigating simulations of the growth of graphene, a form of carbon, on a copper surface. After establishing the basic framework, they showed how their approach could also be applied to other metallic surfaces, such as titanium, chromium and copper contaminated with oxygen.
    The distribution of electrons around the nuclei of atoms in different forms of graphene crystals can vary. These subtle differences in atomic structure and electron arrangement affect the overall chemical and electrochemical properties of the material. The machine learning approach can test how these differences affect the diffusion of individual atoms and bonded atoms and the formation of carbon chains, arches and ring structures.
    The team validated the results of the simulations through experiments and found that they closely matched. “Overall, our work provides a practical and efficient method for designing metallic or alloy substrates to achieve desired carbon nanostructures and explore further opportunities,” Li says.
    He adds that future work will build on this to investigate topics such as the interfaces between solids and liquids in advanced catalysts and the chemical properties of materials used for processing and storing energy.


    Tracking unconventional superconductivity

    At low enough temperatures, certain metals lose their electrical resistance and conduct electricity without loss. This effect, superconductivity, has been known for more than a hundred years and is well understood for so-called conventional superconductors. Unconventional superconductors, however, are a more recent discovery, and it is not yet clear how they work. A team from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), together with colleagues from the French research institution CEA (Commissariat à l’énergie atomique et aux énergies alternatives), from Tohoku University in Japan, and from the Max Planck Institute for Chemical Physics of Solids in Dresden, has now gained new insights. The researchers report their findings in the journal Nature Communications. They were able to explain why a new material remains superconducting even at extremely high magnetic fields, a property missing in conventional superconductors that could enable previously inconceivable technological applications.
    “Uranium ditelluride, or UTe2 for short, is a high-flyer among superconducting materials,” says Dr. Toni Helm from the Dresden High Magnetic Field Laboratory (HLD) at HZDR. “As discovered in 2019, the compound conducts electricity without loss, however, in a different way than conventional superconductors do.” Since then, research groups around the world have become interested in the material. This includes Helm’s team, which has come a step closer to understanding the material.
    “To fully appreciate the hype surrounding the material, we need to take a closer look at superconductivity,” explains the physicist. “This phenomenon results from the movement of electrons in the material. Whenever they collide with atoms, they lose energy in the form of heat. This manifests itself as electrical resistance. Electrons can avoid this by arranging themselves in pair formations, so-called Cooper pairs.” This is when two electrons combine at low temperatures to move through a solid without friction. They then make use of the atomic vibrations around them as a kind of wave on which they can surf without losing energy. These atomic vibrations explain conventional superconductivity.
    “For some years now, however, superconductors have also been known in which Cooper pairs are formed by effects that are not yet fully understood,” says the physicist. One possible form of unconventional superconductivity is spin-triplet superconductivity. It is believed to make use of magnetic fluctuations. “There are also metals in which the conduction electrons come together collectively,” explains Helm. “Together, they can shield the magnetism of the material, behaving as a single particle with — for electrons — an extremely high mass.” Such superconducting materials are known as heavy-fermion superconductors. UTe2, therefore, could be both a spin-triplet and a heavy-fermion superconductor, as current experiments suggest. On top of all that, it is the heavyweight world champion: To date, no other heavy-fermion superconductor is known that is still superconducting at similar or higher magnetic fields. This too was confirmed by the present study.
    Extremely robust against magnetic fields
    Superconductivity depends on two factors: the critical transition temperature and the critical magnetic field. If the temperature falls below the critical transition temperature, the resistance drops to zero and the material becomes superconducting. External magnetic fields also influence superconductivity. If these exceed a critical value, the effect collapses. “Physicists have a rule of thumb for this,” reports Helm: “In many conventional superconductors, the value of the transition temperature in Kelvin is roughly one to two times the value of the critical magnetic-field strength in tesla. In spin-triplet superconductors, this ratio is often much higher.” With their studies on the heavyweight UTe2, the researchers have now been able to raise the bar even higher: At a transition temperature of 1.6 kelvin (-271.55°C), the critical magnetic-field strength reaches 73 tesla, setting the ratio at 45 — a record.
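    The record ratio quoted above is easy to check against the rule of thumb:

```python
# Critical magnetic field (tesla) divided by transition temperature (kelvin).
def field_to_temperature_ratio(b_c_tesla: float, t_c_kelvin: float) -> float:
    return b_c_tesla / t_c_kelvin

# Conventional superconductors land at roughly 1-2 on this measure;
# UTe2 in this study, 73 T at 1.6 K, gives about 45.6.
ute2_ratio = field_to_temperature_ratio(73.0, 1.6)
```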
    “Until now, heavy-fermion superconductors were of little interest for technical applications,” explains the physicist. “They have a very low transition temperature and the effort required to cool them is comparatively high.” Nevertheless, their insensitivity to external magnetic fields could compensate for this shortcoming. This is because lossless current transport is mainly used today in superconducting magnets, for example in magnetic-resonance-imaging (MRI) scanners. However, the magnetic fields also influence the superconductor itself. A material that can withstand very high magnetic fields and still conducts electricity without loss would represent a major step forward.
    Special treatment for a demanding material
    “Of course, UTe2 cannot be used to make leads for a superconducting electromagnet,” says Helm. “Firstly, the material’s properties make it unsuitable for this endeavor, and secondly, it is radioactive. But it is perfectly suited for the exploration of the physics behind spin-triplet superconductivity.” Based on their experiments, the researchers developed a model that could serve as an explanation for superconductivity with extremely high stability against magnetic fields. To do this, they worked on samples with thicknesses of a few micrometers — only a fraction of the thickness of a human hair (approximately 70 micrometers). The radioactive radiation emitted by the samples, therefore, remains much lower than that of the natural background.
    In order to obtain and shape such a tiny sample, Helm used a high-precision ion beam with a diameter of just a few nanometers as a cutting tool. UTe2 is an air-sensitive material, so Helm carries out the sample preparation in a vacuum and seals the samples in epoxy glue afterwards. “For the final proof that our material is a spin-triplet superconductor, we would have to examine it spectroscopically while it is exposed to strong magnetic fields. However, current spectroscopy methods still struggle at magnetic fields above 40 tesla. Alongside other teams, we are also working on developing novel techniques. Eventually, this will enable us to provide definitive proof,” says Helm confidently.


    AI-powered app can detect poison ivy

    Poison ivy ranks among the most medically problematic plants. Up to 50 million people worldwide suffer annually from rashes caused by contact with the plant, a climbing, woody vine native to the United States, Canada, Mexico, Bermuda, the Western Bahamas and several areas in Asia.
    It’s found on farms, in woods, landscapes, fields, hiking trails and other open spaces. So, if you go to those places, you’re susceptible to irritation caused by poison ivy, which can lead to reactions that require medical attention. Worse, most people don’t know poison ivy when they see it.
    To find poison ivy before it finds you, University of Florida scientists have published a new study in which they used artificial intelligence to confirm that an app can identify poison ivy.
    Nathan Boyd, a professor of horticultural sciences at the UF/IFAS Gulf Coast Research and Education Center near Tampa, led the research. Renato Herrig, a post-doctoral researcher in Boyd’s lab, designed the app.
    “We were the first to do this, and it was designed as a tool for hikers or others working outdoors,” Boyd said. “The app uses a camera to identify in real-time if poison ivy is present and provides you with a measure of certainty for the detection. It also functions even if you don’t have connectivity to the internet.”
    The next step is to make the app commercially available, and there’s no timetable for that yet, Boyd said.
    For the study, researchers collected thousands of images of poison ivy from five locations: Alderman’s Ford Conservation Park and Hillsborough River State Park, both in Florida; Eufaula National Wildlife Refuge in Alabama; York River State Park in Virginia; and Fall Creek Falls State Park in Tennessee.
    They labeled images, and in each image, scientists put boxes around the leaves and stems of the plant. The boxed images were critical because poison ivy has a unique leaf arrangement and shape. Scientists use those characteristics to identify the plant.
    They then ran the images through AI programs and taught a computer to recognize which plants are poison ivy. They also included images of plants that are not poison ivy or plants that look like poison ivy to be certain the computer learns to distinguish them.
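    The real-time alert described above can be sketched as a filtering step over a detector’s output. The `Detection` structure, the labels and the threshold below are hypothetical illustrations, not the app’s actual code; the idea is simply that the app reports the strongest credible poison-ivy hit as its measure of certainty.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    label: str         # e.g. "poison_ivy" or "look_alike" (hypothetical labels)
    confidence: float  # model score in [0, 1]
    box: tuple         # (x1, y1, x2, y2) in pixels

def poison_ivy_alert(detections: List[Detection],
                     threshold: float = 0.5) -> Optional[float]:
    """Return the highest confidence of a poison-ivy detection above
    the threshold, or None if nothing credible was found."""
    scores = [d.confidence for d in detections
              if d.label == "poison_ivy" and d.confidence >= threshold]
    return max(scores) if scores else None

# One confident poison-ivy hit and one look-alike that is ignored:
frame = [Detection("poison_ivy", 0.91, (120, 40, 310, 260)),
         Detection("look_alike", 0.88, (400, 80, 520, 300))]
print(poison_ivy_alert(frame))  # 0.91
```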
    “We believe that by integrating an object-detection algorithm, public health and plant science, our research can encourage and support further investigations to understand poison ivy distribution and minimize health concerns,” Boyd said. In their future work, UF/IFAS researchers hope to expand the use of the app to identify more noxious plants.