More stories

  • Researchers focus AI on finding exoplanets

    New research from the University of Georgia reveals that artificial intelligence can be used to find planets outside of our solar system. The recent study demonstrated that machine learning can be used to find exoplanets, a finding that could reshape how scientists detect and identify new planets far from Earth.
    “One of the novel things about this is analyzing environments where planets are still forming,” said Jason Terry, doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study. “Machine learning has rarely been applied to the type of data we’re using before, specifically for looking at systems that are still actively forming planets.”
    The first exoplanet was found in 1992, and though more than 5,000 are known to exist, those have been among the easiest for scientists to find. Exoplanets at the formation stage are difficult to see for two primary reasons: they are very far away, often hundreds of light-years from Earth, and the discs where they form are very thick, thicker than the distance from the Earth to the sun. Data suggests the planets tend to sit in the middle of these discs, leaving a signature in the dust and gases they kick up.
    The research showed that artificial intelligence can help scientists overcome these difficulties.
    “This is a very exciting proof of concept,” said Cassandra Hall, assistant professor of astrophysics, principal investigator of the Exoplanet and Planet Formation Research Group, and co-author on the study. “The power here is that we used exclusively synthetic telescope data generated by computer simulations to train this AI, and then applied it to real telescope data. This has never been done before in our field, and paves the way for a deluge of discoveries as James Webb Telescope data rolls in.”
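    As a rough illustration of the sim-to-real approach Hall describes, the sketch below trains a small convolutional classifier on simulated disc images and then applies it to a real observation. The architecture, image size, and the placeholder tensors standing in for simulated and real data are assumptions made for illustration; this is not the model used in the study.

```python
# Minimal sim-to-real sketch: train a small CNN on simulated disc images
# labelled "planet present" / "no planet", then apply it to real observations.
# The architecture, image size, and data below are illustrative placeholders,
# not the model or data used in the UGA study.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class DiscClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: planet signature present?
        )

    def forward(self, x):
        return self.net(x)

# Stand-in for synthetic images rendered from disc-formation simulations.
sim_images = torch.randn(256, 1, 64, 64)
sim_labels = torch.randint(0, 2, (256, 1)).float()
loader = DataLoader(TensorDataset(sim_images, sim_labels), batch_size=32, shuffle=True)

model = DiscClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):  # training uses only the simulated data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Apply the simulation-trained model to a "real" telescope image (here just a
# random tensor standing in for a calibrated observation).
real_image = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    probability = torch.sigmoid(model(real_image)).item()
print(f"Estimated probability of a planet signature: {probability:.2f}")
```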
    The James Webb Space Telescope, launched by NASA in 2021, has inaugurated a new level of infrared astronomy, bringing stunning new images and reams of data for scientists to analyze. It is just the latest step in the agency’s quest to find exoplanets, which are scattered unevenly across the galaxy. The Nancy Grace Roman Space Telescope, a 2.4-meter survey telescope scheduled to launch in 2027 to study dark energy and exoplanets, will be the next major expansion in capability, and in the volume of data delivered, for combing the universe for life.

    The Webb telescope gives scientists the ability to look at exoplanetary systems at extremely high brightness and resolution, and the forming environments themselves are a subject of great interest because they determine the resulting planetary system.
    “The potential for good data is exploding, so it’s a very exciting time for the field,” Terry said.
    New analytical tools are essential
    Next-generation analytical tools are urgently needed to keep pace with this high-quality data, so scientists can spend more time on theoretical interpretation rather than meticulously combing through the data in search of tiny signatures.
    “In a sense, we’ve sort of just made a better person,” Terry said. “To a large extent the way we analyze this data is you have dozens, hundreds of images for a specific disc and you just look through and ask ‘is that a wiggle?’ then run a dozen simulations to see if that’s a wiggle and … it’s easy to overlook them — they’re really tiny, and it depends on the cleaning, and so this method is one, really fast, and two, its accuracy gets planets that humans would miss.”
    Terry says this is something machine learning can already accomplish: it improves on human capacity, saving time and money and more efficiently guiding scientific time, investments, and new proposals.
    “There remains, within science and particularly astronomy in general, skepticism about machine learning and of AI, a valid criticism of it being this black box — where you have hundreds of millions of parameters and somehow you get out an answer. But we think we’ve demonstrated pretty strongly in this work that machine learning is up to the task. You can argue about interpretation. But in this case, we have very concrete results that demonstrate the power of this method.”
    The research team’s work is designed to develop a concrete foundation for future applications on observational data, demonstrating the method’s effectiveness using simulated observations.

  • Can pigeons match wits with artificial intelligence?

    Can a pigeon match wits with artificial intelligence? At a very basic level, yes.
    In a new study, psychologists at the University of Iowa examined the workings of the pigeon brain and how the “brute force” of the bird’s learning shares similarities with artificial intelligence.
    The researchers gave the pigeons complex categorization tests that high-level thinking, such as logic or reasoning, would not help solve. Instead, by virtue of exhaustive trial and error, the pigeons eventually memorized enough scenarios in the test to reach nearly 70% accuracy.
    The researchers equate the pigeons’ repetitive, trial-and-error approach to artificial intelligence. Computers employ the same basic methodology, the researchers contend, being “taught” how to identify patterns and objects easily recognized by humans. Granted, computers, because of their enormous memory and storage power — and growing ever more powerful in those domains — far surpass anything the pigeon brain can conjure.
    Still, the basic process of making associations — considered a lower-level thinking technique — is the same between the test-taking pigeons and the latest AI advances.
    “You hear all the time about the wonders of AI, all the amazing things that it can do,” says Ed Wasserman, Stuit Professor of Experimental Psychology in the Department of Psychological and Brain Sciences at Iowa and the study’s corresponding author. “It can beat the pants off people playing chess, or at any video game, for that matter. It can beat us at all kinds of things. How does it do it? Is it smart? No, it’s using the same system or an equivalent system to what the pigeon is using here.”
    The researchers sought to tease out two types of learning: one, declarative learning, is predicated on exercising reason based on a set of rules or strategies — a so-called higher level of learning attributed mostly to people. The other, associative learning, centers on recognizing and making connections between objects or patterns, such as, say, “sky-blue” and “water-wet.”

    Numerous animal species use associative learning, but only a select few — dolphins and chimpanzees among them — are thought to be capable of declarative learning.
    Yet AI is all the rage, with computers, robots, surveillance systems, and so many other technologies seemingly “thinking” like humans. But is that really the case, or is AI simply a product of cunning human inputs? Or, as the study’s authors put it, have we shortchanged the power of associative learning in human and animal cognition?
    Wasserman’s team devised a “diabolically difficult” test, as he calls it, to find out.
    Each test pigeon was shown a stimulus and had to decide, by pecking a button on the right or on the left, to which category that stimulus belonged. The categories included line width, line angle, concentric rings, and sectioned rings. A correct answer yielded a tasty pellet; an incorrect response yielded nothing. What made the test so demanding, Wasserman says, is its arbitrariness: No rules or logic would help decipher the task.
    “These stimuli are special. They don’t look like one another, and they’re never repeated,” says Wasserman, who has studied pigeon intelligence for five decades. “You have to memorize the individual stimuli or regions from where the stimuli occur in order to do the task.”
    Each of the four test pigeons began by correctly answering about half the time. But over hundreds of tests, the quartet eventually upped their score to an average of 68% right.
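    As a rough picture of how such trial-and-error memorization can raise accuracy from chance, the toy simulation below has a learner strengthen a stimulus-response association only when a choice is rewarded. The stimulus pool, learning rate, and reward rule are illustrative assumptions (the study’s stimuli, for instance, were never repeated), not the experiment’s actual design.

```python
# Toy model of trial-and-error associative learning, loosely inspired by the
# pigeon experiment: a learner sees arbitrary stimuli, "pecks" left or right,
# and is rewarded only for the correct choice. All parameters are illustrative.
import random

random.seed(0)
N_STIMULI = 200
correct_response = {s: random.choice(["left", "right"]) for s in range(N_STIMULI)}

# Association strengths: value[(stimulus, response)] starts at 0 (no knowledge).
value = {}
learning_rate = 0.2

def choose(stimulus):
    left = value.get((stimulus, "left"), 0.0)
    right = value.get((stimulus, "right"), 0.0)
    if left == right:                      # no association yet: guess
        return random.choice(["left", "right"])
    return "left" if left > right else "right"

def update(stimulus, response, reward):
    key = (stimulus, response)
    old = value.get(key, 0.0)
    value[key] = old + learning_rate * (reward - old)

for block in range(10):
    correct, trials = 0, 500
    for _ in range(trials):
        stimulus = random.randrange(N_STIMULI)
        response = choose(stimulus)
        reward = 1.0 if response == correct_response[stimulus] else 0.0
        update(stimulus, response, reward)
        correct += reward
    print(f"block {block}: {100 * correct / trials:.0f}% correct")
```

    Run block by block, the learner starts near chance and climbs as associations accumulate, which is the basic dynamic the researchers describe, without any rule-based reasoning.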

    “The pigeons are like AI masters,” Wasserman says. “They’re using a biological algorithm, the one that nature has given them, whereas the computer is using an artificial algorithm that humans gave them.”
    The common denominator is that AI and pigeons both employ associative learning, and yet that base-level thinking is what allowed the pigeons to ultimately score successfully. If people were to take the same test, Wasserman says, they’d score poorly and would probably give up.
    “The goal was to see to what extent a simple associative mechanism was capable of solving a task that would trouble us because people rely so heavily on rules or strategies,” Wasserman adds. “In this case, those rules would get in the way of learning. The pigeon never goes through that process. It doesn’t have that high-level thinking process. But it doesn’t get in the way of their learning. In fact, in some ways it facilitates it.”
    Wasserman sees a paradox in how associative learning is viewed.
    “People are wowed by AI doing amazing things using a learning algorithm much like the pigeon,” he says, “yet when people talk about associative learning in humans and animals, it is discounted as rigid and unsophisticated.”
    The study, “Resolving the associative learning paradox by category learning in pigeons,” was published online Feb. 7 in the journal Current Biology.
    Study co-authors include Drew Kain, who graduated with a neuroscience degree from Iowa in 2022 and is pursuing a doctorate in neuroscience at Iowa; and Ellen O’Donoghue, who earned a doctorate in psychology at Iowa last year and is now a postdoctoral scholar at Cardiff University.
    The National Institutes of Health funded the research.

  • Optimal layout for a hospital isolation room to contain COVID-19 includes ceiling vent

    A group of researchers recently modeled the transmission of COVID-19 within an isolation room at the Royal Brompton Hospital in London, U.K. Their goal was to explore the optimal room layout to reduce the risk of infection for health care staff.
    To accomplish this, they used an adaptive-mesh finite-element computational fluid dynamics model to simulate the 3D spatial distribution of the virus within the room, based on data collected from the room during a COVID-19 patient’s stay.
    In Physics of Fluids, from AIP Publishing, Wu et al. share their findings and guidance for isolation rooms. Their work centers on the location of the room’s air extractor (air outlet) and filtration rates, the location of the patient’s bed, and the health and safety of the hospital staff working within the area.
    “We modeled the virus transport and spreading processes and considered the effect of the temperature and humidity on the virus decay,” said Fangxin Fang, of Imperial College London. “We also modeled fluid and turbulence dynamics in our study, and explored the spatial distribution of virus, velocity field, and humidity under different air exchange rates and extractor locations.”
    They discovered that the area of highest infection risk is above a patient’s bed, at a height of 0.7 to 2 meters, where the concentration of COVID-19 virus is highest. After the virus is expelled from a patient’s mouth, it is driven vertically by buoyancy and airflow within the room.
    Based on the group’s findings, the optimal layout for an isolation room to minimize infection risk is to use a ceiling extractor with an air exchange rate of 10 air changes per hour. The researchers point out that the study focused on a single hospital isolation room and that its numerical results are limited by the omission of droplet evaporation and particulate matter.
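    For intuition on why the air exchange rate matters, the sketch below uses a simple well-mixed-room mass balance rather than the paper’s adaptive-mesh CFD. The room volume, emission rate, and decay constant are illustrative placeholders, not values from the study.

```python
# Well-mixed-room mass balance: dC/dt = E/V - (ACH + k) * C, where ACH is the
# air exchange rate and k the natural decay of airborne virus. This only
# illustrates the trend with ventilation; it is not the paper's CFD model.
import math

ROOM_VOLUME_M3 = 30.0        # assumed isolation-room volume
EMISSION_PER_HOUR = 50.0     # assumed viral emission rate (arbitrary units/h)
DECAY_PER_HOUR = 0.6         # assumed decay rate of airborne virus (1/h)

def concentration(t_hours, ach):
    """Concentration at time t in a well-mixed room starting from clean air."""
    loss_rate = ach + DECAY_PER_HOUR
    c_steady = EMISSION_PER_HOUR / (ROOM_VOLUME_M3 * loss_rate)
    return c_steady * (1.0 - math.exp(-loss_rate * t_hours))

for ach in (2, 6, 10):
    c = concentration(t_hours=2.0, ach=ach)
    print(f"ACH = {ach:2d}: concentration after 2 h ≈ {c:.3f} units/m^3")
```

    Even in this simplified picture, raising the air exchange rate from 2 to 10 changes per hour cuts the steady airborne concentration severalfold, which is consistent with the direction of the study’s recommendation.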
    Now, the group plans to include evaporation and particle processes in models of a standard hospital patient room, intensive care unit, and waiting room.
    “Further work will also focus on artificial intelligence-based surrogate modeling for rapid simulations, uncertainty analysis, and optimal control of ventilation systems as well as efficient energy use,” said Fang.

  • Engineers devise a modular system to produce efficient, scalable aquabots

    Underwater structures that can change their shapes dynamically, the way fish do, push through water much more efficiently than conventional rigid hulls. But constructing deformable devices that can change the curve of their body shapes while maintaining a smooth profile is a long and difficult process. MIT’s RoboTuna, for example, was composed of about 3,000 different parts and took about two years to design and build.
    Now, researchers at MIT and their colleagues — including one from the original RoboTuna team — have come up with an innovative approach to building deformable underwater robots, using simple repeating substructures instead of unique components. The team has demonstrated the new system in two different example configurations, one like an eel and the other a wing-like hydrofoil. The principle itself, however, allows for virtually unlimited variations in form and scale, the researchers say.
    The work is being reported in the journal Soft Robotics, in a paper by MIT research assistant Alfonso Parra Rubio, professors Michael Triantafyllou and Neil Gershenfeld, and six others.
    Existing soft robots for marine applications are generally built at small scales, while many useful real-world applications require devices measured in meters. The new modular system the researchers propose could easily be extended to such sizes and beyond, without the retooling and redesign that scaling up current systems would require.
    “Scalability is a strong point for us,” says Parra Rubio. Given the low density and high stiffness of the lattice-like pieces, called voxels, that make up their system, he says, “we have more room to keep scaling up,” whereas most currently used technologies “rely on high-density materials facing drastic problems” in moving to larger sizes.
    The individual voxels in the team’s experimental, proof-of-concept devices are mostly hollow structures made up of cast plastic pieces with narrow struts in complex shapes. The box-like shapes are load-bearing in one direction but soft in others, an unusual combination achieved by blending stiff and flexible components in different proportions.

    “Treating soft versus hard robotics is a false dichotomy,” Parra Rubio says. “This is something in between, a new way to construct things.” Gershenfeld, head of MIT’s Center for Bits and Atoms, adds that “this is a third way that marries the best elements of both.”
    “Smooth flexibility of the body surface allows us to implement flow control that can reduce drag and improve propulsive efficiency, resulting in substantial fuel saving,” says Triantafyllou, who is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering, and was part of the RoboTuna team.
    In one of the devices produced by the team, the voxels are attached end to end in a long row to form a meter-long, snake-like structure. The body is made up of four segments, each consisting of five voxels, with an actuator in the center that can pull a wire attached to each of the two voxels on either side, contracting them and causing the structure to bend. The whole structure of 20 units is then covered with a rib-like supporting structure and a tight-fitting, waterproof neoprene skin. The researchers deployed the structure in an MIT tow tank to test its efficiency in the water, and demonstrated that it could generate enough thrust from its undulating motions to propel itself forward.
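    A minimal kinematic sketch of such a segmented, undulating body is shown below: each actuated segment bends by an angle drawn from a sinusoidal traveling wave, and the backbone shape is rebuilt by chaining the segments. Only the four-segment layout mirrors the article; the segment length, bend amplitude, and wave parameters are illustrative assumptions, not the team’s design values.

```python
# Sketch of an undulating segmented body: each actuated segment bends by an
# angle from a sinusoidal traveling wave, and the 2D backbone is rebuilt by
# chaining segments head to tail. Parameters are illustrative placeholders.
import math

N_SEGMENTS = 4                 # article: four actuated segments of five voxels
SEGMENT_LENGTH = 0.25          # metres, so the body is roughly a metre long
AMPLITUDE = math.radians(25)   # assumed maximum bend per segment

def joint_angles(t, frequency_hz=1.0):
    """Bend angle of each segment at time t; one wavelength spans the body."""
    return [
        AMPLITUDE * math.sin(2 * math.pi * frequency_hz * t - 2 * math.pi * i / N_SEGMENTS)
        for i in range(N_SEGMENTS)
    ]

def backbone_points(angles):
    """Chain the segments to get the (x, y) points along the backbone."""
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for bend in angles:
        heading += bend
        x += SEGMENT_LENGTH * math.cos(heading)
        y += SEGMENT_LENGTH * math.sin(heading)
        points.append((x, y))
    return points

for t in (0.0, 0.25, 0.5):
    tail_x, tail_y = backbone_points(joint_angles(t))[-1]
    print(f"t = {t:.2f} s: tail at ({tail_x:.2f}, {tail_y:.2f}) m")
```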
    “There have been many snake-like robots before,” Gershenfeld says. “But they’re generally made of bespoke components, as opposed to these simple building blocks that are scalable.”
    For example, Parra Rubio says, a snake-like robot built by NASA was made up of thousands of unique pieces, whereas for this group’s snake, “we show that there are some 60 pieces.” And compared to the two years spent designing and building the MIT RoboTuna, this device was assembled in about two days, he says.
    The other device they demonstrated is a wing-like shape, or hydrofoil, made up of an array of the same voxels but able to change its profile shape and therefore control the lift-to-drag ratio and other properties of the wing. Such wing-like shapes could be used for a variety of purposes, ranging from generating power from waves to helping to improve the efficiency of ship hulls — a pressing demand, as shipping is a significant source of carbon emissions.
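    As a first-order illustration of how changing the profile shape changes the lift-to-drag ratio, the sketch below combines thin-airfoil theory (lift rising with camber) with a simple parabolic drag polar. It is not the team’s hydrodynamic model; every coefficient is an assumed placeholder.

```python
# Thin-airfoil estimate of how morphing camber shifts lift-to-drag.
# Cl = 2*pi*(alpha + 2*h/c) for a parabolic camber line; drag from a
# parabolic polar Cd = Cd0 + Cl^2 / (pi * e * AR). Illustrative values only.
import math

CD0 = 0.01            # assumed profile drag coefficient at zero lift
ASPECT_RATIO = 6.0    # assumed foil aspect ratio
OSWALD_E = 0.9        # assumed span-efficiency factor

def lift_to_drag(camber_ratio, alpha_deg=2.0):
    """Estimate L/D for a thin foil with a given max-camber-to-chord ratio."""
    alpha = math.radians(alpha_deg)
    cl = 2 * math.pi * (alpha + 2 * camber_ratio)
    cd = CD0 + cl**2 / (math.pi * OSWALD_E * ASPECT_RATIO)
    return cl / cd

for camber in (0.0, 0.02, 0.04):
    print(f"camber h/c = {camber:.2f}: L/D ≈ {lift_to_drag(camber):.1f}")
```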
    The wing shape, unlike the snake, is covered in an array of scale-like overlapping tiles, designed to press down on each other to maintain a waterproof seal even as the wing changes its curvature. One possible application might be in some kind of addition to a ship’s hull profile that could reduce the formation of drag-inducing eddies and thus improve its overall efficiency, a possibility that the team is exploring with collaborators in the shipping industry.
    Ultimately, the concept might be applied to a whale-like submersible craft that uses its morphable body shape to create propulsion. Such a craft could evade bad weather by staying below the surface, without the noise and turbulence of conventional propulsion. The concept could also be applied to parts of other vessels, such as racing yachts, where a keel or a rudder that could curve gently during a turn, instead of remaining straight, could provide an extra edge. “Instead of being rigid or just having a flap, if you can actually curve the way fish do, you can morph your way around the turn much more efficiently,” Gershenfeld says.
    The research team included Dixia Fan of Westlake University in China; Benjamin Jenett SM ’15, PhD ’20 of Discrete Lattice Industries; Jose del Aguila Ferrandis, Amira Abdel-Rahman, and David Preiss of MIT; and Filippos Tourlomousis of the Demokritos Research Center of Greece. The work was supported by the U.S. Army Research Lab, CBA Consortia funding, and the MIT Sea Grant Program.