More stories

  • Machine learning used to probe the building blocks of shapes

    Applying machine learning to find the properties of atomic pieces of geometry shows how AI has the power to accelerate discoveries in maths.
    Mathematicians from Imperial College London and the University of Nottingham have, for the first time, used machine learning to expand and accelerate work identifying ‘atomic shapes’ that form the basic pieces of geometry in higher dimensions. Their findings have been published in Nature Communications.
    The way they used artificial intelligence, in the form of machine learning, could transform how maths is done, say the authors. Dr Alexander Kasprzyk from the University of Nottingham said: “For mathematicians, the key step is working out what the pattern is in a given problem. This can be very difficult, and some mathematical theories can take years to discover.”
    Professor Tom Coates, from the Department of Mathematics at Imperial, added: “We have shown that machine learning can help uncover patterns within mathematical data, giving us both new insights and hints of how they can be proved.”
    PhD student Sara Veneziale, from the Department of Mathematics at Imperial, said: “This could be very broadly applicable, such that it could rapidly accelerate the pace at which maths discoveries are made. It’s like when computers were first used in maths research, or even calculators: it’s a step-change in the way we do maths.”
    Defining shapes
    Mathematicians describe shapes using equations, and by analysing these equations they can break a shape down into fundamental pieces. These are the building blocks of shapes, the equivalent of atoms, and are called Fano varieties.
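    The sketch below is only a schematic illustration of the workflow described in this story, not the study's actual model: it trains a small classifier to predict a property of a geometric object from numerical data attached to it, using randomly generated placeholder features and a made-up labelling rule in place of real invariants of Fano varieties.

```python
# Schematic sketch only: train a classifier to predict a property of a shape from
# numerical data attached to it, then check whether a learnable pattern exists.
# The features and labels below are random placeholders, not real invariants.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, n_invariants = 5000, 10

# hypothetical numerical invariants computed for each shape, plus a hidden rule
# standing in for the unknown mathematical pattern the model is meant to uncover
X = rng.normal(size=(n_examples, n_invariants))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.4).astype(int)   # placeholder "property" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# high held-out accuracy suggests a learnable pattern, which mathematicians can
# then try to state precisely and prove
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```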

  • Birders and AI push bird conservation to the next level

    For the first time, big data and artificial intelligence (AI) are being used to model hidden patterns in nature, not just for one bird species, but for entire ecological communities across continents. And the models follow each species’ full annual life cycle, from breeding to fall migration to nonbreeding grounds, and back north again during spring migration.
    It begins with the more than 900,000 birders who report their sightings to the Cornell Lab of Ornithology’s eBird program, one of the world’s largest biodiversity science projects. When combined with innovations in technology and artificial intelligence (the same innovations that power self-driving cars and real-time language translation), these sightings are revealing more than ever about patterns of bird biodiversity, and the processes that underlie them.
    The development and application of this revolutionary computational tool is the result of a collaboration between the Cornell Lab of Ornithology and the Cornell Institute for Computational Sustainability. This work is now published in the journal Ecology.
    “This method uniquely tells us which species occur where, when, with what other species, and under what environmental conditions,” said lead author Courtney Davis, a researcher at the Cornell Lab. “With that type of information, we can identify and prioritize landscapes of high conservation value — vital information in this era of ongoing biodiversity loss.”
    “This model is very general and is suitable for various tasks, provided there’s enough data,” said co-author Carla Gomes, director of the Cornell Institute for Computational Sustainability. “This work on joint bird species distribution modeling is about predicting the presence and absence of species, but we are also developing models to estimate bird abundance — the number of individual birds per species. We’re also aiming to enhance the model by incorporating bird calls alongside visual observations.” A simplified sketch of this kind of joint presence/absence modeling appears below.
    Cross-disciplinary collaborations like this are necessary for the future of biodiversity conservation, according to Daniel Fink, researcher at the Cornell Lab and senior author of the study.
    “The task at hand is too big for ecologists to do on their own; we need the expertise of our colleagues in computer science and computational sustainability to develop targeted plans for landscape-scale conservation, restoration, and management around the world.”
    This work was funded by the National Science Foundation, the Leon Levy Foundation, the Wolf Creek Foundation, the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship (a Schmidt Futures program), the Air Force Office of Scientific Research, and the U.S. Department of Agriculture’s National Institute of Food and Agriculture.
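    As a rough illustration of the joint species distribution idea described in this story, the sketch below fits one presence/absence classifier per species on shared environmental features. Everything in it is a placeholder: the covariates, species labels and random data are invented for demonstration, and the model is a generic multi-output random forest rather than the model published in Ecology.

```python
# Illustrative sketch of joint presence/absence modelling: predict which of several
# bird species occur at a site from shared environmental features. All data below
# are random placeholders, not eBird checklists.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sites, n_features, n_species = 2000, 6, 5

# hypothetical environmental covariates per site (e.g. elevation, tree cover, week of year)
X = rng.normal(size=(n_sites, n_features))
# synthetic presence/absence labels with some dependence on the covariates
weights = rng.normal(size=(n_features, n_species))
Y = (X @ weights + rng.normal(scale=0.5, size=(n_sites, n_species)) > 0).astype(int)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=0)

# one classifier per species, all sharing the same environmental inputs
model = MultiOutputClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X_train, Y_train)

accuracy = (model.predict(X_test) == Y_test).mean(axis=0)
for i, acc in enumerate(accuracy):
    print(f"species_{i}: presence/absence accuracy {acc:.2f}")
```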

  • Could future AI crave a favorite food?

    Can artificial intelligence (AI) get hungry? Develop a taste for certain foods? Not yet, but a team of Penn State researchers is developing a novel electronic tongue that mimics how taste influences what we eat based on both needs and wants, providing a possible blueprint for AI that processes information more like a human being.
    Human behavior is complex, a nebulous compromise and interaction between our physiological needs and psychological urges. While artificial intelligence has made great strides in recent years, AI systems do not incorporate the psychological side of our human intelligence. For example, emotional intelligence is rarely considered as part of AI.
    “The main focus of our work was how could we bring the emotional part of intelligence to AI,” said Saptarshi Das, associate professor of engineering science and mechanics at Penn State and corresponding author of the study published recently in Nature Communications. “Emotion is a broad field and many researchers study psychology; however, for computer engineers, mathematical models and diverse data sets are essential for design purposes. Human behavior is easy to observe but difficult to measure and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that.”
    Das noted that our eating habits are a good example of emotional intelligence and the interaction between the physiological and psychological state of the body. What we eat is heavily influenced by the process of gustation, which refers to how our sense of taste helps us decide what to consume based on flavor preferences. This is different than hunger, the physiological reason for eating.
    “If you are someone fortunate to have all possible food choices, you will choose the foods you like most,” Das said. “You are not going to choose something that is very bitter, but likely try for something sweeter, correct?”
    Anyone who has felt full after a big lunch and still was tempted by a slice of chocolate cake at an afternoon workplace party knows that a person can eat something they love even when not hungry.
    “If you are given food that is sweet, you would eat it in spite of your physiological condition being satisfied, unlike if someone gave you say a hunk of meat,” Das said. “Your psychological condition still wants to be satisfied, so you will have the urge to eat the sweets even when not hungry.”
    While there are still many questions regarding the neuronal circuits and molecular-level mechanisms within the brain that underlie hunger perception and appetite control, Das said, advances such as improved brain imaging have offered more information on how these circuits work in regard to gustation. A toy sketch of the needs-versus-wants interplay described above, expressed in code, appears below.
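    The snippet below is purely a toy illustration of that interplay between a physiological need (hunger) and a psychological want (how appetising the food is); the function, weights and thresholds are invented for demonstration and have nothing to do with the Penn State electronic tongue hardware.

```python
# Toy model only: an eating decision driven by both a physiological signal (hunger)
# and a psychological signal (palatability). All weights and thresholds are invented.
def urge_to_eat(hunger, palatability, craving_weight=0.6):
    """Combine need (hunger, 0-1) and want (palatability, 0-1) into a single drive."""
    return hunger + craving_weight * palatability * (1.0 - 0.5 * hunger)

def decide(hunger, palatability, threshold=0.5):
    return "eat" if urge_to_eat(hunger, palatability) > threshold else "pass"

# Full after a big lunch (hunger ~0), but offered chocolate cake (very palatable):
print(decide(hunger=0.05, palatability=0.95))   # -> "eat": the want overrides the need
# Full, and offered something bland:
print(decide(hunger=0.05, palatability=0.10))   # -> "pass"
# Hungry, offered bland food:
print(decide(hunger=0.8, palatability=0.10))    # -> "eat": the need dominates
```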

  • These robots helped explain how insects evolved two distinct strategies for flight

    Robots built by engineers at the University of California San Diego helped achieve a major breakthrough in understanding how insect flight evolved, described in the Oct. 4, 2023 issue of the journal Nature. The study is the result of a six-year-long collaboration between roboticists at UC San Diego and biophysicists at the Georgia Institute of Technology.
    The findings focus on how the two different modes of flight evolved in insects. Most insects use their brains to activate their flight muscles each wingstroke, just like we activate the muscles in our legs every stride we take. This is called synchronous flight. But some insects, such as mosquitoes, are able to flap their wings without their nervous system commanding each wingstroke. Instead, the muscles of these animals automatically activate when they are stretched. This is called asynchronous flight. Asynchronous flight is common in some insects across the four major insect groups and lets them flap their wings at great speeds; some mosquitoes, for example, beat their wings more than 800 times a second.
    For years, scientists assumed the four groups of insects (bees, flies, beetles and true bugs, or Hemiptera) all evolved asynchronous flight separately. However, a new analysis performed by the Georgia Tech team concludes that asynchronous flight actually evolved only once, in a common ancestor. Then some groups of insect species reverted back to synchronous flight, while others remained asynchronous.
    The finding that some insects, such as moths, evolved from synchronous to asynchronous and then back to synchronous flight led the researchers down a path of investigation that required insect, robot and mathematical experiments. This new evolutionary finding posed two fundamental questions: do the muscles of moths exhibit signatures of their prior asynchrony, and how can an insect maintain both synchronous and asynchronous properties in its muscles and still be capable of flight?
    The ideal specimen for studying these questions of synchronous and asynchronous evolution is the hawkmoth, because moths use synchronous flight but the evolutionary record tells us they have ancestors with asynchronous flight.
    Researchers at Georgia Tech first sought to measure whether signatures of asynchrony can be observed in hawkmoth muscle. Through mechanical characterization of the muscle, they discovered that hawkmoths still retain the physical characteristics of asynchronous flight muscles, even if these are not used.
    How can an insect have both synchronous and asynchronous properties and still fly? To answer this question, the researchers realized that robots would allow them to perform experiments that could never be done on insects. For example, they could equip the robots with motors that emulate combinations of asynchronous and synchronous muscles and test what transitions might have occurred during the millions of years of evolution of flight.
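    To make the distinction concrete, the toy simulation below drives a damped, spring-loaded "wing" in two ways: with a periodic command (one activation per wingstroke, as in synchronous flight) and with delayed stretch-activated feedback (the muscle fires when stretched, as in asynchronous flight). It is only an illustrative sketch with invented parameters, not the UC San Diego robots or the model in the Nature study; the point is that the asynchronous wingbeat rate emerges from the mechanics rather than from a commanded frequency.

```python
# Toy comparison of synchronous vs. asynchronous (stretch-activated) wing actuation.
# Illustrative sketch only: all parameter values are invented for demonstration.
import numpy as np

def simulate(mode, t_end=2.0, dt=1e-4):
    """Integrate a damped wing hinge driven either by periodic commands (synchronous)
    or by delayed stretch-activated feedback (asynchronous); return the wingbeat rate."""
    omega = 2 * np.pi * 25.0        # natural frequency of the wing-spring system (rad/s), assumed
    zeta = 0.05                     # damping ratio, assumed
    n = int(t_end / dt)
    x = np.zeros(n)                 # wing angle over time
    x[0] = 0.01                     # small initial twitch to seed the oscillation
    v = 0.0
    delay_steps = int(0.004 / dt)   # 4 ms delay of stretch activation, assumed
    f_cmd = 18.0                    # commanded wingbeat frequency, synchronous case (Hz), assumed
    gain, drive_max = 1.5e4, 2000.0 # stretch-activation gain and drive limit, assumed
    for i in range(1, n):
        if mode == "synchronous":
            # one neural command per wingstroke, at the commanded frequency
            drive = drive_max * np.sin(2 * np.pi * f_cmd * i * dt)
        else:
            # the muscle fires in response to how stretched it was a moment ago
            stretch = x[i - delay_steps] if i > delay_steps else 0.0
            drive = float(np.clip(-gain * stretch, -drive_max, drive_max))
        accel = drive - 2 * zeta * omega * v - omega**2 * x[i - 1]
        v += accel * dt
        x[i] = x[i - 1] + v * dt
    # estimate the dominant wingbeat rate from upward zero crossings in the second half
    tail = x[n // 2:]
    upward_crossings = np.sum((tail[:-1] < 0) & (tail[1:] >= 0))
    return upward_crossings / (t_end / 2)

for mode in ("synchronous", "asynchronous"):
    print(f"{mode}: ~{simulate(mode):.0f} wingbeats per second")
```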

  • AI drones to help farmers optimize vegetable yields

    For reasons of food security and economic incentive, farmers continuously seek to maximize their marketable crop yields. Because plants grow at different rates, there will inevitably be variations in the quality and size of individual crops at harvest time, so finding the optimal time to harvest is a priority for farmers. A new approach that makes heavy use of drones and artificial intelligence demonstrably improves this estimate by carefully and accurately analyzing individual crops to assess their likely growth characteristics.
    Some optimistic science fiction stories talk about a post-scarcity future, where human needs are catered for and hard labor is provided by machines. In some ways, this vision already anticipates elements of current technological progress. One such area is agricultural research, where automation has been making an impact. For the first time, researchers, including those from the University of Tokyo, have demonstrated a largely automated system to improve crop yields, which can benefit many and may help pave the way for future systems that could one day harvest crops directly.
    “The idea is relatively simple, but the design, implementation and execution is extraordinarily complex,” said Associate Professor Wei Guo from the Laboratory of Field Phenomics. “If farmers know the ideal time to harvest crop fields, they can reduce waste, which is good for them, for consumers and the environment. But optimum harvest times are not an easy thing to predict and ideally require detailed knowledge of each plant; such data would be cost and time prohibitive if people were employed to collect it. This is where the drones come in.”
    Guo has a background in both computer science and agricultural science, so he is ideally suited to finding ways that cutting-edge hardware and software can aid agriculture. He and his team have demonstrated that low-cost drones with specialized software can image and analyze young plants — broccoli in the case of this study — and accurately predict their expected growth characteristics. The drones carry out the imaging process multiple times and do so without human interaction, meaning the system requires little in terms of labor costs.
    “It might surprise some to know that harvesting a field as little as a day before or after the optimal time could reduce the potential income of that field for the farmer by 3.7% to as much as 20.4%,” said Guo. “But with our system, drones identify and catalog every plant in the field, and their imaging data feeds a model that uses deep learning to produce easy-to-understand visual data for farmers. Given the current relatively low costs of drones and computers, a commercial version of this system should be within reach of many farmers.”
    The main challenge the team faced was in the image analysis and deep learning aspects. Collecting the image data itself is relatively trivial, but because plants move in the wind and the light changes with time and the seasons, the image data contains a lot of variation that machines often find hard to compensate for. So, when training their system, the team had to invest a huge amount of time labeling various aspects of the images the drones might see, in order to help the system learn to correctly identify what it was seeing. The vast data throughput was also challenging: image data often ran to trillions of pixels, tens of thousands of times more than even a high-end smartphone camera produces. A simplified sketch of the kind of per-plant prediction model involved appears below.
    “I’m inspired to find more ways that plant phenotyping (measuring plant growth traits) can go from the lab to the field in order to help solve the major problems we face,” said Guo.
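    The sketch below shows, very roughly, what a per-plant growth model of this kind could look like: a small convolutional network that maps an image patch of a single plant to a predicted number of days until it reaches harvest size. The architecture, tensor sizes, labels and random placeholder data are all invented for demonstration; the actual University of Tokyo pipeline is not reproduced here.

```python
# Illustrative sketch only: a tiny regression CNN mapping a cropped plant image to a
# predicted "days until optimal harvest". Data and labels are random placeholders.
import torch
import torch.nn as nn

class PlantGrowthRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted days until the plant reaches harvest size

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Placeholder batch: 8 cropped plant patches (64x64 RGB) with made-up labels.
patches = torch.rand(8, 3, 64, 64)
days_to_harvest = torch.rand(8, 1) * 14.0

model = PlantGrowthRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):                     # toy training loop on the placeholder batch
    optimizer.zero_grad()
    loss = loss_fn(model(patches), days_to_harvest)
    loss.backward()
    optimizer.step()

print("final toy training loss:", float(loss))
```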

  • Insect cyborgs: Towards precision movement

    Insect cyborgs may sound like science fiction, but they are a relatively new reality, based on using electrical stimuli to control the movement of insects. These hybrid insect-computer robots, as they are scientifically called, herald the future of small, highly mobile and efficient devices.
    Despite significant progress being made, however, further advances are complicated by the vast differences between different insects’ nervous and muscle systems.
    In a recent study published in the journal eLife, an international research group has studied the relationship between electrical stimulation in stick insects’ leg muscles and the resultant torque (the twisting force that makes the leg move).
    They focused on three leg muscles that play essential roles in insect movement: one for propulsion, one for joint stiffness, and one for transitioning between standing and swinging the leg. In the experiments, the researchers kept the stick insects’ bodies fixed and electrically stimulated one of the three leg muscles to produce walking-like movements.
    The research was led by Dai Owaki, associate professor at the Department of Robotics at Tohoku University’s Graduate School of Engineering. Experiments were conducted at Bielefeld University, Germany, in a lab run by Professors Volker Dürr and Josef Schmitz.
    “Based on our measurements, we could generate a model that predicted the created torque when different patterns of electrical stimulation were applied to a leg muscle,” points out Owaki. “We also identified a nearly linear relationship between the duration of the electrical stimulation and the torque generated, meaning we could predict how much twisting force we would generate by just looking at the length of the applied electrical pulse.”
    Using only a few measurements, Owaki and his collaborators could calibrate this model for each individual insect. As a result of these findings, scientists will be able to refine the motor control of tuned biohybrid robots, making their movements more precise. A minimal sketch of such a duration-to-torque calibration appears below.
    While the team knows their insights could lead to adaptable and highly mobile devices with various applications, they still cite some key challenges that need to be addressed. “First, model testing needs to be implemented in free-walking insects, and the electrical stimuli must be refined to mimic natural neuromuscular signals more closely,” adds Owaki.
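    A minimal sketch of that calibration idea, assuming the roughly linear duration-to-torque relationship described above: fit a straight line to a handful of measurements for one insect, then use it to predict torque from pulse duration, or to choose the pulse duration needed for a desired torque. The numbers are invented placeholders, not data from the eLife study.

```python
# Minimal sketch: fit a (nearly) linear mapping from stimulation pulse duration to
# leg-muscle torque using a few calibration points, then invert it for control.
# The measurements below are invented placeholders.
import numpy as np

# hypothetical calibration measurements for one stick insect leg muscle
pulse_ms = np.array([20.0, 40.0, 60.0, 80.0, 100.0])   # stimulation durations (ms)
torque_uNm = np.array([1.1, 2.3, 3.2, 4.4, 5.3])        # measured torques (micro-newton-metres)

# least-squares fit of torque = slope * duration + offset
slope, offset = np.polyfit(pulse_ms, torque_uNm, deg=1)

def predict_torque(duration_ms):
    """Predict the torque a pulse of the given duration should produce for this insect."""
    return slope * duration_ms + offset

def duration_for_torque(target_uNm):
    """Invert the fit: choose a pulse duration expected to yield the target torque."""
    return (target_uNm - offset) / slope

print(f"predicted torque for a 70 ms pulse: {predict_torque(70.0):.2f} uN*m")
print(f"pulse duration for 3.0 uN*m of torque: {duration_for_torque(3.0):.1f} ms")
```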

  • Power of rhythm as a design element in evolution and robotics

    As the internet quickly fills with viral videos of futuristic robots darting and racing around like the animals they’re built to mimic, Duke researchers say that there’s an element of their movement’s programming that should not be overlooked: rhythm.
    When analyzing legs, wings and fins for moving robots or animals in the real world, the mathematics looks fairly straightforward. Limbs with multiple sections of various lengths create different ratios for leverage, bodies with alternate shapes and sizes create drag coefficients and centers of mass, and feet, wings or fins of various shapes and sizes push on the world around them.
    All of these options create more degrees of freedom in the final design. But until now, say the researchers, nobody was paying much attention to the timing of how they’re all working together.
    “Minimizing the amount of work being done by varying the speed of the mover is an idea that’s been around a long time,” said Adrian Bejan, the J.A. Jones Distinguished Professor of Mechanical Engineering at Duke. “But varying the rhythm of that movement — the music of how the pieces move together over time — is a design aspect that has been overlooked, even though it can improve performance.”
    The reasoning and mathematics exploring this thesis were published in a paper that appeared online August 28 in the journal Scientific Reports.
    To illustrate his point in the paper, Bejan points to natural swimmers such as frogs or humans doing the breaststroke. Their swim gait is characterized by three time intervals: a slow period of reaching forward, a fast period of pushing backward and a static period of coasting. For optimum performance, the reaching and coasting phases typically run long, with the fast push kept brief in between. But in certain situations (outracing or outmaneuvering a predator, for example) the ratios of those periods change drastically.
    In the design of robots built to emulate dogs, fish or birds, incorporating different rhythms into their standard cruising movements can make their normal operations more efficient. And those optimal rhythms will, in turn, affect the choices made for all of the other pieces of the overall design.
    The work builds on research Bejan published nearly 20 years ago, where he demonstrated that size and speed go hand-in-hand across the entire animal kingdom whether on land, in the air or under water. The physics underlying that work dealt with weight falling forward from a given animal’s height over and over again. In this paper, Bejan shows that his previous work was incomplete, and that all animals, robots and other moving things can further optimize their mechanics by adding an element of rhythm.
    “You can — and indeed you should — teach rhythms of movements to competitive swimmers and runners looking for an edge,” Bejan said. “Rhythm increases the number of knobs you can turn when trying to move through the world. It is yet another example of how good design — whether made by humans or through natural evolution — is truly a form of art.”

  • Human disease simulator lets scientists choose their own adventure

    Imagine a device smaller than a toddler’s shoebox that can simulate any human disease in multiple organs or test new drugs without ever entering — or harming — the body.
    Scientists at Northwestern University have developed this new technology — called Lattice — to study interactions between up to eight unique organ tissue cultures (cells from a human organ) for extended periods of time, to replicate how actual organs will respond. It is a major advance over current in vitro systems, which can only study two cell cultures simultaneously.
    The goal is to simulate what happens inside the body to analyze, for example, how obesity might affect a particular disease; how women metabolize drugs differently than men; or what might be initially driving a disease that eventually impacts multiple organs.
    “When something’s happening in the body, we don’t know exactly who’s talking to whom,” said lead scientist Julie Kim, professor of obstetrics and gynecology at Northwestern University Feinberg School of Medicine. “Currently, scientists use dishes that have one or two cell types, and then do in-depth research and analysis, but Lattice provides a huge advancement. This platform is much better suited to mimic what’s happening in the body, because it can simulate so many organs at once.”
    A study detailing the new technology will be published Oct. 3 in the journal Lab on a Chip.
    Choose-your-own-adventure disease simulator
    The microfluidic device has a series of channels and pumps that cause media (simulated blood) to flow between the eight wells. A computer connected to Lattice precisely controls how much media flows through each well, where it flows and when. Depending on which disease or drug the scientist wants to test, they can fill each well with a different organ tissue, hormone, disease or medication.