More stories

  • Comfort with a smaller carbon footprint

    As organizations work to reduce their energy consumption and associated carbon emissions, one area that remains to be optimized is indoor heating and cooling. In fact, HVAC — which stands for Heating, Ventilation, and Air Conditioning — represents, on average, about 40% of a building’s total energy use. Methods that conserve electricity while still providing a comfortable indoor environment for workers could make a significant difference in the fight against climate change.
    Now, researchers from Osaka University have demonstrated significant energy savings through the application of a new, AI-driven algorithm for controlling HVAC systems. This method does not require complex physics modelling or even detailed prior knowledge of the building itself.
    During cold weather, it can be challenging for conventional sensor-based systems to determine when the heating should be shut off, because of thermal interference from lighting, equipment, or even the heat produced by the workers themselves. As a result, the HVAC may run when it does not need to, wasting energy.
    To overcome these obstacles, the researchers employed a control algorithm that predicts the thermodynamic response of the building from collected data. This approach can be more effective than attempting to explicitly calculate the impact of the multitude of complex factors that might affect the temperature, such as insulation and heat generation. With enough information, ‘data-driven’ approaches can often outperform even sophisticated models. Here, the HVAC control system was designed to ‘learn’ the symbolic relationships between the variables, including power consumption, from a large dataset.
    The algorithm was able to save energy while still allowing the building occupants to work in comfort. “Our autonomous system showed significant energy savings, of 30% or more for office buildings, by leveraging the predictive power of machine learning to optimize the times the HVAC should operate,” says lead author Dafang Zhao. “Importantly, the rooms were comfortably warm despite it being winter.”
    The algorithm worked to minimize the total energy consumed, the difference between the actual and desired room temperature, and the change in the rate of power output at peak demand (a toy version of this trade-off is sketched below). “Our system can be easily customized to prioritize energy conservation or temperature accuracy, depending on the needs of the situation,” adds senior author Ittetsu Taniguchi.
    To achieve the collective goal of a carbon-neutral economy, corporations will likely need to be at the vanguard of innovation. The researchers note that their approach may enjoy rapid adoption during times of rising energy costs, which makes their findings good for both the environment and company viability. More
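
    A rough sketch of this kind of approach, not the paper’s actual algorithm, is to fit a simple one-step model of the room’s temperature response from logged data and then search for an on/off schedule that balances the three objective terms described above: total energy, deviation from the desired temperature, and changes in power output. Everything below (data, coefficients, weights) is invented for illustration.

    ```python
    # Toy data-driven HVAC scheduling sketch: learn a one-step temperature model
    # from logged data, then pick an on/off schedule that trades off energy use,
    # comfort error, and power ramping. All numbers are made up.
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(1)

    # --- synthetic "logged" building data: indoor temp, outdoor temp, heater state ---
    steps = 500
    outdoor = 2.0 + 3.0 * np.sin(np.linspace(0, 6 * np.pi, steps)) + rng.normal(0, 0.3, steps)
    hvac = rng.integers(0, 2, steps).astype(float)
    indoor = np.empty(steps)
    indoor[0] = 18.0
    for k in range(steps - 1):  # hidden "true" dynamics, used only to generate data
        indoor[k + 1] = indoor[k] + 0.1 * (outdoor[k] - indoor[k]) + 1.2 * hvac[k] + rng.normal(0, 0.05)

    # --- fit a one-step linear response model: T[k+1] ~ T[k], T_out[k], u[k] ---
    X = np.column_stack([indoor[:-1], outdoor[:-1], hvac[:-1], np.ones(steps - 1)])
    coef, *_ = np.linalg.lstsq(X, indoor[1:], rcond=None)

    def rollout(u, T0, T_out):
        """Predict indoor temperature for a candidate on/off schedule u."""
        T = [T0]
        for k in range(len(u)):
            T.append(coef @ np.array([T[-1], T_out[k], u[k], 1.0]))
        return np.array(T[1:])

    # --- choose a short schedule minimizing energy + comfort error + power ramping ---
    horizon, target = 8, 21.0
    T_out_forecast = np.full(horizon, 3.0)
    best_cost, best_u = np.inf, None
    for u in product([0.0, 1.0], repeat=horizon):       # exhaustive search is fine for a toy horizon
        u = np.array(u)
        T = rollout(u, T0=18.0, T_out=T_out_forecast)
        cost = (1.0 * u.sum()                           # total energy consumed
                + 2.0 * np.abs(T - target).mean()       # deviation from the desired temperature
                + 0.5 * np.abs(np.diff(u)).sum())       # changes in power output (ramping)
        if cost < best_cost:
            best_cost, best_u = cost, u
    print("chosen schedule:", best_u)
    print("predicted temperatures:", np.round(rollout(best_u, 18.0, T_out_forecast), 1))
    ```

    The real system learns far richer relationships than this linear toy, but the cost being minimized has the same three-way structure the authors describe.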

  • New technology could reduce lag, improve reliability of online gaming, meetings

    Whether you’re battling foes in a virtual arena or collaborating with colleagues across the globe, lag-induced disruptions can be a major hindrance to seamless communication and immersive experiences.
    That’s why researchers with the University of Central Florida’s College of Optics and Photonics (CREOL) and the University of California, Los Angeles, have developed new technology to make data transfer over optical fiber communication faster and more efficient.
    Their development, a novel class of optical modulators, is detailed in a study published recently in the journal Nature Communications. A modulator can be thought of as a kind of light switch that controls certain properties of the data-carrying light in an optical communication system.
    “Carrying torrents of data between internet hubs and connecting servers, storage elements, and switches inside data centers, optical fiber communication is the backbone on which the digital world is built,” says Sasan Fathpour, the study’s co-author and CREOL professor. “The basic constituents of such links, the optical fiber, semiconductor laser, optical modulator and photoreceiver, all place limits on the bandwidth and the accuracy of data transmission.”
    Fathpour says that the dispersion of optical fibers, or signal distortion over long distances, and the noise of semiconductor lasers, or unwanted signal interference, are two fundamental limitations of optical communication and signal processing systems that affect data transmission and reliability.
    He says the team has invented a unique class of optical modulators that simultaneously addresses both limitations by taking advantage of phase diversity, or varied timing of signals, and differential operation, or comparison of light signals.
    By doing so, the researchers have created an advanced “light switch” that not only controls data transmission but does so while comparing the amount and timing of data moving through the system to ensure accurate and efficient transmission. More
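
    As a generic, textbook-style illustration of the differential idea, not the device reported in the study, the sketch below compares reading a single modulator output against taking the difference of two complementary outputs: intensity noise common to both arms largely cancels in the difference, while the encoded signal survives. All signal levels and noise amplitudes are made up.

    ```python
    # Why differencing two complementary modulator outputs suppresses common-mode
    # laser intensity noise (toy numerical experiment, arbitrary units).
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(20_000)
    signal = 0.05 * np.sign(np.sin(2 * np.pi * t / 200))   # small data signal around the bias point
    laser_noise = 0.02 * rng.standard_normal(t.size)       # intensity noise shared by both outputs
    power = 1.0 + laser_noise                              # instantaneous laser power

    phase = np.pi / 2 + signal                             # quadrature bias plus data
    out_plus = 0.5 * power * (1 + np.cos(phase))           # complementary outputs of the modulator
    out_minus = 0.5 * power * (1 - np.cos(phase))

    single_ended = out_plus                                # conventional read-out: noise adds directly
    differential = out_plus - out_minus                    # differential read-out: common terms cancel

    def snr(x):
        """Crude SNR: project onto the known data pattern and compare to the residual."""
        x = x - x.mean()
        slope, intercept = np.polyfit(signal, x, 1)
        recovered = slope * signal + intercept
        return recovered.var() / (x - recovered).var()

    print(f"single-ended SNR: {snr(single_ended):7.1f}")
    print(f"differential SNR: {snr(differential):7.1f}")
    ```

    The study’s modulators combine this kind of comparison with phase diversity; the sketch only shows the general benefit of comparing two complementary copies of the light.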

  • Machine learning used to probe the building blocks of shapes

    Applying machine learning to find the properties of atomic pieces of geometry shows how AI has the power to accelerate discoveries in maths.
    Mathematicians from Imperial College London and the University of Nottingham have, for the first time, used machine learning to expand and accelerate work identifying ‘atomic shapes’ that form the basic pieces of geometry in higher dimensions. Their findings have been published in Nature Communications.
    The way they used artificial intelligence, in the form of machine learning, could transform how maths is done, say the authors. Dr Alexander Kasprzyk from the University of Nottingham said: “For mathematicians, the key step is working out what the pattern is in a given problem. This can be very difficult, and some mathematical theories can take years to discover.”
    Professor Tom Coates, from the Department of Mathematics at Imperial, added: “We have shown that machine learning can help uncover patterns within mathematical data, giving us both new insights and hints of how they can be proved.”
    PhD student Sara Veneziale, from the Department of Mathematics at Imperial, said: “This could be very broadly applicable, such that it could rapidly accelerate the pace at which maths discoveries are made. It’s like when computers were first used in maths research, or even calculators: it’s a step-change in the way we do maths.”
    Defining shapes
    Mathematicians describe shapes using equations, and by analysing these equations can break the shape down into fundamental pieces. These are the building blocks of shapes, the equivalent of atoms, and are called Fano varieties. More
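
    As a purely schematic sketch of this workflow, with synthetic data and stand-in features rather than real geometric invariants, the example below trains a small neural network on numerical vectors attached to objects and checks whether it recovers a hidden pattern on held-out data.

    ```python
    # Schematic "pattern hunting" workflow: fit a classifier to numerical data and
    # use held-out accuracy as evidence that a pattern exists. Synthetic data only.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # stand-in "invariants": each row is a vector of numbers attached to one object,
    # and the label is a discrete property we would like to read off from them
    n_samples, n_features = 5000, 10
    X = rng.normal(size=(n_samples, n_features))
    y = (X[:, :3].sum(axis=1) > 0).astype(int)     # hidden rule the model should rediscover

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    # High accuracy on unseen examples is the hint that a genuine pattern exists,
    # which mathematicians can then try to state precisely and prove.
    ```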

  • Birders and AI push bird conservation to the next level

    For the first time, big data and artificial intelligence (AI) are being used to model hidden patterns in nature, not just for one bird species, but for entire ecological communities across continents. And the models follow each species’ full annual life cycle, from breeding to fall migration to nonbreeding grounds, and back north again during spring migration. It begins with the more than 900,000 birders who report their sightings to the Cornell Lab of Ornithology’s eBird program, one of the world’s largest biodiversity science projects. When combined with innovations in technology and artificial intelligence (the same innovations that power self-driving cars and real-time language translation), these sightings are revealing more than ever about patterns of bird biodiversity, and the processes that underlie them.
    The development and application of this revolutionary computational tool are the result of a collaboration between the Cornell Lab of Ornithology and the Cornell Institute for Computational Sustainability. This work is now published in the journal Ecology.
    “This method uniquely tells us which species occur where, when, with what other species, and under what environmental conditions,” said lead author Courtney Davis, a researcher at the Cornell Lab. “With that type of information, we can identify and prioritize landscapes of high conservation value — vital information in this era of ongoing biodiversity loss.”
    “This model is very general and is suitable for various tasks, provided there’s enough data,” said Carla Gomes, director of the Cornell Institute for Computational Sustainability. “This work on joint bird species distribution modeling is about predicting the presence and absence of species, but we are also developing models to estimate bird abundance — the number of individual birds per species. We’re also aiming to enhance the model by incorporating bird calls alongside visual observations.”
    Cross-disciplinary collaborations like this are necessary for the future of biodiversity conservation, according to Daniel Fink, researcher at the Cornell Lab and senior author of the study.
    “The task at hand is too big for ecologists to do on their own; we need the expertise of our colleagues in computer science and computational sustainability to develop targeted plans for landscape-scale conservation, restoration, and management around the world.”
    This work was funded by the National Science Foundation, The Leon Levy Foundation, The Wolf Creek Foundation, the Eric and Wendy Schmidt AI in Science Postdoctoral Fellowship (a Schmidt Futures program), the Air Force Office of Scientific Research, and the U.S. Department of Agriculture’s National Institute of Food and Agriculture. More
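
    A schematic sketch of joint species distribution modeling of the kind Gomes describes, using synthetic data, hypothetical covariates, and a generic off-the-shelf model rather than the published one, predicts presence or absence of many species at once from environmental conditions at each site.

    ```python
    # Toy joint species distribution model: predict presence/absence of many species
    # at once from site-level environmental covariates. Synthetic data only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_sites, n_species = 4000, 12

    # hypothetical covariates per site: elevation, tree cover, temperature, week of year, ...
    env = rng.normal(size=(n_sites, 6))
    # synthetic "community": each species responds to a few covariates, plus noise
    weights = rng.normal(size=(6, n_species))
    presence = (env @ weights + rng.normal(scale=0.5, size=(n_sites, n_species)) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(env, presence, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0)  # handles multi-label output natively
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print("mean per-species accuracy:", (pred == y_test).mean().round(3))
    # The fitted model answers "which species occur together, where, and under what
    # conditions", which is what makes such models useful for prioritizing landscapes.
    ```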

  • Could future AI crave a favorite food?

    Can artificial intelligence (AI) get hungry? Develop a taste for certain foods? Not yet, but a team of Penn State researchers is developing a novel electronic tongue that mimics how taste influences what we eat based on both needs and wants, providing a possible blueprint for AI that processes information more like a human being.
    Human behavior is complex, a nebulous compromise and interaction between our physiological needs and psychological urges. While artificial intelligence has made great strides in recent years, AI systems do not incorporate the psychological side of our human intelligence. For example, emotional intelligence is rarely considered as part of AI.
    “The main focus of our work was how could we bring the emotional part of intelligence to AI,” said Saptarshi Das, associate professor of engineering science and mechanics at Penn State and corresponding author of the study published recently in Nature Communications. “Emotion is a broad field and many researchers study psychology; however, for computer engineers, mathematical models and diverse data sets are essential for design purposes. Human behavior is easy to observe but difficult to measure and that makes it difficult to replicate in a robot and make it emotionally intelligent. There is no real way right now to do that.”
    Das noted that our eating habits are a good example of emotional intelligence and the interaction between the physiological and psychological state of the body. What we eat is heavily influenced by the process of gustation, which refers to how our sense of taste helps us decide what to consume based on flavor preferences. This is different than hunger, the physiological reason for eating.
    “If you are someone fortunate to have all possible food choices, you will choose the foods you like most,” Das said. “You are not going to choose something that is very bitter, but likely try for something sweeter, correct?”
    Anyone who has felt full after a big lunch and was still tempted by a slice of chocolate cake at an afternoon workplace party knows that a person can eat something they love even when not hungry.
    “If you are given food that is sweet, you would eat it in spite of your physiological condition being satisfied, unlike if someone gave you, say, a hunk of meat,” Das said. “Your psychological condition still wants to be satisfied, so you will have the urge to eat the sweets even when not hungry.”
    While there are still many questions regarding the neuronal circuits and molecular-level mechanisms within the brain that underlie hunger perception and appetite control, Das said, advances such as improved brain imaging have offered more information on how these circuits work in regard to gustation. More
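
    As a conceptual toy of the interplay Das describes, not the Penn State device itself, the decision to eat can be caricatured as a thresholded combination of a physiological signal (hunger) and a psychological one (how appealing the food tastes). The weights and threshold below are arbitrary.

    ```python
    # Toy "gustation" decision rule mixing a physiological and a psychological signal.
    def eat_decision(hunger, appeal, w_hunger=1.0, w_appeal=0.8, threshold=0.6):
        """Return True if the combined drive to eat exceeds a threshold.

        hunger and appeal are assumed to lie in [0, 1]."""
        drive = w_hunger * hunger + w_appeal * appeal
        return drive > threshold

    print(eat_decision(hunger=0.1, appeal=0.9))   # already full, but chocolate cake -> True
    print(eat_decision(hunger=0.1, appeal=0.2))   # already full, plain hunk of meat -> False
    print(eat_decision(hunger=0.9, appeal=0.2))   # very hungry -> True
    ```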

  • These robots helped explain how insects evolved two distinct strategies for flight

    Robots built by engineers at the University of California San Diego helped achieve a major breakthrough in understanding how insect flight evolved, described in the Oct. 4, 2023 issue of the journal Nature. The study is the result of a six-year collaboration between roboticists at UC San Diego and biophysicists at the Georgia Institute of Technology.
    The findings focus on how the two different modes of flight evolved in insects. Most insects use their brains to activate their flight muscles each wingstroke, just as we activate the muscles in our legs with every stride we take. This is called synchronous flight. But some insects, such as mosquitoes, are able to flap their wings without their nervous system commanding each wingstroke. Instead, the muscles of these animals automatically activate when they are stretched. This is called asynchronous flight. Asynchronous flight is found in some insects across all four major insect groups and allows them to flap their wings at great speeds; some mosquitoes, for example, beat their wings more than 800 times a second.
    For years, scientists assumed the four groups of insects (bees, flies, beetles and true bugs, or Hemiptera) all evolved asynchronous flight separately. However, a new analysis performed by the Georgia Tech team concludes that asynchronous flight actually evolved just once, in a common ancestor. Some groups of insect species then reverted to synchronous flight, while others remained asynchronous.
    The finding that some insects, such as moths, evolved from synchronous to asynchronous and then back to synchronous flight led the researchers down a path of investigation that required insect, robot, and mathematical experiments. This new evolutionary finding posed two fundamental questions: do the muscles of moths exhibit signatures of their prior asynchrony, and how can an insect maintain both synchronous and asynchronous properties in its muscles and still be capable of flight?
    The ideal specimen to study these questions of synchronous and asynchronous evolution is the Hawkmoth. That’s because moths use synchronous flight, but the evolutionary record tells us they have ancestors with asynchronous flight.
    Researchers at Georgia Tech first sought to measure whether signatures of asynchrony can be observed in the Hawkmoth muscle. Through mechanical characterization of the muscle, they discovered that Hawkmoths still retain the physical characteristics of asynchronous flight muscles, even if they are not used.
    How can an insect have both synchronous and asynchronous properties and still fly? To answer this question, the researchers realized that using robots would allow them to perform experiments that could never be done on insects. For example, they would be able to equip the robots with motors that could emulate combinations of asynchronous and synchronous muscles and test what transitions might have occurred during the millions of years of evolution of flight. More
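
    A minimal toy model of the two flight modes, built from a damped spring-mass “wing” with made-up parameters rather than the robots used in the study, shows the key difference: a synchronous drive commands every wingstroke, while delayed stretch-activated feedback lets the wing oscillate on its own with no per-stroke command.

    ```python
    # Toy wing oscillator: periodic "brain" drive (synchronous) versus delayed
    # stretch-activation feedback (asynchronous). Parameters are arbitrary.
    import numpy as np

    dt, steps = 1e-4, 200_000            # 20 s of simulated time
    m, c, k = 1e-3, 0.02, 40.0           # mass, damping, stiffness; resonance near 32 Hz
    delay = int(0.008 / dt)              # feedback delay, roughly a quarter of the natural period

    def simulate(mode):
        x = np.zeros(steps)              # wing displacement history
        v = 0.0
        x[0] = 1e-3                      # small initial perturbation
        for i in range(1, steps):
            if mode == "synchronous":
                # brain-like command: periodic forcing at 25 wingstrokes per second
                force = 0.05 * np.sin(2 * np.pi * 25 * i * dt)
            else:
                # stretch-activated muscle: force depends on displacement a short time ago
                force = 0.08 * np.tanh(-100.0 * x[i - 1 - delay]) if i > delay else 0.0
            a = (force - c * v - k * x[i - 1]) / m
            v += a * dt                  # semi-implicit Euler step
            x[i] = x[i - 1] + v * dt
        return x

    for mode in ("synchronous", "asynchronous"):
        amplitude = np.abs(simulate(mode)[-20_000:]).max()   # amplitude over the final 2 s
        print(f"{mode}: steady wingbeat amplitude ~ {amplitude:.3f}")
    ```

    The asynchronous case keeps beating near the wing’s own resonant frequency even though nothing in the code commands individual wingstrokes, which is how stretch activation lets some insects reach very high wingbeat rates.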

  • AI drones to help farmers optimize vegetable yields

    For reasons of food security and economic incentive, farmers continuously seek to maximize their marketable crop yields. Because plants grow inconsistently, there will inevitably be variations in the quality and size of individual crops at harvest time. Finding the optimal time to harvest is therefore a priority for farmers. A new approach making heavy use of drones and artificial intelligence demonstrably improves this estimation by carefully and accurately analyzing individual crops to assess their likely growth characteristics.
    Some optimistic science fiction stories talk about a post-scarcity future, where human needs are catered for and hard labor is provided by machines. In some ways, this vision anticipates elements of current technological progress. One such area is agricultural research, where automation has been making an impact. For the first time, researchers, including those from the University of Tokyo, have demonstrated a largely automated system to improve crop yields, which can benefit many and may help pave the way for future systems that could one day harvest crops directly.
    “The idea is relatively simple, but the design, implementation and execution is extraordinarily complex,” said Associate Professor Wei Guo from the Laboratory of Field Phenomics. “If farmers know the ideal time to harvest crop fields, they can reduce waste, which is good for them, for consumers and the environment. But optimum harvest times are not an easy thing to predict and ideally require detailed knowledge of each plant; such data would be cost and time prohibitive if people were employed to collect it. This is where the drones come in.”
    Guo has a background in both computer science and agricultural science, so he is ideally suited to finding ways that cutting-edge hardware and software can aid agriculture. He and his team have demonstrated that some low-cost drones with specialized software can image and analyze young plants — broccoli in the case of this study — and accurately predict their expected growth characteristics. The drones carry out the imaging process multiple times and do so without human interaction, so the system requires very little labor.
    “It might surprise some to know that harvesting a field as little as a day before or after the optimal time could reduce the potential income of that field for the farmer by 3.7% to as much as 20.4%,” said Guo. “But with our system, drones identify and catalog every plant in the field, and their imaging data feeds a model that uses deep learning to produce easy-to-understand visual data for farmers. Given the current relatively low costs of drones and computers, a commercial version of this system should be within reach of many farmers.”
    The main challenge the team faced was in the image analysis and deep learning aspects. Collecting the image data itself is relatively trivial, but given the way plants move in the wind and how the light changes with time and the seasons, the image data contains a lot of variation that machines often find hard to compensate for. So, when training their system, the team had to invest a huge amount of time labeling various aspects of the images the drones might see, in order to help the system learn to correctly identify what it was seeing. The vast data throughput was also challenging — image data was often on the order of trillions of pixels, tens of thousands of times larger than the images from even a high-end smartphone camera.
    “I’m inspired to find more ways that plant phenotyping (measuring of plant growth traits) can go from the lab to the field in order to help solve the major problems we face,” said Guo. More
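
    As a schematic stand-in for the deep-learning step described above, using synthetic images and made-up labels rather than the study’s model or data, a small convolutional network can be trained to regress a per-plant growth measure, such as days until optimal harvest, from an image crop.

    ```python
    # Tiny CNN regression sketch: image crop of one plant -> predicted growth measure.
    # Random tensors stand in for labeled drone imagery.
    import torch
    import torch.nn as nn

    class GrowthRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

        def forward(self, x):
            return self.head(self.features(x))

    model = GrowthRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # stand-in batch: 8 RGB crops of 64x64 pixels, each labeled with days to harvest
    images = torch.rand(8, 3, 64, 64)
    days_to_harvest = torch.rand(8, 1) * 10

    for step in range(20):                     # minimal training loop, for illustration only
        optimizer.zero_grad()
        loss = loss_fn(model(images), days_to_harvest)
        loss.backward()
        optimizer.step()
    print(f"final training loss: {loss.item():.3f}")
    ```

    A production system would train on large numbers of labeled drone images rather than random tensors, which is exactly the labeling effort described above.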

  • Insect cyborgs: Towards precision movement

    Insect cyborgs may sound like science fiction, but they are a relatively new phenomenon based on using electrical stimuli to control the movement of insects. These hybrid insect-computer robots, as they are scientifically called, herald the future of small, highly mobile and efficient devices.
    Despite significant progress being made, however, further advances are complicated by the vast differences between different insects’ nervous and muscle systems.
    In a recent study published in the journal eLife, an international research group has studied the relationship between electrical stimulation in stick insects’ leg muscles and the resultant torque (the twisting force that makes the leg move).
    They focused on three leg muscles that play essential roles in insect movement: one for propulsion, one for joint stiffness, and one for transitioning between standing and swinging the leg. In the experiments, the researchers kept the stick insects’ bodies fixed and electrically stimulated one of the three leg muscles to produce walking-like movements.
    The research was led by Dai Owaki, associate professor at the Department of Robotics at Tohoku University’s Graduate School of Engineering. Experiments were conducted at Bielefeld University, Germany, in a lab run by Professors Volker Dürr and Josef Schmitz.
    “Based on our measurements, we could generate a model that predicted the torque created when different patterns of electrical stimulation were applied to a leg muscle,” points out Owaki. “We also identified a nearly linear relationship between the duration of the electrical stimulation and the torque generated, meaning we could predict how much twisting force we would generate by just looking at the length of the applied electrical pulse.”
    Using only a few measurements, Owaki and his collaborators could calibrate this model for each individual insect. As a result of these findings, scientists will be able to refine the motor control of biohybrid robots, making their movements more precise.
    While the team knows their insights could lead to adaptable and highly mobile devices with various applications, they still cite some key challenges that need to be addressed. “First, model testing needs to be implemented in free-walking insects, and the electrical stimuli must be refined to mimic natural neuromuscular signals more closely,” adds Owaki. More
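
    The nearly linear duration-torque relationship Owaki describes can be illustrated with a simple least-squares fit; the calibration numbers below are invented, not measurements from the study.

    ```python
    # Fit torque as a linear function of stimulation pulse duration, then invert the
    # fit to choose a pulse length for a desired torque. Hypothetical numbers.
    import numpy as np

    pulse_ms = np.array([20, 40, 60, 80, 100, 120])      # stimulation duration (ms)
    torque = np.array([1.1, 2.0, 3.2, 4.1, 5.0, 6.2])    # measured torque (arbitrary units)

    slope, intercept = np.polyfit(pulse_ms, torque, 1)   # least-squares linear fit

    def pulse_for_torque(target):
        """Invert the fitted line to choose a pulse duration for a desired torque."""
        return (target - intercept) / slope

    print(f"torque ~ {slope:.3f} * pulse + {intercept:.2f}")
    print(f"pulse for a torque of 3.5: {pulse_for_torque(3.5):.0f} ms")
    ```

    Repeating such a fit with only a handful of measurements per animal is what allows the control to be tailored to each individual insect, as described above.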