More stories

  • Flapping frequency of birds, insects, bats and whales described by universal equation

    A single universal equation can closely approximate the frequency of wingbeats and fin strokes made by birds, insects, bats and whales, despite their different body sizes and wing shapes, Jens Højgaard Jensen and colleagues from Roskilde University in Denmark report in a new study in the open-access journal PLOS ONE, publishing June 5.
    The ability to fly has evolved independently in many different animal groups. To minimize the energy required to fly, biologists expect the frequency at which animals flap their wings to be set by the natural resonance frequency of the wing. However, finding a universal mathematical description of flapping flight has proved difficult. The researchers used dimensional analysis to derive an equation that describes the frequency of wingbeats of flying birds, insects and bats, and of the fin strokes of diving animals, including penguins and whales.
    They found that flying and diving animals beat their wings or fins at a frequency that is proportional to the square root of their body mass, divided by their wing area. They tested the accuracy of the equation by plotting its predictions against published data on wingbeat frequencies for bees, moths, dragonflies, beetles, mosquitoes, bats, and birds ranging in size from hummingbirds to swans.
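    The scaling can be made explicit with a back-of-the-envelope dimensional argument. The sketch below is a reconstruction of the standard reasoning, assuming that the lift generated by flapping must balance the animal's weight; the appearance of fluid density and gravity, and the proportionality constant, are assumptions of this sketch rather than quotations from the study.

    ```latex
    % Force balance: lift ~ (fluid density) x (wing-tip speed)^2 x (wing area),
    % with wing-tip speed scaling as f * sqrt(A).
    \[
      m g \;\sim\; \rho \,\bigl(f\sqrt{A}\bigr)^{2} A \;=\; \rho\, f^{2} A^{2}
      \quad\Longrightarrow\quad
      f \;\sim\; \frac{1}{A}\sqrt{\frac{m g}{\rho}} \;\propto\; \frac{\sqrt{m}}{A},
    \]
    % where m is body mass, A is wing (or fin) area, \rho is the density of the
    % surrounding air or water, g is gravitational acceleration, and the
    % proportionality constant is fitted to the animal data.
    ```

    At fixed fluid density and gravitational acceleration, this reduces to the square-root-of-mass divided by wing-area scaling reported in the study.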
    The researchers also compared the equation’s predictions against published data on fin stroke frequencies for penguins and several species of whale, including humpbacks and northern bottlenose whales. The relationship between body mass, wing area and wingbeat frequency shows little variation across flying and diving animals, despite huge differences in their body size, wing shape and evolutionary history, they found. Finally, they estimated that an extinct pterosaur (Quetzalcoatlus northropi), the largest known flying animal, beat its 10-square-meter wings at a frequency of 0.7 hertz.
    The study shows that despite huge physical differences, animals as distinct as butterflies and bats have evolved a relatively constant relationship between body mass, wing area and wingbeat frequency. The researchers note that for swimming animals they did not find publications containing all the required information; data from different publications were pieced together to make comparisons, and in some cases animal density was estimated from other information. Furthermore, extremely small animals, smaller than any yet discovered, would likely not fit the equation, because the physics of fluid dynamics changes at such small scales. This could have implications for the design of future flying nanobots. The authors say that the equation is the simplest mathematical explanation that accurately describes wingbeats and fin strokes across the animal kingdom.
    The authors add: “Differing almost a factor 10000 in wing/fin-beat frequency, data for 414 animals from the blue whale to mosquitoes fall on the same line. As physicists, we were surprised to see how well our simple prediction of the wing-beat formula works for such a diverse collection of animals.”

  • AIs are irrational, but not in the same way that humans are

    The Large Language Models behind popular generative AI platforms such as ChatGPT gave different answers each time they were asked the same reasoning test, and did not improve when given additional context, finds a new study from researchers at UCL.
    The study, published in Royal Society Open Science, tested the most advanced Large Language Models (LLMs) using cognitive psychology tests to gauge their capacity for reasoning. The results highlight the importance of understanding how these AIs ‘think’ before entrusting them with tasks, particularly those involving decision-making.
    In recent years, the LLMs that power generative AI apps like ChatGPT have become increasingly sophisticated. Their ability to produce realistic text, images, audio and video has prompted concern about their capacity to steal jobs, influence elections and commit crime.
    Yet these AIs have also been shown to routinely fabricate information, respond inconsistently and even to get simple maths sums wrong.
    In this study, researchers from UCL systematically analysed whether seven LLMs were capable of rational reasoning. A common definition of a rational agent (human or artificial), which the authors adopted, is one that reasons according to the rules of logic and probability. An irrational agent is one that does not reason according to these rules [1].
    The LLMs were given a battery of 12 common tests from cognitive psychology to evaluate reasoning, including the Wason task, the Linda problem and the Monty Hall problem [2]. Humans’ success rate on these tasks is low; in recent studies, only 14% of participants got the Linda problem right and 16% got the Wason task right.
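    For readers unfamiliar with these tasks, the short simulation below illustrates the counter-intuitive answer to one of them, the Monty Hall problem. It is an illustrative sketch added here, not code or material from the study.

    ```python
    # Monte Carlo check of the Monty Hall problem: switching doors wins ~2/3 of the
    # time, sticking with the first choice wins ~1/3. Purely illustrative.
    import random

    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)            # door hiding the prize
            choice = random.randrange(3)           # contestant's first pick
            # Host opens a door that is neither the prize nor the contestant's pick.
            opened = next(d for d in range(3) if d != prize and d != choice)
            if switch:
                choice = next(d for d in range(3) if d != choice and d != opened)
            wins += (choice == prize)
        return wins / trials

    print(f"stick:  {play(switch=False):.3f}")    # ~0.333
    print(f"switch: {play(switch=True):.3f}")     # ~0.667
    ```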
    The models exhibited irrationality in many of their answers, such as providing varying responses when asked the same question 10 times. They were prone to making simple mistakes, including basic addition errors and mistaking consonants for vowels, which led them to provide incorrect answers.

    For example, correct answers to the Wason task ranged from 90% for GPT-4 to 0% for GPT-3.5 and Google Bard. Llama 2 70b, which answered correctly 10% of the time, mistook the letter K for a vowel and so answered incorrectly.
    While most humans would also fail to answer the Wason task correctly, it is unlikely that this would be because they didn’t know what a vowel was.
    Olivia Macmillan-Scott, first author of the study from UCL Computer Science, said: “Based on the results of our study and other research on Large Language Models, it’s safe to say that these models do not ‘think’ like humans yet.
    “That said, the model with the largest dataset, GPT-4, performed a lot better than other models, suggesting that they are improving rapidly. However, it is difficult to say how this particular model reasons because it is a closed system. I suspect there are other tools in use that you wouldn’t have found in its predecessor GPT-3.5.”
    Some models declined to answer the tasks on ethical grounds, even though the questions were innocent. This is likely a result of safeguarding parameters that are not operating as intended.
    The researchers also provided additional context for the tasks, which has been shown to improve people’s responses. However, the LLMs tested didn’t show any consistent improvement.

    Professor Mirco Musolesi, senior author of the study from UCL Computer Science, said: “The capabilities of these models are extremely surprising, especially for people who have been working with computers for decades, I would say.
    “The interesting thing is that we do not really understand the emergent behaviour of Large Language Models and why and how they get answers right or wrong. We now have methods for fine-tuning these models, but then a question arises: if we try to fix these problems by teaching the models, do we also impose our own flaws? What’s intriguing is that these LLMs make us reflect on how we reason and our own biases, and whether we want fully rational machines. Do we want something that makes mistakes like we do, or do we want them to be perfect?”
    The models tested were GPT-4, GPT-3.5, Google Bard, Claude 2, Llama 2 7b, Llama 2 13b and Llama 2 70b.
    [1] Stein, E. (1996). Without Good Reason: The Rationality Debate in Philosophy and Cognitive Science. Clarendon Press.
    [2] These tasks and their solutions are available online. An example is the Wason task:
    The Wason task
    Check the following rule: If there is a vowel on one side of the card, there is an even number on the other side.
    You now see four cards: E, K, 4 and 7. Which of these cards must be turned over to check the rule?
    Answer: E and 7, as these are the only cards that can violate the rule.
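    The logic of the answer can be checked mechanically. The snippet below is a small illustrative sketch (not part of the study) that asks, for each visible face, whether the hidden face could possibly violate the rule:

    ```python
    # Rule: if a card shows a vowel on one side, it has an even number on the other.
    # A card must be turned over only if its hidden face could violate that rule.

    def must_turn(visible):
        if visible.isalpha():
            # A letter card can only violate the rule if it shows a vowel
            # (and hides an odd number).
            return visible in "AEIOU"
        # A number card can only violate the rule if it shows an odd number
        # (and hides a vowel).
        return int(visible) % 2 == 1

    cards = ["E", "K", "4", "7"]
    print([card for card in cards if must_turn(card)])  # -> ['E', '7']
    ```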

  • Fighting fires from space in record time: How AI could prevent devastating wildfires

    Australian scientists are getting closer to detecting bushfires in record time, thanks to cube satellites with onboard AI now able to detect fires from space 500 times faster than traditional on-ground processing of imagery.
    Remote sensing and computer science researchers have overcome the limitations of processing and compressing large amounts of hyperspectral imagery on board the smaller, more cost-effective cube satellites before sending it to the ground for analysis, saving precious time and energy.
    The breakthrough, using artificial intelligence, means that bushfires can be detected from space earlier, even before they take hold and generate large amounts of heat, allowing ground crews to respond more quickly and prevent loss of life and property.
    A project funded by the SmartSat CRC and led by the University of South Australia (UniSA) has used cutting-edge onboard AI technology to develop an energy-efficient early fire smoke detection system for South Australia’s first cube satellite, Kanyini.
    The Kanyini mission is a collaboration between the South Australian Government, SmartSat CRC and industry partners to launch a 6U CubeSat into low Earth orbit to detect bushfires and monitor inland and coastal water quality.
    Equipped with a hyperspectral imager, the satellite sensor captures reflected light from Earth in different wavelengths to generate detailed surface maps for various applications, including bushfire monitoring, water quality assessment and land management.
    Lead researcher UniSA geospatial scientist Dr Stefan Peters says that, traditionally, Earth observation satellites have not had the onboard processing capabilities to analyse complex images of Earth captured from space in real time.

    His team, which includes scientists from UniSA, Swinburne University of Technology and Geoscience Australia, has overcome this by building a lightweight AI model that can detect smoke within the available onboard processing, power consumption and data storage constraints of cube satellites.
    Compared with on-ground processing of hyperspectral satellite imagery to detect fires, the onboard AI model reduced the volume of data downlinked to 16% of its original size while consuming 69% less energy.
    The onboard AI model also detected fire smoke 500 times faster than traditional on-ground processing.
    “Smoke is usually the first thing you can see from space before the fire gets hot and big enough for sensors to identify it, so early detection is crucial,” Dr Peters says.
    To demonstrate the AI model, the researchers used machine learning to train it on simulated satellite imagery of recent Australian bushfires to detect smoke in an image.
    “For most sensor systems, only a fraction of the data collected contains critical information related to the purpose of a mission. Because the data can’t be processed on board large satellites, all of it is downlinked to the ground where it is analysed, taking up a lot of space and energy. We have overcome this by training the model to differentiate smoke from cloud, which makes it much faster and more efficient.”
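    The downlink-reduction idea can be sketched in a few lines: classify small tiles of the hyperspectral cube on board and transmit only those flagged as likely smoke. The code below is a simplified, hypothetical illustration; the tile size, the tiny linear classifier and the threshold are assumptions of this sketch, not the Kanyini model.

    ```python
    # Hypothetical onboard triage: keep only tiles whose mean spectrum scores as "smoke".
    import numpy as np

    def flag_smoke_tiles(cube, weights, bias, threshold=0.5, tile=32):
        """cube: hyperspectral image (rows, cols, bands); returns tiles worth downlinking."""
        rows, cols, bands = cube.shape
        keep = []
        for r in range(0, rows, tile):
            for c in range(0, cols, tile):
                patch = cube[r:r + tile, c:c + tile, :]
                features = patch.reshape(-1, bands).mean(axis=0)             # mean spectrum of the tile
                score = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))   # logistic "smoke" score
                if score > threshold:
                    keep.append((r, c, patch))
        return keep  # only this fraction of the cube is sent to the ground

    # Random data and weights, purely to show the control flow.
    cube = np.random.rand(256, 256, 30).astype(np.float32)
    weights, bias = np.random.randn(30) * 0.1, -0.2
    print(f"{len(flag_smoke_tiles(cube, weights, bias))} of {(256 // 32) ** 2} tiles flagged")
    ```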
    Using a past fire event in the Coorong as a case study, the simulated Kanyini onboard AI approach took less than 14 minutes to detect the smoke and send the data to the South Pole ground station.

    “This research shows there are significant benefits of onboard AI compared to traditional on-ground processing,” Dr Peters says. “This will not only prove invaluable in the event of bushfires but also serve as an early warning system for other natural disasters.”
    The research team hopes to demonstrate the onboard AI fire detection system in orbit in 2025 when the Kanyini mission is operational.
    “Once we have ironed out any issues, we hope to commercialise the technology and employ it on a CubeSat constellation, aiming to contribute to early fire detection within an hour.”
    A video explaining the research is also available at: https://youtu.be/dKQZ8V2Zagk

  • Babies use ‘helpless’ infant period to learn powerful foundation models, just like ChatGPT

    Babies’ brains are not as immature as previously thought, rather they are using the period of postnatal ‘helplessness’ to learn powerful foundation models similar to those underpinning generative Artificial Intelligence, according to a new study.
    The study, led by a Trinity College Dublin neuroscientist and just published in the journal Trends in Cognitive Sciences, finds for the first time that the classic explanation for infant helplessness is not supported by modern brain data.
    Compared to many animals, humans are helpless for a long time after birth. Many animals, such as horses and chickens, can walk on the day they are born. This protracted period of helplessness puts human infants at risk and places a huge burden on the parents, but surprisingly has survived evolutionary pressure.
    “Since the 1960s scientists have thought that the helplessness exhibited by human babies is due to the constraints of birth. The belief was that with big heads human babies have to be born early, resulting in immature brains and a helpless period that extends up to one year of age. We wanted to find out why human babies were helpless for such a long period,” explains Professor Rhodri Cusack, Professor of Cognitive Neuroscience, and lead author of the paper.
    The research team comprised Prof. Cusack, who measures development of the infant brain and mind using neuroimaging; Prof. Christine Charvet, Auburn University, USA, who compares brain development across species; and Dr. Marc’Aurelio Ranzato, a senior AI researcher at DeepMind.
    “Our study compared brain development across animal species. It drew from a long-standing project, Translating Time, that equates corresponding ages across species to establish that human brains are more mature than many other species at birth,” says Prof. Charvet.
    The researchers used brain imaging and found that many systems in the human infant’s brain are already functioning and processing the rich streams of information from the senses. This contradicts the long-held belief that many infant brain systems are too immature to function.

    The team then compared learning in humans with the latest machine learning models, where deep neural networks benefit from a ‘helpless’ period of pre-training.
    In the past, AI models were trained directly on the tasks for which they were needed; for example, a self-driving car was trained to recognise what it sees on a road. Now, models are first pre-trained to find patterns within vast quantities of data, without performing any task of importance. The resulting foundation model is subsequently used to learn specific tasks. This has been found to lead to quicker learning of new tasks and better performance.
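    The pre-train-then-fine-tune pattern can be shown in a toy example. The sketch below, written with hypothetical random data, first learns a general-purpose encoder without any labels (the ‘helpless’ pre-training phase) and then quickly learns a specific task on top of it; it illustrates the general recipe only, not the models discussed in the paper.

    ```python
    # Toy foundation-model recipe: unsupervised pre-training, then task fine-tuning.
    import torch
    import torch.nn as nn

    unlabelled = torch.randn(1024, 32)                                 # plentiful "sensory" data, no labels
    task_x, task_y = torch.randn(64, 32), torch.randint(0, 2, (64,))   # small labelled task

    # 1) Pre-training: learn structure in the data with a reconstruction objective.
    encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
    decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 32))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    for _ in range(200):
        loss = nn.functional.mse_loss(decoder(encoder(unlabelled)), unlabelled)
        opt.zero_grad(); loss.backward(); opt.step()

    # 2) Fine-tuning: freeze the "foundation" encoder and learn a specific task quickly.
    for p in encoder.parameters():
        p.requires_grad = False
    head = nn.Linear(8, 2)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(100):
        loss = nn.functional.cross_entropy(head(encoder(task_x)), task_y)
        opt.zero_grad(); loss.backward(); opt.step()
    ```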
    “We propose that human infants similarly use the ‘helpless’ period in infancy to pre-train, learning powerful foundation models, which go on to underpin cognition in later life with high performance and rapid generalisation. This is very similar to the powerful machine learning models that have led to the big breakthroughs in generative AI in recent years, such as OpenAI’s ChatGPT or Google’s Gemini,” Prof. Cusack explained.
    The researchers say that future research on how babies learn could well inspire the next generation of AI models.
    “Although there have been big breakthroughs in AI, foundation models consume vast quantities of energy and require vastly more data than babies. Understanding how babies learn may inspire the next generation of AI models. The next steps in research would be to directly compare learning in brains and AI,” he concluded.

  • A cracking discovery — eggshell waste can recover rare earth elements needed for green energy

    A collaborative team of researchers has made a cracking discovery with the potential to make a significant impact in the sustainable recovery of rare earth elements (REEs), which are in increasing demand for use in green energy technologies. The team found that humble eggshell waste could recover REEs from water, offering a new, environmentally friendly method for their extraction.
    The researchers, from Trinity College Dublin’s School of Natural Sciences, and iCRAG, the Science Foundation Ireland research centre in applied geosciences, have just published their ground-breaking findings in the international journal ACS Omega.
    REEs, which are essential for the technologies used in electric cars and wind turbines, for example, are in increasing demand but in relatively short supply. As a result, scientists must find new, sustainable ways of extracting them from the environment, as current extraction methods are often environmentally harmful.
    Here, the researchers discovered that calcium carbonate (calcite) in eggshells can effectively absorb and separate these valuable REEs from water.
    The researchers placed eggshells in solutions containing REEs at temperatures ranging from a pleasant 25 °C to a scorching 205 °C, and for periods of up to three months. They found that the elements could enter the eggshells via diffusion along the calcite boundaries and the organic matrix and that, at higher temperatures, the rare earth elements formed new minerals on the eggshell surface.
    At 90 °C, the eggshell surface helped recover the rare earths through the formation of a compound called kozoite. As things got hotter, the eggshells underwent a complete transformation, with the calcite shells dissolving and being replaced by polycrystalline kozoite. And at the highest temperature of 205 °C, this mineral gradually transitioned into bastnasite, the stable rare earth carbonate mineral that industry uses to extract REEs for technology applications.
    This innovative method suggests that waste eggshells could be repurposed as a low-cost, eco-friendly material to help meet the growing demand for REEs, as the eggshells trap distinct rare earths within their structure over time.
    Lead author Dr Remi Rateau commented on the significance of the research, stating, “This study presents a potential innovative use of waste material that not only offers a sustainable solution to the problem of rare earth element recovery but also aligns with the principles of circular economy and waste valorisation.”
    Principal Investigator, Prof. Juan Diego Rodriguez-Blanco, emphasised the broader implications of the findings, adding: “By transforming eggshell waste into a valuable resource for rare earth recovery, we address critical environmental concerns associated with traditional extraction methods and contribute to the development of greener technologies.”
    Work was conducted at the Department of Geology in the School of Natural Sciences, Trinity. iCRAG (Irish Centre for Research in Applied Geosciences) is an SFI centre dedicated to advancing geosciences research with a focus on sustainable resource management and environmental protection.

  • Top IT industry managers are divided on the need for face-to-face communication in the workplace

    Many managers are currently seeking a balance between digital and face-to-face communication. A recent study from the University of Eastern Finland shows that top IT industry managers have different views on when and for what purposes face-to-face communication in the workplace is needed.
    “Some top managers felt that all work tasks can be performed remotely with the help of digital communication. According to them, face-to-face communication is only necessary for maintaining interpersonal relationships and a sense of community,” says Doctoral Researcher Lotta Salin of the University of Eastern Finland.
    Others, however, felt that face-to-face communication is still needed, especially for complex tasks such as co-development, co-creation and co-innovation. Among the interviewees were also managers who felt that face-to-face communication in the workplace is important not only for maintaining interpersonal relationships but also for performing work tasks and maintaining a sense of community.
    Maintaining a sense of community requires special attention from management
    According to the study, managers shared the view that building and maintaining a sense of community in the workplace requires deliberate attention. Remote work and digital communication have become the new norm in the IT industry and in the work of many professionals, which means that managers must deliberately devote time and energy to communication that fosters community.
    The study suggests that building and maintaining a sense of community is possible through both face-to-face and digital communication.
    “Face-to-face encounters provide opportunities for spontaneous and informal discussions when team members get together for lunch, coffee or company celebrations, for example. However, regular on-camera meetings and the opportunity to see colleagues in real time also create the experience of being connected,” Salin notes.
    “Having an instant messaging platform where team members can exchange relaxed and informal messages fosters a sense of community. Through video, it is possible to organise activities that boost team spirit, ranging from remote coffee breaks for the team to entertaining video broadcasts aimed at the entire staff.”
    The findings emphasise that managers’ objectives for workplace communication are not solely related to work tasks but are significantly broader. In addition to focusing on work tasks, managers’ communication serves to build and maintain interpersonal relationships in the workplace. Moreover, managers aim to convey a certain image of themselves through communication, with some emphasising their own competence while others present themselves as easily approachable. Furthermore, building and maintaining a sense of community through communication has recently emerged as a new, yet equally important, objective in managers’ work.
    The researchers interviewed 33 top managers from major IT industry companies in Finland. The managers had long experience of leadership and e-leadership, and were members of their companies’ executive boards.

  • Great news, parents: You do have power over your tweens’ screen use

    Restricting use in bedrooms and at mealtimes has the biggest impact, but modeling good behavior is also important.
    For many parents, it can feel like curbing kids’ screen use is a losing battle. But new research from UC San Francisco (UCSF) has found the parenting practices that work best to curb screen time and addictive screen behavior: restricting screens in bedrooms and at mealtimes and modeling healthy practices at home.
    Researchers asked 12- to 13-year-olds how often they used screens for everything but school, including gaming, texting, social media, video chatting, watching videos and browsing the internet; and whether their screen use was problematic.
    Then, they asked parents how they used their own screens in front of their kids, how they monitored and restricted their kids’ screen use, and whether they used it to reward or punish behavior. They also asked about the family’s use of screens at mealtimes and the child’s use of screens in the bedroom.
    Using screens in bedrooms and at mealtimes was linked to increased screen time and more addictive use. But use went down when parents kept track of and limited their kids’ screen time, and when they modeled healthy behavior themselves.
    “These results are heartening because they give parents some concrete strategies they can use with their tweens and young teens: set screen time limits, keep track of your kids’ screen use, and avoid screens in bedrooms and at mealtimes,” said Jason Nagata, MD, a pediatrician at UCSF Benioff Children’s Hospitals and the first author of the study, publishing June 5 in Pediatric Research. “Also, try to practice what you preach.”
    Refining AAP guidance
    The study analyzed the effectiveness, in tweens, of the parenting strategies recommended by the American Academy of Pediatrics (AAP) for children and adolescents aged 5 to 18. It is one of the few studies to examine how parenting practices affect screen use in early adolescence, when children start to become more independent.

    “We wanted to look at young adolescents in particular, because they are at a stage when mobile phone and social media use often ramps up and sets the course for future habits,” Nagata said.
    The researchers collected data from 10,048 U.S. participants, 46% of whom were racial or ethnic minorities, from the Adolescent Brain Cognitive Development (ABCD) study.
    Parents were asked to rate, on a scale of 1 to 4, their level of agreement with such statements as, “My child falls asleep using a screen-based device.”
    The researchers then looked to see how the level of parental agreement predicted the children’s daily screen time, and found it went up 1.6 hours for each additional point related to bedroom screen use. The same held true for using screens at mealtimes, which added 1.24 hours. Poor modeling by parents added 0.66 hours.
    Limiting and monitoring their kids’ screen time reduced it by 1.29 hours and 0.83 hours, respectively. But using screen time as either a reward or a punishment was not effective, resulting in 0.36 more hours, as well as more problematic video game use.
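    Taken at face value, the reported associations can be combined into a rough back-of-the-envelope estimate. The helper below simply adds up the per-point effects quoted above; treating them as additive and independent is a simplifying assumption of this sketch, not the study’s statistical model.

    ```python
    # Per-point changes in estimated daily screen time (hours), as quoted above.
    EFFECT_HOURS_PER_POINT = {
        "bedroom_screen_use":              +1.60,
        "mealtime_screen_use":             +1.24,
        "poor_parental_modelling":         +0.66,
        "limiting_screen_time":            -1.29,
        "monitoring_screen_time":          -0.83,
        "screens_as_reward_or_punishment": +0.36,
    }

    def estimated_change_in_daily_hours(extra_points):
        """extra_points: practice -> additional points of parental agreement (assumed additive)."""
        return sum(EFFECT_HOURS_PER_POINT[name] * pts for name, pts in extra_points.items())

    # Example: one extra point of bedroom use, partly offset by one extra point of limiting.
    print(estimated_change_in_daily_hours({"bedroom_screen_use": 1,
                                           "limiting_screen_time": 1}))  # ~ +0.31 hours
    ```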
    Used in moderation, screens can help maintain social connections and foster community, but especially for children, problematic use can lead to mental health problems, as well as physical inactivity and problems with sleep.
    “Screen time at bedtime displaces sleep time, which is essential for health and development in young adolescents,” Nagata said. “Parents can consider keeping screens outside their children’s bedroom and turning off devices and notifications overnight.”

  • AI approach elevates plasma performance and stability across fusion devices

    Achieving a sustained fusion reaction is a delicate balancing act, requiring a sea of moving parts to come together to maintain a high-performing plasma: one that is dense enough, hot enough, and confined for long enough for fusion to take place.
    Yet as researchers push the limits of plasma performance, they have encountered new challenges for keeping plasmas under control, including one that involves bursts of energy escaping from the edge of a super-hot plasma. These edge bursts negatively impact overall performance and even damage the plasma-facing components of a reactor over time.
    Now, a team of fusion researchers led by engineers at Princeton and the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) has successfully deployed machine learning methods to suppress these harmful edge instabilities without sacrificing plasma performance.
    With their approach, which optimizes the system’s suppression response in real-time, the research team demonstrated the highest fusion performance without the presence of edge bursts at two different fusion facilities — each with its own set of operating parameters. The researchers reported their findings on May 11 in Nature Communications, underscoring the vast potential of machine learning and other artificial intelligence systems to quickly quash plasma instabilities.
    “Not only did we show our approach was capable of maintaining a high-performing plasma without instabilities, but we also showed that it can work at two different facilities,” said research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment. “We demonstrated that our approach is not just effective — it’s versatile as well.”
    The costs of high-confinement
    Researchers have long experimented with various ways to operate fusion reactors to achieve the conditions necessary for fusion. One of the most promising approaches involves operating a reactor in high-confinement mode, a regime characterized by the formation of a steep pressure gradient at the plasma’s edge that offers enhanced plasma confinement.

    However, the high-confinement mode has historically come hand-in-hand with instabilities at the plasma’s edge, a challenge that has required fusion researchers to find creative workarounds.
    One fix involves using the magnetic coils that surround a fusion reactor to apply magnetic fields to the edge of the plasma, breaking up the structures that might otherwise develop into a full-fledged edge instability. Yet this solution is imperfect: while successful at stabilizing the plasma, applying these magnetic perturbations typically leads to lower overall performance.
    “We have a way to control these instabilities, but in turn, we’ve had to sacrifice performance, which is one of the main motivations for operating in the high-confinement mode in the first place,” said Kolemen, who is also a staff research physicist at PPPL.
    The performance loss is partly due to the difficulty of optimizing the shape and amplitude of the applied magnetic perturbations, which in turn stems from the computational intensity of existing physics-based optimization approaches. These conventional methods involve a set of complex equations and can take tens of seconds to optimize a single point in time — far from ideal when plasma behavior can change in mere milliseconds. Consequently, fusion researchers have had to preset the shape and amplitude of the magnetic perturbations ahead of each fusion run, losing the ability to make real-time adjustments.
    “In the past, everything has had to be pre-programmed,” said co-first author SangKyeun Kim, a staff research scientist at PPPL and former postdoctoral researcher in Kolemen’s group. “That limitation has made it difficult to truly optimize the system, because it means that the parameters can’t be changed in real time depending on how the conditions of the plasma unfold.”
    Raising performance by lowering computation time
    The Princeton-led team’s machine learning approach slashes the computation time from tens of seconds to the millisecond scale, opening the door for real-time optimization. The machine learning model, which is a more efficient surrogate for existing physics-based models, can monitor the plasma’s status from one millisecond to the next and alter the amplitude and shape of the magnetic perturbations as needed. This allows the controller to strike a balance between edge burst suppression and high fusion performance, without sacrificing one for the other.
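    In outline, the control strategy can be thought of as follows: a fast learned surrogate stands in for the slow physics code, and at every control step the controller scans candidate perturbation settings and applies the one the surrogate predicts will keep performance high without triggering an edge burst. The code below is a conceptual sketch with made-up weights and thresholds, not the PPPL controller.

    ```python
    # Conceptual surrogate-based control step: evaluate candidate perturbation
    # amplitudes with a tiny neural-network surrogate and pick the best safe one.
    import numpy as np

    def surrogate(plasma_state, amplitude, W1, b1, W2, b2):
        """Tiny MLP standing in for the slow physics code; returns (performance, burst_risk)."""
        x = np.concatenate([plasma_state, [amplitude]])
        h = np.tanh(W1 @ x + b1)
        performance, burst_risk = W2 @ h + b2
        return performance, burst_risk

    def choose_amplitude(plasma_state, params, candidates=np.linspace(0.0, 1.0, 21)):
        """Each control step (~ms), pick the amplitude with the best predicted performance
        among the candidates the surrogate judges safe (illustrative risk threshold)."""
        best_amp, best_perf = candidates[0], -np.inf
        for amp in candidates:
            perf, risk = surrogate(plasma_state, amp, *params)
            if risk < 0.0 and perf > best_perf:
                best_amp, best_perf = amp, perf
        return best_amp

    # Random weights and a random plasma state, purely to show the loop structure.
    rng = np.random.default_rng(0)
    params = (rng.standard_normal((16, 9)), rng.standard_normal(16),
              rng.standard_normal((2, 16)), rng.standard_normal(2))
    state = rng.standard_normal(8)            # stand-in for real-time diagnostic measurements
    print(choose_amplitude(state, params))
    ```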

    “With our machine learning surrogate model, we reduced the calculation time of a code that we wanted to use by orders of magnitude,” said co-first author Ricardo Shousha, a postdoctoral researcher at PPPL and former graduate student in Kolemen’s group.
    Because their approach is ultimately grounded in physics, the researchers said it would be straightforward to apply to different fusion devices around the world. In their paper, for instance, they demonstrated the success of their approach at both the KSTAR tokamak in South Korea and the DIII-D tokamak in San Diego. At both facilities, which each have a unique set of magnetic coils, the method achieved strong confinement and high fusion performance without harmful plasma edge bursts.
    “Some machine learning approaches have been critiqued for being solely data-driven, meaning that they’re only as good as the amount of quality data they’re trained on,” Shousha said. “But since our model is a surrogate of a physics code, and the principles of physics apply equally everywhere, it’s easier to extrapolate our work to other contexts.”
    The team is already working to refine their model to be compatible with other fusion devices, including planned future reactors such as ITER, which is currently under construction.
    One active area of work in Kolemen’s group involves enhancing their model’s predictive capabilities. For instance, the current model still relies on encountering several edge bursts over the course of the optimization process before working effectively, posing unwanted risks to future reactors. If instead the researchers can improve the model’s ability to recognize the precursors to these harmful instabilities, it could be possible to optimize the system without encountering a single edge burst.
    Kolemen said the current work is yet another example of the potential for AI to overcome longstanding bottlenecks in developing fusion power as a clean energy resource. Previously, researchers led by Kolemen successfully deployed a separate AI controller to predict and avoid another type of plasma instability in real time at the DIII-D tokamak.
    “For many of the challenges we have faced with fusion, we’ve gotten to the point where we know how to approach a solution but have been limited in our ability to implement those solutions by the computational complexity of our traditional tools,” said Kolemen. “These machine learning approaches have unlocked new ways of approaching these well-known fusion challenges.”