More stories

  • Babies use ‘helpless’ infant period to learn powerful foundation models, just like ChatGPT

    Babies’ brains are not as immature as previously thought; rather, babies are using the period of postnatal ‘helplessness’ to learn powerful foundation models similar to those underpinning generative Artificial Intelligence, according to a new study.
    The study, led by a Trinity College Dublin neuroscientist and just published in the journal Trends in Cognitive Sciences, finds for the first time that the classic explanation for infant helplessness is not supported by modern brain data.
    Compared to many animals, humans are helpless for a long time after birth; horses and chickens, for example, can walk on the day they are born. This protracted period of helplessness puts human infants at risk and places a huge burden on parents, yet it has surprisingly survived evolutionary pressure.
    “Since the 1960s scientists have thought that the helplessness exhibited by human babies is due to the constraints of birth. The belief was that with big heads human babies have to be born early, resulting in immature brains and a helpless period that extends up to one year of age. We wanted to find out why human babies were helpless for such a long period,” explains Professor Rhodri Cusack, Professor of Cognitive Neuroscience, and lead author of the paper.
    The research team comprised Prof. Cusack, who measures development of the infant brain and mind using neuroimaging; Prof. Christine Charvet, Auburn University, USA, who compares brain development across species; and Dr. Marc’Aurelio Ranzato, a senior AI researcher at DeepMind.
    “Our study compared brain development across animal species. It drew from a long-standing project, Translating Time, that equates corresponding ages across species to establish that human brains are more mature than many other species at birth,” says Prof. Charvet.
    The researchers used brain imaging and found that many systems in the human infant’s brain are already functioning and processing the rich streams of information from the senses. This contradicts the long-held belief that many infant brain systems are too immature to function.

    The team then compared learning in humans with the latest machine learning models, where deep neural networks benefit from a ‘helpless’ period of pre-training.
    In the past, AI models were trained directly on the tasks for which they were needed; for example, a self-driving car was trained to recognise what it sees on a road. Now, however, models are first pre-trained to find patterns within vast quantities of data, without performing any task of importance. The resulting foundation model is subsequently used to learn specific tasks. This has been found to lead ultimately to quicker learning of new tasks and better performance.
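    A minimal sketch of this two-stage recipe in PyTorch, using synthetic data (the network sizes, data, and downstream task here are illustrative assumptions, not details from the study):

    ```python
    import torch
    import torch.nn as nn

    # Stage 1: "helpless" pre-training: learn structure from unlabeled data.
    # A tiny autoencoder stands in for large-scale self-supervised learning.
    unlabeled = torch.randn(1024, 32)                 # synthetic sensory stream
    encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
    decoder = nn.Linear(16, 32)
    opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
    for _ in range(200):
        loss = nn.functional.mse_loss(decoder(encoder(unlabeled)), unlabeled)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: reuse the pre-trained encoder as a foundation for a specific task.
    # Only a small task head is trained, so the new task is learned quickly.
    labels = (unlabeled.sum(dim=1) > 0).long()        # synthetic downstream task
    head = nn.Linear(16, 2)
    opt2 = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(200):
        logits = head(encoder(unlabeled).detach())    # frozen foundation features
        loss = nn.functional.cross_entropy(logits, labels)
        opt2.zero_grad(); loss.backward(); opt2.step()
    ```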
    “We propose that human infants similarly use the ‘helpless’ period in infancy to pre-train, learning powerful foundation models, which go on to underpin cognition in later life with high performance and rapid generalisation. This is very similar to the powerful machine learning models that have led to the big breakthroughs in generative AI in recent years, such as OpenAI’s ChatGPT or Google’s Gemini,” Prof. Cusack explained.
    The researchers say that future research on how babies learn could well inspire the next generation of AI models.
    “Although there have been big breakthroughs in AI, foundation models consume vast quantities of energy and require vastly more data than babies. Understanding how babies learn may inspire the next generation of AI models. The next steps in research would be to directly compare learning in brains and AI,” he concluded.

  • A cracking discovery — eggshell waste can recover rare earth elements needed for green energy

    A collaborative team of researchers has made a cracking discovery with the potential to make a significant impact in the sustainable recovery of rare earth elements (REEs), which are in increasing demand for use in green energy technologies. The team found that humble eggshell waste could recover REEs from water, offering a new, environmentally friendly method for their extraction.
    The researchers, from Trinity College Dublin’s School of Natural Sciences, and iCRAG, the Science Foundation Ireland research centre in applied geosciences, have just published their ground-breaking findings in the international journal ACS Omega.
    REEs, which are essential for the technologies used in electric cars and wind turbines, for example, are in increasing demand but in relatively short supply. As a result, scientists must find new ways of extracting them from the environment — and in sustainable ways, as current extraction methods are often environmentally harmful.
    Here, the researchers discovered that calcium carbonate (calcite) in eggshells can effectively absorb and separate these valuable REEs from water.
    The researchers placed eggshells in solutions containing REEs at temperatures ranging from a pleasant 25 °C to a scorching 205 °C, for periods of up to three months. They found that the elements could enter the eggshells via diffusion along the calcite boundaries and the organic matrix and that, at higher temperatures, the rare earth elements formed new minerals on the eggshell surface.
    At 90 °C, a rare earth compound called kozoite formed on the eggshell surface. As temperatures rose further, the eggshells underwent a complete transformation, with the calcite shells dissolving and being replaced by polycrystalline kozoite. At the highest temperature of 205 °C, this mineral gradually transitioned into bastnasite, the stable rare earth carbonate mineral used by industry to extract REEs for technology applications.
    This innovative method suggests that waste eggshells could be repurposed as a low-cost, eco-friendly material to help meet the growing demand for REEs, as the eggshells trap distinct rare earths within their structure over time.
    Lead author Dr Remi Rateau commented on the significance of the research, stating, “This study presents a potential innovative use of waste material that not only offers a sustainable solution to the problem of rare earth element recovery but also aligns with the principles of circular economy and waste valorisation.”
    Principal Investigator, Prof. Juan Diego Rodriguez-Blanco, emphasised the broader implications of the findings, adding: “By transforming eggshell waste into a valuable resource for rare earth recovery, we address critical environmental concerns associated with traditional extraction methods and contribute to the development of greener technologies.”
    Work was conducted at the Department of Geology in the School of Natural Sciences, Trinity. iCRAG (Irish Centre for Research in Applied Geosciences) is an SFI centre dedicated to advancing geosciences research with a focus on sustainable resource management and environmental protection.

  • Top IT industry managers are divided on the need for face-to-face communication in the workplace

    Many managers are currently seeking a balance between digital and face-to-face communication. A recent study from the University of Eastern Finland shows that top IT industry managers have different views on when and for what purposes face-to-face communication in the workplace is needed.
    “Some top managers felt that all work tasks can be performed remotely with the help of digital communication. According to them, face-to-face communication is only necessary for maintaining interpersonal relationships and a sense of community,” says Doctoral Researcher Lotta Salin of the University of Eastern Finland.
    Others, however, felt that face-to-face communication is still needed, especially for complex tasks such as co-development, co-creation and co-innovation. Among the interviewees were also managers who felt that face-to-face communication in the workplace is important not only for maintaining interpersonal relationships but also for performing work tasks and maintaining a sense of community.
    Maintaining a sense of community requires special attention from management
    According to the study, managers shared the view that building and maintaining a sense of community in the workplace requires deliberate attention. Remote work and digital communication have become the new norm in the IT industry and in the work of many professionals, which means that managers must deliberately devote time and energy to fostering community-building communication.
    The study suggests that building and maintaining a sense of community is possible through both face-to-face and digital communication.
    “Face-to-face encounters provide opportunities for spontaneous and informal discussions when team members get together for lunch, coffee or company celebrations, for example. However, regular on-camera meetings and the opportunity to see colleagues in real time also create the experience of being connected,” Salin notes.
    “Having an instant messaging platform where team members can exchange relaxed and informal messages fosters a sense of community. Through video, it is possible to organise activities that boost team spirit, ranging from remote coffee breaks for the team to entertaining video broadcasts aimed at the entire staff.”
    The findings emphasise that managers’ objectives for workplace communication are not solely related to work tasks but are significantly broader. In addition to focusing on work tasks, managers’ communication highlights the building and maintenance of interpersonal relationships in the workplace. Moreover, managers aim to convey a certain image of themselves through communication, with some emphasising their own competence while others present themselves as easily approachable. Furthermore, building and maintaining a sense of community through communication has recently emerged as a new, yet equally important, objective in managers’ work.
    The researchers interviewed 33 top managers from major IT industry companies in Finland. The managers had long leadership and e-leadership experience, and they were members of the executive boards of their companies.

  • Great news, parents: You do have power over your tweens’ screen use

    Restricting use in bedrooms and at mealtimes has the biggest impact, but modeling good behavior is also important.
    For many parents, it can feel like curbing kids’ screen use is a losing battle. But new research from UC San Francisco (UCSF) has found the parenting practices that work best to curb screen time and addictive screen behavior: restricting screens in bedrooms and at mealtimes and modeling healthy practices at home.
    Researchers asked 12- to 13-year-olds how often they used screens for everything but school, including gaming, texting, social media, video chatting, watching videos and browsing the internet; and whether their screen use was problematic.
    Then, they asked parents how they used their own screens in front of their kids, how they monitored and restricted their kids’ screen use, and whether they used it to reward or punish behavior. They also asked about the family’s use of screens at mealtimes and the child’s use of screens in the bedroom.
    Using screens in bedrooms and at mealtimes was linked to increased time and addictive use. But use went down when parents kept track of and limited their kids’ screen time, and when they modeled healthy behavior themselves.
    “These results are heartening because they give parents some concrete strategies they can use with their tweens and young teens: set screen time limits, keep track of your kids’ screen use, and avoid screens in bedrooms and at mealtimes,” said Jason Nagata, MD, a pediatrician at UCSF Benioff Children’s Hospitals and the first author of the study, publishing June 5 in Pediatric Research. “Also, try to practice what you preach.”
    Refining AAP guidance
    The study analyzed the effectiveness, in tweens, of parenting strategies recommended by the American Academy of Pediatrics (AAP) for children and adolescents aged 5 to 18 years old. It is one of the few studies to examine how parenting practices affect screen use in early adolescence, when children start to become more independent.

    “We wanted to look at young adolescents in particular, because they are at a stage when mobile phone and social media use often ramps up and sets the course for future habits,” Nagata said.
    The researchers collected data from 10,048 U.S. participants, 46% of whom were racial or ethnic minorities, from the Adolescent Brain Cognitive Development (ABCD) study.
    Parents were asked to rate, on a scale of 1 to 4, their level of agreement with such statements as, “My child falls asleep using a screen-based device.”
    The researchers then looked to see how the level of parental agreement predicted the children’s daily screen time, and found it went up by 1.6 hours for each additional point related to bedroom screen use. The same held true for using screens at mealtimes, which added 1.24 hours per point. Poor modeling by parents added 0.66 hours.
    Limiting and monitoring their kids’ screen time reduced it by 1.29 hours and 0.83 hours, respectively. But using screen time as either a reward or a punishment was not effective, resulting in 0.36 more hours, as well as more problematic video game use.
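    As a rough illustration of how these reported per-point effects combine (the coefficients are the ones reported above; the example ratings are assumptions made purely for the arithmetic):

    ```python
    # Reported change in daily screen hours per additional agreement point.
    effects = {
        "bedroom_use":       +1.60,
        "mealtime_use":      +1.24,
        "poor_parent_model": +0.66,
        "limiting":          -1.29,
        "monitoring":        -0.83,
        "reward_punish":     +0.36,
    }

    # Hypothetical child: one extra point each for bedroom use, limiting, monitoring.
    extra_points = {"bedroom_use": 1, "limiting": 1, "monitoring": 1}

    change = sum(effects[k] * pts for k, pts in extra_points.items())
    print(f"Predicted change in daily screen time: {change:+.2f} hours")
    # Bedroom use adds 1.60 h; limiting and monitoring together remove 2.12 h.
    ```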
    Used in moderation, screens can help maintain social connections and foster community, but especially for children, problematic use can lead to mental health problems, as well as physical inactivity and problems with sleep.
    “Screen time at bedtime displaces sleep time, which is essential for health and development in young adolescents,” Nagata said. “Parents can consider keeping screens outside their children’s bedroom and turning off devices and notifications overnight.”

  • AI approach elevates plasma performance and stability across fusion devices

    Achieving a sustained fusion reaction is a delicate balancing act, requiring a sea of moving parts to come together to maintain a high-performing plasma: one that is dense enough, hot enough, and confined for long enough for fusion to take place.
    Yet as researchers push the limits of plasma performance, they have encountered new challenges for keeping plasmas under control, including one that involves bursts of energy escaping from the edge of a super-hot plasma. These edge bursts negatively impact overall performance and even damage the plasma-facing components of a reactor over time.
    Now, a team of fusion researchers led by engineers at Princeton and the U.S. Department of Energy’s Princeton Plasma Physics Laboratory (PPPL) has successfully deployed machine learning methods to suppress these harmful edge instabilities — without sacrificing plasma performance.
    With their approach, which optimizes the system’s suppression response in real time, the research team demonstrated the highest fusion performance without the presence of edge bursts at two different fusion facilities — each with its own set of operating parameters. The researchers reported their findings on May 11 in Nature Communications, underscoring the vast potential of machine learning and other artificial intelligence systems to quickly quash plasma instabilities.
    “Not only did we show our approach was capable of maintaining a high-performing plasma without instabilities, but we also showed that it can work at two different facilities,” said research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment. “We demonstrated that our approach is not just effective — it’s versatile as well.”
    The costs of high-confinement
    Researchers have long experimented with various ways to operate fusion reactors to achieve the necessary conditions for fusion. One of the most promising approaches involves operating a reactor in high-confinement mode, a regime characterized by the formation of a steep pressure gradient at the plasma’s edge that offers enhanced plasma confinement.

    However, the high-confinement mode has historically come hand-in-hand with instabilities at the plasma’s edge, a challenge that has required fusion researchers to find creative workarounds.
    One fix involves using the magnetic coils that surround a fusion reactor to apply magnetic fields to the edge of the plasma, breaking up the structures that might otherwise develop into a full-fledged edge instability. Yet this solution is imperfect: while successful at stabilizing the plasma, applying these magnetic perturbations typically leads to lower overall performance.
    “We have a way to control these instabilities, but in turn, we’ve had to sacrifice performance, which is one of the main motivations for operating in the high-confinement mode in the first place,” said Kolemen, who is also a staff research physicist at PPPL.
    The performance loss is partly due to the difficulty of optimizing the shape and amplitude of the applied magnetic perturbations, which in turn stems from the computational intensity of existing physics-based optimization approaches. These conventional methods involve a set of complex equations and can take tens of seconds to optimize a single point in time — far from ideal when plasma behavior can change in mere milliseconds. Consequently, fusion researchers have had to preset the shape and amplitude of the magnetic perturbations ahead of each fusion run, losing the ability to make real-time adjustments.
    “In the past, everything has had to be pre-programmed,” said co-first author SangKyeun Kim, a staff research scientist at PPPL and former postdoctoral researcher in Kolemen’s group. “That limitation has made it difficult to truly optimize the system, because it means that the parameters can’t be changed in real time depending on how the conditions of the plasma unfold.”
    Raising performance by lowering computation time
    The Princeton-led team’s machine learning approach slashes the computation time from tens of seconds to the millisecond scale, opening the door for real-time optimization. The machine learning model, which is a more efficient surrogate for existing physics-based models, can monitor the plasma’s status from one millisecond to the next and alter the amplitude and shape of the magnetic perturbations as needed. This allows the controller to strike a balance between edge burst suppression and high fusion performance, without sacrificing one for the other.
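    A heavily simplified sketch of the surrogate idea: train a small neural network to mimic a slow physics calculation, then query the fast surrogate inside the control loop. The “physics” function, network size, and single control variable below are stand-ins, not the actual PPPL codes:

    ```python
    import numpy as np
    import torch
    import torch.nn as nn

    # Stand-in for a slow physics code: maps a perturbation amplitude to a
    # performance score. The real optimization involves many more variables.
    def expensive_physics(amplitude):
        return np.sin(3 * amplitude) * np.exp(-amplitude ** 2)

    # Train a small neural-network surrogate on precomputed physics runs.
    x = np.linspace(-2, 2, 400, dtype=np.float32).reshape(-1, 1)
    y = expensive_physics(x).astype(np.float32)
    net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    xt, yt = torch.from_numpy(x), torch.from_numpy(y)
    for _ in range(500):
        loss = nn.functional.mse_loss(net(xt), yt)
        opt.zero_grad(); loss.backward(); opt.step()

    # In the control loop, the surrogate is cheap enough to query every cycle.
    candidates = torch.linspace(-2, 2, 1000).reshape(-1, 1)
    with torch.no_grad():
        best = candidates[net(candidates).argmax()]
    print(f"Amplitude chosen via surrogate: {best.item():.3f}")
    ```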

    “With our machine learning surrogate model, we reduced the calculation time of a code that we wanted to use by orders of magnitude,” said co-first author Ricardo Shousha, a postdoctoral researcher at PPPL and former graduate student in Kolemen’s group.
    Because their approach is ultimately grounded in physics, the researchers said it would be straightforward to apply to different fusion devices around the world. In their paper, for instance, they demonstrated the success of their approach at both the KSTAR tokamak in South Korea and the DIII-D tokamak in San Diego. At both facilities, which each have a unique set of magnetic coils, the method achieved strong confinement and high fusion performance without harmful plasma edge bursts.
    “Some machine learning approaches have been critiqued for being solely data-driven, meaning that they’re only as good as the amount of quality data they’re trained on,” Shousha said. “But since our model is a surrogate of a physics code, and the principles of physics apply equally everywhere, it’s easier to extrapolate our work to other contexts.”
    The team is already working to refine their model to be compatible with other fusion devices, including planned future reactors such as ITER, which is currently under construction.
    One active area of work in Kolemen’s group involves enhancing their model’s predictive capabilities. For instance, the current model still relies on encountering several edge bursts over the course of the optimization process before working effectively, posing unwanted risks to future reactors. If instead the researchers can improve the model’s ability to recognize the precursors to these harmful instabilities, it could be possible to optimize the system without encountering a single edge burst.
    Kolemen said the current work is yet another example of the potential for AI to overcome longstanding bottlenecks in developing fusion power as a clean energy resource. Previously, researchers led by Kolemen successfully deployed a separate AI controller to predict and avoid another type of plasma instability in real time at the DIII-D tokamak.
    “For many of the challenges we have faced with fusion, we’ve gotten to the point where we know how to approach a solution but have been limited in our ability to implement those solutions by the computational complexity of our traditional tools,” said Kolemen. “These machine learning approaches have unlocked new ways of approaching these well-known fusion challenges.”

  • Largest-ever antibiotic discovery effort uses AI to uncover potential cures in microbial dark matter

    Almost a century ago, the discovery of antibiotics like penicillin revolutionized medicine by harnessing the natural bacteria-killing abilities of microbes. Today, a new study co-led by researchers at the Perelman School of Medicine at the University of Pennsylvania suggests that natural-product antibiotic discovery is about to accelerate into a new era, powered by artificial intelligence (AI).
    The study, published in Cell, details how the researchers used a form of AI called machine learning to search for antibiotics in a vast dataset containing the recorded genomes of tens of thousands of bacteria and other primitive organisms. This unprecedented effort yielded nearly one million potential antibiotic compounds, with dozens showing promising activity in initial tests against disease-causing bacteria.
    “AI in antibiotic discovery is now a reality and has significantly accelerated our ability to discover new candidate drugs. What once took years can now be achieved in hours using computers,” said study co-senior author César de la Fuente, PhD, a Presidential Assistant Professor in Psychiatry, Microbiology, Chemistry, Chemical and Biomolecular Engineering, and Bioengineering.
    Nature has always been a good place to look for new medicines, especially antibiotics. Bacteria, ubiquitous on our planet, have evolved numerous antibacterial defenses, often in the form of short proteins (“peptides”) that can disrupt bacterial cell membranes and other critical structures. While the discovery of penicillin and other natural-product-derived antibiotics revolutionized medicine, the growing threat of antibiotic resistance has underscored the urgent need for new antimicrobial compounds.
    In recent years, de la Fuente and colleagues have pioneered AI-powered searches for antimicrobials. They have identified preclinical candidates in the genomes of contemporary humans, extinct Neanderthals and Denisovans, woolly mammoths, and hundreds of other organisms. One of the lab’s primary goals is to mine the world’s biological information for useful molecules, including antibiotics.
    For this new study, the research team used a machine learning platform to sift through multiple public databases containing microbial genomic data. The analysis covered 87,920 genomes from specific microbes as well as 63,410 mixes of microbial genomes — “metagenomes” — from environmental samples. This comprehensive exploration spanned diverse habitats around the planet.
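    For intuition, here is a toy sketch of what mining protein sequences for antimicrobial peptide (AMP) candidates can look like. The real study used a trained machine learning platform; the simple charge-and-hydrophobicity heuristic below is only an illustrative stand-in:

    ```python
    # Many known AMPs are short, positively charged, and partly hydrophobic;
    # score sliding windows of a protein sequence on those crude properties.
    HYDROPHOBIC = set("AILMFWVY")
    POSITIVE, NEGATIVE = set("KR"), set("DE")

    def candidate_score(peptide: str) -> float:
        charge = sum(aa in POSITIVE for aa in peptide) - sum(aa in NEGATIVE for aa in peptide)
        hydrophobic_frac = sum(aa in HYDROPHOBIC for aa in peptide) / len(peptide)
        return charge * hydrophobic_frac

    def mine(protein: str, window: int = 20, threshold: float = 1.0):
        for i in range(len(protein) - window + 1):
            pep = protein[i:i + window]
            if candidate_score(pep) >= threshold:
                yield pep

    # Example on a made-up sequence:
    sequence = "MKKLLKKAGWLAKKVLPHIRRLFKKGDEAAQLLNSAKKWFEK"
    print(list(mine(sequence)))
    ```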
    This extensive exploration succeeded in identifying 863,498 candidate antimicrobial peptides, more than 90 percent of which had never been described before. To validate these findings, the researchers synthesized 100 of these peptides and tested them against 11 disease-causing bacterial strains, including antibiotic-resistant strains of E. coli and Staphylococcus aureus.

    “Our initial screening revealed that 63 of these 100 candidates completely eradicated the growth of at least one of the pathogens tested, and often multiple strains,” de la Fuente said. “In some cases, these molecules were effective against bacteria at very low doses.”
    Promising results were also observed in preclinical animal models, where some of the potent compounds successfully stopped infections. Further analysis suggested that many of these candidate molecules destroy bacteria by disrupting their outer protective membranes, effectively popping them like balloons.
    The identified compounds originated from microbes living in a wide variety of habitats, including human saliva, pig guts, soil and plants, corals, and many other terrestrial and marine organisms. This validates the researchers’ broad approach to exploring the world’s biological data.
    Overall, the findings demonstrate the power of AI in discovering new antibiotics, providing multiple new leads for antibiotic developers, and signaling the start of a promising new era in antibiotic discovery.
    The team has published their repository of putative antimicrobial sequences, called AMPSphere, which is open access and freely available at https://ampsphere.big-data-biology.org/.

  • Accelerating the R&D of wearable tech: Combining collaborative robotics, AI

    Engineers at the University of Maryland (UMD) have developed a model that combines machine learning and collaborative robotics to overcome challenges in the design of materials used in wearable green tech.
    Led by Po-Yen Chen, assistant professor in UMD’s Department of Chemical and Biomolecular Engineering, the accelerated method to create aerogel materials used in wearable heating applications — published June 1 in the journal Nature Communications — could automate design processes for new materials.
    Similar to water-based gels, but with the liquid component replaced by air, aerogels are lightweight, porous materials used in thermal insulation and wearable technologies due to their mechanical strength and flexibility. But despite their seemingly simple nature, the aerogel assembly line is complex; researchers rely on time-intensive experiments and experience-based approaches to explore a vast design space and design the materials.
    To overcome these challenges, the research team combined robotics, machine learning algorithms, and materials science expertise to enable the accelerated design of aerogels with programmable mechanical and electrical properties. Their prediction model is built to generate sustainable products with a 95 percent accuracy rate.
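    A minimal sketch of the property-prediction step under stated assumptions (the recipe features, synthetic data, and model choice below are illustrative, not UMD’s actual pipeline):

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for robot-collected data: map aerogel recipe parameters
    # (e.g. nanosheet, cellulose, and gelatin fractions) to a measured property.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 3))
    y = 2 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(f"Held-out R^2: {model.score(X_te, y_te):.2f}")
    ```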
    “Materials science engineers often struggle to adopt machine learning design due to the scarcity of high-quality experimental data. Our workflow, which combines robotics and machine learning, not only enhances data quality and collection rates, but also assists researchers in navigating the complex design space,” said Chen.
    The team’s strong and flexible aerogels were made using conductive titanium nanosheets, as well as naturally occurring components such as cellulose (an organic compound found in plant cells) and gelatin (a collagen-derived protein found in animal tissue and bones).
    The team says their tool can also be expanded to meet other applications in aerogel design — such as green technologies used in oil spill cleanup, sustainable energy storage, and thermal energy products like insulating windows.
    “The blending of these approaches is putting us at the frontier of materials design with tailorable complex properties. We foresee leveraging this new scaleup production platform to design aerogels with unique mechanical, thermal, and electrical properties for harsh working environments,” said Eleonora Tubaldi, an assistant professor in mechanical engineering and collaborator in the study.
    Looking ahead, Chen’s group will conduct studies to understand the microstructures responsible for aerogels’ flexibility and strength. His work has been supported by a UMD Grand Challenges Team Project Grant for the programmable design of natural plastic substitutes, jointly awarded to Chen and UMD Mechanical Engineering Professor Teng Li.

  • Internet addiction affects the behavior and development of adolescents

    Adolescents with an internet addiction undergo changes in the brain that could lead to additional addictive behaviour and tendencies, finds a new study by UCL researchers.
    The study, published in PLOS Mental Health, reviewed 12 articles, published between 2013 and 2023, involving 237 young people aged 10-19 with a formal diagnosis of internet addiction.
    Internet addiction has been defined as a person’s inability to resist the urge to use the internet, negatively impacting their psychological wellbeing, as well as their social, academic and professional lives.
    The studies used functional magnetic resonance imaging (fMRI) to inspect the functional connectivity (how regions of the brain interact with each other) of participants with internet addiction, both while resting and completing a task.
    The effects of internet addiction were seen throughout multiple neural networks in the brains of adolescents. There was a mixture of increased and decreased activity in the parts of the brain that are activated when resting (the default mode network).
    Meanwhile, there was an overall decrease in the functional connectivity in the parts of the brain involved in active thinking (the executive control network).
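    Functional connectivity in studies like these is commonly estimated as the correlation between regional fMRI time series. A minimal sketch of that computation on synthetic data:

    ```python
    import numpy as np

    # Synthetic fMRI-style data: 4 brain regions, 200 time points each.
    rng = np.random.default_rng(0)
    shared = rng.normal(size=200)          # common signal coupling regions 0 and 1
    timeseries = rng.normal(size=(4, 200))
    timeseries[0] += shared
    timeseries[1] += shared

    # Functional connectivity: pairwise Pearson correlations between regions.
    fc = np.corrcoef(timeseries)
    print(np.round(fc, 2))                 # regions 0 and 1 show elevated connectivity
    ```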
    These changes were found to lead to addictive behaviours and tendencies in adolescents, as well as behaviour changes associated with intellectual ability, physical coordination, mental health and development.

    Lead author, MSc student, Max Chang (UCL Great Ormond Street Institute for Child Health) said: “Adolescence is a crucial developmental stage during which people go through significant changes in their biology, cognition, and personalities. As a result, the brain is particularly vulnerable to internet addiction related urges during this time, such as compulsive internet usage, cravings towards usage of the mouse or keyboard and consuming media.
    “The findings from our study show that this can lead to potentially negative behavioural and developmental changes that could impact the lives of adolescents. For example, they may struggle to maintain relationships and social activities, lie about online activity and experience irregular eating and disrupted sleep.”
    With smartphones and laptops being ever more accessible, internet addiction is a growing problem across the globe. Previous research has shown that people in the UK spend over 24 hours every week online and, of those surveyed, more than half self-reported being addicted to the internet.
    Meanwhile, Ofcom found that of the 50 million internet users in the UK, over 60% said their internet usage had a negative effect on their lives — such as being late or neglecting chores.
    Senior author, Irene Lee (UCL Great Ormond Street Institute of Child Health), said: “There is no doubt that the internet has certain advantages. However, when it begins to affect our day-to-day lives, it is a problem.
    “We would advise that young people enforce sensible time limits for their daily internet usage and ensure that they are aware of the psychological and social implications of spending too much time online.”
    Mr Chang added: “We hope our findings will demonstrate how internet addiction alters the connection between the brain networks in adolescence, allowing physicians to screen and treat the onset of internet addiction more effectively.

    “Clinicians could potentially prescribe treatment to aim at certain brain regions or suggest psychotherapy or family therapy targeting key symptoms of internet addiction.
    “Importantly, parental education on internet addiction is another possible avenue of prevention from a public health standpoint. Parents who are aware of the early signs and onset of internet addiction will more effectively handle screen time, impulsivity, and minimise the risk factors surrounding internet addiction.”
    Study limitations
    Research into the use of fMRI scans to investigate internet addiction is currently limited, and the studies reviewed had small adolescent samples. They were also primarily from Asian countries. Future research should compare results from Western samples to provide more insight into therapeutic interventions.