More stories

  • Study explores the promises and pitfalls of evolutionary genomics

    The second-century Alexandrian astronomer and mathematician Claudius Ptolemy had a grand ambition. Hoping to make sense of the motion of stars and the paths of planets, he published a magisterial treatise on the subject, known as the Almagest. Ptolemy created a complex mathematical model of the universe that seemed to recapitulate the movements of the celestial objects he observed.
    Unfortunately, a fatal flaw lay at the heart of his cosmic scheme. Following the prejudices of his day, Ptolemy worked from the premise that the Earth was the center of the universe. The Ptolemaic universe, composed of complex “epicycles” to account for planet and star movements, has long since been consigned to the history books, though its conclusions remained the scientific dogma for over 1200 years.
    The field of evolutionary biology is no less subject to misguided theoretical approaches, sometimes producing impressive models that nevertheless fail to convey the true workings of nature as it shapes the dizzying assortment of living forms on Earth.
    A new study examines mathematical models designed to draw inferences about how evolution operates at the level of populations of organisms. The study concludes that such models must be constructed with the greatest care, avoiding unwarranted initial assumptions, weighing the quality of existing knowledge and remaining open to alternate explanations.
    Failure to apply strict procedures in null model construction can lead to theories that seem to square with certain aspects of available data derived from DNA sequencing, yet fail to correctly elucidate underlying evolutionary processes, which are often highly complex and multifaceted.
    Such theoretical frameworks may offer compelling but ultimately flawed pictures of how evolution actually acts on populations over time, be these populations of bacteria, shoals of fish, or human societies and their various migrations during prehistory.

  • Bumps could smooth quantum investigations

    Atoms do weird things when forced out of their comfort zones. Rice University engineers have thought up a new way to give them a nudge.
    Materials theorist Boris Yakobson and his team at Rice’s George R. Brown School of Engineering theorize that changing the contour of a layer of 2D material, and with it the relationships between its atoms, might be simpler to achieve than previously thought.
    While others twist 2D bilayers — two layers stacked together — of graphene and the like to change their topology, the Rice researchers suggest through computational models that growing or stamping single-layer 2D materials on a carefully designed undulating surface would achieve “an unprecedented level of control” over their magnetic and electronic properties.
    They say the discovery opens a path to explore many-body effects, the interactions between multiple microscopic particles, including quantum systems.
    The paper by Yakobson and two alumni of his lab, co-lead authors Sunny Gupta and Henry Yu, appears in Nature Communications.
    The researchers were inspired by recent discoveries that twisting or otherwise deforming bilayers of 2D materials, such as bilayer graphene, into “magic angles” induces interesting electronic and magnetic phenomena, including superconductivity.

  • Growing wildfire threats loom over the birthplace of the atomic bomb

    There are things I will always remember from my time in New Mexico. The way the bark of towering ponderosa pines smells of vanilla when you lean in close. Sweeping vistas, from forested mountaintops to the Rio Grande Valley, that embellish even the most mundane shopping trip. The trepidation that comes with the tendrils of smoke rising over nearby canyons and ridges during the dry, wildfire-prone summer months.

    There were no major wildfires near Los Alamos National Laboratory during the year and a half that I worked in public communications there and lived just across Los Alamos Canyon from the lab. I’m in Maryland now, and social media this year has brought me images and video clips of the wildfires that have been devastating parts of New Mexico, including the Cerro Pelado fire in the Jemez Mountains just west of the lab.

    Wherever they pop up, wildfires can ravage the land, destroy property and displace residents by the tens of thousands. The Cerro Pelado fire is small compared with others raging east of Santa Fe — it grew only to the size of Washington, D.C. The fire, which started mysteriously on April 22, is now mostly contained. But at one point it came within 5.6 kilometers of the lab, seriously threatening the place that’s responsible for creating and maintaining key portions of fusion bombs in our nation’s nuclear arsenal.

    That close call may be just a hint of growing fire risks to come for the weapons lab as the Southwest suffers in the grip of an epic drought made worse by human-caused climate change (SN: 4/16/20). May and June typically mark the start of the state’s wildfire season. This year, fires erupted in April and were amplified by a string of warm, dry and windy days. The Hermits Peak and Calf Canyon fires east of Santa Fe have merged to become the largest wildfire in New Mexico’s recorded history.

    Los Alamos National Lab is in northern New Mexico, about 56 kilometers northwest of Santa Fe. The lab’s primary efforts revolve around nuclear weapons, accounting for 71 percent of its $3.9 billion budget, according to the lab’s fiscal year 2021 numbers. The budget covers a ramp-up in production of hollow plutonium spheres, known as “pits” because they are the cores of nuclear bombs, to 30 per year beginning in 2026. That’s triple the lab’s current capability of 10 pits per year. The site is also home to radioactive waste and debris that has accumulated as a consequence of weapons production since the first atomic bomb was built in Los Alamos in the early 1940s (SN: 8/6/20).

    What is the danger if fire reaches the lab’s nuclear material and waste? According to literature that Peter Hyde, a spokesperson for the lab, sent to me to ease my concern, not much.

    Over the last 3½ years, the lab has removed 3,500 tons of trees and other potential wildfire fuel from the sprawling, 93-square-kilometer complex. Lab facilities, a lab pamphlet says, “are designed and operated to protect the materials that are inside, and radiological and other potentially hazardous materials are stored in containers that are engineered and tested to withstand extreme environments, including heat from fire.”

    What’s more, most of the roughly 20,000 drums full of nuclear waste that were stored under tents on the lab’s grounds have been removed. They were a cause for anxiety during the last major fire to threaten the lab, in 2011. According to the most recent numbers on the project’s website, all but 3,812 of those drums have been shipped off to be stored 655 meters underground at the Waste Isolation Pilot Plant near Carlsbad, N.M.

    But there are still 3,500 cubic meters of nuclear waste in the storage area, according to a March 2022 DOE strategic planning document for Los Alamos. That’s enough to fill 17,000 55-gallon drums. So potentially disastrous quantities of relatively exposed nuclear waste remain at the lab — a single drum from the lab site that exploded after transport to Carlsbad in 2014 resulted in a two-year shutdown of the storage facility. With a total budgeted cleanup cost of $2 billion, the incident is one of the most expensive nuclear accidents in the nation’s history.
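
    As a rough sanity check on that conversion (a back-of-the-envelope sketch only, assuming the roughly 208-liter capacity of a standard 55-gallon drum; the waste volume is the figure cited in the paragraph above, not from any lab document):

        # Rough check: how many 55-gallon drums would 3,500 cubic meters fill?
        waste_m3 = 3_500                      # volume cited in the DOE planning document
        liters_per_drum = 55 * 3.785          # ~208 liters in a US 55-gallon drum (assumed)
        drums = waste_m3 * 1_000 / liters_per_drum
        print(f"{drums:,.0f} drums")          # ~16,800, consistent with the ~17,000 cited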

    Since the 2011 fire, a wider buffer around the tents has been cleared of vegetation. That buffer, together with fire suppression systems, makes it unlikely that wildfire will endanger the waste-filled drums, according to a 2016 risk analysis of extreme wildfire scenarios conducted by the lab.

    But a February 2021 audit by the U.S. Department of Energy’s Office of Inspector General is less rosy. It found that, despite the removal of most of the waste drums and the multiyear wildfire mitigation efforts that the lab describes, the lab’s wildfire protection is still lacking.

    According to the 20-page federal audit, the lab at that time had not developed a “comprehensive, risk-based approach to wildland fire management” in accordance with federal policies related to wildland fire management. The report also noted compounding issues, including the absence of federal oversight of the lab’s wildfire management activities.

    A canyon on lab grounds that runs alongside the adjacent city of Los Alamos (two spots shown) was called out in an audit by the Department of Energy’s Office of Inspector General because it was packed with about 400 to 500 trees per acre. The ideal number from a wildfire management viewpoint is 40 to 50 trees per acre. Source: The Department of Energy’s Wildland Fire Prevention Efforts at the Los Alamos National Laboratory

    Among the ongoing risks, not all fire roads were maintained well enough to provide a safe route for firefighters and others, “which could create dangerous conditions for emergency responders and delay response times,” the auditors wrote.

    And a canyon that runs between the lab and the adjacent town of Los Alamos was identified in the report as being packed with 10 times the number of trees that would be ideal, from a wildfire safety perspective. To make matters worse, there’s a hazardous waste site at the bottom of the canyon that could, the auditors wrote, “produce a health risk to the environment and to human health during a fire.”

    “The report was pretty stark,” says Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists. “And certainly, after all the warnings, if they’re still not doing all they need to do to fully mitigate the risk, then that’s just foolishness.”

    A 2007 federal audit of Los Alamos, as well as nuclear weapons facilities in Washington state and Idaho, showed similar problems. In short, it seems little has changed at Los Alamos in the 14-year span between 2007 and 2021. Lab spokespeople did not respond to my questions about the lab’s efforts to address the specific problems identified in the 2021 report, despite repeated requests. 

    The Los Alamos area has experienced three major wildfires since the lab was founded — the Cerro Grande fire in 2000, Las Conchas in 2011 and Cerro Pelado this year. But we probably can’t count on 11-year gaps between future wildfires near Los Alamos, according to Alice Hill, the senior fellow for energy and the environment with the Council on Foreign Relations, who’s based in Washington, D.C.

    The changing climate is expected to dramatically affect wildfire risks in years to come, turning Los Alamos and surrounding areas into a tinderbox. A 2018 study in Climatic Change found that the region extending from the higher elevations in New Mexico, where Los Alamos is located, into Colorado and Arizona will experience the greatest increase in wildfire probabilities in the Southwest. A new risk projection tool recommended by Hill, called Risk Factor, also shows increasing fire risk in the Los Alamos area over the next 30 years.

    “We are at the point where we are imagining, as we have to, things that we’ve never experienced,” Hill says. “That is fundamentally different than how we have approached these problems throughout human history, which is to look to the past to figure out how to be safer in the future…. The nature of wildfire has changed as more heat is added [to the planet], as temperatures rise.”

    Increased plutonium pit production will add to the waste that needs to be shipped to Carlsbad. “Certainly, the radiological assessments in sort of the worst case of wildfire could lead to a pretty significant release of radioactivity, not only affecting the workers onsite but also the offsite public. It’s troubling,” says Lyman, who suggests that nuclear labs like Los Alamos should not be located in such fire-prone areas.

    The Los Alamos Neutron Science Center (shown in March of 2019), a key facility at Los Alamos National Laboratory, was evacuated in March 2019 when power lines sparked a nearby wildfire. It could be damaged or even destroyed if a high-intensity wildfire burned through a nearby heavily forested canyon, according to an audit by the Department of Energy’s Office of Inspector General. Source: The Department of Energy’s Wildland Fire Prevention Efforts at the Los Alamos National Laboratory

    For now, some risks from the Cerro Pelado wildfire will persist, according to Jeff Surber, operations section chief for the U.S. Department of Agriculture Forest Service’s efforts to fight the fire. Large wildfires like Cerro Pelado “hold heat for so long and they continue to smolder in the interior where it burns intermittently,” he said in a May 9 briefing to Los Alamos County residents, and to concerned people like me watching online.

    It will be vital to monitor the footprint of the fire until rain or snow finally snuffs it out late in the year. Even then, some danger will linger in the form of “zombie fires” that can flame up long after wildfires appear to have been extinguished (SN: 5/19/21). “We’ve had fires come back in the springtime because there was a root underground that somehow stayed lit all winter long,” said Surber.

    So the Cerro Pelado fire, and its occasional smoky tendrils, will probably be a part of life in northern New Mexico for months still. And the future seems just as fiery, if not worse. That’s something all residents, including the lab, need to be preparing for.

    Meantime, if you make it out to the mountains of New Mexico soon enough, be sure to sniff a vanilla-scented ponderosa while you still can. I know I will.

  • 'Beam-steering' technology takes mobile communications beyond 5G

    Birmingham scientists have revealed a new beam-steering antenna that increases the efficiency of data transmission for ‘beyond 5G’ — and opens up a range of frequencies for mobile communications that are inaccessible to currently used technologies.
    Experimental results, presented today for the first time at the 3rd International Union of Radio Science Atlantic / Asia-Pacific Radio Science Meeting, show the device can provide continuous ‘wide-angle’ beam steering, allowing it to track a moving mobile phone user in the same way that a satellite dish turns to track a moving object, but with significantly enhanced speeds.
    Devised by researchers from the University of Birmingham’s School of Engineering, the technology has demonstrated vast improvements in data transmission efficiency at frequencies ranging across the millimetre wave spectrum, specifically those identified for 5G (mmWave) and 6G, where high efficiency is currently only achievable using slow, mechanically steered antenna solutions.
    For 5G mmWave applications, prototypes of the beam-steering antenna at 26 GHz have shown unprecedented data transmission efficiency.
    The device is fully compatible with existing 5G specifications that are currently used by mobile communications networks. Moreover, the new technology does not require the complex and inefficient feeding networks required for commonly deployed antenna systems, instead using a low complexity system which improves performance and is simple to fabricate.
    The beam-steering antenna was developed by Dr James Churm, Dr Muhammad Rabbani, and Professor Alexandros Feresidis, Head of the Metamaterials Engineering Laboratory, as a solution for fixed base station antennas, for which current technology shows reduced efficiency at higher frequencies, limiting the use of these frequencies for long-distance transmission.

  • Great timing, supercomputer upgrade lead to successful forecast of volcanic eruption

    In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.
    Initially developed on an iMac computer, the new modeling approach had already garnered attention for successfully recreating the unexpected eruption of Alaska’s Okmok volcano in 2008. Gregg’s team, based out of the University of Illinois Urbana-Champaign and the National Center for Supercomputing Applications, wanted to test the model’s new high-performance computing upgrade, and Geist’s Sierra Negra observations showed signs of an imminent eruption.
    “Sierra Negra is a well-behaved volcano,” said Gregg, the lead author of a new report of the successful effort. “Meaning that, before eruptions in the past, the volcano has shown all the telltale signs of an eruption that we would expect to see like groundswell, gas release and increased seismic activity. This characteristic made Sierra Negra a great test case for our upgraded model.”
    However, many volcanoes don’t follow these neatly established patterns, the researchers said. Forecasting eruptions is one of the grand challenges in volcanology, and the development of quantitative models to help with these trickier scenarios is the focus of Gregg and her team’s work.
    Over the winter break of 2017-18, Gregg and her colleagues ran the Sierra Negra data through the new supercomputing-powered model. They completed the run in January 2018 and, even though it was intended as a test, it ended up providing a framework for understanding Sierra Negra’s eruption cycles and evaluating the potential and timing of future eruptions — but nobody realized it yet.
    “Our model forecasted that the strength of the rocks that contain Sierra Negra’s magma chamber would become very unstable sometime between June 25 and July 5, and possibly result in a mechanical failure and subsequent eruption,” said Gregg, who also is an NCSA faculty fellow. “We presented this conclusion at a scientific conference in March 2018. After that, we became busy with other work and did not look at our models again until Dennis texted me on June 26, asking me to confirm the date we had forecasted. Sierra Negra erupted one day after our earliest forecasted mechanical failure date. We were floored.”
    Though it represents an ideal scenario, the researchers said, the study shows the power of incorporating high-performance supercomputing into practical research. “The advantage of this upgraded model is its ability to constantly assimilate multidisciplinary, real-time data and process it rapidly to provide a daily forecast, similar to weather forecasting,” said Yan Zhan, a former Illinois graduate student and co-author of the study. “This takes an incredible amount of computing power previously unavailable to the volcanic forecasting community.”
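    The team’s actual model runs full mechanical simulations on supercomputers, but the daily “assimilate, then forecast” rhythm Zhan describes can be illustrated with a deliberately toy sketch like the one below. All function names, numbers, and the simple pressure model are illustrative assumptions, not the Illinois/NCSA code.

        import numpy as np

        # Toy daily assimilation-and-forecast loop, in the spirit of the weather
        # forecasting analogy above. Nothing here reproduces the actual physics;
        # the "state" is a made-up magma overpressure value in arbitrary units.
        rng = np.random.default_rng(0)

        def forward_model(state, days=1.0):
            # Hypothetical stand-in for the mechanical model: slow pressurization plus noise.
            return state + 0.05 * days + rng.normal(0.0, 0.01, size=state.shape)

        def assimilate(ensemble, observation, obs_error=0.05):
            # Crude update: nudge each ensemble member toward today's observation.
            gain = np.var(ensemble) / (np.var(ensemble) + obs_error**2)
            return ensemble + gain * (observation - ensemble)

        ensemble = rng.normal(1.0, 0.1, size=100)    # 100-member ensemble, initial guess
        failure_threshold = 10.0                     # assumed pressure at which the host rock fails

        for day, obs in enumerate([1.08, 1.15, 1.21], start=1):  # daily monitoring-derived values
            ensemble = assimilate(forward_model(ensemble), obs)
            days_left = (failure_threshold - ensemble.mean()) / 0.05
            print(f"day {day}: ~{days_left:.0f} days until projected failure")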

  • AI ethical decision making: Is society ready?

    With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question “Is society ready for AI ethical decision making?” by studying human interaction with autonomous cars.
    The team published their findings on May 6, 2022, in the Journal of Behavioral and Experimental Economics.
    In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the car driver had to decide whether to crash the car into one group of people or another — the collision was unavoidable. The crash would cause severe harm to one group of people, but would save the lives of the other group. The subjects in the study had to rate the car driver’s decision, when the driver was a human and also when the driver was AI. This first experiment was designed to measure the bias people might have against AI ethical decision making.
    In their second experiment, 563 human subjects responded to the researchers’ questions. The researchers determined how people react to the debate over AI ethical decisions once they become part of social and political discussions. In this experiment, there were two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. Their other scenario allowed the subjects to “vote” whether to allow the autonomous cars to make ethical decisions. In both cases, the subjects could choose to be in favor of or against the decisions made by the technology. This second experiment was designed to test the effect of two alternative ways of introducing AI into society.
    The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. The researchers believe that the discrepancy between the two results is caused by a combination of two elements.
    The first element is that individual people believe society as a whole does not want AI ethical decision making, and so they assign a positive weight to their beliefs when asked for their opinion on the matter. “Indeed, when participants are asked explicitly to separate their answers from those of society, the difference between the permissibility for AI and human drivers vanishes,” said Johann Caro-Burnett, an assistant professor in the Graduate School of Humanities and Social Sciences, Hiroshima University.
    The second element is that when introducing this new technology into society, allowing discussion of the topic has mixed results depending on the country. “In regions where people trust their government and have strong political institutions, information and decision-making power improve how subjects evaluate the ethical decisions of AI. In contrast, in regions where people do not trust their government and have weak political institutions, decision-making capability deteriorates how subjects evaluate the ethical decisions of AI,” said Caro-Burnett.
    “We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is the society’s opinion,” said Shinji Kaneko, a professor in the Graduate School of Humanities and Social Sciences, Hiroshima University, and the Network for Education and Research on Peace and Sustainability. So when not being asked explicitly, people do not show any signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.
    The researchers believe this rejection of a new technology, which stems mostly from individuals’ beliefs about society’s opinion, is likely to apply to other machines and robots as well. “Therefore, it will be important to determine how to aggregate individual preferences into one social preference. Moreover, this task will also have to be different across countries, as our results suggest,” said Kaneko.
    Story Source:
    Materials provided by Hiroshima University.

  • An atomic-scale window into superconductivity paves the way for new quantum materials

    Superconductors are materials with no electrical resistance whatsoever, commonly requiring extremely low temperatures. They are used in a wide range of domains, from medical applications to a central role in quantum computers. Superconductivity is caused by specially linked pairs of electrons known as Cooper pairs. So far, the occurrence of Cooper pairs has been measured only indirectly, at the macroscopic scale in bulk, but a new technique developed by researchers at Aalto University and Oak Ridge National Laboratory in the US can detect their occurrence with atomic precision.
    The experiments were carried out by Wonhee Ko and Petro Maksymovych at Oak Ridge National Laboratory with the theoretical support of Professor Jose Lado of Aalto University. Electrons can quantum tunnel across energy barriers, jumping from one system to another through space in a way that cannot be explained with classical physics. For example, if an electron pairs with another electron right at the point where a metal and superconductor meet, it could form a Cooper pair that enters the superconductor while also “kicking back” another kind of particle into the metal in a process known as Andreev reflection. The researchers looked for these Andreev reflections to detect Cooper pairs.
    To do this, they measured the electrical current between an atomically sharp metallic tip and a superconductor, as well as how the current depended on the separation between the tip and the superconductor. This enabled them to detect the amount of Andreev reflection going back to the superconductor, while maintaining an imaging resolution comparable to individual atoms. The results of the experiment corresponded exactly to Lado’s theoretical model.
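    One intuition for why the distance dependence is informative (an illustrative assumption, not a description of the authors’ actual analysis): single-electron tunneling current typically falls off roughly as exp(-2κz) with tip-sample distance z, while a process in which two electrons cross together, as in Andreev reflection, falls off roughly twice as fast, so the two contributions separate as the tip is retracted. A minimal numerical sketch of that idea, with made-up values:

        import numpy as np

        # Illustrative decomposition only; all values are assumed, not from the study.
        kappa = 1.1                                   # 1/angstrom, a typical vacuum decay constant
        z = np.linspace(0.0, 3.0, 50)                 # tip retraction in angstroms
        single = 1.00 * np.exp(-2 * kappa * z)        # single-electron (normal) channel
        andreev = 0.30 * np.exp(-4 * kappa * z)       # two-electron (Andreev) channel
        total = single + andreev

        # The Andreev share of the current is largest at the smallest separations,
        # which is why sweeping the tip-sample distance helps isolate it.
        share = andreev / total
        print(f"Andreev share at contact: {share[0]:.2f}; after 3 A of retraction: {share[-1]:.4f}")
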
    This experimental detection of Cooper pairs at the atomic scale provides an entirely new method for understanding quantum materials. For the first time, researchers can uniquely determine how the wave functions of Cooper pairs are reconstructed at the atomic scale and how they interact with atomic-scale impurities and other obstacles.
    ‘This technique establishes a critical new methodology for understanding the internal quantum structure of exotic types of superconductors known as unconventional superconductors, potentially allowing us to tackle a variety of open problems in quantum materials,’ Lado says. Unconventional superconductors are a potential fundamental building block for quantum computers and could provide a platform to realize superconductivity at room temperature. Cooper pairs have unique internal structures in unconventional superconductors which so far have been challenging to understand.
    This discovery allows for the direct probing of the state of Cooper pairs in unconventional superconductors, establishing a critical new technique for a whole family of quantum materials. It represents a major step forward in our understanding of quantum materials and helps push forward the work of developing quantum technologies.
    Story Source:
    Materials provided by Aalto University.

  • Creating artificial intelligence that acts more human by 'knowing that it knows'

    A research group from the Graduate School of Informatics, Nagoya University, has taken a big step towards creating a neural network with metamemory through a computer-based evolution experiment.
    In recent years, there has been rapid progress in designing artificial intelligence technology using neural networks that imitate brain circuits. One goal of this field of research is understanding the evolution of metamemory to use it to create artificial intelligence with a human-like mind.
    Metamemory is the process by which we ask ourselves whether we remember what we had for dinner yesterday and then use that memory to decide whether to eat something different tonight. While this may seem like a simple question, answering it involves a complex process. Metamemory is important because it involves a person having knowledge of their own memory capabilities and adjusting their behavior accordingly.
    “In order to elucidate the evolutionary basis of the human mind and consciousness, it is important to understand metamemory,” explains lead author Professor Takaya Arita. “A truly human-like artificial intelligence, which can be interacted with and enjoyed like a family member in a person’s home, is an artificial intelligence that has a certain amount of metamemory, as it has the ability to remember things that it once heard or learned.”
    When studying metamemory, researchers often employ a ‘delayed matching-to-sample task’. In humans, this task consists of the participant seeing an object, such as a red circle, remembering it, and then taking part in a test to select the object that they had previously seen from among multiple similar objects. Correct answers are rewarded and wrong answers punished. However, the subject can choose not to do the test and still earn a smaller reward.
    A human performing this task would naturally use their metamemory to consider if they remembered seeing the object. If they remembered it, they would take the test to get the bigger reward, and if they were unsure, they would avoid risking the penalty and receive the smaller reward instead. Previous studies reported that monkeys could perform this task as well.
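    A minimal sketch of that reward structure, with illustrative payoffs and an assumed probability that the memory survives the delay (none of these numbers come from the study), shows why an agent that knows whether it remembers earns more on average:

        import random

        # Toy delayed matching-to-sample trial with an opt-out option.
        BIG_REWARD, SMALL_REWARD, PENALTY = 2.0, 0.5, -1.0

        def trial(use_metamemory, p_remember=0.6, n_choices=4):
            sample = random.randrange(n_choices)       # object shown at the start of the trial
            remembered = random.random() < p_remember  # whether the memory survives the delay

            if use_metamemory and not remembered:
                return SMALL_REWARD                    # opt out: take the guaranteed smaller reward

            # Take the test: answer correctly if remembered, otherwise guess at random.
            choice = sample if remembered else random.randrange(n_choices)
            return BIG_REWARD if choice == sample else PENALTY

        for policy in (True, False):
            avg = sum(trial(policy) for _ in range(50_000)) / 50_000
            print(f"uses metamemory={policy}: average reward {avg:.2f}")
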
    The Nagoya University team comprising Professor Takaya Arita, Yusuke Yamato, and Reiji Suzuki of the Graduate School of Informatics created an artificial neural network model that performed the delayed matching-to-sample task and analyzed how it behaved.
    Despite starting from random neural networks that did not even have a memory function, the model was able to evolve to the point that it performed similarly to the monkeys in previous studies. The neural network could examine its memories, keep them, and separate outputs. The intelligence was able to do this without requiring any assistance or intervention by the researchers, suggesting the plausibility of it having metamemory mechanisms. “The need for metamemory depends on the user’s environment. Therefore, it is important for artificial intelligence to have a metamemory that adapts to its environment by learning and evolving,” says Professor Arita of the finding. “The key point is that the artificial intelligence learns and evolves to create a metamemory that adapts to its environment.”
    Creating an adaptable intelligence with metamemory is a big step towards making machines that have memories like ours. The team is enthusiastic about the future: “This achievement is expected to provide clues to the realization of artificial intelligence with a ‘human-like mind’ and even consciousness.”
    The research results were published in the online edition of the international scientific journal Scientific Reports. The study was partly supported by a JSPS/MEXT Grants-in-Aid for Scientific Research KAKENHI (JP17H06383 in #4903).
    Story Source:
    Materials provided by Nagoya University.