More stories

  • Extreme climate shifts long ago may have helped drive reptile evolution

    There’s nothing like a big mass extinction to open up ecological niches and clear out the competition, accelerating evolution for some lucky survivors. Or is there? A new study suggests that the rate of climate change may play just as large a role in speeding up evolution.

    The study focuses on reptile evolution across 57 million years — before, during and after the mass extinction at the end of the Permian Period (SN: 12/6/18). That extinction event, triggered by carbon dioxide pumped into the atmosphere and oceans through increased volcanic activity about 252 million years ago, knocked out a whopping 86 percent of Earth’s species. Yet reptiles recovered from the chaos relatively well. Their exploding diversity of species around that time has been widely regarded as a result of their slithering into newly available niches.

    But rapid climate fluctuations were already taking place much earlier in the Permian, and so were surges of reptile diversification, researchers say. Analyzing fossils from 125 reptile species shows that bursts of evolutionary diversity in reptiles were tightly correlated with relatively rapid fluctuations in climate throughout the Permian and millions of years into the next geologic period, the Triassic, researchers report August 19 in Science Advances.

    Scientists’ understanding of evolution is expanding as they become more tuned into the connection between it and environmental change, says Jessica Whiteside, a geologist at the University of Southampton in England who works on mass extinctions but was not involved in the new work. “This study is bound to become an important part of that conversation.”

    To investigate reptile evolution, evolutionary paleobiologist Tiago Simões of Harvard University and colleagues precisely measured and scanned reptile fossils ranging from 294 million to 237 million years old. In all, the researchers examined 1,000 specimens at 50 research institutions in 20 countries.  For climate data, the team used an existing large database of sea surface temperatures based on oxygen isotope data, extending back 450 million years, published in 2021.

    By closely tracking changes in body and head size and shape in so many species, paired with that climate data, the researchers found that the faster the rate of climate change, the faster reptiles evolved. The fastest rate of reptile diversification did not occur at the end-Permian extinction, the team found, but several million years later in the Triassic, when climate change was at its most rapid and global temperatures were witheringly hot. Ocean surface temperatures during this time soared to 40° Celsius, or 104° Fahrenheit — about the temperature of a hot tub, says Simões.
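
    The core comparison here is between two rates: how quickly sea surface temperature changed over an interval and how quickly reptile body and head measurements changed over the same interval. A minimal sketch of that kind of rate-versus-rate comparison, using made-up placeholder numbers rather than the study's measurements, might look like this:

    ```python
    # Illustrative sketch only: correlate the rate of climate change with the
    # rate of morphological evolution across time bins. All values below are
    # placeholders, not data from the study.
    import numpy as np

    bin_midpoints_mya = np.array([290, 280, 270, 260, 252, 247, 242, 237])        # millions of years ago
    sea_surface_temp_c = np.array([24, 25, 27, 26, 30, 36, 40, 34])               # placeholder temperatures
    morphological_disparity = np.array([1.0, 1.1, 1.4, 1.3, 1.8, 2.6, 3.1, 2.7])  # placeholder disparity

    # Rates of change per million years between consecutive bins
    dt = np.abs(np.diff(bin_midpoints_mya))
    climate_rate = np.abs(np.diff(sea_surface_temp_c)) / dt
    evolution_rate = np.abs(np.diff(morphological_disparity)) / dt

    # Correlation between the two rate series
    r = np.corrcoef(climate_rate, evolution_rate)[0, 1]
    print(f"correlation between climate-change rate and evolutionary rate: {r:.2f}")
    ```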

    A few species did evolve less rapidly than their kin, Simões says. The difference? Size. Reptiles with smaller bodies, for instance, were already preadapted to live in rapidly warming climates, he says. Because of their greater surface-area-to-volume ratio, “small-bodied reptiles can better exchange heat with their surrounding environment,” and so stay relatively cooler than larger animals.
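
    The heat argument is geometric: for a body idealized as a sphere, surface area grows with the square of the radius while volume grows with the cube, so smaller bodies have proportionally more surface through which to exchange heat. A back-of-the-envelope illustration using idealized spheres, not actual reptile anatomy:

    ```python
    # Idealized spheres: the surface-area-to-volume ratio shrinks as size grows,
    # so small bodies exchange heat with their surroundings relatively faster.
    import math

    for radius_cm in (1, 5, 25):  # hypothetical body radii
        surface_area = 4 * math.pi * radius_cm**2
        volume = (4 / 3) * math.pi * radius_cm**3
        ratio = surface_area / volume  # equals 3 / radius for a sphere
        print(f"radius {radius_cm:>2} cm -> surface/volume = {ratio:.2f} per cm")
    ```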

    “The smaller reptiles were basically being forced by natural selection to stay the same, while during that same period of time, the large reptiles were being told by natural selection ‘You need to change right away or you’re going to go extinct,’” Simões says.

    This phenomenon, called the Lilliput effect, is not a new proposal, Simões says, adding that it’s been well established in marine organisms. “But it’s the first time it’s been quantified in limbed vertebrates across this critical period in Earth’s history.”

    Simões and colleagues’ detailed work has refined the complex evolutionary tree for reptiles and their ancestors. But, for now, it’s unclear which played a bigger role in reptile evolution long ago — all those open ecological niches after the end-Permian mass extinction, or the dramatic climate fluctuations outside of the extinction event.

    “We cannot say which one was more important,” Simões says. “Without either one, the course of evolution in the Triassic and the rise of reptiles to global dominance in terrestrial ecosystems would have been quite different.”

  • Common, cheap ingredients can break down some ‘forever chemicals’

    There’s a new way to rip apart harmful “forever chemicals,” scientists say.

    Perfluoroalkyl and polyfluoroalkyl substances, also known as PFAS, are found in nonstick pans, water-repellent fabrics and food packaging, and they are pervasive throughout the environment. They’re nicknamed forever chemicals for their ability to stick around and not break down. In part, that’s because PFAS have a super strong bond between their carbon and fluorine atoms (SN: 6/4/19). Now, using a bit of heat and two relatively common compounds, researchers have degraded one major type of forever chemical in the lab, the team reports in the Aug. 19 Science. The work could help pave the way for breaking down certain forever chemicals commercially, for instance by treating wastewater.

    “The fundamental knowledge of how the materials degrade is the single most important thing coming out of this study,” organic chemist William Dichtel said in an August 16 news conference.

    While some scientists have found relatively simple ways of breaking down select PFAS, most degradation methods require harsh, energy-intensive processes using intense pressure — in some cases over 22 megapascals — or extremely high temperatures — sometimes upwards of 1,000° Celsius — to break the chemical bonds (SN: 6/3/22).

    Dichtel, of Northwestern University in Evanston, Ill., and his team experimented with two substances found in nearly every chemistry lab cabinet: sodium hydroxide, also known as lye, and a solvent called dimethyl sulfoxide, or DMSO. The team worked specifically with a group of forever chemicals called PFCAs, which contain a carboxylic acid group and constitute a large percentage of all PFAS. Some of these kinds of forever chemicals are found in water-resistant clothes.

    When the team combined PFCAs with the lye and DMSO at 120° Celsius, with no extra pressure needed, the carboxylic acid fell off the chemical and became carbon dioxide in a process called decarboxylation. What happened next was unexpected, Dichtel said. Loss of the acid led to a process causing “the entire molecule to fall apart in a cascade of complex reactions.” This cascade involved steps that degraded the rest of the chemical into fluoride ions and smaller carbon-containing products, leaving behind virtually no harmful by-products.

    “It’s a neat method, it’s different from other ones that have been tried,” says Chris Sales, an environmental engineer at Drexel University in Philadelphia who was not involved in the study. “The biggest question is, how could this be adapted and scaled up?” Northwestern has filed a provisional patent on behalf of the researchers.

    Understanding this mechanism is just one step in undoing forever chemicals, Dichtel’s team said. And more research is needed: There are other classes of PFAS that require their own solutions. This process wouldn’t work to tackle PFAS out in the environment, because it requires a concentrated amount of the chemicals. But it could one day be used in wastewater treatment plants, where the pollutants could be filtered out of the water, concentrated and then broken down.

  • Building blocks of the future for photovoltaics

    An international research team led by the University of Göttingen has, for the first time, observed the build-up of a physical phenomenon that plays a role in the conversion of sunlight into electrical energy in 2D materials. The scientists succeeded in making quasiparticles — known as dark moiré interlayer excitons — visible and explaining their formation using quantum mechanics. The researchers show how an experimental technique newly developed in Göttingen, femtosecond photoemission momentum microscopy, provides profound insights at a microscopic level, which will be relevant to the development of future technology. The results were published in Nature.
    Atomically thin structures made of two-dimensional semiconductor materials are promising candidates for future components in electronics, optoelectronics and photovoltaics. Interestingly, the properties of these semiconductors can be controlled in an unusual way: like Lego bricks, the atomically thin layers can be stacked on top of each other. There is, however, one important difference: while Lego bricks can only be stacked directly on top of one another or rotated in steps of 90 degrees, the angle of rotation between the stacked semiconductor layers can be varied freely. It is precisely this angle of rotation that is interesting for the production of new types of solar cells.
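    The twist angle matters because two lattices rotated slightly out of registry form a much larger repeating interference pattern, the moiré superlattice, whose period grows rapidly as the angle shrinks. For two identical layers with lattice constant a, the moiré period is roughly a / (2 sin(θ/2)); a quick numerical sketch, using a rough placeholder lattice constant rather than a value specific to the materials in the study:

    ```python
    # Illustrative: moiré superlattice period for two identical, slightly twisted
    # layers, lambda = a / (2 * sin(theta / 2)). The lattice constant is a rough
    # placeholder, not a value from the study.
    import math

    lattice_constant_nm = 0.32  # placeholder, typical order of magnitude for a 2D semiconductor

    for twist_deg in (1.0, 3.0, 5.0):
        theta = math.radians(twist_deg)
        moire_period_nm = lattice_constant_nm / (2 * math.sin(theta / 2))
        print(f"twist {twist_deg:.1f} deg -> moire period ~ {moire_period_nm:.1f} nm")
    ```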
    However, although changing this angle can lead to breakthroughs for new technologies, it also creates experimental challenges. In fact, typical experimental approaches have only indirect access to the moiré interlayer excitons, which is why these excitons are commonly termed “dark” excitons. “With the help of femtosecond photoemission momentum microscopy, we actually managed to make these dark excitons visible,” explains Dr. Marcel Reutzel, junior research group leader at the Faculty of Physics at Göttingen University. “This allows us to measure how the excitons are formed at a time scale of a millionth of a millionth of a millisecond. We can describe the dynamics of the formation of these excitons using quantum mechanical theory developed by Professor Ermin Malic’s research group at Marburg.”
    “These results not only give us a fundamental insight into the formation of dark moiré interlayer excitons, but also open up a completely new perspective to enable scientists to study the optoelectronic properties of new and fascinating materials,” says Professor Stefan Mathias, head of the study at Göttingen University’s Faculty of Physics. “This experiment is ground-breaking because, for the first time, we have detected the signature of the moiré potential imprinted on the exciton, that is, the impact of the combined properties of the two twisted semiconductor layers. In the future, we will study this specific effect further to learn more about the properties of the resulting materials.”
    This research was made possible thanks to the German Research Foundation (DFG), which provided Collaborative Research Centre funding for the CRCs “Control of Energy Conversion on Atomic Scales” and “Mathematics of Experiment” in Göttingen, and the CRC “Structure and Dynamics of Internal Interfaces” in Marburg.
    Story Source:
    Materials provided by University of Göttingen. Note: Content may be edited for style and length.

  • No one-size-fits-all artificial intelligence approach works for prevention, diagnosis or treatment using precision medicine

    A Rutgers analysis of dozens of artificial intelligence (AI) software programs used in precision, or personalized, medicine to prevent, diagnose and treat disease found that no program exists that can be used for all treatments.
    “Precision medicine is one of the most trending subjects in basic and medical science today,” said Zeeshan Ahmed, an assistant professor of medicine at Rutgers Robert Wood Johnson Medical School who led the study, published in Briefings in Bioinformatics. “Major reasons include its potential to provide predictive diagnostics and personalized treatment to variable known and rare disorders. However, until now, there has been very little effort exerted in organizing and understanding the many computing approaches to this field. We want to pave the way for a new data-centric era of discovery in health care.”
    Precision medicine, a technology still in its infancy, is an approach to treatment that uses information about an individual’s medical history and genetic profile and relates it to the information of many others to find patterns that can help prevent, diagnose or treat a disease. The AI-based approach rests on a high level of both computing power and machine-learning intelligence because of the enormous scope of medical and genetic information scoured and analyzed for patterns.
    The comparative and systematic review, believed by the authors to be one of the first of its kind, identified 32 of the most prevalent precision medicine AI approaches used to study preventive treatments for a range of diseases, including obesity, Alzheimer’s, inflammatory bowel disease, breast cancer and major depressive disorder. The bevy of AI approaches analyzed in the study — the researchers combed through five years of high-quality medical literature — suggests the field is advancing rapidly but is suffering from disorganization, Ahmed said.
    In AI, software programs simulate human intelligence processes. In machine learning, a subcategory of AI, programs are designed to “learn” as they process more and more data, becoming ever more accurate at predicting outcomes. The effort rests on algorithms, step-by-step procedures for solving a problem or performing a computation.
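    In concrete terms, a supervised machine-learning model is fit to labeled examples and then asked to predict outcomes for cases it has not seen. A toy sketch of that loop, using randomly generated stand-in data rather than any clinical or genomic dataset:

    ```python
    # Toy supervised-learning example: fit a model on labeled examples, then
    # score its predictions on held-out cases. The data are random stand-ins,
    # not clinical or genomic measurements.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))                   # 500 "patients", 10 numeric features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # synthetic binary outcome

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print(f"accuracy on held-out cases: {model.score(X_test, y_test):.2f}")
    ```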
    Researchers such as Ahmed, who conducts studies on cardiovascular genomics at the Rutgers Institute for Health, Health Care Policy and Aging Research (IFH), are racing to collect and analyze complex biological data while also developing the computational systems that undergird the endeavor.
    Because the use of genetics is “arguably the most data-rich and complex component of precision medicine,” Ahmed said, the team focused especially on reviewing and comparing scientific objectives, methodologies, data sources, ethics and gaps in approaches used.
    Those interested in precision medicine, he said, can look to the paper for guidance as to which AI programs may be best suited for their research.
    To aid the advent of precision medicine, the study concluded that the scientific community needs to embrace several “grand challenges,” from addressing general issues such as improved data standardization and enhanced protection of personal identifying information to more technical issues such as correcting for errors in genomic and clinical data.
    “AI has the potential to play a vital role to achieve significant improvements in providing better individualized and population healthcare at lower costs,” Ahmed said. “We need to strive to address possible challenges that continue to slow the advancements of this breakthrough treatment approach.”
    Other Rutgers researchers involved in the study included Sreya Vadapalli and Habiba Abdelhalim, research assistants at the IFH, and Saman Zeeshan, a bioinformatics research scientist and former postdoctoral research associate at the Rutgers Cancer Institute of New Jersey.
    Story Source:
    Materials provided by Rutgers University. Original written by Kitta MacPherson. Note: Content may be edited for style and length.

  • Physics of high-temperature superconductors untangled

    When some materials are cooled to a certain temperature, they lose electric resistance, becoming superconductors.
    In this state, an electric charge can course through the material indefinitely, making superconductors a valuable resource for transmitting high volumes of electricity and other applications. Superconductors ferry electricity between Long Island and Manhattan. They’re used in medical imaging devices such as MRI machines, in particle accelerators and in magnets such as those used in maglev trains. Even unexpected materials, such as certain ceramic materials, can become superconductors when cooled sufficiently.
    But scientists have not previously understood what occurs in a material to make it a superconductor. In particular, how high-temperature superconductivity, which occurs in some copper-oxide materials, works has remained unexplained. A 1966 theory examining a different type of superconductor posited that electrons that spin in opposite directions bind together to form what’s called a Cooper pair, allowing electric current to pass through the material freely.
    A pair of University of Michigan-led studies examined how superconductivity works and found, in the first paper, that about 50% of the superconductivity can be attributed to the 1966 theory — but the reality, examined in the second paper, is a bit more complicated. The studies, led by recent U-M doctoral graduate Xinyang Dong and U-M physicist Emanuel Gull, are published in Nature Physics and the Proceedings of the National Academy of Sciences.
    Electrons floating in a crystal need something to bind them together, Gull said. Once you have two electrons bound together, they build a superconducting state. But what ties these electrons together? Electrons typically repel each other, but the 1966 theory suggested that in a crystal with strong quantum effects, the electron-electron repulsion is screened, or absorbed, by the crystal itself.
    While the electron repulsion is absorbed by the crystal, an opposite attraction emerges from the spinning properties of the electrons — and causes the electrons to bind in Cooper pairs. This underlies the lack of electronic resistivity. However, the theory doesn’t account for complex quantum effects in these crystals.

  • Scientists unravel 'Hall effect' mystery in search for next generation memory storage devices

    An advance in the use of antiferromagnetic materials in memory storage devices has been made by an international team of physicists.
    Antiferromagnets are materials that have an internal magnetism caused by the spin of electrons, but almost no external magnetic field. They are of interest for data storage because the absence of this external (or ‘long range’) magnetic field means the data units — bits — can be packed more densely within the material.
    This is in contrast to ferromagnets, used in standard magnetic memory devices. The bits in these devices do generate long-range magnetic fields, which prevent them from being packed too closely together, because otherwise they would interact.
    The property that is measured to read out an antiferromagnetic bit is called the Hall effect, which is a voltage that appears perpendicular to the applied current direction. If the spins in the antiferromagnet are all flipped, the Hall voltage changes sign. So one sign of the Hall voltage corresponds to a ‘1’, and the other sign to a ‘0’ — the basis of binary code used in all computing systems.
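    In code, that readout amounts to mapping the sign of a measured Hall voltage to a stored bit. A minimal sketch, where the sign-to-bit convention and the noise threshold are arbitrary choices for illustration rather than anything specified in the paper:

    ```python
    # Illustrative readout of an antiferromagnetic bit from the sign of the
    # Hall voltage. The sign convention and noise floor are arbitrary.
    def read_bit(hall_voltage_uv: float, noise_floor_uv: float = 0.1) -> int | None:
        """Map a measured Hall voltage to a stored bit; None if below the noise floor."""
        if abs(hall_voltage_uv) < noise_floor_uv:
            return None  # signal indistinguishable from noise
        return 1 if hall_voltage_uv > 0 else 0

    print(read_bit(+2.3))   # -> 1
    print(read_bit(-1.8))   # -> 0
    print(read_bit(0.02))   # -> None
    ```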
    Although scientists have known about the Hall effect in ferromagnetic materials for a long time, the effect in antiferromagnets has only been recognised in the past decade or so and is still poorly understood.
    A team of researchers at the University of Tokyo in Japan, Cornell and Johns Hopkins universities in the USA, and the University of Birmingham in the UK has suggested an explanation for the ‘Hall effect’ in a Weyl antiferromagnet (Mn3Sn), a material which has a particularly strong spontaneous Hall effect.
    Their results, published in Nature Physics, have implications for both ferromagnets and antiferromagnets — and therefore for next generation memory storage devices overall.
    The researchers were interested in Mn3Sn because it is not a perfect antiferromagnet, but does have a weak external magnetic field. The team wanted to find out if this weak magnetic field was responsible for the Hall effect.
    In their experiment, the team used a device invented by Doctor Clifford Hicks, at the University of Birmingham, who is also a co-author on the paper. The device can be used to apply a tunable stress to the material being tested. By applying this stress to this Weyl antiferromagnet, the researchers observed that the residual external magnetic field increased.
    If the magnetic field were driving the Hall effect, there would be a corresponding effect on the voltage across the material. The researchers showed that, in fact, the voltage does not change substantially, proving that the magnetic field is not important. Instead, they concluded, the arrangement of spinning electrons within the material is responsible for the Hall effect.
    Clifford Hicks, co-author on the paper at the University of Birmingham, said: “These experiments prove that the Hall effect is caused by the quantum interactions between conduction electrons and their spins. The findings are important for understanding — and improving — magnetic memory technology.”
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length.

  • Exploring quantum electron highways with laser light

    Topological insulators, or TIs, have two faces: Electrons flow freely along their surface edges, like cars on a superhighway, but can’t flow through the interior of the material at all. It takes a special set of conditions to create this unique quantum state — part electrical conductor, part insulator — which researchers hope to someday exploit for things like spintronics, quantum computing and quantum sensing. For now, they’re just trying to understand what makes TIs tick.
    In the latest advance along those lines, researchers at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University systematically probed the “phase transition” in which a TI loses its quantum properties and becomes just another ordinary insulator.
    They did this by using spiraling beams of laser light to generate harmonics — much like the vibrations of a plucked guitar string — from the material they were examining. Those harmonics make it easy to distinguish what’s happening in the superhighway layer from what’s happening in the interior and see how one state gives way to the other, they reported in Nature Photonics today.
    “The harmonics generated by the material amplify the effects we want to measure, making this a very sensitive way to see what’s going on in a TI,” said Christian Heide, a postdoctoral researcher with the Stanford PULSE Institute at SLAC, who led the experiments.
    “And since this light-based approach can be done in a lab with tabletop equipment, it makes exploring these materials easier and more accessible than some previous methods.”
    These results are exciting, added PULSE principal investigator Shambhu Ghimire, because they show the new method has potential for watching TIs flip back and forth between superhighway and insulating states as it happens and in fine detail — much like using a camera with a very fast shutter speed.

  • Swarms of microrobots could be solution to unblocking medical devices in body

    Swarms of microrobots injected into the human body could unblock internal medical devices and avoid the need for further surgery, according to new research from the University of Essex.
    The study is the first time scientists have developed magnetic microrobots to remove deposits in shunts — common internal medical devices used to treat a variety of conditions by draining excess fluid from organs.
    Shunts are prone to malfunctioning, often caused by blockages due to a build-up of sediment. The sediment not only narrows and obstructs liquid passing through the shunt, but it also affects the shunt’s flexibility. This leads to patients needing repeated, invasive surgeries throughout their lives either to replace the shunt or use a catheter to remove the blockage.
    However, this new research, led by microrobotics expert Dr Ali Hoshiar, from Essex’s School of Computer Science and Electronic Engineering, has shown there could be a wireless, non-invasive alternative to clearing the blockage in a shunt.
    Published in the journal IEEE Transactions on Biomedical Engineering, the study by Dr Hoshiar and his team shows that a swarm of hundreds of microrobots — made of magnetic nanoparticles — injected into the shunt could remove the sediment instead.
    “Once the magnetic microrobots are injected into the shunt they can be moved along the tube to the affected area using a magnetic field, generated by a powerful magnet on the body’s surface,” explained Dr Hoshiar. “The swarm of microrobots can then be moved so they scrape away the sediment, clearing the tube.
    “The non-invasive nature of this method is a considerable advantage to existing methods as it will potentially eliminate the risk of surgery and a surgery-related infection, thereby decreasing recovery time.”
    With each microrobot smaller than the width of a human hair, once the swarm has done its job it can either be guided to the stomach by a magnetic field or carried out by bodily fluid, so that it leaves the body naturally. Because the microrobots have very high biocompatibility, they will not cause toxicity.
    The research also found a direct relation between the strength of the magnetic field and the success of scraping away the sediment in the shunt.
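    That relation is what the basic physics would suggest: a magnetic particle in a non-uniform field feels a pull proportional to its magnetic moment and to the field gradient, so a stronger magnet at the skin surface exerts a larger force on the swarm. A rough order-of-magnitude sketch, with placeholder values rather than measurements from the study:

    ```python
    # Rough estimate of the pulling force on one magnetic microrobot in a field
    # gradient, F ~ m * dB/dx for a moment aligned with the field. All numbers
    # are placeholders, not values from the study.
    magnetic_moment_am2 = 1e-12      # placeholder moment of one microrobot (A*m^2)
    field_gradient_t_per_m = 5.0     # placeholder gradient from an external magnet (T/m)

    force_n = magnetic_moment_am2 * field_gradient_t_per_m
    print(f"approximate force per microrobot: {force_n:.1e} N")
    ```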
    This is the first proof-of-concept experiment using microswarms to open a blockage in a shunt. The next stage of this research is to work with clinicians to carry out trials. The researchers are also looking at how the concept could be extended to other applications.
    Story Source:
    Materials provided by University of Essex. Note: Content may be edited for style and length.