More stories

  • Global river database documents 40 years of change

    A first-ever database compiling movement of the largest rivers in the world over time could become a crucial tool for urban planners to better understand the deltas that are home to these rivers and a large portion of Earth’s population.
    The database, created by researchers at The University of Texas at Austin, uses publicly available remote sensing data to show how the river centerlines of the world’s 48 most threatened deltas have moved during the past 40 years. The data can be used to predict how rivers will continue to move over time and help governments manage population density and future development.
    “When we think about river management strategies, we have very little to no information about how rivers are moving over time,” said Paola Passalacqua, an associate professor in the Cockrell School of Engineering’s Department of Civil, Architectural and Environmental Engineering who leads the ongoing river analysis research.
    The research was published today in Proceedings of the National Academy of Sciences.
    The database includes three U.S. rivers: the Mississippi, the Colorado and the Rio Grande. Although some areas of these deltas are experiencing migration, overall, they are mostly stable, the data show. Aggressive containment strategies to keep those rivers in their place, especially near population centers, play a role in that, Passalacqua said.
    Average migration rates for each river delta help identify which areas are stable and which are experiencing major river shifts. The researchers also published more extensive data online that includes information about how different segments of rivers have moved over time. It could help planners see what’s going on in rural areas versus urban areas when making decisions about how to manage the rivers and what to do with development.
    The researchers leaned on techniques from a variety of disciplines to compile the data and published their methods online. Machine learning and image processing software helped them examine decades’ worth of images. The researchers worked with Alan Bovik of the Department of Electrical and Computer Engineering and doctoral student Leo Isikdogan to develop that technology. They also borrowed from fluid mechanics, using tools designed to monitor water particles in turbulence experiments to instead track changes to river locations over the years.
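    The story does not include any code, but the sketch below gives a rough sense of the kind of centerline-extraction and migration-tracking step described above: it skeletonizes binary water masks from two dates and measures how far the old centerline lies from the new one. The toy masks, the 30 m pixel size and the NumPy/scikit-image/SciPy tooling are illustrative assumptions, not the team’s published pipeline.

```python
# Illustrative sketch only: estimate river-centerline migration between two
# epochs from binary water masks (1 = water, 0 = land). This is NOT the
# UT Austin pipeline; the toy masks and 30 m pixel size are assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def centerline(water_mask):
    """Reduce a binary water mask to a one-pixel-wide centerline."""
    return skeletonize(water_mask.astype(bool))

def mean_migration(mask_t0, mask_t1, pixel_size_m=30.0):
    """Mean distance (meters) from the old centerline to the new one."""
    c0, c1 = centerline(mask_t0), centerline(mask_t1)
    # For every pixel, distance (in pixels) to the nearest new-centerline pixel
    dist_to_new = distance_transform_edt(~c1)
    return dist_to_new[c0].mean() * pixel_size_m

# Toy example: a straight channel four pixels wide that shifts two pixels sideways
t0 = np.zeros((60, 60), dtype=bool); t0[:, 28:32] = True
t1 = np.zeros((60, 60), dtype=bool); t1[:, 30:34] = True
print(f"mean centerline migration: {mean_migration(t0, t1):.1f} m")
```

    A real analysis would, of course, work with georeferenced satellite-derived masks and report migration rates per year rather than a single snapshot difference.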
    “We got the idea to use tools from fluid mechanics while attending a weekly department seminar where other researchers at the university share their work,” said Tess Jarriel, a graduate research assistant in Passalacqua’s lab and lead author of the paper. “It just goes to show how important it is to collaborate across disciplines.”
    Rivers with high sediment flux and frequent flooding naturally move more; that mobility is part of an important tradeoff that underpins Passalacqua’s research.
    By knowing more about these river deltas, where millions of people live, planners can get a better idea of how to balance these tradeoffs. Passalacqua and researchers in her lab have recently published research on the tradeoff between rivers’ need for freedom to move and humanity’s desire for stability.
    Passalacqua has been working on this topic for more than eight years. The team and collaborators are in the process of publishing another paper as part of this work that expands beyond the centerlines of rivers and will also look at riverbanks. That additional information will give an even clearer picture about river movement over time, with more nuance, because sides of the river can move in different directions and at different speeds.
    The research was funded through Passalacqua’s National Science Foundation CAREER award; grants from the NSF’s Ocean Sciences and Earth Sciences divisions; and Planet Texas 2050, a UT Austin initiative to support research to make communities more resilient. Co-authors on the paper are Jarriel and postdoctoral researcher John Swartz.
    Story Source:
    Materials provided by University of Texas at Austin. Note: Content may be edited for style and length. More

  • Hidden behavior of supercapacitor materials

    Researchers from the University of Surrey’s Advanced Technology Institute (ATI) and the University of São Paulo have developed a new analysis technique that will help scientists improve renewable energy storage by making better supercapacitors. The team’s new approach enables researchers to investigate the complex inter-connected behaviour of supercapacitor electrodes made from layers of different materials.
    Improvements in energy storage are vital if countries are to deliver carbon reduction targets. The inherent unpredictability of energy from solar and wind means effective storage is required to ensure consistency in supply, and supercapacitors are seen as an important part of the solution.
    Supercapacitors could also be the answer to charging electric vehicles much faster than is possible using lithium-ion batteries. However, more supercapacitor development is needed to enable them to effectively store enough electricity.
    Surrey’s peer-reviewed paper, published in Electrochimica Acta, explains how the research team used a cheap polymer material called Polyaniline (PANI), which stores energy through a mechanism known as pseudocapacitance. PANI is conductive and can be used as the electrode in a supercapacitor device, storing charge by trapping ions. To maximise energy storage, the researchers have developed a novel method of depositing a thin layer of PANI onto a forest of conductive carbon nanotubes. This composite material makes an excellent supercapacitive electrode, but the fact that it is made up of different materials makes it difficult to separate and fully understand the complex processes which occur during charging and discharging. This is a problem across the field of pseudocapacitor development.
    To tackle this problem, the researchers adopted a technique known as the Distribution of Relaxation Times. This analysis method lets scientists separate and identify the complex processes occurring at the electrode, making it possible to optimise fabrication methods to maximise useful reactions and suppress reactions that damage the electrode. The technique can also be applied by researchers using different materials in supercapacitor and pseudocapacitor development.
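    The story does not spell out how a Distribution of Relaxation Times analysis works; in practice it is usually obtained by a regularised inversion of an impedance spectrum. The sketch below shows one common formulation, a Tikhonov-regularised non-negative least-squares fit, applied to a synthetic single-relaxation spectrum; the frequency range, regularisation strength and test circuit are made-up assumptions, not the Surrey/São Paulo analysis code.

```python
# Illustrative DRT sketch (not the published analysis): recover a distribution
# of relaxation times gamma(tau) from impedance data Z(omega) by Tikhonov-
# regularized non-negative least squares on a synthetic test spectrum.
import numpy as np
from scipy.optimize import nnls

# Synthetic test spectrum: one R-C relaxation (R = 10 ohm, tau = 1 ms)
# in series with an ohmic resistance R_inf = 2 ohm.
freqs = np.logspace(-1, 5, 120)            # Hz
omega = 2 * np.pi * freqs
R_inf, R1, tau1 = 2.0, 10.0, 1e-3
Z = R_inf + R1 / (1 + 1j * omega * tau1)

# Discretize tau on a log grid and build the kernel of
#   Z(omega) = R_inf + sum_k gamma_k / (1 + j*omega*tau_k) * dln(tau)
taus = np.logspace(-6, 1, 80)
dlntau = np.log(taus[1] / taus[0])
W = omega[:, None] * taus[None, :]
A_re = dlntau / (1 + W**2)
A_im = -dlntau * W / (1 + W**2)

# Stack real/imaginary parts, add a column for R_inf and a Tikhonov block.
lam = 1e-2
A = np.vstack([np.hstack([A_re, np.ones((len(omega), 1))]),
               np.hstack([A_im, np.zeros((len(omega), 1))]),
               np.hstack([lam * np.eye(len(taus)), np.zeros((len(taus), 1))])])
b = np.concatenate([Z.real, Z.imag, np.zeros(len(taus))])

x, _ = nnls(A, b)
gamma, R_inf_fit = x[:-1], x[-1]
print(f"fitted R_inf = {R_inf_fit:.2f} ohm")
print(f"DRT peak at tau = {taus[np.argmax(gamma)]:.1e} s (true {tau1:.1e} s)")
```

    The peak positions of the recovered distribution indicate the characteristic time scales of the separate electrode processes, which is what makes the method useful for disentangling layered electrodes such as the PANI/nanotube composite.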
    Ash Stott, a postgraduate research student at the University of Surrey who was the lead scientist on the project, said:
    “The future of global energy use will depend on consumers and industry generating, storing and using energy more efficiently, and supercapacitors will be one of the leading technologies for intermittent storage, energy harvesting and high-power delivery. Our work will help make that happen more effectively.”
    Professor Ravi Silva, Director of the ATI and principal author, said:
    “Following on from world leaders pledging their support for green energy at COP26, our work shows researchers how to accelerate the development of high-performance materials for use as energy storage elements, a key component of solar or wind energy systems. This research brings us one step closer to a clean, cost-effective energy future.”
    Story Source:
    Materials provided by University of Surrey. Note: Content may be edited for style and length. More

  • AI behind deepfakes may power materials design innovations

    The person staring back from the computer screen may not actually exist, thanks to artificial intelligence (AI) capable of generating convincing but ultimately fake images of human faces. Now this same technology may power the next wave of innovations in materials design, according to Penn State scientists.
    “We hear a lot about deepfakes in the news today — AI that can generate realistic images of human faces that don’t correspond to real people,” said Wesley Reinhart, assistant professor of materials science and engineering and Institute for Computational and Data Sciences faculty co-hire at Penn State. “That’s exactly the same technology we used in our research. We’re basically just swapping out this example of images of human faces for elemental compositions of high-performance alloys.”
    The scientists trained a generative adversarial network (GAN) to create novel refractory high-entropy alloys, materials that can withstand ultra-high temperatures while maintaining their strength and that are used in technology from turbine blades to rockets.
    “There are a lot of rules about what makes an image of a human face or what makes an alloy, and it would be really difficult for you to know what all those rules are or to write them down by hand,” Reinhart said. “The whole principle of this GAN is you have two neural networks that basically compete in order to learn what those rules are, and then generate examples that follow the rules.”
    The team combed through hundreds of published examples of alloys to create a training dataset. The network features a generator that creates new compositions and a critic that tries to discern whether they look realistic compared to the training dataset. If the generator is successful, it is able to make alloys that the critic believes are real, and as this adversarial game continues over many iterations, the model improves, the scientists said.
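    To make the generator-versus-critic idea concrete, here is a minimal, self-contained toy in PyTorch that trains a GAN to propose composition vectors (element fractions that sum to one). The five-element palette, the synthetic stand-in for the “published alloys” and the network sizes are invented placeholders; this illustrates the general technique, not the Penn State model.

```python
# Toy GAN sketch: a generator proposes alloy compositions (fractions over a
# fixed element palette, summing to 1) and a critic tries to tell them apart
# from a training set of known compositions. All data and sizes are made up.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_elements, latent_dim = 5, 8

# Stand-in "training set": random compositions playing the role of the
# hundreds of published alloys mentioned in the story.
real = torch.distributions.Dirichlet(torch.full((n_elements,), 2.0)).sample((512,))

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                          nn.Linear(32, n_elements), nn.Softmax(dim=-1))
critic = nn.Sequential(nn.Linear(n_elements, 32), nn.ReLU(),
                       nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Critic update: real compositions should score 1, generated ones 0.
    z = torch.randn(64, latent_dim)
    fake = generator(z).detach()
    batch = real[torch.randint(len(real), (64,))]
    d_loss = bce(critic(batch), torch.ones(64, 1)) + \
             bce(critic(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the critic score generated compositions as real.
    z = torch.randn(64, latent_dim)
    g_loss = bce(critic(generator(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Sample a few candidate compositions (fractions over the element palette).
with torch.no_grad():
    print(generator(torch.randn(3, latent_dim)))
```

    In the published work the training data are real alloy compositions and the trained generator is further steered toward target properties; the toy above captures only the adversarial training loop itself.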
    After this training, the scientists asked the model to focus on creating alloy compositions with specific properties that would be ideal for use in turbine blades. More

  • Biodiversity ‘time machine’ uses artificial intelligence to learn from the past

    Experts can make crucial decisions about future biodiversity management by using artificial intelligence to learn from past environmental change, according to research at the University of Birmingham.
    A team, led by the University’s School of Biosciences, has proposed a ‘time machine framework’ that will help decision-makers effectively go back in time to observe the links between biodiversity, pollution events and environmental changes such as climate change as they occurred and examine the impacts they had on ecosystems.
    In a new paper, published in Trends in Ecology and Evolution, the team sets out how these insights can be used to forecast the future of ecosystem services such as climate change mitigation, food provisioning and clean water.
    Using this information, stakeholders can prioritise actions which will provide the greatest impact.
    Principal investigator, Dr Luisa Orsini, is an Associate Professor at the University of Birmingham and Fellow of The Alan Turing Institute. She explained: “Biodiversity sustains many ecosystem services. Yet these are declining at an alarming rate. As we discuss vital issues like these at the COP26 Summit in Glasgow, we might be more aware than ever that future generations may not be able to enjoy nature’s services if we fail to protect biodiversity.”
    Biodiversity loss happens over many years and is often caused by the cumulative effect of multiple environmental threats. Only by quantifying biodiversity before, during and after pollution events, can the causes of biodiversity and ecosystem service loss be identified, say the researchers.
    Managing biodiversity whilst ensuring the delivery of ecosystem services is a complex problem because of limited resources, competing objectives and the need for economic profitability. Protecting every species is impossible. The time machine framework offers a way to prioritise conservation approaches and mitigation interventions.
    Dr Orsini added: “We have already seen how a lack of understanding of the interlinked processes underpinning ecosystem services has led to mismanagement, with negative impacts on the environment, the economy and on our wellbeing. We need a whole-system, evidence-based approach in order to make the right decisions in the future. Our time-machine framework is an important step towards that goal.”
    Lead author, Niamh Eastwood, is a PhD student at the University of Birmingham. She said: “We are working with stakeholders (e.g. UK Environment Agency) to make this framework accessible to regulators and policy makers. This will support decision-making in regulation and conservation practices.”
    The framework draws on the expertise of biologists, ecologists, environmental scientists, computer scientists and economists. It is the result of a cross-disciplinary collaboration among the University of Birmingham, The Alan Turing Institute, the University of Leeds, Cardiff University, the University of California, Berkeley, the American University of Paris and Goethe University Frankfurt.
    Story Source:
    Materials provided by University of Birmingham. Note: Content may be edited for style and length. More

  • How monitoring a quantum Otto engine affects its performance

    A classical Otto engine uses a gas confined to a piston, which undergoes four subsequent strokes: it is first compressed, then heated up, expanded, and finally cooled down to its initial temperature. Today, with significant advancements in nano-fabrication, the quantum revolution is upon us, bringing quantum heat engines into the limelight. Like their classical counterparts, quantum heat engines can be operated in various protocols, which may be continuous or cyclic. Unlike a classical engine, which uses a macroscopic amount of working substance, the working substance of a quantum engine has pronounced quantum features. The most prominent of these is the discreteness of the possible energies it can take. Even more outlandish from the classical point of view is the fact that a quantum system may occupy two or more of its allowed energies at the same time. This property, which has no classical analog, is known as ‘coherence’. Otherwise, a quantum Otto engine is characterized by four strokes, just like its classical counterpart.
    Determining the quantum Otto engine’s performance metrics, such as power output or efficiency, is the key to improving designs and tailoring better working substances. A direct diagnosis of such metrics requires measuring the energies of the engine at the beginning and end of each stroke. While a classical engine is only negligibly affected by measurements, in a quantum engine the act of measurement itself severely disturbs the engine’s quantum state. Most importantly, any coherence present in the system at the end of the cycle is completely removed by the measurement.
    It has long been believed that these measurement-induced effects are irrelevant to the understanding of quantum engines, and they have therefore been neglected in traditional quantum thermodynamics. Moreover, little thought has been given to the design of monitoring protocols that yield a reliable diagnosis of the engine’s performance while altering it as little as possible.
    However, new research performed at the Center for Theoretical Physics of Complex Systems within the Institute for Basic Science, South Korea, may change this rigid perspective. The researchers investigated the impact of different measurement-based diagnostic schemes on the performance of a quantum Otto engine and discovered a minimally invasive measurement method that preserves coherence across the cycles.
    The researchers utilized the so-called ‘repeated contacts scheme’, in which the engine’s state is recorded by an ancillary probe and measurements of the probe are performed only at the end of the engine’s working cycles. This bypasses the need to measure the engine repeatedly after each stroke and avoids undesirable measurement-induced effects such as the removal of any coherence built up during the cycle. Preserving coherence throughout the engine’s operation enhanced critical performance metrics such as maximum power output and reliability, making the engine more capable and dependable. According to Prof. Thingna, “this is the first example in which the influence of an experimenter, who wants to know whether the engine does what it is designed to do, has been properly considered.”
    Covering a broad spectrum of operating modes for engines whose working substance has just two quantum states, the researchers found that only for idealized cycles that run infinitely slowly does it make no difference which monitoring scheme is applied. All engines that run in finite time, and hence are of practical interest, deliver considerably better power output and reliability when they are monitored according to the repeated contacts scheme.
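    To give a feel for the idealized, infinitely slow limit mentioned above, here is a minimal textbook-style sketch of a two-level quantum Otto cycle with no monitoring back-action at all. The level splittings and bath temperatures are arbitrary illustrative values, and the calculation is the standard ideal-cycle result, not the paper’s finite-time model.

```python
# Minimal sketch (not from the paper): an idealized two-level quantum Otto
# cycle in the infinitely slow limit, ignoring any measurement back-action.
# The splittings omega_c, omega_h and bath temperatures T_c, T_h are arbitrary.
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def excited_population(omega, T):
    """Thermal (Gibbs) population of the upper level of a two-level system."""
    x = hbar * omega / (kB * T)
    return 1.0 / (1.0 + np.exp(x))

omega_c, omega_h = 2 * np.pi * 5e9, 2 * np.pi * 8e9   # cold/hot splittings (rad/s)
T_c, T_h = 0.05, 0.30                                 # bath temperatures (K)

p_c = excited_population(omega_c, T_c)  # population after contact with the cold bath
p_h = excited_population(omega_h, T_h)  # population after contact with the hot bath

Q_h = hbar * omega_h * (p_h - p_c)             # heat absorbed from the hot bath
W = hbar * (omega_h - omega_c) * (p_h - p_c)   # net work output per cycle

print(f"work per cycle : {W:.3e} J")
print(f"heat from hot  : {Q_h:.3e} J")
print(f"efficiency     : {W / Q_h:.3f}  (Otto limit 1 - w_c/w_h = {1 - omega_c/omega_h:.3f})")
```

    The efficiency printed here reproduces the ideal Otto value 1 − ω_c/ω_h; the paper’s point is precisely that finite-time engines deviate from this limit, and that the deviation depends on how they are monitored.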
    Overall, the researchers concluded that accounting for the nature of the measurement technique can bring theory closer to experimental data. Hence, it is vital to take these effects into account when monitoring and testing quantum heat engines. This research was published in PRX Quantum.
    Story Source:
    Materials provided by Institute for Basic Science. Note: Content may be edited for style and length. More

  • New insights into the structure of the neutron

    All known atomic nuclei, and therefore almost all visible matter, consist of protons and neutrons, yet many of the properties of these omnipresent natural building blocks remain unknown. As an uncharged particle, the neutron in particular resists many types of measurement, and 90 years after its discovery there are still many unanswered questions regarding its size and lifetime, among other things. The neutron consists of three quarks that whirl around inside it, held together by gluons. Physicists use electromagnetic form factors to describe this dynamic inner structure of the neutron. These form factors represent an average distribution of electric charge and magnetization within the neutron and can be determined experimentally.
    Blank space on the form factor map filled with precise data
    “A single form factor, measured at a certain energy level, does not say much at first,” explained Professor Frank Maas, a researcher at the PRISMA+ Cluster of Excellence in Mainz, the Helmholtz Institute Mainz (HIM), and GSI Helmholtzzentrum für Schwerionenforschung Darmstadt. “Measurements of the form factors at various energies are needed in order to draw conclusions on the structure of the neutron.” In certain energy ranges, which are accessible using standard electron-proton scattering experiments, form factors can be determined fairly accurately. However, so far this has not been the case with other ranges for which so-called annihilation techniques are needed that involve matter and antimatter mutually destroying each other.
    In the BESIII Experiment being undertaken in China, it has recently proved possible to precisely determine the corresponding data in the energy range of 2 to 3.8 gigaelectronvolts. As pointed out in an article published by the partnership in the current issue of Nature Physics, this is more than 60 times more accurate than previous measurements. “With this new data, we have, so to speak, filled a blank space on the neutron form factor ‘map’, which until now was unknown territory,” Professor Frank Maas pointed out. “This data is now as precise as that obtained in corresponding scattering experiments. As a result, our knowledge of the form factors of the neutron will change dramatically and as such we will get a far more comprehensive picture of this important building block of nature.”
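    The article does not give the underlying formulas, but for context: in annihilation measurements of this kind, the electric and magnetic form factors G_E and G_M are usually extracted from the Born cross section for e⁺e⁻ → n n̄. A commonly used textbook-level relation (quoted here as background, not taken from the BESIII paper) is:

```latex
% Born cross section for e+ e- -> n nbar and the "effective" form factor
% often reported when G_E and G_M cannot be separated (natural units):
\sigma_{\mathrm{Born}}(q^2)
  = \frac{4\pi\alpha^{2}\beta}{3q^{2}}
    \left[\,\lvert G_M(q^2)\rvert^{2}
          + \frac{2m_n^{2}}{q^{2}}\,\lvert G_E(q^2)\rvert^{2}\right],
\qquad \beta = \sqrt{1-\frac{4m_n^{2}}{q^{2}}},
\qquad
\lvert G_{\mathrm{eff}}(q^2)\rvert
  = \sqrt{\frac{\sigma_{\mathrm{Born}}(q^2)}
               {\dfrac{4\pi\alpha^{2}\beta}{3q^{2}}
                \left(1+\dfrac{2m_n^{2}}{q^{2}}\right)}}.
```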
    Truly pioneering work in a difficult field of research
    To make inroads into completing the required fields of the form factor ‘map’, the physicists needed antiparticles. The international partnership therefore used the Beijing Electron-Positron Collider II for its measurements. Here, electrons and their positive antiparticles, i.e., positrons, are allowed to collide in an accelerator and destroy each other, creating other new particle pairs — a process known as ‘annihilation’ in physics. Using the BESIII detector, the researchers observed and analyzed the outcome, in which the electrons and positrons form neutrons and anti-neutrons. “Annihilation experiments like these are nowhere near as well-established as the standard scattering experiments,” added Maas. “Substantial development work was needed to carry out the current experiment — the intensity of the accelerator had to be improved and the detection method for the elusive neutron had to be practically reinvented in the analysis of the experimental data. This was by no means straightforward. Our partnership has done truly pioneering work here.”
    Other interesting phenomena
    In addition, the measurements showed the physicists that the form factor does not vary smoothly with energy but instead follows an oscillating pattern in which the fluctuations become smaller as the energy increases. They observed similarly surprising behavior in the case of the proton — here, however, the fluctuations were mirrored, i.e., phase-shifted. “This new finding indicates first and foremost that nucleons do not have a simple structure,” Professor Frank Maas explained. “Now our colleagues on the theoretical side have been asked to develop models to account for this extraordinary behavior.”
    Finally, on the basis of their measurements, the BESIII partnership has modified how the relative ratio of the neutron to proton form factors needs to be viewed. Many years ago, the result produced in the FENICE experiment was a ratio greater than one, which means that the neutron must have a consistently larger form factor than the proton. “But as the proton is charged, you would expect it to be completely the other way round,” Maas asserted. “And that’s just what we see when we compare our neutron data with the proton data we’ve recently acquired through BESIII. So here we’ve rectified how we need to perceive the very smallest particles.”
    From the micro- to the macrocosm
    According to Maas, the new findings are especially important because they are so fundamental. “They provide new perspectives on the basic properties of the neutron. What’s more, by looking at the smallest building blocks of matter we can also understand phenomena that occur in the largest dimensions — such as the fusion of two neutron stars. This physics of extremes is already very fascinating.” More

  • Earth’s lower atmosphere is rising due to climate change

    Global temperatures are rising and so, it seems, is part of the sky.

    Atmosphere readings collected by weather balloons in the Northern Hemisphere over the last 40 years reveal that climate change is pushing the upper boundary of the troposphere — the slice of sky closest to the ground — steadily upward at a rate of 50 to 60 meters per decade, researchers report November 5 in Science Advances.

    Temperature is the driving force behind this change, says Jane Liu, an environmental scientist at the University of Toronto. The troposphere varies in height around the world, reaching as high as 20 kilometers in the tropics and as low as seven kilometers near the poles. During the year, the upper boundary of the troposphere — called the tropopause — naturally rises and falls with the seasons as air expands in the heat and contracts in the cold. But as greenhouse gases trap more and more heat in the atmosphere, the troposphere is expanding higher into the atmosphere (SN: 10/26/21).

    Liu and her colleagues found that the tropopause rose an average of about 200 meters in height from 1980 to 2020. Nearly all weather occurs in the troposphere, but it’s unlikely that this shift will have a big effect on weather, the researchers say. Still, this research is an important reminder of the impact of climate change on our world, Liu says.
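    A quick consistency check (not spelled out in the story) ties the reported per-decade rate to the 40-year total:

```latex
% 40 years at the reported rate of 50--60 m per decade:
4~\text{decades} \times \left(50\text{--}60~\mathrm{m/decade}\right) \approx 200\text{--}240~\mathrm{m}
```

    which is consistent with the roughly 200-meter average rise reported for 1980 to 2020.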

    “We see signs of global warming around us, in retreating glaciers and rising sea levels,” she says. “Now, we see it in the height of the troposphere.” More

  • Experts master defects in semiconductors

    Researchers at The City College of New York have discovered a novel way to manipulate defects in semiconductors. The study holds promising opportunities for novel forms of precision sensing, or the transfer of quantum information between physically separate qubits, as well as for improving the fundamental understanding of charge transport in semiconductors.
    Using laser optics and confocal microscopy, the researchers demonstrated that they could make one defect eject charges — holes — under laser illumination, allowing another defect several micrometers away to catch them. The capture then switches the charge state of the second defect from negative to neutral.
    The study utilized a special type of point defect — the nitrogen-vacancy center in diamond. These color centers possess spin — an inherent form of angular momentum carried by elementary particles — making them attractive for quantum sensing and quantum information processing. The researchers used a specific protocol, based on the defect’s spin projection, to single out the charges originating solely from the nitrogen-vacancy center.
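    As a purely schematic illustration of the ejection-and-capture process described above, the toy rate model below tracks the probability that the distant target defect has flipped from negative to neutral. Every rate and the capture fraction are made-up numbers, not values from the CCNY experiment.

```python
# Toy rate-equation sketch (NOT the CCNY experiment): a laser-ionized "source"
# defect emits holes at rate k_ion, and a distant "target" defect captures a
# small fraction of them, flipping from a negative (NV-) to a neutral (NV0)
# charge state. All numbers are made-up illustrative values.
from scipy.integrate import solve_ivp

k_ion = 1.0e4       # holes ejected per second at the source under illumination
p_capture = 1.0e-3  # fraction of ejected holes captured by the target defect

def rhs(t, y):
    p_neutral = y[0]                        # probability the target is NV0
    capture_rate = k_ion * p_capture        # holes captured per second
    return [capture_rate * (1.0 - p_neutral)]   # NV- + hole -> NV0

sol = solve_ivp(rhs, (0.0, 1.0), [0.0], dense_output=True)
for t in (0.01, 0.1, 0.5, 1.0):
    print(f"t = {t:5.2f} s   P(target neutral) = {sol.sol(t)[0]:.3f}")
```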
    “The key was isolating the source defect, with only the nitrogen vacancy being present, which we achieved by making charge ejection conditional on the defect’s spin state,” said Artur Lozovoi, a physics postdoctoral researcher in CCNY’s Division of Science and the paper’s lead author. “Another crucial aspect was having a ‘clean’ diamond with as few defects as possible. Then, the long-range attractive Coulombic interaction between a defect and a hole substantially increases the probability of the charge going towards the target, which ultimately made our observations possible.”
    The present study uncovered that in the clean material the charge transport efficiency is a thousand times higher than observed in previous experiments, a phenomenon characterized by the researchers as a “giant capture cross-section.” This discovery could pave the way towards establishing a quantum information bus between color center qubits in semiconductors.
    “This process of a charge capture by an individual defect has only been described theoretically before,” added Lozovoi. “There is now an experimental platform that enables us to look into how these defects interact with free charges in crystals and how we can use it for quantum information processing.”
    Story Source:
    Materials provided by City College of New York. Note: Content may be edited for style and length. More