More stories

  • Civilization may need to 'forget the flame' to reduce CO2 emissions

    Just as a living organism continually needs food to maintain itself, an economy consumes energy to do work and keep things going. That consumption comes with the cost of greenhouse gas emissions and climate change, though. So, how can we use energy to keep the economy alive without burning out the planet in the process?
    In a paper in PLOS ONE, University of Utah professor of atmospheric sciences Tim Garrett, with mathematician Matheus Grasselli of McMaster University and economist Stephen Keen of University College London, report that current world energy consumption is tied to unchangeable past economic production. And the way out of an ever-increasing rate of carbon emissions may not necessarily be ever-increasing energy efficiency — in fact it may be the opposite.
    “How do we achieve a steady-state economy where economic production exists, but does not continually increase our size and add to our energy demands?” Garrett says. “Can we survive only by repairing decay, simultaneously switching existing fossil infrastructure to a non-fossil appetite? Can we forget the flame?”
    Thermoeconomics
    Garrett is an atmospheric scientist. But he recognizes that atmospheric phenomena, including rising carbon dioxide levels and climate change, are tied to human economic activity. “Since we model the earth system as a physical system,” he says, “I wondered whether we could model economic systems in a similar way.”
    He’s not alone in thinking of economic systems in terms of physical laws. There’s a field of study, in fact, called thermoeconomics. Just as thermodynamics describes how heat and entropy (disorder) flow through physical systems, thermoeconomics explores how matter, energy, entropy and information flow through human systems.
    Many of these studies looked at correlations between energy consumption and current production, or gross domestic product. Garrett took a different approach; his concept of an economic system begins with the centuries-old idea of a heat engine. A heat engine consumes energy at high temperatures to do work and emits waste heat. But it only consumes. It doesn’t grow.
    Now envision a heat engine that, like an organism, uses energy to do work not just to sustain itself but also to grow. Due to past growth, it requires an ever-increasing amount of energy to maintain itself. For humans, the energy comes from food. Most goes to sustenance and a little to growth. And from childhood to adulthood our appetite grows. We eat more and exhale an ever-increasing amount of carbon dioxide.
    “We looked at the economy as a whole to see if similar ideas could apply to describe our collective maintenance and growth,” Garrett says. While societies consume energy to maintain day to day living, a small fraction of consumed energy goes to producing more and growing our civilization.
    “We’ve been around for a while,” he adds. “So it is an accumulation of this past production that has led to our current size, and our extraordinary collective energy demands and CO2 emissions today.”
    Growth as a symptom
    To test this hypothesis, Garrett and his colleagues used economic data from 1980 to 2017 to quantify the relationship between past cumulative economic production and the current rate at which we consume energy. Regardless of the year examined, they found that every trillion inflation-adjusted 2010 U.S. dollars of worldwide economic production corresponded with an enlarged civilization that required an additional 5.9 gigawatts of power production to sustain itself. In a fossil economy, that’s equivalent to around 10 coal-fired power plants, Garrett says, leading to about 1.5 million tons of CO2 emitted to the atmosphere each year. Our current energy usage, then, is the natural consequence of our cumulative previous economic production.
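    Expressed as a formula, the finding amounts to a simple proportionality: current power demand scales with cumulative inflation-adjusted production at roughly 5.9 gigawatts per trillion 2010 U.S. dollars. The short Python sketch below merely restates that reported relationship; the cumulative-production figure used in the example is a made-up placeholder, not a number from the study.

    ```python
    # Illustrative sketch only (not the authors' code): the article's reported
    # scaling of ~5.9 GW of sustained power demand per trillion inflation-adjusted
    # 2010 U.S. dollars of *cumulative* world economic production.

    GW_PER_TRILLION_USD_2010 = 5.9  # fitted constant reported in the article

    def implied_power_demand_gw(cumulative_production_trillion_usd_2010: float) -> float:
        """Current power demand (GW) implied by cumulative past production."""
        return GW_PER_TRILLION_USD_2010 * cumulative_production_trillion_usd_2010

    # Hypothetical example: if cumulative world production were 2,000 trillion
    # 2010 USD, the implied demand would be about 11,800 GW (11.8 TW).
    print(implied_power_demand_gw(2000.0))
    ```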
    They came to two surprising conclusions. First, although improving efficiency through innovation is a hallmark of efforts to reduce energy use and greenhouse gas emissions, efficiency has the side effect of making it easier for civilization to grow and consume more.
    Second, current rates of world population growth may not be the cause of rising rates of energy consumption, but rather a symptom of past efficiency gains.
    “Advocates of energy efficiency for climate change mitigation may seem to have a reasonable point,” Garrett says, “but their argument only works if civilization maintains a fixed size, which it doesn’t. Instead, an efficient civilization is able to grow faster. It can more effectively use available energy resources to make more of everything, including people. Expansion of civilization accelerates rather than declines, and so do its energy demands and CO2 emissions.”
    A steady-state decarbonized future?
    So what do those conclusions mean for the future, particularly in relation to climate change? We can’t just stop consuming energy today any more than we can erase the past, Garrett says. “We have inertia. Pull the plug on energy consumption and civilization stops emitting but it also becomes worthless. I don’t think we could accept such starvation.”
    But is it possible to undo the economic and technological progress that have brought civilization to this point? Can we, the species who harnessed the power of fire, now “forget the flame,” in Garrett’s words, and decrease efficient growth?
    “It seems unlikely that we will forget our prior innovations, unless collapse is imposed upon us by resource depletion and environmental degradation,” he says, “which, obviously, we hope to avoid.”
    So what kind of future, then, does Garrett’s work envision? It’s one in which the economy manages to hold at a steady state — where the energy we use is devoted to maintaining our civilization and not expanding it.
    It’s also one where the energy of the future can’t be based on fossil fuels. Those have to stay in the ground, he says.
    “At current rates of growth, just to maintain carbon dioxide emissions at their current level will require rapidly constructing renewable and nuclear facilities, about one large power plant a day. And somehow it will have to be done without inadvertently supporting economic production as well, in such a way that fossil fuel demands also increase.”
    It’s a “peculiar dance,” he says, between eliminating the prior fossil-based innovations that accelerated civilization expansion, while innovating new non-fossil fuel technologies. Even if this steady-state economy were to be implemented immediately, stabilizing CO2 emissions, the pace of global warming would be slowed — not eliminated. Atmospheric levels of CO2 would still reach double their pre-industrial level before equilibrating, the research found.
    By looking at the global economy through a thermodynamic lens, Garrett acknowledges that there are unchangeable realities. Any form of an economy or civilization needs energy to do work and survive. The trick is balancing that with the climate consequences.
    “Climate change and resource scarcity are defining challenges of this century,” Garrett says. “We will not have a hope of surviving our predicament by ignoring physical laws.”
    Future work
    This study marks the beginning of the collaboration between Garrett, Grasselli and Keen. They’re now working to connect the results of this study with a full model for the economy, including a systematic investigation of the role of matter and energy in production.
    “Tim made us focus on a pretty remarkable empirical relationship between energy consumption and cumulative economic output,” Grasselli says. “We are now busy trying to understand what this means for models that include notions that are more familiar to economists, such as capital, investment and the always important question of monetary value and inflation.”

  • Student research team develops hybrid rocket engine

    In a year defined by obstacles, a University of Illinois at Urbana-Champaign student rocket team persevered. Working together across five time zones, they successfully designed a hybrid rocket engine that uses paraffin and a novel nitrous oxide-oxygen mixture called Nytrox. The team has its sights set on launching a rocket with the new engine at the 2021 Intercollegiate Rocketry and Engineering Competition.
    “Hybrid propulsion powers Virgin Galactic’s suborbital tourist spacecraft and the development of that engine has been challenging. Our students are now experiencing those challenges first hand and learning how to overcome them,” said Michael Lembeck, the team’s faculty adviser.
    Last year the team witnessed a number of catastrophic failures of hybrid engines that used nitrous oxide. The propellant frequently overheated in the New Mexico desert, where the IREC competition is held. Lembeck said this motivated the team to find an alternative oxidizer that could remain stable at the desert’s high temperatures. Nytrox surfaced as the solution to the problem.
    As the team began working on the engine this past spring semester, excitement to conduct hydrostatic testing of the ground oxidizer tank vessel quickly turned to frustration as the team lacked a safe test location.
    Team leader Vignesh Sella said, “We planned to conduct the test at the U of I’s Willard airport retired jet engine testing facility. But the Department of Aerospace Engineering halted all testing until safety requirements could be met.”
    Sella said they were disheartened at first, but rallied by creating a safety review meeting along with another student rocket group to examine their options.
    “As a result of that meeting, we came up with a plan to move the project forward. The hybrid team rigorously evaluated our safety procedures, and had our work reviewed by Dr. Dassou Nagassou, the Aerodynamics Research Lab manager. He became a great resource for us, and a very helpful mentor.”
    Sella and Andrew Larkey also approached Purdue University to draw on its extensive experience in rocket propulsion. They connected with Chris Nielson, a graduate student and lab manager at Purdue, held preliminary design reviews over the phone, and were eventually invited to conduct their hydrostatic and cold-flow testing at Purdue’s Zucrow Laboratories, a facility dedicated to rocket propulsion testing with several experts in the field on-site.
    “We sent a few of the members there to scout the location and take notes before bringing the whole team there for a test,” Sella said. “These meetings, relationships, and advances, although they may sound smooth and easy to establish, were arduous and difficult to attain. It was a great relief to us to have the support from the department, a pressure vessel expert as our mentor, and Zucrow Laboratories available to our team.”
    The extended abstract, which the team had submitted much earlier to the AIAA Propulsion and Energy conference, assumed the engine would have been assembled and tested before the documentation process began. Sella said they wanted to document hard test data but had to switch tactics in March. The campus move to online-only classes also curtailed all in-person activities, including those of registered student organizations like ISS.
    “As the disruptions caused by COVID-19 required us to work remotely, we pivoted the paper by focusing on documenting the design processes and decisions we made for the engine. This allowed us to work remotely and complete a paper that wasn’t too far from the original abstract. Our members, some of whom are international, met on Zoom and Discord to work on the paper together virtually, over five time zones,” Sella said.
    Sella said he and the entire team are proud of what they have accomplished and are “returning this fall with a vengeance.”
    The Illinois Space Society is a technical, professional, and educational outreach student organization at the U of I in the Department of Aerospace Engineering. The society consists of 150 active members. The hybrid rocket engine team consisted of 20 members and is one of the five technical projects within ISS. The project began in 2013 with the goal of constructing a subscale hybrid rocket engine before transitioning to a full-scale engine. The subscale hybrid rocket engine was successfully constructed and hot fired in the summer of 2018, yielding the positive test results necessary to move on to designing and manufacturing a full-scale engine.
    “After the engine completes its testing, the next task will be integrating the engine into the rocket vehicle,” said Sella. “This will require fitting key flight hardware components within the geometric constraints of a rocket body tube and structurally securing the engine to the vehicle.”
    In June 2021, the rocket will be transported to Spaceport America in Truth or Consequences, New Mexico, for its first launch.
    This work was supported by the U of I Student Sustainability Committee, the Office of Undergraduate Research, and the Illinois Space Society. Technical support was provided by the Department of Aerospace Engineering, the School of Chemical Sciences Machine Shop, Zucrow Laboratories and Christopher D. Nilsen at Purdue University, Stephen A. Whitmore of Utah State University, and Dassou Nagassou of the Aerodynamics Research Laboratory at Illinois.

  • Artificial intelligence learns continental hydrology

    Changes in the water masses stored on the continents can be detected with the help of satellites. The data sets on the Earth’s gravitational field required for this come from the GRACE and GRACE-FO satellite missions. Because these data sets capture only large-scale mass anomalies, they allow no conclusions about small-scale structures, such as the actual distribution of water masses in rivers and their branches. Using the South American continent as an example, Earth system modellers at the German Research Centre for Geosciences (GFZ) have developed a new deep-learning method that quantifies both small- and large-scale changes in water storage with the help of satellite data. The new method combines deep learning, hydrological models and Earth observations from gravimetry and altimetry.
    So far it is not precisely known how much water a continent really stores. Continental water masses are also constantly changing, affecting the Earth’s rotation and acting as a link in the water cycle between atmosphere and ocean. Amazon tributaries in Peru, for example, carry huge amounts of water in some years but only a fraction of that in others. In addition to the water in rivers and other bodies of fresh water, considerable amounts are stored in soil, snow and underground reservoirs, which are difficult to quantify directly.
    The research team around lead author Christopher Irrgang has now developed a new method to infer the water quantities stored across the South American continent from the coarsely resolved satellite data. “For the so-called downscaling, we are using a convolutional neural network, CNN for short, in combination with a newly developed training method,” Irrgang says. “CNNs are particularly well suited for processing spatial Earth observations, because they can reliably extract recurrent patterns such as lines, edges or more complex shapes and characteristics.”
    To learn the connection between continental water storage and the corresponding satellite observations, the CNN was trained with simulation data from a numerical hydrological model covering the period 2003 to 2018. Satellite altimetry data from the Amazon region were additionally used for validation. What is remarkable is that the CNN continuously self-corrects and self-validates in order to make the most accurate statements possible about the distribution of water storage. “This CNN therefore combines the advantages of numerical modelling with high-precision Earth observation,” according to Irrgang.
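    For readers curious what such a setup can look like in code, the sketch below is a minimal, hypothetical example of CNN-based downscaling in Python with PyTorch: a small network maps a coarse gravimetry-style grid onto a finer water-storage grid and is trained against hydrological-model output. The architecture, grid sizes, channel counts and training details are illustrative assumptions, not the GFZ team’s actual model.

    ```python
    # Minimal, hypothetical sketch (not the GFZ model): a CNN that maps coarse
    # gravimetry-style grids to a finer water-storage grid ("downscaling").
    import torch
    import torch.nn as nn

    class DownscalingCNN(nn.Module):
        def __init__(self, upscale: int = 4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Upsample the coarse grid to the target resolution, then refine.
            self.upsample = nn.Upsample(scale_factor=upscale, mode="bilinear",
                                        align_corners=False)
            self.head = nn.Conv2d(32, 1, kernel_size=3, padding=1)

        def forward(self, coarse):           # coarse: (batch, 1, H, W)
            x = self.features(coarse)
            x = self.upsample(x)             # (batch, 32, H*upscale, W*upscale)
            return self.head(x)              # fine-grid water-storage anomaly

    # Toy training loop: hydrological-model output serves as the fine-grid target.
    model = DownscalingCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    coarse_obs = torch.randn(8, 1, 16, 16)   # stand-in for GRACE-like anomalies
    fine_target = torch.randn(8, 1, 64, 64)  # stand-in for simulated fine grid
    for _ in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(coarse_obs), fine_target)
        loss.backward()
        optimizer.step()
    ```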
    The study shows that the new deep-learning method is particularly reliable for the tropical regions of the South American continent north of 20° S, where rain forests, vast surface waters and large groundwater basins are located, as well as for the groundwater-rich western part of South America’s southern tip. The downscaling works less well in dry and desert regions. This can be explained by the comparatively low variability of the already low water storage there, which has only a marginal effect on the training of the neural network. For the Amazon region, however, the researchers were able to show that the forecast of the validated CNN was more accurate than the numerical model used.
    In the future, large-scale and regional analyses and forecasts of global continental water storage will be urgently needed. The further development of numerical models, combined with innovative deep-learning methods, will play an increasingly important role in gaining comprehensive insight into continental hydrology. Beyond purely geophysical investigations, there are many other possible applications, such as studying the impact of climate change on continental hydrology, identifying stress factors for ecosystems such as droughts or floods, and developing water management strategies for agricultural and urban regions.

    Story Source:
    Materials provided by GFZ GeoForschungsZentrum Potsdam, Helmholtz Centre. Note: Content may be edited for style and length.

  • How to make AI trustworthy

    One of the biggest impediments to adoption of new technologies is trust in AI.
    Now, a new tool developed by USC Viterbi Engineering researchers generates automatic indicators if data and predictions generated by AI algorithms are trustworthy. Their research paper, “There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks” by Mingxi Cheng, Shahin Nazarian and Paul Bogdan of the USC Cyber Physical Systems Group, was featured in Frontiers in Artificial Intelligence.
    Neural networks are a type of artificial intelligence that are modeled after the brain and generate predictions. But can the predictions these neural networks generate be trusted? One of the key barriers to adoption of self-driving cars is that the vehicles need to act as independent decision-makers on auto-pilot and quickly decipher and recognize objects on the road — whether an object is a speed bump, an inanimate object, a pet or a child — and make decisions on how to act if another vehicle is swerving towards it. Should the car hit the oncoming vehicle or swerve and hit what the vehicle perceives to be an inanimate object or a child? Can we trust the computer software within the vehicles to make sound decisions within fractions of a second — especially when conflicting information is coming from different sensing modalities such as computer vision from cameras or data from lidar? Knowing which systems to trust and which sensing system is most accurate would be helpful to determine what decisions the autopilot should make.
    Lead author Mingxi Cheng was driven to work on this project by this thought: “Even humans can be indecisive in certain decision-making scenarios. In cases involving conflicting information, why can’t machines tell us when they don’t know?”
    A tool the authors created, named DeepTrust, can quantify the amount of uncertainty, says Paul Bogdan, an associate professor in the Ming Hsieh Department of Electrical and Computer Engineering and the paper’s corresponding author, and thus whether human intervention is necessary.
    Developing this tool took the USC research team almost two years employing what is known as subjective logic to assess the architecture of the neural networks. On one of their test cases, the polls from the 2016 Presidential election, DeepTrust found that the prediction pointing towards Clinton winning had a greater margin for error.
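    Subjective logic, the formalism mentioned above, represents a statement not with a single probability but with an “opinion” that explicitly separates belief, disbelief and uncertainty. The Python sketch below illustrates only that standard binomial-opinion bookkeeping; it is not the DeepTrust tool itself, and the evidence counts in the example are invented.

    ```python
    # Standard subjective-logic binomial opinion built from positive/negative
    # evidence. Illustrates the formalism only; this is NOT the DeepTrust tool.
    from dataclasses import dataclass

    @dataclass
    class Opinion:
        belief: float       # b: evidence-backed belief that a statement is true
        disbelief: float    # d: evidence-backed belief that it is false
        uncertainty: float  # u: lack of evidence either way (b + d + u = 1)
        base_rate: float    # a: prior probability in the absence of evidence

        def expected_probability(self) -> float:
            return self.belief + self.base_rate * self.uncertainty

    def opinion_from_evidence(r: float, s: float, base_rate: float = 0.5,
                              prior_weight: float = 2.0) -> Opinion:
        """Map r supporting and s conflicting observations to a binomial opinion."""
        total = r + s + prior_weight
        return Opinion(belief=r / total, disbelief=s / total,
                       uncertainty=prior_weight / total, base_rate=base_rate)

    # Hypothetical example: 8 agreeing and 2 conflicting sensor readings.
    op = opinion_from_evidence(8, 2)
    print(op, op.expected_probability())
    ```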
    The other significance of this study is that it provides insights on how to test the reliability of AI algorithms that are normally trained on thousands to millions of data points. It would be incredibly time-consuming to check whether each of the data points that inform AI predictions was labeled accurately. More critical, the researchers say, is that the architecture of these neural network systems be accurate. Bogdan notes that if computer scientists want to maximize accuracy and trust simultaneously, this work could also serve as a guidepost for how much “noise” can be in testing samples.
    The researchers believe this model is the first of its kind. Says Bogdan, “To our knowledge, there is no trust quantification model or tool for deep learning, artificial intelligence and machine learning. This is the first approach and opens new research directions.” He adds that this tool has the potential to make “artificial intelligence aware and adaptive.”

    Story Source:
    Materials provided by University of Southern California. Original written by Amy Blumenthal. Note: Content may be edited for style and length.

  • A topography of extremes

    An international team of scientists from the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), the Max Planck Institute for Chemical Physics of Solids, and colleagues from the USA and Switzerland have successfully combined various extreme experimental conditions in a completely unique way, revealing exciting insights into the mysterious conducting properties of the crystalline metal CeRhIn5. In the journal Nature Communications, they report on their exploration of previously uncharted regions of the phase diagram of this metal, which is considered a promising model system for understanding unconventional superconductors.
    “First, we apply a thin layer of gold to a microscopically small single crystal. Then we use an ion beam to carve out tiny microstructures. At the ends of these structures, we attach ultra-thin platinum tapes to measure resistance along different directions under extremely high pressures, which we generate with a diamond anvil pressure cell. In addition, we apply very powerful magnetic fields to the sample at temperatures near absolute zero.”
    To the average person, this may sound like an overzealous physicist’s whimsical fancy, but in fact, it is an actual description of the experimental work conducted by Dr. Toni Helm from HZDR’s High Magnetic Field Laboratory (HLD) and his colleagues from Tallahassee, Los Alamos, Lausanne and Dresden. Well, at least in part, because this description only hints at the many challenges involved in combining such extremes concurrently. This great effort is, of course, not an end in itself: the researchers are trying to get to the bottom of some fundamental questions of solid state physics.
    The sample studied is cerium-rhodium-indium-five (CeRhIn5), a metal with surprising properties that are not yet fully understood. Scientists describe it as an unconventional electrical conductor with extremely heavy charge carriers, in which, under certain conditions, electrical current can flow without losses. It is assumed that the key to this superconductivity lies in the metal’s magnetic properties. The central questions investigated by physicists working with such correlated electron systems include: How do heavy electrons organize collectively? How can this cause magnetism and superconductivity? And what is the relationship between these physical phenomena?
    An expedition through the phase diagram
    The physicists are particularly interested in the metal’s phase diagram, a kind of map whose coordinates are pressure, magnetic field strength, and temperature. If the map is to be meaningful, the scientists have to uncover as many locations as possible in this system of coordinates, just like a cartographer exploring unknown territory. In fact, the emerging diagram is not unlike the terrain of a landscape.
    As they reduce temperature to almost four degrees above absolute zero, the physicists observe magnetic order in the metal sample. At this point, they have a number of options: They can cool the sample down even further and expose it to high pressures, forcing a transition into the superconducting state. If, on the other hand, they solely increase the external magnetic field to 600,000 times the strength of the earth’s magnetic field, the magnetic order is also suppressed; however, the material enters a state called “electronically nematic.”
    This term is borrowed from the physics of liquid crystals, where it describes a certain spatial orientation of molecules with a long-range order over larger areas. The scientists assume that the electronically nematic state is closely linked to the phenomenon of unconventional superconductivity. The experimental environment at HLD provides optimum conditions for such a complex measurement project. The large magnets generate relatively long-lasting pulses and offer sufficient space for complex measurement methods under extreme conditions.
    Experiments at the limit afford a glimpse of the future
    The experiments have a few additional special characteristics. For example, working with strong pulsed magnetic fields creates eddy currents in the metallic parts of the experimental setup, which can generate unwanted heat. The scientists have therefore manufactured the central components from a special plastic material that suppresses this effect and functions reliably near absolute zero. Through microfabrication with focused ion beams, they produce a sample geometry that guarantees a high-quality measurement signal.
    “Microstructuring will become much more important in future experiments. That’s why we brought this technology into the laboratory right away,” says Toni Helm, adding: “So we now have ways to access and gradually penetrate into dimensions where quantum mechanical effects play a major role.” He is also certain that the know-how he and his team have acquired will contribute to research on high-temperature superconductors or novel quantum technologies.

    Story Source:
    Materials provided by Helmholtz-Zentrum Dresden-Rossendorf. Original written by Dr. Bernd Schröder. Note: Content may be edited for style and length.

  • Photonics researchers report breakthrough in miniaturizing light-based chips

    Photonic integrated circuits that use light instead of electricity for computing and signal processing promise greater speed, increased bandwidth, and greater energy efficiency than traditional circuits using electricity.
    But they’re not yet small enough to compete in computing and other applications where electric circuits continue to reign.
    Electrical engineers at the University of Rochester believe they’ve taken a major step in addressing the problem. Using a material widely adopted by photonics researchers, the Rochester team has created the smallest electro-optical modulator yet. The modulator is a key component of a photonics-based chip, controlling how light moves through its circuits.
    In Nature Communications, the lab of Qiang Lin, professor of electrical and computer engineering, describes using a thin film of lithium niobate (LN) bonded on a silicon dioxide layer to create not only the smallest LN modulator yet, but also one that operates at high speed and is energy efficient.
    This “paves a crucial foundation for realizing large-scale LN photonic integrated circuits that are of immense importance for broad applications in data communication, microwave photonics, and quantum photonics,” writes lead author Mingxiao Li, a graduate student in Lin’s lab.
    Because of its outstanding electro-optic and nonlinear optic properties, lithium niobate has “become a workhorse material system for photonics research and development,” Lin says. “However, current LN photonic devices, made upon either bulk crystal or thin-film platforms, require large dimensions and are difficult to scale down in size, which limits the modulation efficiency, energy consumption, and the degree of circuit integration. A major challenge lies in making high-quality nanoscopic photonic structures with high precision.”
    The modulator project builds upon the lab’s previous use of lithium niobate to create a photonic nanocavity — another key component in photonic chips. At only about a micron in size, the nanocavity can tune wavelengths using only two to three photons at room temperature — “the first time we know of that even two or three photons have been manipulated in this way at room temperatures,” Lin says. That device was described in a paper in Optica.
    The modulator could be used in conjunction with a nanocavity in creating a photonic chip at the nanoscale.
    The project was supported with funding from the National Science Foundation, Defense Threat Reduction Agency, and Defense Advanced Research Projects Agency (DARPA); fabrication of the device was done in part at the Cornell NanoScale Facility.

    Story Source:
    Materials provided by University of Rochester. Original written by Bob Marcotte. Note: Content may be edited for style and length.

  • Improved three-week weather forecasts could save lives from disaster

    Weather forecasters in the Philippines got the tip-off in the second week of November 2019. A precipitation forecast that peered further into the future than usual warned that the islands faced torrential rains more than three weeks away. The meteorologists alerted local and national governments, which sprang into action. Mobile phone and broadcast alerts advised people to prepare to evacuate.
    By the time the Category 4 Typhoon Kammuri lashed the Philippines with heavy rains in early December, the damage was much less than it could have been. Having so much time to prepare was key, says Andrew Robertson, a climate scientist at Columbia University’s International Research Institute for Climate and Society in Palisades, N.Y. “It’s a great example of how far we’ve come” in weather forecasting, he says. “But we still need to go further.”
    Such efforts, known as “subseasonal forecasting,” aim to fill a crucial gap in weather prediction. The approach fits between short-term forecasts that are good out to about 10 days in the future and seasonal forecasts that look months ahead.
    A subseasonal forecast predicts average weather conditions three to four weeks away. Each day of additional warning gives emergency managers that much more time to prepare for incoming heat waves, cold snaps, tornadoes or other wild weather. Groups such as the Red Cross are starting to use subseasonal forecasts to strategize for weather disasters, such as figuring out where to move emergency supplies when it looks like a tropical cyclone might hit a region. Farmers look to subseasonal forecasts to better plan when to plant and irrigate crops. And operators of dams and hydropower plants could use the information to get ready for extra water that may soon tax the systems.
    Subseasonal forecasting is improving slowly but steadily, thanks to better computer models and new insights about the atmospheric and oceanic patterns that drive weather over the long term. “This is a new frontier,” says Frédéric Vitart, a meteorologist at the European Centre for Medium-Range Weather Forecasts in Reading, England.

    The in-between
    Weather forecasters are always pushing to do better. They feed weather observations from around the world into the latest computer models, then wait to see what the models spit out as the most likely weather in the coming days. Then the researchers tweak the model and feed it more data, repeating the process again and again until the forecasts improve.
    But anyone who tells you it will be 73° Fahrenheit and sunny at 3 p.m. four weeks from Monday is lying. That’s just too far out in time to be accurate. Short-term forecasts like those in your smartphone’s weather app are based on the observations that feed into them, such as whether it is currently rainy in Northern California or whether there are strong winds over central Alaska. For forecasting further into the future, what the rain or winds were like many days ago becomes less and less relevant. Most operational weather forecasts are good to about 10 to 14 days but no further.
    [Photo caption] Early warnings of Typhoon Kammuri’s approach enabled safe evacuations of many thousands of residents of the Philippines in early December 2019. (Ezra Acayan/Stringer/Getty Images News)
    A few times a year, forecasters draw up seasonal predictions, which rely on very different types of information than the current weather conditions that feed short-term forecasts. The long-term seasonal outlooks predict whether it will be hotter or colder, or wetter or drier, than normal over the next three months. Those broad-brush perspectives on how regional climate is expected to vary are based on slowly evolving planetary patterns that drive weather over the scale of months. Such patterns include the intermittent oceanic warming known as El Niño, the extent of sea ice in the Arctic Ocean and the amounts of moisture in soils across the continents.
    Between short-term and seasonal prediction lies the realm of subseasonal prediction. Making such forecasts is hard because the initial information that drives short-term forecasts is no longer useful, but the longer-term trends that drive seasonal forecasts have not yet become apparent. “That’s one of the reasons there’s so much work on this right now,” says Emily Becker, a climate scientist at the University of Miami in Florida. “We just ignored it for decades because it was so difficult.”

    A global impact
    Part of the challenge stems from the fact that many patterns influence weather on the subseasonal scale — and some of them aren’t predictable. One pattern that scientists have been targeting lately, hoping to improve predictions of it, is a phenomenon known as the Madden-Julian Oscillation, or MJO.
    The MJO isn’t as well-known as El Niño, but it is just as important in driving global weather. A belt of thunderstorms that typically starts in the Indian Ocean and travels eastward, the MJO can happen several times a year.
    An active MJO influences weather around the globe, including storminess in North America and Europe. Subseasonal forecasts are more likely to be accurate when an MJO is happening because there is a major global weather pattern that will affect weather elsewhere in the coming weeks.
    But there’s still a lot of room for prediction improvement. The computer models that simulate weather and climate aren’t very good at capturing all aspects of an MJO. In particular, models have a hard time reproducing what happens to an MJO when it hits Southeast Asia’s mix of islands and ocean known as the Maritime Continent. This realm — which includes Indonesia, the Philippines and New Guinea — is a complex interplay of land and sea that meteorologists struggle to understand. Models typically show an MJO stalling out there rather than continuing to travel eastward, when in reality, the storms usually keep going.

    At Stony Brook University in New York, meteorologist Hyemi Kim has been trying to understand why models fail around the Maritime Continent. Many of the models simulate too much light precipitation in the tropics, she found; in real life, that rain doesn’t happen. In the models, that light drizzle dries out the lower atmosphere, contributing to the overly dry conditions they favor. As a result, when the MJO reaches the Maritime Continent, the dryness in most models prevents the system from marching eastward, Kim and colleagues reported in August 2019 in the Journal of Geophysical Research: Atmospheres. With this better understanding of the difference between models and observations in this region, researchers hope to build better forecasts for how a particular MJO might influence weather around the world.
    “If you can predict the MJO better, then you can predict the weather better,” Becker says. Fortunately, scientists are already making those tweaks, by developing finer-grained computer models that do a better job capturing how the atmosphere churns in real life.
    Meteorologist Victor Gensini of Northern Illinois University in DeKalb led a recent project to use the MJO, among other factors, to forecast tornado outbreaks in the central and eastern United States two to three weeks in advance. As the MJO moves across and out of the Maritime Continent, it triggers stronger circulation patterns that push air toward higher latitudes. The jet stream strengthens over the Pacific Ocean, setting up long-range patterns that are ultimately conducive to tornadoes east of the Rocky Mountains. In the June Bulletin of the American Meteorological Society, Gensini’s team showed that it can predict broad patterns of U.S. tornado activity two to three weeks ahead of time.
    High above the poles
    Another weather pattern that might help improve subseasonal forecasts is a quick rise in temperature in the stratosphere, a layer of the upper atmosphere, above the Arctic or Antarctic. These “sudden stratospheric warming” events happen once every couple of years in the Northern Hemisphere and much less often in the Southern Hemisphere. But when one shows up, it affects weather worldwide. Shortly after a northern stratosphere warming, for instance, extreme storms often arrive in the United States.
    In August 2019, one of these rare southern warmings, the largest in 17 years, began over the South Pole. Temperatures soared by nearly 40 degrees Celsius, and wind speeds dropped dramatically. This event shifted lower-level winds around Antarctica toward the north, where they raised temperatures and dried out parts of eastern Australia. That helped set up the tinder-dry conditions that led to the devastating heat and fires across Australia in late 2019 and early 2020 (SN: 2/1/20, p. 8).
    Thanks to advanced computer models, forecasters at Australia’s Bureau of Meteorology in Melbourne saw the stratospheric warming coming nearly three weeks in advance. That allowed them to predict warm and dry conditions that were conducive to fire, says Harry Hendon, a meteorologist at the bureau.
    Stratospheric warming events last for several months. As with an MJO, a subseasonal forecast made while one of them is happening tends to be more accurate, because the stratospheric warming affects weather on the timescale of weeks to months. Meteorologists call such periods “forecasts of opportunity,” because they represent times when forecasts are likely to be more skillful. It’s like how it’s easier to predict your favorite baseball team’s chances for the season if you know they’ve just hired the best free agent around.

    A clearer picture
    Now, researchers are pushing wherever they can to eke out improvements in subseasonal forecasts. The European forecast center where Vitart is based has been issuing subseasonal predictions since 2004, which have been improving with time. The U.S. National Oceanic and Atmospheric Administration began issuing similar predictions in 2017; they are not as accurate as the European forecasts, but have been getting better over time. Meanwhile, scientists have launched two big efforts to compare the various forecasts.
    Vitart and Robertson lead one such project, under the auspices of the World Meteorological Organization in Geneva. Known as S2S, the meteorological shorthand for “subseasonal to seasonal,” the project collects subseasonal forecasts from 11 weather prediction agencies around the world, including the European center and NOAA. The forecasts go into an enormous database that researchers can study to see which ones performed well and why. Kim, for instance, used the database, among others, to understand why models have a hard time capturing the MJO’s march across the Maritime Continent.
    The second effort, known as SubX, for the Subseasonal Experiment, uses forecasts from seven models produced by U.S. and Canadian research groups. Unlike S2S, SubX operates in nearly real time, allowing forecasters to see how their subseasonal predictions pan out as weather develops.
    That proved useful in early 2019, when SubX forecasts foresaw, weeks before it happened, the severe cold snap that hit the United States in late January and early February. Temperatures dropped to the lowest in more than two decades in some places, and more than 20 people died in Wisconsin, Michigan and elsewhere.

    Having an extra week’s heads-up that extreme weather is coming can be huge, Robertson says. It gives decision makers the time they need to assess what to do — whether that’s watering crops, moving emergency supplies into place or prepping for disease outbreaks.
    In just one example, Robertson and colleagues recently developed detailed subseasonal forecasts of monsoon rains over northern India. He and Nachiketa Acharya, a climate scientist at Columbia University, described the work in January in the Journal of Geophysical Research: Atmospheres.
    In 2018, the scientists focused on the Indian state of Bihar, where the regions north of the Ganges River are flood-prone and the regions to the south are drought-prone. Every week from June through September, the team worked with the India Meteorological Department in New Delhi to produce subseasonal rainfall forecasts for each of Bihar’s regions. The forecasts went to the state’s agricultural universities for distribution to local farmers. So when the summer monsoon rains arrived nearly 16 days later than usual, farmers were able to delay planting their rice and other crops until closer to the time of the monsoon, Acharya says. Such subseasonal forecasts can save farmers both time and money, since they don’t need to pay for irrigation when it’s not needed.
    Acharya is now working with meteorologists in Bangladesh to develop similar subseasonal forecasts for that country. There the monsoon rains typically start around the second week in June but can fluctuate — creating uncertainty for farmers trying to decide when to plant. “If we can predict the monsoon onset by around the mid or end of May, it will be huge,” Acharya says.
    [Photo caption] Nachiketa Acharya (front row, white sweater), Andrew Robertson (behind Acharya) and other climate scientists work with farmers and other residents of Bihar, a state in northern India, to develop and disseminate longer-term weather forecasts so that residents can plan when to plant and irrigate their crops. (N. Acharya)
    Subseasonal forecasts can also help farmers improve productivity in regions such as western Africa, says Shraddhanand Shukla, a climate scientist at the University of California, Santa Barbara. He leads a new NASA-funded project that is kicking off to help farmers better time their crop planting and watering. The effort will combine satellite images of agricultural regions with subseasonal forecasts out to 45 days. If farmers in Senegal had such information in hand back in 2002, Shukla says, they could have better managed their plantings in the run-up to a drought that killed many crops.
    As global temperatures rise and climate changes, meteorologists need to keep pushing their models to predict weather as accurately as possible as far in advance as possible, Vitart says. He thinks that researchers may eventually be able to issue forecasts 45 to 50 days in the future — but it may take a decade or more to get to that point. New techniques, such as machine learning that can quickly winnow through multiple forecasts and pinpoint the most accurate one, may be able to accelerate that timeline.
    “There’s no single breakthrough,” Becker says. “But there are a lot of little breakthroughs to be made, all of which are going to help.”

  • Using math to examine the sex differences in dinosaurs

    Male lions typically have manes. Male peacocks have six-foot-long tail feathers. Female eagles and hawks can be about 30% bigger than males. But if you only had these animals’ fossils to go off of, it would be hard to confidently say that those differences were because of the animals’ sex. That’s the problem that paleontologists face: it’s hard to tell if dinosaurs with different features were separate species, different ages, males and females of the same species, or just varied in a way that had nothing to do with sex. A lot of the work trying to show differences between male and female dinosaurs has come back inconclusive. But in a new paper, scientists show how using a different kind of statistical analysis can often estimate the degree of sexual variation in a dataset of fossils.
    “It’s a whole new way of looking at fossils and judging the likelihood that the traits we see correlate with sex,” says Evan Saitta, a research associate at Chicago’s Field Museum and the lead author of the new paper in the Biological Journal of the Linnean Society. “This paper is part of a larger revolution of sorts about how to use statistics in science, but applied in the context of paleontology.”
    Unless you find a dinosaur skeleton that contains the fossilized eggs that it was about to lay, or a similar dead giveaway, it’s hard to be sure about an individual dinosaur’s sex. But many birds, the only living dinosaurs, vary a lot between males and females on average, a phenomenon called sexual dimorphism. Dinosaurs’ cousins, the crocodilians, show sexual dimorphism too. So it stands to reason that in many species of dinosaurs, males and females would differ from each other in a variety of traits.
    But not all differences in animals of the same species are linked to their sex. For example, in humans, average height is related to sex, but other traits like eye color and hair color don’t neatly map onto men versus women. We often don’t know precisely how the traits we see in dinosaurs relate to their sex, either. Since we don’t know if, say, larger dinosaurs were female, or dinosaurs with bigger crests on their heads were male, Saitta and his colleagues looked for patterns in the differences between individuals of the same species. To do that, they examined measurements from a bunch of fossils and modern species and did a lot of math.
    Other paleontologists have tried to look for sexual dimorphism in dinosaurs using a form of statistics (called significance testing, for all you stats nerds) where you collect all your data points and then calculate the probability that those results could have happened by pure chance rather than an actual cause (like how doctors determine whether a new medicine is more helpful than a placebo). This kind of analysis sometimes works for big, clean datasets. But, says Saitta, “with a lot of these dinosaur tests, our data is pretty bad” — there aren’t that many fossil specimens, or they’re incomplete or poorly preserved. Using significance testing in these cases, Saitta argues, results in a lot of false negatives: since the samples are small, it takes an extreme amount of variation between the sexes to trigger a positive test result. (Significance testing isn’t just a consideration for paleontologists — concerns over a “replication crisis” have plagued researchers in psychology and medicine, where certain studies are difficult to reproduce.)
    Instead, Saitta and his colleagues experimented with another form of stats, called effect size statistics. Effect size statistics is better suited to smaller datasets because it attempts to estimate the degree of sex differences and calculate the uncertainty in that estimate. This alternative statistical method takes natural variation into account without viewing dimorphism as black or white; many sexual dimorphisms can be subtle. Co-author Max Stockdale of the University of Bristol wrote the code to run the statistical simulations. Saitta and his colleagues uploaded measurements of dinosaur fossils to the program, and it yielded estimates of body mass dimorphism, with error bars, that would simply have been dismissed under significance testing.
    “We showed that if you adopt this paradigm shift in statistics, where you attempt to estimate the magnitude of an effect and then put error bars around that, you can often produce a fairly accurate estimate of sexual variation even when the sexes of the individuals are unknown,” says Saitta.
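    As a hypothetical illustration of that approach, and not the authors’ actual code, the Python sketch below fits a two-component mixture model to measurements whose group labels are unknown and bootstraps an interval around the estimated size difference; the “body mass” numbers are invented.

    ```python
    # Illustrative sketch only (not the study's code): estimate the magnitude of a
    # putative two-group (e.g., sex) difference from measurements whose group
    # labels are unknown, and put a bootstrap interval around that estimate.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)

    # Hypothetical body-mass data: two overlapping groups, labels unobserved.
    masses = np.concatenate([rng.normal(2500, 300, 40),   # e.g., smaller sex
                             rng.normal(3600, 300, 40)])  # e.g., larger sex

    def dimorphism_estimate(x: np.ndarray) -> float:
        """Fit a 2-component Gaussian mixture; return % difference in means."""
        gm = GaussianMixture(n_components=2, n_init=5, random_state=0)
        gm.fit(x.reshape(-1, 1))
        lo, hi = sorted(gm.means_.ravel())
        return 100.0 * (hi - lo) / lo

    point = dimorphism_estimate(masses)
    boots = [dimorphism_estimate(rng.choice(masses, size=masses.size, replace=True))
             for _ in range(200)]
    lo_ci, hi_ci = np.percentile(boots, [2.5, 97.5])
    print(f"estimated dimorphism: {point:.1f}% (95% interval {lo_ci:.1f}-{hi_ci:.1f}%)")
    ```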
    For instance, Saitta and his colleagues found that in the dinosaur Maiasaura, adult specimens vary a lot in size, and the analyses show that these are likelier to correspond to sexual variation than differences seen in other dinosaur species. But while the current data suggest that one sex was about 45% bigger than the other, they can’t tell if the bigger ones are males or females.
    While there’s a lot of work yet to be done, Saitta says he’s excited that the statistical simulations gave such consistent results despite the limits of the fossil data.
    “Sexual selection is such an important driver of evolution, and to limit ourselves to ineffective statistical approaches hurts our ability to understand the paleobiology of these animals,” he says. “We need to account for sexual variation in the fossil record.”
    “I’m happy to play a small part in this sort of statistical revolution,” he adds. “Effect size statistics has a major impact for psychological and medical research, so to apply it to dinosaurs and paleontology is really cool.”

    Story Source:
    Materials provided by Field Museum. Note: Content may be edited for style and length.