More stories


    A carbon footprint life cycle assessment can cut down on greenwashing

    Today, you can buy a pair of sneakers partially made from carbon dioxide pulled out of the atmosphere. But measuring the carbon-reduction benefits of making that pair of sneakers with CO2 is complex. There’s the fossil fuel that stayed in the ground, a definite carbon savings. But what about the energy cost of cooling the CO2 into liquid form and transporting it to a production facility? And what about when your kid outgrows the shoes in six months and they can’t be recycled into a new product because those systems aren’t in place yet?

    As companies try to reduce their carbon footprint, many are doing life cycle assessments to quantify the full carbon cost of products, from procurement of materials to energy use in manufacturing to product transport to user behavior and end-of-life disposal. It’s a mind-bogglingly difficult metric, but such bean-counting is needed to hold the planet to a livable temperature, says low-carbon systems expert Andrea Ramirez Ramirez of the Delft University of Technology in the Netherlands.


    Carbon accounting is easy to get wrong, she says. Differences in the starting point used to define a product’s “lifetime,” or in assumptions about energy sources, can all affect the math.

    Carbon use can be reduced at many points along the production chain—by using renewable energy in the manufacturing process, for instance, or by adding atmospheric CO2 to the product. But if other points along the chain are energy-intensive or emit CO2, she notes, the final tally may show a positive rather than a negative number.
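    The sign convention Ramirez Ramirez describes can be sketched as a toy life cycle tally. In the minimal sketch below, every stage name and number is hypothetical, chosen only to illustrate how captured CO2 at one stage can be outweighed by emissions elsewhere in the chain; real assessments follow standardized LCA guidelines.

```python
# Toy life cycle tally (hypothetical stages and numbers, kg CO2e per unit).
# Negative values are removals (CO2 bound into the product); positive values
# are emissions. A product is carbon negative only if the net total is < 0.

def net_footprint(stages):
    """Sum per-stage emissions (+) and removals (-) in kg CO2e."""
    return sum(stages.values())

sneaker = {
    "co2_captured_into_material": -2.0,   # CO2 chemically bound in the shoe
    "co2_liquefaction_and_transport": 0.8,
    "manufacturing_energy": 1.5,
    "distribution": 0.4,
    "end_of_life": 0.3,
}

total = net_footprint(sneaker)
print(f"net footprint: {total:+.1f} kg CO2e")   # prints: net footprint: +1.0 kg CO2e
print("carbon negative" if total < 0 else "carbon positive")
```

    With these illustrative numbers the product still comes out carbon positive despite the captured CO2 in its material, which is exactly the trap the article describes.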

    A product is carbon negative only when its production actually removes carbon from the environment, temporarily or permanently. The Global CO2 Initiative, with European and American universities, has created a set of LCA guidelines to standardize measurement so that carbon accounting is consistent and terms such as “carbon neutral” or “carbon negative” have a verifiable meaning.

    In the rush to create products that can be touted as fighting climate change, however, some firms have been accused of “greenwashing” – making products or companies appear more environmentally friendly than they really are. Examples of greenwashing, according to a March 2022 analysis by mechanical engineers Grant Faber and Volker Sick of the University of Michigan in Ann Arbor, include labeling plastic garbage bags as recyclable when their whole purpose is to be thrown away; using labels such as “eco-friendly” or “100% Natural” without official certification; and claiming a better carbon footprint without acknowledging the existence of even better choices. One example: “fuel-efficient” sport utility vehicles, which are fuel efficient only when compared with other SUVs rather than with smaller cars, public transit or bicycles.

    Good LCA analysis, Sick says, can distinguish companies that are carbon-friendly in name only from those that are truly helping the world clear the air.


    How to make recyclable plastics out of CO2 to slow climate change

    It’s morning and you wake on a comfortable foam mattress made partly from greenhouse gas. You pull on a T-shirt and sneakers containing carbon dioxide pulled from factory emissions. After a good run, you stop for a cup of joe and guiltlessly toss the plastic cup in the trash, confident it will fully biodegrade into harmless organic materials. At home, you squeeze shampoo from a bottle that has lived many lifetimes, then slip into a dress fashioned from smokestack emissions. You head to work with a smile, knowing your morning routine has made Earth’s atmosphere a teeny bit carbon cleaner.

    Sound like a dream? Hardly. These products are already sold around the world. And others are being developed. They’re part of a growing effort by academia and industry to reduce the damage caused by centuries of human activity that has sent CO2 and other heat-trapping gases into the atmosphere (SN: 3/12/22, p. 16).


    The need for action is urgent. In its 2022 report, the United Nations Intergovernmental Panel on Climate Change, or IPCC, stated that rising temperatures have already caused irreversible damage to the planet and increased human death and disease (SN: 5/7/22 & 5/21/22, p. 8). Meanwhile, the amount of CO2 emitted continues to rise. The U.S. Energy Information Administration predicted last year that if current policy and growth trends continue, annual global CO2 emissions could rise from about 34 billion metric tons in 2020 to almost 43 billion by 2050.
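    For scale, the EIA figures quoted above imply a fairly modest compound growth rate. A quick back-of-the-envelope check, using only the 34 billion and 43 billion ton endpoints from the article:

```python
# Implied compound annual growth rate if global CO2 emissions rise from
# ~34 billion metric tons in 2020 to ~43 billion in 2050 (EIA projection
# as quoted above).
emissions_2020 = 34.0   # billion metric tons CO2
emissions_2050 = 43.0
years = 2050 - 2020

cagr = (emissions_2050 / emissions_2020) ** (1 / years) - 1
print(f"implied growth: {cagr:.2%} per year")   # roughly 0.8% per year
```

    Even that seemingly small annual rate compounds into nine extra billion tons per year by midcentury.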

    Carbon capture and storage, or CCS, is one strategy for mitigating climate change long noted by the IPCC as having “considerable” potential. A technology that has existed since the 1970s, CCS traps CO2 from smokestacks or ambient air and pumps it underground for permanent sequestration. Today, 27 CCS facilities operate around the world — 12 in the United States — storing an estimated 36 million tons of carbon per year, according to the Global CCS Institute. The 2021 Infrastructure Investment and Jobs Act includes $3.5 billion in funding for four additional U.S. direct capture facilities.

    But rather than just storing it, the captured carbon could be used to make things. This year, for the first time, the IPCC added carbon capture and utilization, or CCU, to its list of options for drawing down atmospheric carbon. CCU captures CO2 and incorporates it into carbon-containing products like cement, jet fuel and the raw materials for making plastics. Still in the early stages of development and commercialization, CCU could reduce annual greenhouse gas emissions by 20 billion tons in 2050 — more than half of today’s global emissions, the IPCC estimates.

    Such recognition was a big victory for a movement that has struggled to emerge from the shadow of its more established cousin, CCS, says chemist and global CCU expert Peter Styring of the University of Sheffield in England. Many CCU-related companies are springing up and collaborating with each other and with governments around the world, he adds.

    The potential of CCU is “enormous,” both in volume and in monetary terms, said mechanical engineer Volker Sick at a CCU conference in Brussels in April. Sick, of the University of Michigan in Ann Arbor, directs the Global CO2 Initiative, which promotes CCU as a mainstream climate solution. “We’re not talking about something that’s nice to do but doesn’t move the needle,” he added. “It moves the needle in many, many aspects.”

    The plastics paradox

    The use of carbon dioxide in products is not new. CO2 is used to make soda fizzy, keep foods frozen (as dry ice) and convert ammonia to urea for fertilizer. What’s new is the focus on making products with CO2 as a strategy to slow climate change. Today’s CCU market, estimated at $2 billion, could mushroom to $550 billion by 2040, according to Lux Research, a Boston-based market research firm. Much of this market is driven by adding CO2 to cement — which can improve its properties as well as reduce atmospheric carbon — and to jet fuel, which can lower the industry’s large carbon footprint. CO2-to-plastics is a niche market today, but the field aims to battle two crises at once: climate change and plastic pollution.

    Plastics are made from fossil fuels, a mix of hydrocarbons formed by the remains of ancient organisms. Most plastics are produced by refining crude oil, which is then broken down into smaller molecules through a process called cracking. These smaller molecules, known as monomers, are the building blocks of polymers. Monomers such as ethylene, propylene, styrene and others are linked together to form plastics such as polyethylene (detergent bottles, toys, rigid pipes), polypropylene (water bottles, luggage, car parts) and polystyrene (plastic cutlery, CD cases, Styrofoam).

    But making plastics from fossil fuels is a carbon catastrophe. Each step in the plastics life cycle — extraction, transport, manufacture and disposal — emits massive amounts of greenhouse gases, mostly CO2, according to the Center for International Environmental Law, a nonprofit law firm based in Geneva and Washington, D.C. These emissions alone — more than 850 million tons of greenhouse gases in 2019 — are enough to threaten global climate targets.

    And the numbers are about to get much worse. A 2018 report by the Paris-based intergovernmental International Energy Agency projected that global demand for plastics will increase from about 400 million tons in 2020 to nearly 600 million by 2050. Future demand is expected to be concentrated in developing countries and will vastly outstrip global recycling efforts.

    Plastics are a serious crisis for the environment, from fossil fuel use to their buildup in landfills and oceans (SN: 1/16/21, p. 4). But we’re a society addicted to plastic and all it gives us — cell phones, computers, comfy Crocs. Is there a way to have our (plastic-wrapped) cake and eat it too?

    Yes, says Sick. First, he argues, cap the oil wells. Next, make plastics from aboveground carbon; products made of 20 to more than 40 percent CO2 already exist. Finally, he says, build a circular economy, one that reduces resource use, reuses products, then recycles them into other new products.

    “Not only can we eliminate the fossil carbon as a source so that we don’t add to the aboveground carbon budget, but in the process we can also rethink how we make plastics,” Sick says. He suggests they be specifically designed “to live very, very long so that they don’t have to be replaced … or that they decompose in a benign manner.”

     But creating plastics from thin air is not easy. CO2 needs to be extracted, from the atmosphere or smokestacks, for example, using specialized equipment. It often needs to be compressed into liquid form and transported, generally through pipelines. Finally, to meet the overall goal of reducing the amount of carbon in the air, the chemical reaction that turns CO2 into the building blocks of plastics must be run with as little extra energy as possible. Keeping energy use low is a special challenge when dealing with the carbon dioxide molecule.

    A bond that’s hard to break

    There’s a reason that carbon dioxide is such a potent greenhouse gas. It is incredibly stable and can linger in the atmosphere for 300 to 1,000 years. That stability makes CO2 hard to break apart and add to other chemicals. Lots of energy is typically needed for the reaction.

    “This is the fundamental energy problem of CO2,” says chemist Ian Tonks of the University of Minnesota in Minneapolis. “Energy is necessary to fix CO2 to plastics. We’re trying to find that energy in creative ways.”

    Catalysts offer a possible answer. These substances can increase the rate of a chemical reaction, and thus reduce the need for energy. Scientists in the CO2-to-plastics field have spent more than a decade searching for catalysts that can work at close to room temperature and pressure, and coax CO2 to form a new chemical identity. These efforts fall into two broad categories: chemical and biological conversion.

    First attempts

    Early experiments focused on adding CO2 to highly reactive monomers like epoxides to facilitate the reaction. Epoxides are three-membered rings composed of one oxygen atom and two carbon atoms. Like a spring under tension, they can easily pop open. In the early 2000s, industrial chemist Christoph Gürtler and chemist Walter Leitner of Aachen University in Germany found a zinc catalyst that allowed them to break open the epoxide ring of propylene oxide and combine it with CO2. Following the reaction, the CO2 was joined permanently to the polymer and was no longer in gas form — something that is true of all CO2-to-plastics reactions. Their work resulted in one of the first commercial CO2 products — a polyurethane foam containing 20 percent captured CO2. Today, the German company Covestro, where Gürtler now works, sells 5,000 tons of the product annually for use in mattresses, car interiors, building insulation and sports flooring.

    More recent research has focused on other monomers to expand the variety of CO2-based plastics. Butadiene is a hydrocarbon monomer that can be used to make polyester for clothing, carpets, adhesives and other products.

    In 2020, chemist James Eagan at the University of Akron in Ohio mixed butadiene and CO2 with a series of catalysts developed at Stanford University. Eagan hoped to create a polyester that is carbon negative, meaning it has a net effect of removing CO2 from the atmosphere, rather than adding it. When he analyzed the contents of one vial, he discovered he had created something even better: a polyester made with 29 percent CO2 that degrades in high pH water into organic materials.

    Chemist James Eagan and colleagues created a degradable polyester made partially with waste CO2. THE UNIV. OF AKRON

    “Chemistry is like cooking,” Eagan says. “We took chocolate chips, flour, eggs, butter, mixed them up, and instead of getting cookies we opened the oven and found a chicken potpie.”

    Eagan’s invention has immediate applications in the recycling industry, where machines can often get gummed up from the nondegradable adhesives used in packaging, soda bottle labels and other products. An adhesive that easily breaks down may improve the efficiency of recycling facilities.

    Tonks, described by Eagan as a friendly competitor, took Eagan’s patented process a step further. By putting Eagan’s product through one more reaction, Tonks made the polymer fully degradable back to reusable CO2 — a circular carbon economy goal. Tonks created a start-up this year called LoopCO2 to produce a variety of biodegradable plastics.

    Microbial help

    Researchers have also harnessed microbes to help turn carbon dioxide into useful materials including dress fabric. Some of the planet’s oldest-living microbes emerged at a time when Earth’s atmosphere was rich in carbon dioxide. Known as acetogens and methanogens, the microbes developed simple metabolic pathways that use enzyme catalysts to convert CO2 and carbon monoxide into organic molecules. In the atmosphere, CO will react with oxygen to form CO2. In the last decade, researchers have studied the microbes’ potential to remove these gases from the atmosphere and turn them into useful products.

    LanzaTech, based in Skokie, Ill., uses the acetogenic bacterium Clostridium autoethanogenum to metabolize CO2 and CO emissions into a variety of industrial chemicals, including ethanol. Last year, the clothing company Zara began using LanzaTech’s polyester fabric for a line of dresses.

    The ethanol used to create these products comes from LanzaTech’s two commercial facilities in China, the first to transform waste CO, a main emission from steel plants, into ethanol. The ethanol goes through two more steps to become polyester. LanzaTech partnered with steel mills near Beijing and in north-central China, feeding carbon monoxide into LanzaTech’s microbe-filled bioreactor.

    Steel production emits almost two tons of CO2 for every ton of steel made. By contrast, a life cycle assessment study found that LanzaTech’s ethanol production process lowered greenhouse gas emissions by approximately 80 percent compared with ethanol made from fossil fuels.
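    As a back-of-the-envelope illustration of what an 80 percent life-cycle reduction means, here is a minimal sketch; the fossil-ethanol footprint value is a hypothetical placeholder for illustration, not a figure from the article or from LanzaTech.

```python
# Illustrative arithmetic for an ~80% LCA reduction versus fossil-based ethanol.
reduction_vs_fossil = 0.80        # reported cut vs fossil-derived ethanol

# Hypothetical fossil-ethanol footprint (tons CO2e per ton of ethanol);
# this specific value is an assumption, chosen only to show the calculation.
fossil_ethanol_footprint = 1.5
gas_fermentation_footprint = fossil_ethanol_footprint * (1 - reduction_vs_fossil)

print(f"{gas_fermentation_footprint:.2f} t CO2e per t ethanol")   # prints: 0.30 t CO2e per t ethanol
```

    Whatever the baseline, an 80 percent cut leaves one-fifth of the original footprint, on top of diverting CO that would otherwise oxidize to CO2 in the atmosphere.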

    In February, researchers from LanzaTech, Northwestern University in Evanston, Ill., and others reported in Nature Biotechnology that they had genetically modified the Clostridium bacterium to produce acetone and isopropanol, two other fossil fuel–based industrial chemicals. Company CEO Jennifer Holmgren says the only waste product is dead bacteria, which can be used as compost or animal feed.

    Other researchers are skipping the living microbes and just using their catalysts. More than a decade ago, chemist Charles Dismukes of Rutgers University in Piscataway, N.J., began looking at acetogens and methanogens as a way to use atmospheric carbon. He was intrigued by their ability to release energy when making carbon building blocks from CO2, a reaction that usually requires energy. He and his team focused on the bacteria’s nickel phosphide catalysts, which are responsible for the energy-releasing carbon reaction.

    Dismukes and colleagues developed six electrocatalysts that are able to make monomers at room temperature and pressure using only CO2, water and electricity. The energy-releasing pathway of the nickel phosphide catalysts “lowers the required voltage to run the reaction, which lowers the energy consumption of the process and improves the carbon footprint,” says Karin Calvinho, a former student of Dismukes who is now chief technical officer at RenewCO2, the start-up Dismukes’ team formed in 2018.

    RenewCO2 plans to sell its monomers, including monoethylene glycol, to companies that want to reduce their carbon footprint. The group proved its concept works using CO2 brought into the lab. In the future, the company intends to obtain CO2 from biomass, industrial emissions or direct air capture.

    Barriers to change

    Yet researchers and companies face challenges in scaling up carbon capture and reuse. Some barriers lurk in the language of regulations written before CCU existed. An example is the U.S. Environmental Protection Agency’s program to provide tax credits to companies that make biofuels. The program is geared toward plant-based fuels like corn and sugarcane. LanzaTech’s approach for making jet fuel doesn’t qualify for credits because bacteria are not plants.

    Other barriers are more fundamental. Styring points to the long-standing practice of fossil fuel subsidies, which in 2021 topped $440 billion worldwide. Global government subsidies to the oil and gas industry keep fossil fuel prices artificially low, making it hard for renewables to compete, according to the International Energy Agency. Styring advocates shifting those subsidies toward renewables.

    “We try to work on the principle that we recycle carbon and create a circular economy,” he says. “But current legislation is set up to perpetuate a linear economy.”

    The happy morning routine that makes the world carbon cleaner is theoretically possible. It’s just not the way the world works yet. Getting to that circular economy, where the amount of carbon above ground is finite and controlled in a never-ending loop of use and reuse, will require change on multiple fronts. Government policy and investment, corporate practices, technological development and human behavior would need to align perfectly and quickly in the interests of the planet.

    In the meantime, researchers continue their work on the carbon dioxide molecule.

    “I try to plan for the worst-case scenario,” says Eagan, the chemist in Akron. “If legislation is never in place to curb emissions, how do we operate within our capitalist system to generate value in a renewable and responsible way? At the end of the day, we will need new chemistry.”


    A coral pollution study unexpectedly helped explain Hurricane Maria’s fury

    Hurricane Maria struck the island of Puerto Rico early on September 20, 2017, with 250-kilometer-per-hour winds, torrential rains and a storm surge up to three meters high. In its wake: nearly 3,000 people dead, an almost yearlong power outage and over $90 billion in damages to homes, businesses and essential infrastructure, including roads and bridges.

    Geologist and diver Milton Carlo took shelter at his house in Cabo Rojo on the southwest corner of the island with his wife, daughter and infant grandson. He watched the raging winds of the Category 4 hurricane lift his neighbor’s SUV into the air, and remembers those hours as some of the worst of his life.


    For weeks, the rest of the world was in the dark about the full extent of the devastation, because Maria had destroyed the island’s main weather radar and almost all cell phone towers.

    Far away on the U.S. West Coast, in Santa Cruz, Calif., oceanographer Olivia Cheriton watched satellite radar images of Maria passing over the instruments she and her U.S. Geological Survey team had anchored a few kilometers southwest of Puerto Rico. The instruments, placed offshore from the seaside town of La Parguera, were there to track pollution circulating around some of the island’s endangered corals.

    More than half a year went by before she learned the improbable fate of those instruments: They had survived and had captured data revealing hurricane-related ocean dynamics that no scientist had ever recorded.

    The wind-driven coastal currents interacted with the seafloor in a way that prevented Maria from drawing cold water from the depths of the sea up to the surface. The sea surface stayed as warm as bathwater. Heat is a hurricane’s fuel source, so a warmer sea surface leads to a more intense storm. As Cheriton figured out later, the phenomenon she stumbled upon likely played a role in maintaining Maria’s Category 4 status as it raked Puerto Rico for eight hours.

    “There was absolutely no plan to capture the impact of a storm like Maria,” Cheriton says. “In fact, if we somehow could’ve known that a storm like that was going to occur, we wouldn’t have put hundreds of thousands of dollars’ worth of scientific instrumentation in the water.”

    A storm’s path is guided by readily observable, large-scale atmospheric features such as trade winds and high-pressure zones. Its intensity, on the other hand, is driven by weather events inside the hurricane and wave action deep below the ocean’s surface. The findings by Cheriton and colleagues, published May 2021 in Science Advances, help explain why hurricanes often get stronger before making landfall and can therefore help forecasters make more accurate predictions.

    Reef pollution

    Cheriton’s original research objective was to figure out how sea currents transport polluted sediments from Guánica Bay — where the Lajas Valley drains into the Caribbean Sea — to the pristine marine ecosystems 10 kilometers west in La Parguera Natural Reserve, famous for its bioluminescent waters.

    Endangered elkhorn and mountainous star corals, called “the poster children of Caribbean reef decline” by marine geologist Clark Sherman, live near shore in some of the world’s highest recorded concentrations of now-banned industrial chemicals. Those polychlorinated biphenyls, or PCBs, hinder coral reproduction, growth, feeding and defensive responses, says Sherman, of the University of Puerto Rico–Mayagüez.

    Elkhorn coral (left) and mountainous star coral (right) were once ubiquitous in the Caribbean. Their numbers have dropped greatly due to bleaching and disease. Pollution is partly to blame. FROM LEFT: NICK HOBGOOD/WIKIMEDIA COMMONS (CC BY-SA 3.0); NOAA FISHERIES

    Half of corals in the Caribbean have died since monitoring began in the 1970s, and pollution is a major cause, according to an April 2020 study in Science Advances. Of particular interest to Cheriton, Sherman and their colleagues was whether the pollution had reached deepwater, or mesophotic, reefs farther offshore, which could be a refuge for coral species that were known to be dying in shallower areas.

    The main artery for this pollution is the Rio Loco — which translates to “Crazy River.” It spews a toxic runoff of eroded sediments from the Lajas Valley’s dirt roads and coffee plantations into Guánica Bay, which supports a vibrant fishing community. Other possible contributors to the pollution — oil spills, a fertilizer plant, sewage and now-defunct sugar mills — are the subject of investigations by public health researchers and the U.S. Environmental Protection Agency.

    In June 2017, the team convened in La Parguera to install underwater sensors to measure and track the currents in this threatened marine environment. From Sherman’s lab on a tiny islet overrun with iguanas the size of house cats, he and Cheriton, along with team leader and USGS research geologist Curt Storlazzi and USGS physical scientist Joshua Logan, launched a boat into choppy seas.

    Marine geologist Clark Sherman dives amid colonies of healthy great star corals, black corals, a large sea fan and a variety of sponges along the steep island shelf of southwest Puerto Rico. Sherman helped investigate whether pollution was reaching these deepwater reefs. E. TUOHY/UNIV. OF PUERTO RICO–MAYAGÜEZ

    At six sites near shore, Storlazzi, Sherman and Logan dove to the seafloor and used epoxy to anchor pressure gauges and batonlike current meters. Together the instruments measured hourly temperature, wave height and current speed. The team then moved farther offshore where the steep island shelf drops off at a 45-degree angle to a depth of 60 meters, but the heavy ocean chop scuttled their efforts to install instruments there.

    In June 2017, research geologist Curt Storlazzi (left) and physical scientist Joshua Logan (right) prepare to dive near Puerto Rico’s Guánica Bay to install instruments for monitoring currents suspected of delivering pollution to coral reefs. USGS

    For help working in the difficult conditions, Sherman enlisted two expert divers for a second attempt: Carlo, the geologist and diving safety officer, and marine scientist Evan Tuohy, both of the University of Puerto Rico–Mayagüez. The two were able to install the most important and largest piece, a hydroacoustic instrument comprising several drums fastened to a metal grid, which tracked the direction and speed of currents every minute using pulsating sound waves. A canister containing temperature and salinity sensors took readings every two minutes. Above this equipment, an electronic thermometer extended to within 12 meters of the surface, registering temperature every five meters vertically every few seconds.

    The instruments installed by Storlazzi, Logan and others collected unexpected underwater ocean observations during Hurricane Maria. An acoustic Doppler current profiler (left) used pulsating sound waves to measure the direction and speed of currents at the shelf break and slope site about 12 kilometers offshore of La Parguera. A Marotte current meter (right) measured wave height, current speed and temperature at six spots close to shore. USGS

    Working in concert, the instruments gave a high-resolution, seafloor-to-surface snapshot of the ocean’s hydrodynamics on a near-continuous basis. The equipment had to sit level on the sloping seafloor so as not to skew the measurements and remain firmly in place. Little did the researchers know that the instruments would soon be battered by one of the most destructive storms in history.

    Becoming Maria

    The word hurricane derives from the Caribbean Taino people’s Huricán, god of evil. Some of the strongest of these Atlantic tropical cyclones begin where scorching winds from the Sahara clash with moist subtropical air over the island nation of Cape Verde off western Africa. The worst of these atmospheric disturbances create severe thunderstorms with giant cumulonimbus clouds that flatten out against the stratosphere. Propelled by the Earth’s rotation, they begin to circle counterclockwise around each other — a phenomenon known as the Coriolis effect.

    Weather conditions that summer had already spawned two monster hurricanes: Harvey and Irma. By late September, the extremely warm sea surface — 29° Celsius or hotter in some places — gave up its heat energy by way of evaporation into Maria’s rushing winds. All hurricanes begin as an area of low pressure, which in turn sucks in more wind, accelerating the rise of hot air, or convection. Countervailing winds known as shear can sometimes topple the cone of moist air spiraling upward. But that didn’t happen, so Maria continued to grow in size and intensity.

    Meteorologists hoped that Maria would lose force as it moved across the Caribbean, weakened by the wake of cooler water Irma had churned up two weeks earlier. Instead, Maria tracked south, steaming toward the eastern Caribbean island of Dominica. Within 15 hours of making landfall, its maximum sustained wind speed doubled, reaching a house-leveling 260 kilometers per hour. That doubling intensified the storm from a milder (still dangerous) Category 1 to a strong Category 5.

    NOAA’s computer forecasting models did not anticipate such rapid intensification. Irma had also raged with unforeseen intensity.

    After striking Dominica hard, Maria’s eyewall broke down, replaced by an outer band of whipping thunderstorms. This slightly weakened Maria to 250 kilometers per hour before it hit Puerto Rico, while expanding the diameter of the storm’s eyewall — the area of strong winds and heaviest precipitation — to 52 kilometers. That’s close to the width of the island.

    Hurricane Maria made landfall on Puerto Rico early in the morning on September 20, 2017, and cut across the island diagonally toward the northwest. Its eyewall generated maximum sustained winds of 250 kilometers per hour and spanned almost the width of the island. CIRA/NOAA

    It’s still not fully understood why Maria suddenly went berserk. Various theories point to the influence of hot towers — convective bursts of heat energy from thunderclouds that punch up into the stratosphere — or deep warm pools, buoyant freshwater eddies spilling out of the Amazon and Orinoco rivers into the Atlantic, where currents carry these pockets of hurricane-fueling heat to the Gulf of Mexico and the Caribbean Sea.

    But even though these smaller-scale events may have a big impact on intensity, they aren’t fully accounted for in weather models, says Hua Leighton, a scientist at the National Oceanic and Atmospheric Administration’s hurricane research division and the University of Miami’s Cooperative Institute for Marine and Atmospheric Studies. Leighton develops forecasting models and investigates rapid intensification of hurricanes.

    “We cannot measure everything in the atmosphere,” Leighton says.

    Without accurate data on all the factors that drive hurricane intensity, computer models can’t easily predict when the catalyzing events will occur, she says. Nor can models account for everything that happens inside the ocean during a hurricane. They don’t have the data.

    Positioning instruments just before a hurricane hits is a major challenge. But NOAA is making progress. It has launched a new generation of hurricane weather buoys in the western North Atlantic, as well as remotely operated surface vehicles called Saildrones that examine the air-sea interface between hurricanes and the ocean (SN: 6/8/19, p. 24).

    Underwater, NOAA uses other drones, or gliders, to profile the vast areas regularly traversed by tropical storms. These gliders collected 13,200 temperature and salinity readings in 2020. By contrast, the instruments that the team set in Puerto Rico’s waters in 2017 collected over 250 million data points, including current velocity and direction — a rare and especially valuable glimpse of hurricane-induced ocean dynamics at a single location.

    A different view

    After the storm passed, Storlazzi was sure the hurricane had destroyed his instruments. They weren’t designed to take that kind of punishment. The devices generally work in much calmer conditions, not the massive swells generated by Maria, which could increase water pressure to a level that would almost certainly crush instrument sensors.

    Remarkably, the instruments were battered but not lost. Sherman, Carlo and Tuohy retrieved them after Maria passed and put them in crates awaiting the research group’s return.

    Milton Carlo (left) and Evan Tuohy (right), shown in an earlier deepwater dive, helped place the current-monitoring instruments at the hard-to-reach sites where hurricane data were collected.MIKE ECHEVARRIA

    When Storlazzi and USGS oceanographer Kurt Rosenberger pried open the instrument casings in January 2018, no water gushed out. Good sign. The electronics appeared intact. And the lithium batteries had powered the rapid-fire sampling enterprise for the entire six-month duration. The researchers quickly downloaded a flood of data, backed it up and started transmitting it to Cheriton, who began sending back plots and graphs of what the readings showed.

    Floodwaters from the massive rains brought by Maria had pushed a whole lot of polluted sediment to the reefs outside Guánica Bay, spiking PCB concentrations and threatening coral health. As of a few months after the storm, the pollution hadn’t reached the deeper reefs.

    Then the researchers realized that their data told another story: what happens underwater during a massive hurricane. They presumed that other researchers had previously captured a profile of the churning ocean depths beneath a hurricane at the edge of a tropical island.

    Remarkably, that was not the case.

    “Nobody’s even measured this, let alone reported it in any published literature,” Cheriton says. The team began to explore the hurricane data not knowing where it might lead.

    “What am I looking at here?” Cheriton kept asking herself as she plotted and analyzed temperature, current velocity and salinity values using computer algorithms. The temperature gradient that revealed the ocean’s internal, or underwater, waves was unlike anything she’d seen before.

    Oceanographer Olivia Cheriton realized that data on ocean currents told a new story about Hurricane Maria.O.M. CHERITON

    During the hurricane, the top 20 meters of the Caribbean Sea had consistently remained at or above 26° C, a few degrees warmer than the layers beneath. But the surface waters should have been cooled if, as expected, Maria’s winds had acted like a big spoon, mixing the warm surface with cold water stirred up from the seafloor 50 to 80 meters below. Normally, the cooler surface temperature restricts the heat supply, weakening the hurricane. But the cold water wasn’t reaching the surface.

    To try to make sense of what she was seeing, Cheriton imagined herself inside the data, in a protective bubble on the seafloor with the instruments as Maria swept over. Storlazzi worked alongside her analyzing the data, but focused on the sediments circulating around the coral reefs.

    Cheriton was listening to “An Awesome Wave” by indie-pop band Alt-J and getting goosebumps while the data swirled before them. Drawing on instincts from her undergraduate astronomy training, she focused her mind’s eye on a constellation of data overhead and told Storlazzi to do the same.

    “Look up, Curt!” she said.

    Up at the crest of the island shelf, where the seafloor drops off, the current velocity data revealed a broad stream of water gushing from the shore at almost 1 meter per second, as if from a fire hose. Several hours before Maria arrived, the wind-driven current had reversed direction and was now moving an order of magnitude faster. The rushing surface water thus became a barrier, trapping the cold water beneath it.

    As a result, the surface stayed warm, increasing the force of the hurricane. The cooler layers below then started to pile up vertically into distinct layers, one on top of the other, beneath the gushing waters above.

    Cheriton calculated that, with the fire hose phenomenon, the coastal waters in this area contributed on average 65 percent more to Maria’s intensity than they would have otherwise.

    Oceanographer Travis Miles of Rutgers University in New Brunswick, N.J., who was not involved in the research, calls Cheriton and the team’s work a “frontier study” that draws researchers’ attention to near-shore processes. Miles can relate to Cheriton and her team’s accidental hurricane discovery from personal experience: When his water quality–sampling gliders wandered into Hurricane Irene’s path in 2011, they revealed that the ocean off the Jersey Shore had cooled in front of the storm. Irene’s onshore winds had induced seawater mixing across the broad continental shelf and lowered sea surface temperatures.

    The Puerto Rico data show that offshore winds over a steep island shelf produced the opposite effect and should help researchers better understand storm-induced mixing of coastal areas, says NOAA senior scientist Hyun-Sook Kim, who was not involved in the research. It can help with identifying deficiencies in the computer models she relies on when providing guidance to storm-tracking meteorologists at the National Hurricane Center in Miami and the Joint Typhoon Warning Center in Hawaii.

    And the unexpected findings also could help scientists get a better handle on coral reefs and the role they play in protecting coastlines. “The more we study the ocean, especially close to the coast,” Carlo says, “the more we can improve conditions for the coral and the people living on the island.”

  • in

    Humans may not be able to handle as much heat as scientists thought

    More than 2,000 people dead from extreme heat and wildfires raging in Portugal and Spain. High temperature records shattered from England to Japan. Overnights that fail to cool.

    Brutal heat waves are quickly becoming the hallmark of the summer of 2022.

    And even as climate change continues to crank up the temperature, scientists are working fast to understand the limits of humans’ resilience to heat extremes. Recent research suggests that heat stress tolerance in people may be lower than previously thought. If true, millions more people could be at risk of succumbing to dangerous temperatures sooner than expected.


    “Bodies are capable of acclimating over a period of time” to temperature changes, says Vivek Shandas, an environmental planning and climate adaptation researcher at Portland State University in Oregon. Over geologic time, there have been many climate shifts that humans have weathered, Shandas says. “[But] we’re in a time when these shifts are happening much more quickly.”

    Just halfway through 2022, heat waves have already ravaged many countries. The heat arrived early in southern Asia: In April, Wardha, India, saw a high of 45° Celsius (113° Fahrenheit); in May, Nawabshah, Pakistan, recorded temperatures of 49.5° C (121.1° F).

    Extreme heat alerts blared across Europe beginning in June and continuing through July, the rising temperatures exacerbating drought and sparking wildfires. The United Kingdom shattered its all-time heat record on July 19, when temperatures reached 40.3° C in the English village of Coningsby. The heat fueled fires in France, forcing thousands to evacuate from their homes.

    And the litany goes on: Japan experienced its worst June heat wave since record-keeping began in 1875, leading to the country’s highest-ever recorded June temperature of 40.2° C.  China’s coastal megacities, from Shanghai to Chengdu, were hammered by heat waves in July as temperatures in the region also rose above 40° C. And in the United States, a series of heat waves gripped the Midwest, the South and the West in June and July. Temperatures soared to 42° C in North Platte, Neb., and to 45.6° C in Phoenix.

    The current global rate of warming on Earth is unprecedented (SN: 7/24/19). And scientists have long predicted that human-caused climate change will increase the occurrence of heat waves. Globally, humans’ exposure to extreme heat tripled from 1983 to 2016, particularly in South Asia.

    The heat already is taking an increasing toll on human health. It can cause heat cramps, heat exhaustion and heat stroke, which is often fatal. Dehydration can lead to kidney and heart disease. Extreme heat can even change how we behave, increasing aggression and decreasing our ability to focus (SN: 8/18/21).

    Staying cool

    The human body has various ways to shed excess heat and keep the core of the body at an optimal temperature of about 37° C (98.6° F). The heart pumps faster, speeding up blood flow that carries heat to the skin (SN: 4/3/18). Air passing over the skin can wick away some of that heat. Evaporative cooling — sweating — also helps.

    But there’s a limit to how much heat humans can endure. In 2010, scientists estimated the theoretical heat stress limit to be a “wet bulb” temperature of 35° C. Wet bulb temperatures depend on a combination of humidity and “dry bulb” air temperature measured by a thermometer. Those variables mean a place could hit a wet bulb temperature of 35° C in different ways — for instance, if the air is that temperature and there’s 100 percent humidity, or if the air temperature is 45° C and there’s 50 percent humidity. The difference is due to evaporative cooling.

    When water evaporates from the skin or another surface, it steals away energy in the form of heat, briefly cooling that surface. That means that in drier regions, the wet bulb temperature — where that ephemeral cooling effect happens readily — will be lower than the actual air temperature. In humid regions, however, wet and dry bulb temperatures are similar, because the air is so moist it’s difficult for sweat to evaporate quickly.
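    Wet bulb temperature has no simple exact formula, but one widely used empirical fit (Stull, 2011) estimates it from air temperature and relative humidity alone. The sketch below is purely illustrative — it is not a method used by the researchers quoted here, and the approximation is considered reliable only for humidities above roughly 5 percent — but it shows how the two example conditions above converge on the same wet bulb value:

```python
import math

def wet_bulb_stull(t_air_c, rh_percent):
    """Approximate wet bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (percent), using Stull's 2011 empirical fit."""
    t, rh = t_air_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Both conditions from the text land near a 35 deg C wet bulb temperature:
print(wet_bulb_stull(35, 100))  # saturated air at 35 C
print(wet_bulb_stull(45, 50))   # hotter but drier air
```

In dry air the gap between the two temperatures widens: at 25° C and 10 percent humidity, the same formula returns a wet bulb temperature well below the air temperature, reflecting how much evaporative cooling is still available.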

    So when thinking about heat stress on the body, scientists use wet bulb temperatures because they are a measure of how much cooling through evaporation is possible in a given climate, says Daniel Vecellio, a climate scientist at Penn State.

    “Both hot/dry and warm/humid environments can be equally dangerous,” Vecellio says — and this is where the body’s different cooling strategies come into play. In hot, dry areas, where the outside temperature may be much hotter than skin temperature, human bodies rely entirely on sweating to cool down, he says. In warm, humid areas, where the air temperature may actually be cooler than skin temperatures (but the humidity makes it seem warmer than it is), the body can’t sweat as efficiently. Instead, the cooler air passing over the skin can draw away the heat.

    How hot is too hot?

    Given the complexity of the body’s cooling system, and the diversity of human bodies, there isn’t really a one-size-fits-all threshold temperature for heat stress for everybody. “No one’s body runs at 100 percent efficiency,” Vecellio says. Different body sizes, the ability to sweat, age and acclimation to a regional climate all have a role.

    Still, for the last decade, that theoretical wet bulb 35° C number has been considered to be the point beyond which humans can no longer regulate their bodies’ temperatures. But recent laboratory-based research by Vecellio and his colleagues suggests that a general, real-world threshold for human heat stress is much lower, even for young and healthy adults.

    The researchers tracked heat stress in two dozen subjects ranging in age from 18 to 34, under a variety of controlled climates. In the series of experiments, the team varied humidity and temperature conditions within an environmental chamber, sometimes holding temperature constant while varying the humidity, and sometimes vice versa.

    The subjects exerted themselves within the chamber just enough to simulate minimal outdoor activity, walking on a treadmill or pedaling slowly on a bike with no resistance. During these experiments, which lasted for 1.5 to two hours, the researchers measured the subjects’ skin temperatures using wireless probes and assessed their core temperatures using a small telemetry pill that the subjects swallowed.

    In warm and humid conditions, the subjects in the study reached their heat stress limits at wet bulb temperatures closer to 30° or 31° C, the team estimates. In hot and dry conditions, that wet bulb temperature was even lower, ranging from 25° to 28° C, the researchers reported in the February Journal of Applied Physiology. For context, in a very dry environment at about 10 percent humidity, a wet bulb temperature of 25° C would correspond to an air temperature of about 50° C (122° F).

    These results suggest that there is much more work to be done to understand what humans can endure under real-world heat and humidity conditions, but that the threshold may be much lower than thought, Vecellio says. The 2010 study’s theoretical finding of 35° C may still be “the upper limit,” he adds. “We’re showing the floor.”

    And that’s for young, healthy adults doing minimal activity. Thresholds for heat stress are expected to be lower for outdoor workers required to exert themselves, or for the elderly or children. Assessing laboratory limits for more at-risk people is the subject of ongoing work for Vecellio and his colleagues.

    A worker wipes away sweat in Toulouse, France, on July 13. An intense heat wave swept across Europe in mid-July, engulfing Spain, Portugal, France, England and other countries.VALENTINE CHAPUIS/AFP via Getty Images

    If the human body’s tolerance for heat stress is generally lower than scientists have realized, that could mean millions more people will be at risk from the deadliest heat sooner than expected. As of 2020, there were few reports of wet bulb temperatures around the world reaching 35° C, but climate simulations project that limit could be regularly exceeded in parts of South Asia and the Middle East by the middle of the century.

    Some of the deadliest heat waves in the last two decades were at lower wet bulb temperatures: Neither the 2003 European heat wave, which caused an estimated 30,000 deaths, nor the 2010 Russian heat wave, which killed over 55,000 people, exceeded wet bulb temperatures of 28° C.

    Protecting people

    How best to inform the public about heat risk is “the part that I find to be tricky,” says Shandas, who wasn’t involved in Vecellio’s research. Shandas developed the scientific protocol for the National Integrated Heat Health Information System’s Urban Heat Island mapping campaign in the United States.

    It’s very useful to have this physiological data from a controlled, precise study, Shandas says, because it allows us to better understand the science behind humans’ heat stress tolerance. But physiological and environmental variability still make it difficult to know how best to apply these findings to public health messaging, such as extreme heat warnings, he says. “There are so many microconsiderations that show up when we’re talking about a body’s ability to manage [its] internal temperature.”

    One of those considerations is the ability of the body to quickly acclimate to a temperature extreme. Regions that aren’t used to extreme heat may experience greater mortality, even at lower temperatures, simply because people there aren’t used to the heat. The 2021 heat wave in the Pacific Northwest wasn’t just extremely hot — it was extremely hot for that part of the world at that time of year, which makes it more difficult for the body to adapt, Shandas says (SN: 6/29/21).

    Heat that arrives unusually early and right on the heels of a cool period can also be more deadly, says Larry Kalkstein, a climatologist at the University of Miami and the chief heat science advisor for the Washington, D.C.–based nonprofit Adrienne Arsht-Rockefeller Foundation Resilience Center. “Often early season heat waves in May and June are more dangerous than those in August and September.”

    One way to improve communities’ resilience to the heat may be to treat heat waves like other natural disasters — including giving them names and severity rankings (SN: 8/14/20). As developed by an international coalition known as the Extreme Heat Resilience Alliance, those rankings form the basis for a new type of heat wave warning that explicitly considers the factors that impact heat stress, such as wet bulb temperature and acclimation, rather than just temperature extremes.

    The rankings also consider factors such as cloud cover, wind and how hot the temperatures are overnight. “If it’s relatively cool overnight, there’s not as much negative health outcome,” says Kalkstein, who created the system. But overnight temperatures aren’t getting as low as they used to in many places. In the United States, for example, the average minimum temperatures at nighttime are now about 0.8° C warmer than they were during the first half of the 20th century, according to the country’s Fourth National Climate Assessment, released in 2018 (SN: 11/28/18).

    By naming heat waves like hurricanes, officials hope to increase citizens’ awareness of the dangers of extreme heat. Heat wave rankings could also help cities tailor their interventions to the severity of the event. Six cities are currently testing the system’s effectiveness: four in the United States, plus Athens, Greece, and Seville, Spain. On July 24, with temperatures heading toward 42° C, Seville became the first city in the world to officially name a heat wave, sounding the alarm for Heat Wave Zoe.

    As 2022 continues to smash temperature records around the globe, such warnings may come not a moment too soon.

  • in

    Ancient penguin bones reveal unprecedented shrinkage in key Antarctic glaciers

    Antarctica’s Pine Island and Thwaites glaciers are losing ice more quickly than they have at any time in the last few thousand years, ancient penguin bones and limpet shells suggest.

    Scientists are worried that the glaciers, two of Antarctica’s fastest-shrinking ones, are in the process of unstable, runaway retreat. By reconstructing the history of the glaciers using the old bones and shells, researchers wanted to find out whether these glaciers have ever been smaller than they are today.


    “If the ice has been smaller in the past, and did readvance, that shows that we’re not necessarily in runaway retreat” right now, says glacial geologist Brenda Hall of the University of Maine in Orono. The new result, described June 9 in Nature Geoscience, “doesn’t give us any comfort,” Hall says. “We can’t refute the hypothesis of a runaway retreat.”

    Pine Island and Thwaites glaciers sit in a broad ocean basin shaped like a bowl, deepening toward the middle. This makes the ice vulnerable to warm currents of dense, salty water that hug the ocean floor (SN: 4/9/21). Scientists have speculated that as the glaciers retreat farther inland, they could tip into an irreversible collapse (SN: 12/13/21).  That collapse could play out over centuries and raise the sea level by roughly a meter.

    Researchers dated ancient shorelines (seen here as the series of small ridges in the rocky terrain between the foreground boulders and background snow) on islands roughly 100 kilometers from Pine Island and Thwaites glaciers in Antarctica to help figure out if the glaciers are in the process of unstable, runaway retreat.James Kirkham

    To reconstruct how the glaciers have changed over thousands of years, the researchers turned to old penguin bones and shells, collected by Scott Braddock, a glacial geologist in Hall’s lab, during a research cruise in 2019 on the U.S. icebreaker Nathaniel B. Palmer.

    One afternoon, Braddock clambered from a bobbing inflatable boat onto the barren shores of Lindsey 1 — one of a dozen or more rocky islands that sit roughly 100 kilometers from where Pine Island Glacier terminates in the ocean. As he climbed the slope, his boots slipped over rocks covered in penguin guano and dotted with dingy white feathers. Then, he came upon a series of ridges — rocks and pebbles that were piled up by waves during storms thousands of years before — that marked ancient shorelines.

    Twelve thousand years ago, just as the last ice age was ending, this island would have been entirely submerged in the ocean. But as nearby glaciers shed billions of metric tons of ice, the removal of that weight allowed Earth’s crust to spring up like a bed mattress — pushing Lindsey 1 and other nearby islands out of the water, a few millimeters per year.

    As Lindsey 1 rose, a series of shorelines formed on the edges of the island — and then were lifted, one after another, out of reach of the waves. By measuring the ages and heights of those stranded shorelines, the researchers could tell how quickly the island had risen. Because the rate of uplift is determined by the amount of ice being lost from nearby glaciers, this would reveal how quickly Pine Island and Thwaites glaciers had retreated — and whether they had gotten smaller than they are today and then readvanced.

    Braddock dug into the pebbly ridges, collecting ancient cone-shaped limpet shells and marble-sized fragments of penguin bones deposited when the shorelines formed. Back in Maine, he and his colleagues radiocarbon dated those objects to estimate the ages of the shorelines. Ultimately, the researchers dated nearly two dozen shorelines, spread across several islands in the region.

    These dates showed that the oldest and highest beach formed 5,500 years ago. Since that time, up until the last few decades, the islands have risen at a steady rate of about 3.5 millimeters per year. This is far slower than the 20 to 40 millimeters per year that the land around Pine Island and Thwaites is currently rising, suggesting that the rate of ice loss from nearby glaciers has skyrocketed due to the onset of rapid human-caused warming, after thousands of years of relative stability.
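    The arithmetic behind those rates is straightforward: an average uplift rate is just a shoreline’s elevation gain divided by its age. A minimal sketch — the 19.25-meter elevation here is a hypothetical value chosen to be consistent with the reported 3.5 millimeters per year, not a measurement from the study:

```python
def uplift_rate_mm_per_year(elevation_m, age_years):
    """Average uplift rate implied by a raised shoreline:
    elevation gained (converted to mm) divided by time elapsed."""
    return elevation_m * 1000.0 / age_years

# A shoreline formed 5,500 years ago that now sits 19.25 m above sea level
# would imply the ~3.5 mm/yr long-term rate reported for these islands.
print(uplift_rate_mm_per_year(19.25, 5500))
```

At today’s 20 to 40 millimeters per year, the same elevation gain would accumulate in a few centuries rather than several millennia — the contrast that led the researchers to conclude ice loss has accelerated dramatically.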

    “We’re going into unknown territory,” Braddock says. “We don’t have an analog to compare what’s going on today with what happened in the past.”

    Slawek Tulaczyk, a glaciologist at the University of California, Santa Cruz, sees the newly dated shorelines as “an important piece of information.” But he cautions against overinterpreting the results. While these islands are 100 kilometers from Pine Island and Thwaites, they are less than 50 kilometers from several smaller glaciers — and changes in these closer glaciers might have obscured whatever was happening at Pine Island and Thwaites long ago. He suspects that Pine Island and Thwaites could still have retreated and then readvanced a few dozen kilometers: “I don’t think this study settles it.”

  • in

    Scientists hope to mimic the most extreme hurricane conditions

    Winds howl at over 300 kilometers per hour, battering at a two-story wooden house and ripping its roof from its walls. Then comes the water. A 6-meter-tall wave engulfs the structure, knocking the house off its foundation and washing it away.

    That’s the terrifying vision of researchers planning a new state-of-the-art facility to re-create the havoc wreaked by the most powerful hurricanes on Earth. In January, the National Science Foundation awarded a $12.8 million grant to researchers to design a facility that can simulate wind speeds of at least 290 km/h — and can, at the same time, produce deadly, towering storm surges.


    No facility exists that can produce such a one-two punch of extreme wind and water. But it’s an idea whose time has come — and not a moment too soon.

    “It’s a race against time,” says disaster researcher Richard Olson, director of extreme events research at Florida International University, or FIU, in Miami.

    Hurricanes are being made worse by human-caused climate change: They’re getting bigger, wetter, stronger and slower (SN: 9/13/18; SN: 11/11/20). Scientists project that the 2022 Atlantic Ocean hurricane season, spanning June 1 to November 30, will be the seventh straight season with more storms than average. Recent seasons have been marked by an increase in rapidly intensifying hurricanes linked to warming ocean waters (SN: 12/21/20).

    Those trends are expected to continue as the Earth heats up further, researchers say. And coastal communities around the world need to know how to prepare: how to build structures — buildings, bridges, roads, water and energy systems — that are resilient to such punishing winds and waves.

    To help with those preparations, FIU researchers are leading a team of wind and structural engineers, coastal and ocean engineers, computational modelers and resilience experts from around the United States to work out how best to simulate these behemoths. Combining extreme wind and water surges into one facility is uncharted territory, says Ioannis Zisis, a wind engineer at FIU. “There is a need to push the envelope,” Zisis says. But as for how exactly to do it, “the answer is simple: We don’t know. That’s what we want to find out.”

    Prepping for “Category 6”

    It’s not that such extreme storms haven’t been seen on Earth. Just in the last few years, Hurricanes Dorian (2019) and Irma (2017) in the Atlantic Ocean and super Typhoon Haiyan (2013) in the Pacific Ocean have packed wind speeds well over 290 km/h. Such ultraintense storms are sometimes referred to as “category 6” hurricanes, though that’s not an official designation.

    The National Oceanic and Atmospheric Administration, or NOAA, rates hurricanes in the Atlantic and eastern Pacific oceans on a scale of 1 to 5, based on their wind speeds and how much damage those winds might do. Each category spans an increment of roughly 30 km/h.  

    Category 1 hurricanes, with wind speeds of 119 to 153 km/h, produce “some damage,” bringing down some power lines, toppling trees and perhaps knocking roof shingles or vinyl siding off a house. Category 5 storms, with winds starting at 252 km/h, cause “catastrophic damage,” bulldozing buildings and potentially leaving neighborhoods uninhabitable for weeks to months.

    But 5 is as high as it gets on the official scale; after all, what could be more devastating than catastrophic damage? That means that even monster storms like 2019’s Hurricane Dorian, which flattened the Bahamas with wind speeds of up to nearly 300 km/h, are still considered category 5 (SN: 9/3/19).
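    Because the top bin of the scale is open-ended, any sustained wind speed from 252 km/h on up maps to category 5, no matter how extreme. A minimal sketch of that lookup, using NOAA’s published kilometer-per-hour thresholds:

```python
def saffir_simpson_category(wind_kmh):
    """Return the Saffir-Simpson category (1-5) for a sustained wind speed
    in km/h, or None for winds below hurricane strength."""
    thresholds = [(5, 252), (4, 209), (3, 178), (2, 154), (1, 119)]
    for category, lower_bound in thresholds:
        if wind_kmh >= lower_bound:
            return category
    return None

# Dorian's ~300 km/h winds and a storm barely past 252 km/h get the same label:
print(saffir_simpson_category(300))  # 5
print(saffir_simpson_category(252))  # 5
print(saffir_simpson_category(130))  # 1
```

A hypothetical “category 6” would amount to adding one more row to that table — which is exactly the labeling debate described below.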

    “Strictly speaking, I understand that [NOAA doesn’t] see the need for a category 6,” Olson says. But there is a difference in public perception, he says. “I see it as a different type of storm, a storm that is simply scarier.”

    And labels aside, the need to prepare for these stronger storms is clear, Olson says. “I don’t think anybody wants to be explaining 20 years from now why we didn’t do this,” he says. “We have challenged nature. Welcome to payback.”

    Superstorm simulation

    FIU already hosts the Wall of Wind, a huge hurricane simulator housed in a large hangar anchored at one end by an arc of 12 massive yellow fans. Even at low wind speeds — say, around 50 km/h — the fans generate a loud, unsettling hum. At full blast, those fans can generate wind speeds of up to 252 km/h — equivalent to a low-grade category 5 hurricane.

    Inside, researchers populate the hangar with structures mimicking skyscrapers, houses and trees, or shapes representing the bumps and dips of the ground surface. Engineers from around the world visit the facility to test the wind resistance of their own creations, watching as the winds pummel their structural designs.

    Twelve fans tower over one end of the Wall of Wind, a large experimental facility at Florida International University in Miami. There, winds as fast as 252 kilometers per hour let researchers re-create conditions experienced during a low-grade category 5 hurricane.NSF-NHERI Wall of Wind/FIU

    It’s one of eight facilities in a national network of laboratories that study the potential impacts of wind, water and earthquake hazards, collectively called the U.S. Natural Hazards Engineering Research Infrastructure, or NHERI.

    The Wall of Wind is designed for full-scale wind testing of entire structures. Another wind machine, hosted at the University of Florida in Gainesville, can zoom in on the turbulent behavior of winds right at the boundary between the atmosphere and ground. Then there are the giant tsunami- and storm surge–simulating water wave tanks at Oregon State University in Corvallis.

    The new facility aims to build on the shoulders of these giants, as well as on other experimental labs around the country. The design phase is projected to take four years, as the team ponders how to ramp up wind speeds — possibly with more, or more powerful fans than the Wall of Wind’s — and how to combine those gale-force winds and massive water tanks in one experimental space.

    Existing labs that study wind and waves together, albeit on a much smaller scale, can offer some insight into that aspect of the design, says Forrest Masters, a wind engineer at the University of Florida and the head of that institution’s NHERI facility.

    This design phase will also include building a scaled-down version of the future lab as proof of concept. Building the full-scale facility will require a new round of funding and several more years.

    Past studies of the impacts of strong windstorms tend to take one of three approaches: making field observations of the aftermath of a given storm; building experimental facilities to re-create storms; and using computational simulations to visualize how those impacts might play out over large geographical regions. Each approach has strengths and limitations, says Tracy Kijewski-Correa, a disaster risk engineer at the University of Notre Dame in Indiana.

    “In this facility, we want to bring together all of these methodologies,” to get as close as possible to recreating what Mother Nature can do, Kijewski-Correa says.  

    It’s a challenging engineering problem, but an exciting one. “There’s a lot of enthusiasm for this in the broader scientific community,” Masters says. “If it gets built, nothing like it will exist.”

  • in

    Farmers in India cut their carbon footprint with trees and solar power

    In 2007, 22-year-old P. Ramesh’s groundnut farm was losing money. As was the norm in most of India (and still is), Ramesh was using a cocktail of pesticides and fertilizers across his 2.4 hectares in the Anantapur district of southern India. In this desert-like area, which gets less than 600 millimeters of rainfall most years, farming is a challenge.

    “I lost a lot of money growing groundnuts through chemical farming methods,” says Ramesh, who goes by the first letter of his father’s name followed by his first name, as is common in many parts of southern India. The chemicals were expensive and his yields low.

    Then in 2017, he dropped the chemicals. “Ever since I took up regenerative agricultural practices like agroforestry and natural farming, both my yield and income have increased,” he says.

    Agroforestry involves planting woody perennials (trees, shrubs, palms, bamboos, etc.) alongside agricultural crops (SN: 7/3/21 & 7/17/21, p. 30). One natural farming method calls for replacing all chemical fertilizers and pesticides with organic matter such as cow dung, cow urine and jaggery, a type of solid dark sugar made from sugarcane, to boost soil nutrient levels. Ramesh also expanded his crops, originally groundnuts and some tomatoes, by adding papaya, millets, okra, eggplant (called brinjal locally) and other crops.

    Farmers in Anantapur, India, pose with the natural fertilizer they use on their crops. Called Ghanajeevamritam, it contains jaggery, cow dung, cow urine and sometimes flour from dried beans. M. Shaikshavali

    With help from the nonprofit Accion Fraterna Ecology Centre in Anantapur, which works with farmers who want to try sustainable farming, Ramesh increased his profits enough to buy more land, expanding his parcel to about four hectares. Like the thousands of other farmers practicing regenerative farming across India, Ramesh has managed to nourish his depleted soil, while his new trees help keep carbon out of the atmosphere, thus playing a small but important role in reducing India’s carbon footprint. Recent studies have shown that the carbon sequestration potential of agroforestry is as much as 34 percent higher than standard forms of agriculture.

    In western India, more than 1,000 kilometers from Anantapur, in Dhundi village in Gujarat, 36-year-old Pravinbhai Parmar is using his rice farm for climate change mitigation. By installing solar panels, he no longer uses diesel to power his groundwater pumps. And he has an incentive to pump only the water he needs because he can sell the electricity he doesn’t use.

    If all farmers like Parmar shifted to solar, India’s carbon emissions, which are 2.88 billion metric tons per year, could drop by between 45 million and 62 million tons annually, according to a 2020 report in Carbon Management. So far, the country has about 250,000 solar irrigation pumps out of an estimated 20 million to 25 million total groundwater pumps.

    For a nation that has to provide for what will soon be the world’s largest population, growing food while trying to bring down already high greenhouse gas emissions from agricultural practices is difficult. Today, agriculture and livestock account for 14 percent of India’s gross national greenhouse gas emissions. Adding in the electricity used by the agriculture sector brings this figure up to 22 percent.

    Ramesh and Parmar are part of a small but growing group of farmers getting assistance from government and nongovernmental programs to change how they farm. There’s still a ways to go to reach the estimated 146 million others who cultivate 160 million hectares of arable land in India. But these farmers’ success stories are testimony that one of India’s largest emitting sectors can change.

    Pravinbhai Parmar (center) poses with fellow farmers who are part of the solar irrigation program in Dhundi village, Gujarat.IWMI-TATA Program, Shashwat Cleantech and Dhundi Saur Urja Utpadak Sahkari Mandali

    Feeding the soil, sustaining farmers

    India’s farmers are already deeply feeling the effects of climate change, coping with dry spells, erratic rainfall and increasingly frequent heat waves and tropical cyclones. “When we talk about climate-smart agriculture, we are largely talking about how it has reduced emissions,” says Indu Murthy, sector head for climate, environment and sustainability at the Center for Study of Science, Technology and Policy, a think tank in Bengaluru. But such a system should also help farmers “cope with unexpected changes and weather patterns,” she says.

    This, in many ways, is the philosophy driving a variety of sustainable and regenerative agricultural practices under the agroecology umbrella. Natural farming and agroforestry are two components of this system that are finding more and more takers across India’s varied landscapes, says Y.V. Malla Reddy, director of Accion Fraterna Ecology Centre.

    “For me, the important change is the change in attitude of people towards trees and vegetation in the last few decades,” Reddy says. “In the ’70s and ’80s, people were not really conscious of the value of the trees, but now they consider trees, especially fruit and utilitarian trees, as also a source of income.” Reddy has advocated for sustainable farming in India for close to 50 years. Certain types of trees, such as pongamia, subabul and avisa, have economic benefits apart from their fruits; they provide fodder for livestock and biomass for fuel.

    Reddy’s organization has provided assistance to more than 60,000 Indian farming families to practice natural farming and agroforestry on almost 165,000 hectares. Calculation of the soil carbon sequestration potential of their work is ongoing. But a 2020 report by India’s Ministry of Environment, Forest and Climate Change notes that these farming practices can help India reach its goal of having 33 percent forest and tree cover to meet its carbon sequestration commitments under the Paris climate agreement by 2030.

    Regenerative agriculture is a relatively inexpensive way to reduce carbon dioxide in the atmosphere, as compared with other solutions. Regenerative farming costs $10 to $100 per ton of carbon dioxide removed from the atmosphere, compared with $100 to $1,000 per ton of carbon dioxide for technologies that mechanically remove carbon from the air, according to a 2020 analysis in Nature Sustainability. Such farming not only makes sense for the environment, but chances are the farmers’ earnings will also increase as they shift to regenerative agriculture, Reddy says.

    Farms in Kanumpalli village in Antanapur district grow multiple crops using natural farming methods.M. Shaikshavali

    Farmers from the Baiga and Gondh tribal communities in Dholbajja panchayat, India, harvest chiraita, or Andrographis paniculata, a plant used for medicinal purposes. Their Indigenous community recently took up agroforestry and sustainable farming methods.Elsa Remijn photographer, provided by Commonland

    Growing solar

    Establishing agroecology practices to see an effect on carbon sequestration can take years or decades. But using renewable energy in farming can quickly reduce emissions. For this reason, the nonprofit International Water Management Institute, IWMI, launched the program Solar Power as Remunerative Crop in Dhundi village in 2016.

    “The biggest threat climate change presents, specifically to farmers, is the uncertainty that it brings,” says Shilp Verma, an IWMI researcher of water, energy and food policies based in Anand. “Any agricultural practice that will help farmers cope with uncertainty will improve resilience to climate change.” Farmers have more funds to deal with insecure conditions when they can pump groundwater in a climate-friendly way that also provides incentives for keeping some water in the ground. “If you pump less, then you can sell the surplus energy to the grid,” he says. Solar power becomes an income source.

    Growing rice, especially lowland rice, which is grown on flooded land, requires a lot of water. On average it takes about 1,432 liters of water to produce one kilogram of rice, according to the International Rice Research Institute. The organization says that irrigated rice receives an estimated 34 to 43 percent of the world’s total irrigation water. India is the largest extractor of groundwater in the world, accounting for 25 percent of global extraction. When diesel pumps do the extracting, carbon is emitted into the atmosphere. Parmar and his fellow farmers used to have to buy that fuel to keep their pumps going.

    “We used to spend 25,000 rupees [about $330] a year for running our diesel-powered water pumps. This used to really cut into our profits,” Parmar says. When IWMI asked him in 2015 to participate in a pilot solar-powered irrigation project with zero carbon emissions, Parmar was all ears.

    Since then, Parmar and six fellow farmers in Dhundi have sold more than 240,000 kilowatt-hours to the state and earned more than 1.5 million rupees ($20,000). Parmar’s annual income has doubled from 100,000–150,000 rupees on average to 200,000–250,000 rupees.

    The boost is helping him educate his children, one of whom is pursuing a degree in agriculture — an encouraging sign in a country where farming is out of vogue with the younger generation. As Parmar says, “Solar power is timely, less polluting and also provides us an additional income. What is not to like about it?”

    This aerial image shows solar panels installed among crops to power groundwater pumps and offer a new income source for farmers in western India’s Dhundi village.IWMI-TATA Program, Shashwat Cleantech and Dhundi Saur Urja Utpadak Sahkari Mandali

    Parmar has learned to maintain and fix the panels and the pumps himself. Neighboring villages now ask for his help when they want to set up solar-powered pumps or need pump repairs. “I am happy that others are also following our lead. Honestly, I feel quite proud that they call me to help them with their solar pump systems.”

    IWMI’s project in Dhundi has been so successful that the state of Gujarat started replicating the scheme in 2018 for all interested farmers under an initiative called Suryashakti Kisan Yojana, which translates to solar power project for farmers. And India’s Ministry of New and Renewable Energy now subsidizes and provides low-interest loans for solar-powered irrigation among farmers.

    “The main thing about climate-smart agriculture is that everything we do has to have less carbon footprint,” says Aditi Mukherji, Verma’s colleague and an author of February’s report from the Intergovernmental Panel on Climate Change (SN: 3/26/22, p. 7). “That is the biggest challenge. How do you make something with a low carbon footprint, without having a negative impact on income and productivity?” Mukherji is the regional project leader for Solar Irrigation for Agricultural Resilience in South Asia, an IWMI project looking at various solar irrigation solutions in South Asia.

    Back in Anantapur, “there is also a visible change in the vegetation in our district,” Reddy says. “Earlier, there might not be any trees till the eye can see in many parts of the district. Now there is no place which doesn’t have at least 20 trees in your line of sight. It’s a small change, but extremely significant for our dry region.” And Ramesh and other farmers now enjoy a stable, sustainable income from farming.

    A family in the village of Muchurami in Anantapur district, India, display vegetables harvested through natural farming methods. The vegetables include pumpkins, peas, spinach, and bottle gourds.M. Shaikshavali

    “When I was growing groundnuts, I used to sell it to the local markets,” Ramesh says. He now sells directly to city dwellers through WhatsApp groups. And one of India’s largest online grocery stores, bigbasket.com, and others have started purchasing directly from him to meet a growing demand for organic and “clean” fruits and vegetables.

    “I’m confident now that my children too can take up farming and make a good living if they want to,” Ramesh says. “I didn’t feel the same way before discovering these nonchemical farming practices.” More

  • in

    Replacing some meat with microbial protein could help fight climate change

    “Fungi Fridays” could save a lot of trees — and take a bite out of greenhouse gas emissions. Eating one-fifth less red meat and instead munching on microbial proteins derived from fungi or algae could cut annual deforestation in half by 2050, researchers report May 5 in Nature.

    Raising cattle and other ruminants contributes methane and nitrous oxide to the atmosphere, while clearing forests for pasture lands adds carbon dioxide (SN: 4/4/22; SN: 7/13/21). So the hunt is on for environmentally friendly substitutes, such as lab-grown hamburgers and cricket farming (SN: 9/20/18; SN: 5/2/19).

    Another alternative is microbial protein, made from cells cultivated in a laboratory and nurtured with glucose. Fermented fungal spores, for example, produce a dense, doughy substance called mycoprotein, while fermented algae produce spirulina, a dietary supplement.

    Sign Up For the Latest from Science News

    Headlines and summaries of the latest Science News articles, delivered to your inbox

    Thank you for signing up!

    There was a problem signing you up.

    Cell-cultured foods do require sugar from croplands, but studies show that mycoprotein produces fewer greenhouse gas emissions and uses less land and water than raising cattle, says Florian Humpenöder, a climate modeler at Potsdam Institute for Climate Impact Research in Germany. However, a full comparison of foods’ future environmental impacts also requires accounting for changes in population, lifestyle, dietary patterns and technology, he says.

    So Humpenöder and colleagues incorporated projected socioeconomic changes into computer simulations of land use and deforestation from 2020 through 2050. Then they simulated four scenarios, substituting microbial protein for 0 percent, 20 percent, 50 percent or 80 percent of the global red meat diet by 2050.

    A little substitution went a long way, the team found: Just 20 percent microbial protein substitution cut annual deforestation rates — and associated CO2 emissions — by 56 percent from 2020 to 2050.

    Eating more microbial proteins could be part of a portfolio of strategies to address the climate and biodiversity crises — alongside measures to protect forests and decarbonize electricity generation, Humpenöder says. More