More stories

  • More than 57 billion tons of soil have eroded in the U.S. Midwest

    With soils prized for cultivation, most land in the Midwestern United States has been converted from tallgrass prairie to agricultural fields. Less than 0.1 percent of the original prairie remains.

    This shift over the last 160 years has resulted in staggering — and unsustainable — soil erosion rates for the region, researchers report in the March Earth’s Future. The erosion is estimated to be double the rate that the U.S. Department of Agriculture says is sustainable. If it continues unabated, it could significantly limit future crop production, the scientists say.

    In the new study, the team focused on erosional escarpments — tiny cliffs formed through erosion — lying at boundaries between prairie and agricultural fields (SN: 1/20/96). “These rare prairie remnants that are scattered across the Midwest are sort of a preservation of the pre-European-American settlement land surface,” says Isaac Larsen, a geologist at the University of Massachusetts Amherst.

    At 20 sites in nine Midwestern states, most of them in Iowa, Larsen and colleagues used a specialized GPS system to survey the elevation of the prairie and the adjacent farm fields. That GPS system “tells you where you are within about a centimeter on Earth’s surface,” Larsen says. This let the researchers detect even small differences in height between the prairie and the farmland.

    At each site, the researchers took these measurements at 10 or more spots. The team then gauged erosion by comparing the elevations of the farmed and prairie land. The researchers found that the agricultural fields were, on average, 0.37 meters below the prairie areas.

    Geologist Isaac Larsen stands at an erosional escarpment, a meeting point of farmland and prairie, in Stinson Prairie, Iowa. Studying these escarpments shows there’s been a startling amount of erosion in the U.S. Midwest since farming started there more than 150 years ago. Credit: University of Massachusetts Amherst

    This corresponds to the loss of roughly 1.9 millimeters of soil per year from agricultural fields since the estimated start of traditional farming at these sites more than a century and a half ago, the researchers calculate. That rate is nearly double the maximum of one millimeter per year that the USDA considers sustainable for these locations.  
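
    As a rough check, the conversion from the measured elevation offset to an annual erosion rate is simple division. The sketch below assumes a nominal 195-year farming history (the study says only “more than a century and a half”) to show how the reported figures hang together; that duration is an illustrative assumption, not a value from the paper.

```python
# Back-of-the-envelope check of the erosion figures reported in the story.
# The 195-year farming duration is an assumption chosen for illustration.
ELEVATION_DIFFERENCE_M = 0.37      # average drop of fields below prairie (reported)
YEARS_OF_FARMING = 195             # assumed: "more than a century and a half"
USDA_SUSTAINABLE_MM_PER_YR = 1.0   # reported sustainable maximum for these sites

rate_mm_per_yr = ELEVATION_DIFFERENCE_M * 1000 / YEARS_OF_FARMING
print(f"Estimated erosion rate: {rate_mm_per_yr:.1f} mm per year")       # ~1.9
print(f"Ratio to sustainable rate: "
      f"{rate_mm_per_yr / USDA_SUSTAINABLE_MM_PER_YR:.1f}x")             # ~1.9x
```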

    There are two main ways that the USDA currently estimates the erosion rate in the region. One way estimates the rate to be about one-third of that reported by the researchers. The other estimates the rate to be just one-eighth of the researchers’ rate. Those USDA estimates do not include tillage, a conventional farming process in which machinery is used to turn the soil and prepare it for planting. By disrupting the soil structure, tilling increases surface runoff and erosion due to soil moving downslope.

    Larsen and colleagues say that they would like to see tillage incorporated into the USDA’s erosion estimates. Then, the USDA numbers might better align with the whopping 57.6 billion metric tons of soil that the researchers estimate has been lost across the entire region in the last 160 years.

    This massive “soil loss is already causing food production to decline,” Larsen says. As soil thickness decreases, the amount of corn successfully grown in Iowa is reduced, research shows. And disruption to the food supply could continue or worsen if the estimated rate of erosion persists.

    Not everyone is convinced that the average amount of soil lost each year has remained steady since farming in the region started. Much of the erosion that the researchers measured could have occurred early in these sites’ histories, back when farmers “began to break prairies and/or forests and clear things,” says agronomist Michael Kucera.

    Perhaps current erosion rates have slowed, says Kucera, who is the steward of the National Erosion Database at the USDA’s National Soil Survey Center in Lincoln, Neb.

    To help reduce future erosion, farmers can use no-till farming and plant cover crops, the researchers note. By planting cover crops during off-seasons, farmers reduce the amount of time the soil is bare, making it less vulnerable to wind and water erosion.

    In the United States, no-till and similar practices to help limit erosion have been implemented at least sometimes by 51 percent of corn, cotton, soybean and wheat farmers, according to the USDA. But cover crops are only used in about 5 percent of cases where they could be, says Bruno Basso, a sustainable agriculture researcher at Michigan State University in East Lansing who wasn’t involved with the study. “It costs $40 to $50 per acre to plant a cover crop,” he says. Though some government grant funding is available, “the costs of cover crops are not supported,” and there is a need for additional incentives, he says.

    To implement no-till strategies, “the farmer has to be a better manager,” says Keith Berns, a farmer who co-owns and operates Green Cover Seed, which is headquartered in Bladen, Neb. His company provides cover crop seeds and custom seed mixtures. He has also been using no-till practices for decades.

    To succeed, farmers must decide what particular cover crops are most suitable for their land, when to grow them and when to kill them. Following these regimens, which can be more complicated than traditional farming, can be “difficult to do on large scales,” Berns says.

    Cover crops can confer benefits such as helping farmers curb erosion and control weeds within the first year of planting. But it can take multiple years for the crops’ financial benefits to exceed their cost. Some farmers don’t even own the land they work, making it even less lucrative for them to invest in cover crops, Berns notes.

    Building soil health can take half a decade, Basso says. “Agriculture is really always facing this dilemma [of] short-sighted, economically driven decisions versus longer-term sustainability of the whole enterprise.”

  • Engineering team develops new AI algorithms for high-accuracy and cost-effective medical image diagnostics

    Medical imaging is an important part of modern healthcare, enhancing the precision and reliability of diagnosis and the development of treatments for various diseases. Artificial intelligence is widely used to further enhance the process.
    However, conventional AI-based medical image diagnosis requires large amounts of annotations as supervision signals for model training. To acquire accurate labels for the AI algorithms, radiologists prepare radiology reports for each of their patients as part of the clinical routine, and annotation staff then extract and confirm structured labels from those reports using human-defined rules and existing natural language processing (NLP) tools. The accuracy of the extracted labels therefore hinges on the quality of the human work and of the NLP tools, and the method is both labour-intensive and time-consuming.
    An engineering team at the University of Hong Kong (HKU) has developed a new approach, “REFERS” (Reviewing Free-text Reports for Supervision), which cuts the human labelling cost by 90% by automatically acquiring supervision signals from hundreds of thousands of radiology reports at the same time. It attains high prediction accuracy, surpassing conventional AI-based medical image diagnosis trained with human annotations.
    The innovative approach marks a solid step towards realizing generalized medical artificial intelligence. The breakthrough was published in Nature Machine Intelligence in the paper titled “Generalized radiograph representation learning via cross-supervision between images and free-text radiology reports.”
    “AI-enabled medical image diagnosis has the potential to support medical specialists in reducing their workload and improving the diagnostic efficiency and accuracy, including but not limited to reducing the diagnosis time and detecting subtle disease patterns,” said Professor YU Yizhou, leader of the team from HKU’s Department of Computer Science under the Faculty of Engineering.
    “We believe abstract and complex logical reasoning sentences in radiology reports provide sufficient information for learning easily transferable visual features. With appropriate training, REFERS directly learns radiograph representations from free-text reports without the need to involve manpower in labelling,” Professor Yu remarked.
    For training REFERS, the research team used a public database of 370,000 X-ray images and their associated radiology reports, covering 14 common chest diseases including atelectasis, cardiomegaly, pleural effusion, pneumonia and pneumothorax. The researchers built a radiograph recognition model using only 100 radiographs and attained 83% prediction accuracy. When the number was increased to 1,000, the model reached an accuracy of 88.2%, surpassing a counterpart trained with 10,000 radiologist annotations (87.6% accuracy). With 10,000 radiographs, accuracy reached 90.1%. In general, an accuracy above 85% is useful in real-world clinical applications.
    REFERS achieves the goal by accomplishing two report-related tasks, i.e., report generation and radiograph-report matching. In the first task, REFERS translates radiographs into text reports by first encoding radiographs into an intermediate representation, which is then used to predict text reports via a decoder network. A cost function is defined to measure the similarity between predicted and real report texts, based on which gradient-based optimization is employed to train the neural network and update its weights.
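    As a hedged illustration of that first task, the sketch below pairs a toy image encoder with a small text decoder and trains them with a cross-entropy cost between predicted and real report tokens. The architecture, sizes and dummy data are assumptions for demonstration; this is not the authors’ code, and the paper’s actual model is more sophisticated.
```python
# Minimal sketch of the report-generation objective: encode a radiograph into an
# intermediate representation, decode it into report tokens, and train with a
# cost function comparing predicted and real text. Shapes, vocabulary size and
# the dummy batch are illustrative only; this is not the REFERS implementation.
import torch
import torch.nn as nn

VOCAB, EMBED, HIDDEN = 1000, 128, 256

class ToyReportGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: radiograph -> intermediate representation
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, HIDDEN))
        # Decoder: intermediate representation -> report token logits
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.decoder = nn.GRU(EMBED, HIDDEN, batch_first=True)
        self.to_vocab = nn.Linear(HIDDEN, VOCAB)

    def forward(self, images, report_tokens):
        h0 = self.encoder(images).unsqueeze(0)        # initial decoder state
        out, _ = self.decoder(self.embed(report_tokens), h0)
        return self.to_vocab(out)

model = ToyReportGenerator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()                     # similarity cost on text

# Dummy batch: 4 single-channel radiographs and 32-token report sequences.
images = torch.randn(4, 1, 224, 224)
inputs = torch.randint(0, VOCAB, (4, 32))             # decoder inputs (shifted report)
targets = torch.randint(0, VOCAB, (4, 32))            # tokens to predict

logits = model(images, inputs)
loss = criterion(logits.reshape(-1, VOCAB), targets.reshape(-1))
loss.backward()                                       # gradient-based optimization
optimizer.step()
```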
    As for the second task, REFERS first encodes both radiographs and free-text reports into the same semantic space, where representations of each report and its associated radiographs are aligned via contrastive learning.
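    The second task can be sketched in a few lines: embeddings of matching radiograph-report pairs are pulled together and mismatched pairs pushed apart with a symmetric contrastive (InfoNCE-style) loss. The encoders are assumed to exist already, and the random tensors stand in for their outputs; this illustrates the general technique rather than the paper’s exact formulation.
```python
# Contrastive radiograph-report matching sketch: the i-th radiograph embedding
# should be most similar to the i-th report embedding within a batch.
import torch
import torch.nn.functional as F

def contrastive_matching_loss(image_emb, report_emb, temperature=0.07):
    """image_emb, report_emb: (batch, dim) tensors in the shared semantic space."""
    image_emb = F.normalize(image_emb, dim=-1)
    report_emb = F.normalize(report_emb, dim=-1)
    logits = image_emb @ report_emb.t() / temperature   # pairwise similarities
    labels = torch.arange(len(image_emb))                # matching pairs lie on the diagonal
    # Symmetric cross-entropy over both matching directions.
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

# Dummy usage: random 512-dimensional embeddings for a batch of 8 pairs.
loss = contrastive_matching_loss(torch.randn(8, 512), torch.randn(8, 512))
```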
    “Compared to conventional methods that heavily rely on human annotations, REFERS has the ability to acquire supervision from each word in the radiology reports. We can substantially reduce the amount of data annotation by 90% and the cost to build medical artificial intelligence. It marks a significant step towards realizing generalized medical artificial intelligence,” said the paper’s first author Dr ZHOU Hong-Yu.
    Story Source:
    Materials provided by The University of Hong Kong. Note: Content may be edited for style and length.

  • The ethics of research on 'conscious' artificial brains

    One way in which scientists are studying how the human body grows and ages is by creating artificial organs in the laboratory. The most popular of these organs is currently the organoid, a miniaturized organ made from stem cells. Organoids have been used to model a variety of organs, but brain organoids are the most clouded by controversy.
    Current brain organoids are different in size and maturity from normal brains. More importantly, they do not produce any behavioral output, demonstrating they are still a primitive model of a real brain. However, as research generates brain organoids of higher complexity, they will eventually have the ability to feel and think. In response to this anticipation, Associate Professor Takuya Niikawa of Kobe University and Assistant Professor Tsutomu Sawai of Kyoto University’s Institute for the Advanced Study of Human Biology (WPI-ASHBi), in collaboration with other philosophers in Japan and Canada, have written a paper on the ethics of research using conscious brain organoids. The paper can be read in the academic journal Neuroethics.
    Working regularly with both bioethicists and neuroscientists who have created brain organoids, the team has been writing extensively about the need to construct guidelines on ethical research. In the new paper, Niikawa, Sawai and their coauthors lay out an ethical framework that assumes brain organoids already have consciousness rather than waiting for the day when we can fully confirm that they do.
    “We believe a precautionary principle should be taken,” Sawai said. “Neither science nor philosophy can agree on whether something has consciousness. Instead of arguing about whether brain organoids have consciousness, we decided they do as a precaution and for the consideration of moral implications.”
    To justify this assumption, the paper explains what brain organoids are and examines what different theories of consciousness suggest about brain organoids, inferring that some of the popular theories of consciousness permit them to possess consciousness.
    Ultimately, the framework proposed by the study recommends that research on human brain organoids follow ethical principles similar to those for animal experiments. Therefore, recommendations include using the minimum number of organoids possible and doing the utmost to prevent pain and suffering while considering the interests of the public and patients.
    “Our framework was designed to be simple and is based on valence experiences and the sophistication of those experiences,” said Niikawa.
    This, the paper explains, provides guidance on how strict the conditions for experiments should be. These conditions should be decided based upon several criteria, which include the physiological state of the organoid, the stimuli to which it responds, the neural structures it possesses, and its cognitive functions.
    Moreover, the paper argues that this framework is not exclusive to brain organoids. It can be applied to anything that is perceived to hold consciousness, such as fetuses, animals and even robots.
    “Our framework depends on the precautionary principle. Something that we believe does not have consciousness today may, through the development of consciousness studies, be found to have consciousness in the future. We can consider how we ought to treat these entities based on our ethical framework,” conclude Niikawa and Sawai.
    Story Source:
    Materials provided by Kyoto University. Note: Content may be edited for style and length.

  • Climate change intensified deadly storms in Africa in early 2022

    Climate change amped up the rains that pounded southeastern Africa and killed hundreds of people during two powerful storms in early 2022.

    But a dearth of regional data made it difficult to pinpoint just how large of a role climate change played, scientists said April 11 at a news conference.

    The findings were described in a study, published online April 11, by a consortium of climate scientists and disaster experts called the World Weather Attribution network.

    A series of tropical storms and heavy rain events battered southeast Africa in quick succession from January through March. For this study, the researchers focused on two events: Tropical Storm Ana, which led to flooding in northern Madagascar, Malawi and Mozambique in January and killed at least 70 people; and Cyclone Batsirai, which inundated southern Madagascar in February and caused hundreds more deaths.

    To search for the fingerprints of climate change, the team first selected a three-day period of heavy rain for each storm. Then the researchers tried to amass observational data from the region to reconstruct historical daily rainfall records from 1981 to 2022.

    Only four weather stations, all in Mozambique, had consistent, high-quality data spanning those decades. But, using the data on hand, the team was able to construct simulations for the region that represented climate with and without human-caused greenhouse gas emissions.
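
    Conceptually, the comparison boils down to estimating how often rainfall as heavy as the observed three-day total occurs in simulations with human-caused emissions versus without, then taking the ratio. The sketch below illustrates that bookkeeping with synthetic numbers; it is not the World Weather Attribution team’s pipeline, and the distributions are invented.

```python
# Toy attribution bookkeeping: probability of exceeding an observed 3-day
# rainfall total in a "factual" ensemble (with emissions) vs. a
# "counterfactual" one (without). All values here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
observed_event_mm = 250.0                     # 3-day rainfall total of interest

factual = rng.gamma(shape=4.0, scale=45.0, size=10_000)          # with emissions
counterfactual = rng.gamma(shape=4.0, scale=40.0, size=10_000)   # without emissions

p_factual = (factual >= observed_event_mm).mean()
p_counterfactual = (counterfactual >= observed_event_mm).mean()
print(f"Probability ratio: {p_factual / p_counterfactual:.2f}")  # >1 means more likely now
```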

    The aggregate of those simulations revealed that climate change did play a role in intensifying the rains, Izidine Pinto, a climatologist at the University of Cape Town in South Africa, said at the news event. But with insufficient historical rainfall data, the team “could not quantify the precise contribution” of climate change, Pinto said.

    The study highlights how information on extreme weather events “is very much biased toward the Global North … [whereas] there are big gaps in the Global South,” said climate scientist Friederike Otto of Imperial College London.

    That’s an issue also highlighted by the Intergovernmental Panel on Climate Change. The IPCC cites insufficient Southern Hemisphere data as a barrier to assessing the likelihood of increasing frequency and intensity of tropical cyclones beyond the North Atlantic Ocean (SN: 8/9/21).

  • New transistor could cut 5% from world’s digital energy budget

    A new spin on one of the 20th century’s smallest but grandest inventions, the transistor, could help feed the world’s ever-growing appetite for digital memory while slicing up to 5% of the energy from its power-hungry diet.
    Following years of innovations from the University of Nebraska-Lincoln’s Christian Binek and University at Buffalo’s Jonathan Bird and Keke He, the physicists recently teamed up to craft the first magneto-electric transistor.
    Along with curbing the energy consumption of any microelectronics that incorporate it, the team’s design could reduce the number of transistors needed to store certain data by as much as 75%, leading to smaller devices, said Nebraska physicist Peter Dowben. It could also lend those microelectronics steel-trap memory that remembers exactly where its users leave off, even after being shut down or abruptly losing power.
    “The implications of this most recent demonstration are profound,” said Dowben, who co-authored a recent paper on the work that graced the cover of the journal Advanced Materials.
    Many millions of transistors line the surface of every modern integrated circuit, or microchip, which itself is manufactured in staggering numbers — roughly 1 trillion in 2020 alone — from the industry-favorite semiconducting material, silicon. By regulating the flow of electric current within a microchip, the tiny transistor effectively acts as a nanoscopic on-off switch that’s essential to writing, reading and storing data as the 1s and 0s of digital technology.
    But silicon-based microchips are nearing their practical limits, Dowben said. Those limits have the semiconductor industry investigating and funding every promising alternative it can.

  • Innovative technology will use smart sensors to ensure vaccine safety

    A new study from Tel Aviv University enables developers, for the first time in the world, to determine vaccine safety via smart sensors that measure objective physiological parameters. According to the researchers, most clinical trials testing the safety of new vaccines, including COVID-19 vaccines, rely on participants’ subjective reports, which can lead to biased results. In contrast, objective physiological data, obtained through sensors attached to the body, is clear and unambiguous.
    The study was led by Dr. Yftach Gepner of the Department of Epidemiology and Preventive Medicine at TAU’s Sackler Faculty of Medicine, together with Dr. Dan Yamin and Dr. Erez Shmueli from TAU’s Fleischman Faculty of Engineering. The paper was published in Communications Medicine, a journal from the Nature portfolio.
    Dr. Gepner: “In most methods used today, clinical trials designed to evaluate the safety of a new drug or vaccine employ self-report questionnaires, asking participants how they feel before and after receiving the treatment. This is clearly a totally subjective report. Even when Pfizer and Moderna developed their vaccines for the new COVID-19 virus, they used self-reports to prove their safety.”
    In the current study, researchers from Tel Aviv University demonstrated that smart sensors can be used to test new vaccines. The study was conducted when many Israelis received their second dose of the COVID-19 vaccine. The researchers equipped volunteers with innovative, FDA-approved sensors developed by the Israeli company Biobeat. Attached to their chests, these sensors measured physiological reactions from one day before to three days after receiving the vaccine. The innovative sensors monitored 13 physiological parameters, such as heart rate, breathing rate, saturation (blood oxygen levels), heartbeat volume, temperature, cardiac output and blood pressure.
    The surprising results: a significant discrepancy was found between subjective self-reports about side effects and actual measurements. That is, in nearly all objective measures, significant changes were identified after vaccination, even for subjects who reported having no reaction at all.
    In addition, the study found that side effects escalate over the first 48 hours, and then parameters return to the level measured before vaccination. In other words: a direct assessment of the vaccine’s safety identified physiological reactions during the first 48 hours, with levels restabilizing afterwards.
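    Schematically, the analysis amounts to comparing each post-vaccination reading against the same participant’s pre-vaccination baseline, parameter by parameter. The snippet below sketches that comparison with a tiny invented table; the column names and values are assumptions for illustration, not the study’s actual data format.
```python
# Sketch of a before/after baseline comparison for sensor-derived parameters.
# The miniature dataset and column names are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "hours_since_dose": [-12, 24, 72, -12, 24, 72],   # negative = before vaccination
    "heart_rate": [62, 74, 63, 70, 83, 71],
    "temperature": [36.6, 37.4, 36.7, 36.5, 37.8, 36.6],
})

# Each participant's pre-dose readings serve as their own baseline.
baseline = (records[records.hours_since_dose < 0]
            .groupby("participant")[["heart_rate", "temperature"]]
            .mean()
            .add_suffix("_baseline")
            .reset_index())

after = records[records.hours_since_dose >= 0].merge(baseline, on="participant")
after["heart_rate_change"] = after.heart_rate - after.heart_rate_baseline
after["temperature_change"] = after.temperature - after.temperature_baseline
print(after[["participant", "hours_since_dose", "heart_rate_change", "temperature_change"]])
```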
    “The message from our study is clear,” says Dr. Gepner. “In 2022 the time has come to conduct continual, sensitive, objective testing of the safety of new vaccines and therapies. There is no reason to rely on self-reports or wait for the occurrence of rare side effects like myocarditis, an inflammation of the heart muscle, which occurs in one of 10,000 patients. Preliminary signs that predict such conditions can be detected with advanced sensors, identifying normal vs. extreme alterations in physiological parameters and any risk of inflammation. Today trial participants are invited to the clinic for blood pressure testing, but often their blood pressure rises just because the situation is stressful. Continual monitoring at home solves these problems with simple, convenient, inexpensive, and accurate means. This is the kind of medicine we should strive for in 2022.”
    Story Source:
    Materials provided by Tel-Aviv University. Note: Content may be edited for style and length.

  • Trainee teachers made sharper assessments about learning difficulties after receiving feedback from AI

    A trial in which trainee teachers who were being taught to identify pupils with potential learning difficulties had their work ‘marked’ by artificial intelligence has found the approach significantly improved their reasoning.
    The study, with 178 trainee teachers in Germany, was carried out by a research team led by academics at the University of Cambridge and Ludwig-Maximilians-Universität München (LMU Munich). It provides some of the first evidence that artificial intelligence (AI) could enhance teachers’ ‘diagnostic reasoning’: the ability to collect and assess evidence about a pupil, and draw appropriate conclusions so they can be given tailored support.
    During the trial, trainees were asked to assess six fictionalised ‘simulated’ pupils with potential learning difficulties. They were given examples of their schoolwork, as well as other information such as behaviour records and transcriptions of conversations with parents. They then had to decide whether or not each pupil had learning difficulties such as dyslexia or Attention Deficit Hyperactivity Disorder (ADHD), and explain their reasoning.
    Immediately after submitting their answers, half of the trainees received a prototype ‘expert solution’, written in advance by a qualified professional, to compare with their own. This is typical of the practice material student teachers usually receive outside taught classes. The others received AI-generated feedback, which highlighted the correct parts of their solution and flagged aspects they might have improved.
    After completing the six preparatory exercises, the trainees then took two similar follow-up tests — this time without any feedback. The tests were scored by the researchers, who assessed both their ‘diagnostic accuracy’ (whether the trainees had correctly identified cases of dyslexia or ADHD), and their diagnostic reasoning: how well they had used the available evidence to make this judgement.
    The average score for diagnostic reasoning among trainees who had received AI feedback during the six preliminary exercises was an estimated 10 percentage points higher than the score of those who had worked with the pre-written expert solutions.

  • From computer to benchtop: Researchers find clues to new mechanisms for coronavirus infections

    A group of bat viruses related to SARS-CoV-2 can also infect human cells but uses a different and unknown entryway.
    While researchers are still homing in on how these viruses infect cells, the findings could help in the development of new vaccines that prevent coronaviruses from causing another pandemic.
    Publishing in the journal eBioMedicine, a team of Washington State University researchers used a computational approach based on network science to distinguish coronaviruses that can infect human cells from those that can’t. The researchers then confirmed their computational results in the laboratory, showing that a specific cluster of viruses can infect both human and bat cells.
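    To make the general idea concrete, the sketch below builds a toy similarity network among made-up sequences, links viruses whose sequences are sufficiently alike, and flags any cluster containing a known human-infecting virus. It is only a schematic of a network-science style analysis; the sequences, threshold and overall workflow are assumptions, not the WSU team’s actual method.
```python
# Illustrative network-science sketch: cluster viruses by toy sequence
# similarity and flag clusters that contain a known human-infecting virus.
# Sequences and the threshold are made up for demonstration purposes.
import itertools
import networkx as nx

toy_sequences = {            # virus name -> toy sequence (not real data)
    "SARS-CoV-2": "NGVEGFNCYF",
    "bat-virus-A": "NGVKGFNCYF",
    "bat-virus-B": "NGVEGFNCYY",
    "bat-virus-C": "QTSNFRVQPT",
}
known_human_infecting = {"SARS-CoV-2"}

def similarity(a, b):
    """Fraction of positions with the same residue (toy metric)."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

G = nx.Graph()
G.add_nodes_from(toy_sequences)
for u, v in itertools.combinations(toy_sequences, 2):
    s = similarity(toy_sequences[u], toy_sequences[v])
    if s >= 0.8:                      # arbitrary threshold for this sketch
        G.add_edge(u, v, weight=s)

# Any virus in the same cluster as a known human-infecting one gets flagged
# as a candidate for experimental confirmation in the lab.
for component in nx.connected_components(G):
    if component & known_human_infecting:
        print("Candidate human-infecting cluster:", sorted(component))
```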
    “What we find with these viruses is that they’re able to get into the cells through another mechanism or receptor, and that has a lot of implications for how, and if, they would be able to infect us,” said Michael Letko, co-senior author and assistant professor in the Paul Allen School of Global Health.
    Cross-species transmission of coronaviruses poses a serious threat to global health. While numerous coronaviruses have been discovered in wildlife, researchers haven’t been able to predict which pose the greatest threat to humans and are left scrambling to develop vaccines after viruses spill over.
    “As we encroach more and more on places where there are human and animal interactions, it’s quite likely that there will be many viruses that will need to be examined,” said Shira Broschat, professor in the School of Electrical Engineering and Computer Science, also co-senior author on the paper.