More stories

  • Rivers might not be as resilient to drought as once thought

    Rivers ravaged by a lengthy drought may not be able to recover, even after the rains return. Seven years after the Millennium drought baked southeastern Australia, a large fraction of the region’s rivers still show no signs of returning to their predrought water flow, researchers report in the May 14 Science.

    There’s “an implicit assumption that no matter how big a disturbance is, the water will always come back — it’s just a matter of how long it takes,” says Tim Peterson, a hydrologist at Monash University in Melbourne, Australia. “I’ve never been satisfied with that.”

    The years-long drought in southeastern Australia, which began sometime between 1997 and 2001 and lasted until 2010, offered a natural experiment to test this assumption, he says. “It wasn’t the most severe drought” the region has ever experienced, but it was the longest period of low rainfall in the region since about 1900.

    Peterson and colleagues analyzed annual and seasonal streamflow rates in 161 river basins in the region from before, during and after the drought. By 2017, they found, 37 percent of those river basins still weren’t seeing the amount of water flow that they had predrought. Furthermore, of those low-flow rivers, the vast majority — 80 percent — also showed no signs that they might recover in the future.
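
    The paper reaches these numbers by fitting hydrological models that can switch between distinct flow states. Purely as a rough illustration of the underlying bookkeeping, the sketch below flags basins whose post-drought flow stays below a predrought baseline; the windows, the 10 percent tolerance and the synthetic data are all assumptions for the example, not the study’s method.

    ```python
    # Illustrative bookkeeping only: flag basins whose mean post-drought flow
    # remains below a predrought baseline. The study itself fits hydrological
    # models with distinct states; the windows and tolerance here are assumed.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    basins = [f"basin_{i}" for i in range(161)]
    years = np.arange(1990, 2018)
    # Synthetic annual streamflow (stand-in for gauge records), one column per basin
    flow = pd.DataFrame(
        rng.lognormal(mean=5.0, sigma=0.3, size=(len(years), len(basins))),
        index=years, columns=basins)

    pre = flow.loc[1990:1996].mean()    # predrought baseline (hypothetical window)
    post = flow.loc[2011:2017].mean()   # post-drought window, after the 2010 rains
    not_recovered = post < 0.9 * pre    # assumed 10 percent tolerance
    print(f"{not_recovered.mean():.0%} of basins still below predrought flow")
    ```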

    Many of southeastern Australia’s rivers had bounced back from previous droughts, including a severe but brief episode in 1983. But even heavy rains in 2010, marking the end of the Millennium drought, weren’t enough to return these basins to their earlier state. That suggests that there is, after all, a limit to rivers’ resilience.

    What’s changed in these river basins isn’t yet clear, Peterson says. Postdrought precipitation was similar to predrought precipitation, but the water isn’t ending up in the streamflow, so it must be going somewhere else. The team examined various possibilities: The water infiltrated into the ground and was stored as groundwater, or it never made it to the ground at all — possibly intercepted by leaves and then evaporated back to the air.

    But none of these explanations were borne out by studies of these sites, the researchers report. The remaining, and most probable, possibility is that the environment has changed: Water is evaporating from soils and transpiring from plants more quickly than it did predrought.

    Peterson has long suggested that under certain conditions rivers might not, in fact, recover — and this study confirms that theoretical work, says Peter Troch, a hydrologist at the University of Arizona in Tucson. Enhanced soil evaporation and plant transpiration are examples of such positive feedbacks, processes that can enhance the impacts of a drought. “Until his work, this lack of resilience was not anticipated, and all hydrological models did not account for such possibility,” Troch says.

    “This study will definitely inspire other researchers to undertake such work,” he notes. “Hopefully we can gain more insight into the functioning of [river basins’] response to climate change.”

    Indeed, the finding that rivers have “finite resilience” to drought is of particular concern as the planet warms and lengthier droughts become more likely, writes hydrologist Flavia Tauro in a commentary in the same issue of Science.

  • New evidence for electron's dual nature found in a quantum spin liquid

    A new discovery led by Princeton University could upend our understanding of how electrons behave under extreme conditions in quantum materials. The finding provides experimental evidence that this familiar building block of matter behaves as if it is made of two particles: one particle that gives the electron its negative charge and another that supplies its magnet-like property, known as spin.
    “We think this is the first hard evidence of spin-charge separation,” said Nai Phuan Ong, Princeton’s Eugene Higgins Professor of Physics and senior author on the paper published this week in the journal Nature Physics.
    The experimental results fulfill a prediction made decades ago to explain one of the most mind-bending states of matter, the quantum spin liquid. In all materials, the spin of an electron can point either up or down. In the familiar magnet, all of the spins uniformly point in one direction throughout the sample when the temperature drops below a critical value.
    However, in spin liquid materials, the spins are unable to establish a uniform pattern even when cooled very close to absolute zero. Instead, the spins are constantly changing in a tightly coordinated, entangled choreography. The result is one of the most entangled quantum states ever conceived, a state of great interest to researchers in the growing field of quantum computing.
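    A two-spin toy model captures the flavor of that statement: in a singlet state, neither spin points anywhere on average, yet the pair is perfectly anticorrelated. The sketch below is only an analogy in miniature; a real spin liquid entangles macroscopically many spins.

    ```python
    # Two spins in a singlet: each spin alone has <Z> = 0 (no preferred
    # direction), but the pair is perfectly anticorrelated (<ZZ> = -1).
    import numpy as np

    Z = np.diag([1.0, -1.0])
    I = np.eye(2)
    up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

    expval = lambda op: singlet @ op @ singlet
    print(expval(np.kron(Z, I)))  # 0.0: spin 1 has no preferred direction
    print(expval(np.kron(I, Z)))  # 0.0: neither does spin 2
    print(expval(np.kron(Z, Z)))  # -1.0: yet they always point opposite ways
    ```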
    To describe this behavior mathematically, Nobel prize-winning Princeton physicist Philip Anderson (1923-2020), who first predicted the existence of spin liquids in 1973, proposed an explanation: in the quantum regime an electron may be regarded as composed of two particles, one bearing the electron’s negative charge and the other containing its spin. Anderson called the spin-containing particle a spinon.
    In this new study, the team searched for signs of the spinon in a spin liquid composed of ruthenium and chlorine atoms. At temperatures a fraction of a Kelvin above absolute zero (or roughly -452 degrees Fahrenheit) and in the presence of a high magnetic field, ruthenium chloride crystals enter the spin liquid state.

  • Quantum machine learning hits a limit

    A new theorem from the field of quantum machine learning has poked a major hole in the accepted understanding about information scrambling.
    “Our theorem implies that we are not going to be able to use quantum machine learning to learn typical random or chaotic processes, such as black holes. In this sense, it places a fundamental limit on the learnability of unknown processes,” said Zoe Holmes, a post-doc at Los Alamos National Laboratory and coauthor of the paper describing the work published today in Physical Review Letters.
    “Thankfully, because most physically interesting processes are sufficiently simple or structured so that they do not resemble a random process, the results don’t condemn quantum machine learning, but rather highlight the importance of understanding its limits,” Holmes said.
    In the classic Hayden-Preskill thought experiment, a fictitious Alice tosses information such as a book into a black hole that scrambles the text. Her companion, Bob, can still retrieve it using entanglement, a unique feature of quantum physics. However, the new work proves that fundamental constraints on Bob’s ability to learn the particulars of a given black hole’s physics mean that reconstructing the information in the book is going to be very difficult or even impossible.
    “Any information run through an information scrambler such as a black hole will reach a point where the machine learning algorithm stalls out on a barren plateau and thus becomes untrainable. That means the algorithm can’t learn scrambling processes,” said Andrew Sornborger, a computer scientist at Los Alamos and coauthor of the paper. Sornborger is director of the Quantum Science Center at Los Alamos and leader of the Center’s algorithms and simulation thrust. The Center is a multi-institutional collaboration led by Oak Ridge National Laboratory.
    Barren plateaus are regions in the mathematical space of optimization algorithms where the ability to solve the problem becomes exponentially harder as the size of the system being studied increases. This phenomenon, which severely limits the trainability of large scale quantum neural networks, was described in a recent paper by a related Los Alamos team.
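    The simplest way to see the flavor of a barren plateau is concentration of measure: expectation values over random quantum states cluster exponentially tightly around their mean as the number of qubits grows, so gradients estimated from them vanish. The numpy demo below shows only this toy version, not the circuit-level analysis in the papers.

    ```python
    # Toy barren-plateau intuition: for Haar-random n-qubit states, the variance
    # of <Z> on one qubit shrinks as 1/(2^n + 1), i.e., exponentially in n.
    import numpy as np

    rng = np.random.default_rng(1)
    for n in range(2, 13):
        d = 2 ** n
        # Normalized complex Gaussian vectors are Haar-distributed pure states
        psi = rng.normal(size=(1000, d)) + 1j * rng.normal(size=(1000, d))
        psi /= np.linalg.norm(psi, axis=1, keepdims=True)
        probs = np.abs(psi) ** 2
        # <Z on qubit 0>: +1 on the first half of the basis states, -1 on the second
        exp_z = probs[:, : d // 2].sum(axis=1) - probs[:, d // 2:].sum(axis=1)
        print(f"n={n:2d}  Var[<Z>] = {exp_z.var():.1e}  theory = {1 / (d + 1):.1e}")
    ```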
    “Recent work has identified the potential for quantum machine learning to be a formidable tool in our attempts to understand complex systems,” said Andreas Albrecht, a co-author of the research. Albrecht is Director of the Center for Quantum Mathematics and Physics (QMAP) and Distinguished Professor, Department of Physics and Astronomy, at UC Davis. “Our work points out fundamental considerations that limit the capabilities of this tool.”
    In the Hayden-Preskill thought experiment, Alice attempts to destroy a secret, encoded in a quantum state, by throwing it into nature’s fastest scrambler, a black hole. Bob and Alice are the fictitious quantum dynamic duo typically used by physicists to represent agents in a thought experiment.
    “You might think that this would make Alice’s secret pretty safe,” Holmes said, “but Hayden and Preskill argued that if Bob knows the unitary dynamics implemented by the black hole, and shares a maximally entangled state with the black hole, it is possible to decode Alice’s secret by collecting a few additional photons emitted from the black hole. But this prompts the question, how could Bob learn the dynamics implemented by the black hole? Well, not by using quantum machine learning, according to our findings.”
    A key piece of the new theorem developed by Holmes and her coauthors assumes no prior knowledge of the quantum scrambler, a situation unlikely to occur in real-world science.
    “Our work draws attention to the tremendous leverage even small amounts of prior information may play in our ability to extract information from complex systems and potentially reduce the power of our theorem,” Albrecht said. “Our ability to do this can vary greatly among different situations (as we scan from theoretical consideration of black holes to concrete situations controlled by humans here on earth). Future research is likely to turn up interesting examples, both of situations where our theorem remains fully in force, and others where it can be evaded.”

  • How AIs ask for personal information is important for gaining user trust

    People may be reluctant to give their personal information to artificial intelligence (AI) systems, even though the systems need it to provide more accurate and personalized services. A new study reveals, however, that the manner in which an AI system asks users for information can make a difference.
    In the study, Penn State researchers report that users responded differently depending on whether an AI offered help to the user or asked the user for help, and that this response influenced whether users trusted the AI with their personal information. They add that these introductions could be designed to both increase users’ trust and raise their awareness of the importance of personal information.
    The researchers, who presented their findings today at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems, the premier international conference on human-computer interaction research, found that people who are familiar with technology — power users — preferred AIs that are help-seeking, while non-expert users were more likely to prefer AIs that introduce themselves as both help-seekers and help-providers.
    As AIs become increasingly ubiquitous, developers need to create systems that can better relate to humans, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory.
    “There’s a need for us to re-think how AI systems talk to human users,” said Sundar. “This has come to the surface because there are rising concerns about how AI systems are starting to take over our lives and know more about us than we realize. So, given these concerns, it may be better if we start to switch from the traditional dialogue scripts into a more collaborative, cooperative communication that acknowledges the agency of the user.”
    Here to help?
    The researchers said that traditional AI dialogues usually offer introductions that frame their role as a helper.
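    As a toy illustration of the design space the study probes, the sketch below selects an introduction script by framing. The phrasings and the power-user flag are invented for the example, not the study’s actual stimuli.

    ```python
    # Hypothetical introduction scripts contrasting the two framings the study
    # manipulates; the wording here is invented, not the experiment's stimuli.
    HELP_PROVIDING = ("Hi, I'm your assistant. I can give you more accurate "
                      "recommendations if you share a few preferences.")
    HELP_SEEKING = ("Hi, I'm your assistant. I'm still learning, so could you "
                    "help me by sharing a few preferences?")

    def introduction(power_user: bool) -> str:
        # Pattern reported in the study: power users preferred help-seeking
        # intros; non-experts preferred intros combining both frames.
        if power_user:
            return HELP_SEEKING
        return HELP_SEEKING + " " + HELP_PROVIDING

    print(introduction(power_user=True))
    ```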

  • New research may explain shortages in STEM careers

    A new study by the University of Georgia revealed that more college students change majors within the STEM pipeline than leave the career path of science, technology, engineering and mathematics altogether.
    Funded by a National Institutes of Health grant and a National Science Foundation Postdoctoral Fellowship, and conducted in collaboration with the University of Wisconsin, the study examined interviews, surveys and institutional data from 1,193 students at a U.S. midwestern university, followed for more than six years, to observe a single area of the STEM pipeline: biomedical fields of study.
    Out of 921 students who stayed in the biomedical pipeline through graduation, almost half changed their career plans within the biomedical fields.
    “This was almost double the number of students who left biomedical fields altogether,” said Emily Rosenzweig, co-author of the study and assistant professor in the Mary Frances Early College of Education’s department of educational psychology. “This suggests that if we want to fully understand why there are shortages in certain STEM careers, we need to look at those who change plans within the pipeline, not just those who leave it.”
    Rosenzweig examined students’ motivations for changing career plans and found that students were more often inspired to make a change because a new field seemed more attractive.
    This finding pointed to an underexplored research area to which educators, policymakers and administrators should devote more attention in the future. Rather than focusing only on what makes students disenchanted with a particular career, they should also consider the factors that make alternative career paths seem valuable to students.
    “The sheer number of changes made by students who remained in the biomedical pipeline highlights the divergence of paths students take in their career decision-making,” Rosenzweig said. “We should not simply assume that students are staying on course and progressing smoothly toward intended careers just because they have not left the [STEM] pipeline.”
    Ultimately, the research provides new insights about students’ motivations for choosing various careers inside the STEM pipeline and demonstrates the importance of understanding this group if schools are to promote retention in particular STEM careers.
    Story Source:
    Materials provided by the University of Georgia. Original written by Lauren Leathers.

  • Interactive typeface for digital text

    AdaptiFont has recently been presented at CHI, the leading conference on human factors in computing systems.
    Language is without doubt the most pervasive medium for exchanging knowledge between humans. However, spoken language or abstract text needs to be made visible in order to be read, be it in print or on screen.
    How does the way a text looks affect its readability, that is, how it is read, processed, and understood? A team at TU Darmstadt’s Centre for Cognitive Science investigated this question at the intersection of perceptual science, cognitive science, and linguistics. Electronic text is even more complex: texts are read on different devices under different external conditions. And although any digital text is formatted initially, users might resize it on screen, change the brightness and contrast of the display, or even select a different font when reading text on the web.
    The team of researchers from TU Darmstadt has now developed a system that leaves font design to the user’s visual system. First, they needed a way of synthesizing new fonts. This was achieved with a machine learning algorithm that learned the structure of fonts by analysing 25 popular and classic typefaces. The system can create an infinite number of new fonts that are intermediate forms of existing ones — for example, visually halfway between Helvetica and Times New Roman.
    Some fonts may make the text more difficult to read and thus slow the reader down; other fonts may help the user read more fluently. By measuring reading speed, a second algorithm can then generate further typefaces that increase the reading speed.
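    The release does not spell out the algorithms, so the following is a minimal sketch under stated assumptions: fonts are points in a learned latent space, new fonts are interpolations of known ones, and a simple optimizer nudges the current font toward higher measured reading speed. The latent codes, measure_reading_speed and the hill-climbing loop are stand-ins, not the published system.

    ```python
    # Sketch of the two-stage idea under assumptions: interpolate in a learned
    # latent font space, then optimize measured reading speed. All names and
    # the synthetic speed model are hypothetical stand-ins.
    import numpy as np

    rng = np.random.default_rng(2)
    helvetica = rng.normal(size=32)        # stand-in latent codes for two of
    times_new_roman = rng.normal(size=32)  # the 25 learned typefaces

    def interpolate(a, b, t):
        """t = 0.5 gives a font 'visually halfway' between a and b."""
        return (1 - t) * a + t * b

    def measure_reading_speed(latent):
        # Hypothetical stand-in for timing a user reading rendered text
        target = interpolate(helvetica, times_new_roman, 0.7)
        return 250.0 - np.sum((latent - target) ** 2)

    font = interpolate(helvetica, times_new_roman, 0.5)
    speed = measure_reading_speed(font)
    for _ in range(200):  # simple hill climbing on measured reading speed
        candidate = font + rng.normal(scale=0.05, size=32)
        s = measure_reading_speed(candidate)
        if s > speed:
            font, speed = candidate, s
    print(f"reading speed after adaptation: {speed:.1f}")
    ```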
    In a laboratory experiment in which users read texts over one hour, the research team showed that their algorithm indeed generates new fonts that increase individual users’ reading speed. Interestingly, all readers had their own personalized font that made reading especially easy for them. However, this individual favorite typeface does not necessarily fit all situations. “AdaptiFont therefore can be understood as a system which creates fonts for an individual dynamically and continuously while reading, which maximizes the reading speed at the time of use. This may depend on the content of the text, whether you are tired, or perhaps are using different display devices,” explains Professor Constantin A. Rothkopf, Centre for Cognitive Science and head of the Institute of Psychology of Information Processing at TU Darmstadt.
    The AdaptiFont system was recently presented to the scientific community at the Conference on Human Factors in Computing Systems (CHI). A patent application has been filed. Future possible applications are with all electronic devices on which text is read.
    Story Source:
    Materials provided by Technische Universität Darmstadt.

  • Brain-computer interface turns mental handwriting into text on screen

    Scientists are exploring a number of ways for people with disabilities to communicate with their thoughts. The newest and fastest turns back to a vintage means for expressing oneself: handwriting.
    For the first time, researchers have deciphered the brain activity associated with trying to write letters by hand. Working with a participant with paralysis who has sensors implanted in his brain, the team used an algorithm to identify letters as he attempted to write them. Then, the system displayed the text on a screen — in real time.
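    The study itself trained a recurrent neural network on multielectrode recordings. Purely to show the shape of such a decoding pipeline, the toy sketch below maps synthetic "firing rate" windows to letters with a nearest-centroid rule; every name and size in it is an assumption for the example.

    ```python
    # Toy decoding loop: classify each time window of neural features as the
    # nearest letter template. The real system used a recurrent network.
    import numpy as np

    rng = np.random.default_rng(3)
    letters = list("abcdefghijklmnopqrstuvwxyz")
    n_channels = 192  # hypothetical channel count for two 96-electrode arrays
    centroids = {c: rng.normal(size=n_channels) for c in letters}  # "trained" templates

    def decode(window: np.ndarray) -> str:
        """Return the letter whose template is closest to this window."""
        return min(centroids, key=lambda c: np.linalg.norm(window - centroids[c]))

    # Simulate attempting to write "hello": each window is a template plus noise
    windows = (centroids[c] + rng.normal(scale=0.5, size=n_channels) for c in "hello")
    print("".join(decode(w) for w in windows))  # with low noise this recovers "hello"
    ```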
    The innovation could, with further development, let people with paralysis rapidly type without using their hands, says study coauthor Krishna Shenoy, a Howard Hughes Medical Institute Investigator at Stanford University who jointly supervised the work with Jaimie Henderson, a Stanford neurosurgeon.
    By attempting handwriting, the study participant typed 90 characters per minute — more than double the previous record for typing with such a “brain-computer interface,” Shenoy and his colleagues report in the journal Nature on May 12, 2021.
    This technology and others like it have the potential to help people with all sorts of disabilities, says Jose Carmena, a neural engineer at the University of California, Berkeley, who was not involved in the study. Though the findings are preliminary, he says, “it’s a big advancement in the field.”
    Brain-computer interfaces convert thought into action, Carmena says. “This paper is a perfect example: the interface decodes the thought of writing and produces the action.”
    Thought-powered communication

  • How smartphones can help detect ecological change

    Leipzig/Jena/Ilmenau. Mobile apps like Flora Incognita that allow automated identification of wild plants can not only identify plant species, but also uncover large-scale ecological patterns. These patterns are surprisingly similar to the ones derived from long-term inventory data of the German flora, even though they have been acquired over much shorter time periods and are influenced by user behaviour. This opens up new perspectives for the rapid detection of biodiversity changes. These are the key results of a study led by a team of researchers from Central Germany, which has recently been published in Ecography.
    With the help of artificial intelligence, plant species today can be classified with high accuracy. Smartphone applications leverage this technology to enable users to easily identify plant species in the field, giving laypersons access to biodiversity at their fingertips. Against the backdrop of climate change, habitat loss and land-use change, these applications may serve another purpose: by gathering information on the locations of identified plant species, they create valuable datasets, potentially providing researchers with information on changing environmental conditions.
    But is this information reliable — as reliable as the information provided by data collected over long time periods? A team of researchers from the German Centre for Integrative Biodiversity Research (iDiv), the Remote Sensing Centre for Earth System Research (RSC4Earth) of Leipzig University (UL) and Helmholtz Centre for Environmental Research (UFZ), the Max Planck Institute for Biogeochemistry (MPI-BGC) and Technical University Ilmenau wanted to find an answer to this question. The researchers analysed data collected with the mobile app Flora Incognita between 2018 and 2019 in Germany and compared it to the FlorKart database of the German Federal Agency for Nature Conservation (BfN). This database contains long-term inventory data collected by over 5,000 floristic experts over a period of more than 70 years.
    Mobile app uncovers macroecological patterns in Germany
    The researchers report that the Flora Incognita data, collected over only two years, allowed them to uncover macroecological patterns in Germany similar to those derived from long-term inventory data of German flora. The data was therefore also a reflection of the effects of several environmental drivers on the distribution of different plant species.
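    One simple way to make such a comparison concrete is to reduce both datasets to species presences on a common grid and rank-correlate per-cell species richness; the sketch below does this with synthetic records, and is only an illustration of the idea, not the study’s actual statistics.

    ```python
    # Illustrative comparison: per-grid-cell species richness from two sources,
    # compared by Spearman rank correlation. Records here are synthetic and
    # independent, so the correlation will be near zero; on real data, a high
    # value would indicate the app reproduces the inventory's spatial patterns.
    import numpy as np
    import pandas as pd
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    cells = np.arange(3000)  # hypothetical grid cells covering Germany

    def synthetic(n):  # (cell, species) presence records
        return pd.DataFrame({"cell": rng.choice(cells, size=n),
                             "species": rng.integers(0, 4800, size=n)})

    app = synthetic(900_000)          # ~ two years of app observations
    inventory = synthetic(2_000_000)  # stand-in for long-term expert records

    def richness(df):
        return df.groupby("cell")["species"].nunique().reindex(cells, fill_value=0)

    rho, _ = spearmanr(richness(app), richness(inventory))
    print(f"rank correlation of per-cell species richness: {rho:.2f}")
    ```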
    However, directly comparing the two datasets revealed major differences between the Flora Incognita data and the long-term inventory data in regions with a low human population density. “Of course, how much data is collected in a region strongly depends on the number of smartphone users in that region,” said last author Dr. Jana Wäldchen from MPI-BGC, one of the developers of the mobile app. Deviations in the data were therefore more pronounced in rural areas, except for well-known tourist destinations such as the Zugspitze, Germany’s highest mountain, or Amrum, an island on the North Sea coast.
    User behaviour also influences which plant species are recorded by the mobile app. “The plant observations carried out with the app reflect what users see and what they are interested in,” said Jana Wäldchen. Common and conspicuous species were recorded more often than rare and inconspicuous species. Nonetheless, the large quantity of plant observations still allows a reconstruction of familiar biogeographical patterns. For their study, the researchers had access to more than 900,000 data entries created within the first two years after the app had been launched.
    Automated species recognition bears great potential
    The study shows the potential of this kind of data collection for biodiversity and environmental research, which could soon be integrated into strategies for long-term inventories. “We are convinced that automated species recognition bears much greater potential than previously thought and that it can contribute to a rapid detection of biodiversity changes,” said first author Miguel Mahecha, professor at UL and iDiv member. In the future, a growing number of users of apps like Flora Incognita could help detect and analyse ecosystem changes worldwide in real time.
    The Flora Incognita mobile app was developed jointly by the research group of Dr. Jana Wäldchen at MPI-BGC and the group of Professor Patrick Mäder at TU Ilmenau. It is the first plant identification app in Germany using deep neural networks (deep learning) in this context. Fed thousands of plant images that have been identified by experts, it can already identify over 4,800 plant species.
    “When we developed Flora Incognita, we realized there was a huge potential and growing interest in improved technologies for the detection of biodiversity data. As computer scientists we are happy to see how our technologies make an important contribution to biodiversity research,” said co-author Patrick Mäder, professor at TU Ilmenau.