More stories

  • Artificial intelligence enhances monitoring of threatened marbled murrelet

    Artificial intelligence analysis of data gathered by acoustic recording devices is a promising new tool for monitoring the marbled murrelet and other secretive, hard-to-study species, research by Oregon State University and the U.S. Forest Service has shown.
    The threatened marbled murrelet is an iconic Pacific Northwest seabird that’s closely related to puffins and murres, but unlike those birds, murrelets raise their young as far as 60 miles inland in mature and old-growth forests.
    “There are very few species like it,” said co-author Matt Betts of the OSU College of Forestry. “And there’s no other bird that feeds in the ocean and travels such long distances to inland nest sites. This behavior is super unusual and it makes studying this bird really challenging.”
    A research team led by Adam Duarte of the U.S. Forest Service’s Pacific Northwest Research Station used data from acoustic recorders, originally placed to assist in monitoring northern spotted owl populations, at thousands of locations in federally managed forests in the Oregon Coast Range and Washington’s Olympic Peninsula.
    Researchers developed a machine learning algorithm known as a convolutional neural network to mine the recordings for murrelet calls.
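    The release describes the approach without code, but the general shape of such a detector can be sketched: a small convolutional network that classifies short spectrogram clips as containing a murrelet call or not. The layer sizes, input shape and two-class setup below are illustrative assumptions, not the architecture the researchers used.

    ```python
    # Illustrative sketch only: a small CNN that labels fixed-size spectrogram
    # clips as "murrelet call" vs. "background". Layer sizes and input shape are
    # assumptions for illustration, not the network used in the study.
    import torch
    import torch.nn as nn

    class CallDetector(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # collapse the time and frequency axes
            )
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
            # spectrogram: (batch, 1, n_mel_bins, n_time_frames)
            x = self.features(spectrogram)
            return self.classifier(x.flatten(1))

    # Example: score a batch of 8 one-channel mel-spectrogram clips.
    logits = CallDetector()(torch.randn(8, 1, 128, 256))
    print(logits.shape)  # torch.Size([8, 2])
    ```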
    Findings, published in Ecological Indicators, were tested against known murrelet population data and determined to be correct at a rate exceeding 90%, meaning the recorders and AI are able to provide an accurate look at how much murrelets are calling in a given area.
    “Next, we’re testing whether murrelet sounds can actually predict reproduction and occupancy in the species, but that is still a few steps off,” Betts said.

    The dove-sized marbled murrelet spends most of its time in coastal waters eating krill, other invertebrates and forage fish such as herring, anchovies, smelt and capelin. Murrelets produce at most one offspring per year, and only if the nest is successful; their young require forage fish for proper growth and development.
    The birds typically lay their single egg high in a tree on a horizontal limb at least 4 inches in diameter. Steller’s jays, crows and ravens are the main predators of murrelet nests.
    Along the West Coast, marbled murrelets are found regularly from Santa Cruz, California, to the Aleutian Islands. The species is listed as threatened under the U.S. Endangered Species Act in Washington, Oregon and California.
    “The greatest number of detections in our study typically occurred where late-successional forest dominates, and nearer to ocean habitats,” Duarte said.
    Late-successional refers to mature and old-growth forests.
    “Our results offer considerable promise for species distribution modeling and long-term population monitoring for rare species,” Duarte said. “Monitoring that’s far less labor intensive than nest searching via telemetry, ground-based nest searches or traditional audio/visual techniques.”
    Matthew Weldy of the College of Forestry, Zachary Ruff of the OSU College of Agricultural Sciences and Jonathon Valente, a former Oregon State postdoctoral researcher now at the U.S. Geological Survey, joined Betts and Duarte in the study, along with Damon Lesmeister and Julianna Jenkins of the Forest Service.
    Funding was provided by the Forest Service, the Bureau of Land Management and the National Park Service.

  • Science has an AI problem: This group says they can fix it

    AI holds the potential to help doctors find early markers of disease and to help policymakers avoid decisions that lead to war. But a growing body of evidence has revealed deep flaws in how machine learning is used in science, a problem that has swept through dozens of fields and implicated thousands of erroneous papers.
    Now an interdisciplinary team of 19 researchers, led by Princeton University computer scientists Arvind Narayanan and Sayash Kapoor, has published guidelines for the responsible use of machine learning in science.
    “When we graduate from traditional statistical methods to machine learning methods, there are a vastly greater number of ways to shoot oneself in the foot,” said Narayanan, director of Princeton’s Center for Information Technology Policy and a professor of computer science. “If we don’t have an intervention to improve our scientific standards and reporting standards when it comes to machine learning-based science, we risk not just one discipline but many different scientific disciplines rediscovering these crises one after another.”
    The authors say their work is an effort to stamp out this smoldering crisis of credibility that threatens to engulf nearly every corner of the research enterprise. A paper detailing their guidelines appeared May 1 in the journal Science Advances.
    Because machine learning has been adopted across virtually every scientific discipline, with no universal standards safeguarding the integrity of those methods, Narayanan said the current crisis, which he calls the reproducibility crisis, could become far more serious than the replication crisis that emerged in social psychology more than a decade ago.
    The good news is that a simple set of best practices can help resolve this newer crisis before it gets out of hand, according to the authors, who come from computer science, mathematics, social science and health research.
    “This is a systematic problem with systematic solutions,” said Kapoor, a graduate student who works with Narayanan and who organized the effort to produce the new consensus-based checklist.

    The checklist focuses on ensuring the integrity of research that uses machine learning. Science depends on the ability to independently reproduce results and validate claims. Otherwise, new work cannot be reliably built atop old work, and the entire enterprise collapses. While other researchers have developed checklists that apply to discipline-specific problems, notably in medicine, the new guidelines start with the underlying methods and apply them to any quantitative discipline.
    One of the main takeaways is transparency. The checklist calls on researchers to provide detailed descriptions of each machine learning model, including the code, the data used to train and test the model, the hardware specifications used to produce the results, the experimental design, the project’s goals and any limitations of the study’s findings. The standards are flexible enough to accommodate a wide range of nuance, including private datasets and complex hardware configurations, according to the authors.
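    As a rough sketch of the kind of reporting the checklist asks for (the field names below are illustrative, not taken from the published checklist), those items could accompany a study as a simple structured record:

    ```python
    # Hypothetical reporting record capturing the kinds of items the checklist
    # calls for; field names and example values are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class MLStudyReport:
        code_url: str                # where the modeling code is archived
        training_data: str           # description or DOI of the training data
        test_data: str               # held-out data used for evaluation
        hardware: str                # hardware used to produce the results
        experimental_design: str     # how splits, tuning and metrics were chosen
        goals: str                   # the scientific claim the model supports
        limitations: list[str] = field(default_factory=list)

    report = MLStudyReport(
        code_url="https://example.org/archive/my-study",  # placeholder URL
        training_data="Public survey waves 1-3 (2010-2016)",
        test_data="Survey wave 4 (2018), never used for tuning",
        hardware="Single NVIDIA A100 GPU, 64 GB host RAM",
        experimental_design="5-fold CV for tuning; one final run on the test wave",
        goals="Test whether early covariates predict outcome X",
        limitations=["Sample limited to one country", "Self-reported outcomes"],
    )
    ```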
    While the increased rigor of these new standards might slow the publication of any given study, the authors believe wide adoption of these standards would increase the overall rate of discovery and innovation, potentially by a lot.
    “What we ultimately care about is the pace of scientific progress,” said sociologist Emily Cantrell, one of the lead authors, who is pursuing her Ph.D. at Princeton. “By making sure the papers that get published are of high quality and that they’re a solid base for future papers to build on, that potentially then speeds up the pace of scientific progress. Focusing on scientific progress itself and not just getting papers out the door is really where our emphasis should be.”
    Kapoor concurred. The errors hurt. “At the collective level, it’s just a major time sink,” he said. That time costs money. And that money, once wasted, could have catastrophic downstream effects, limiting the kinds of science that attract funding and investment, tanking ventures that are inadvertently built on faulty science, and discouraging countless numbers of young researchers.
    In working toward a consensus about what should be included in the guidelines, the authors said they aimed to strike a balance: simple enough to be widely adopted, comprehensive enough to catch as many common mistakes as possible.
    They say researchers could adopt the standards to improve their own work; peer reviewers could use the checklist to assess papers; and journals could adopt the standards as a requirement for publication.
    “The scientific literature, especially in applied machine learning research, is full of avoidable errors,” Narayanan said. “And we want to help people. We want to keep honest people honest.”

  • Physicists build new device that is foundation for quantum computing

    Scientists led by the University of Massachusetts Amherst have adapted a device called a microwave circulator for use in quantum computers, allowing them for the first time to precisely tune the degree of nonreciprocity between a qubit, the fundamental unit of quantum computing, and a microwave-resonant cavity. The ability to precisely tune the degree of nonreciprocity is an important tool in quantum information processing. In the process, the team, including collaborators from the University of Chicago, derived a general and widely applicable theory that simplifies and expands upon older understandings of nonreciprocity, so that future work on similar topics can take advantage of the team’s model even when using different components and platforms. The research was published recently in Science Advances.
    Quantum computing differs fundamentally from the bit-based computing we all do every day. A bit is a piece of information typically expressed as a 0 or a 1. Bits are the basis for all the software, websites and emails that make up our electronic world.
    By contrast, quantum computing relies on “quantum bits,” or “qubits,” which are like regular bits except that they are represented by the “quantum superposition” of two states of a quantum object. Matter in a quantum state behaves very differently, which means that qubits aren’t relegated to being only 0s or 1s — they can be both at the same time in a way that sounds like magic, but which is well defined by the laws of quantum mechanics. This property of quantum superposition leads to the increased power capabilities of quantum computers.
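    In the standard notation (general background, not specific to this experiment), a qubit state is a superposition of the two basis states, with the squared amplitudes giving the probabilities of measuring 0 or 1:

    ```latex
    % A qubit is a superposition of the basis states |0> and |1>:
    \[
      \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle ,
      \qquad |\alpha|^2 + |\beta|^2 = 1 ,
    \]
    % where a measurement yields 0 with probability |alpha|^2 and 1 with
    % probability |beta|^2.
    ```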
    Furthermore, a property called “nonreciprocity” can create additional avenues for quantum computing to leverage the potential of the quantum world.
    “Imagine a conversation between two people,” says Sean van Geldern, graduate student in physics at UMass Amherst and one of the paper’s authors. “Total reciprocity is when each of the people in that conversation is sharing an equal amount of information. Nonreciprocity is when one person is sharing a little bit less than the other.”
    “This is desirable in quantum computing,” says senior author Chen Wang, assistant professor of physics at UMass Amherst, “because there are many computing scenarios where you want to give plenty of access to data without giving anyone the power to alter or degrade that data.”
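    In standard scattering terms (textbook background, not a description of this particular device), reciprocity for a two-port microwave network means transmission is identical in both directions, and a circulator is the canonical element that breaks this symmetry:

    ```latex
    % Reciprocity for a two-port network: the scattering matrix is symmetric,
    % so transmission from port 1 to port 2 equals transmission from port 2 to
    % port 1. Nonreciprocity is any departure from this symmetry.
    \[
      \text{reciprocal:}\quad S_{21} = S_{12} ,
      \qquad
      \text{nonreciprocal:}\quad \lvert S_{21} \rvert \neq \lvert S_{12} \rvert .
    \]
    % An ideal three-port circulator routes signals 1 -> 2 -> 3 -> 1 and blocks
    % the reverse direction, the limiting case of full nonreciprocity.
    ```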
    To control nonreciprocity, lead author Ying-Ying Wang, graduate student in physics at UMass Amherst, and her co-authors ran a series of simulations to determine the design and properties their circulator would need to have in order for them to vary its nonreciprocity. They then built their circulator and ran a host of experiments not just to prove their concept, but to understand exactly how their device enabled nonreciprocity. In the course of doing so, they were able to revise their model, which contained 16 parameters detailing how to build their specific device, to a simpler and more general model of only six parameters. This revised, more general model is much more useful than the initial, more specific one, because it is widely applicable to a range of future research efforts.

    The “integrated nonreciprocal device” that the team built looks like a “Y.” At the center of the “Y” is the circulator, which is like a traffic roundabout for the microwave signals mediating the quantum interactions. One of the legs is the cavity port, a resonant superconducting cavity hosting an electromagnetic field. Another leg of the “Y” holds the qubit, printed on a sapphire chip. The final leg is the output port.
    “If we vary the superconducting electromagnetic field by bombarding it with photons,” says Ying-Ying Wang, “we see that that qubit reacts in a predictable and controllable way, which means that we can adjust exactly how much reciprocity we want. And the simplified model that we produced describes our system in such a way that the external parameters can be calculated to tune an exact degree of nonreciprocity.”
    “This is the first demonstration of embedding nonreciprocity into a quantum computing device,” says Chen Wang, “and it opens the door to engineering more sophisticated quantum computing hardware.”
    Funding for this research was provided by the U.S. Department of Energy, the Army Research Office, Simons Foundation, Air Force Office of Scientific Research, the U.S. National Science Foundation, and the Laboratory for Physical Sciences Qubit Collaboratory.

  • Researchers unlock potential of 2D magnetic devices for future computing

    Imagine a future where computers can learn and make decisions in ways that mimic human thinking, but at a speed and efficiency that are orders of magnitude greater than the current capability of computers.
    A research team at the University of Wyoming created an innovative method to control tiny magnetic states within ultrathin, two-dimensional (2D) van der Waals magnets — a process akin to how flipping a light switch controls a bulb.
    “Our discovery could lead to advanced memory devices that store more data and consume less power or enable the development of entirely new types of computers that can quickly solve problems that are currently intractable,” says Jifa Tian, an assistant professor in the UW Department of Physics and Astronomy and interim director of UW’s Center for Quantum Information Science and Engineering.
    Tian was corresponding author of a paper, titled “Tunneling current-controlled spin states in few-layer van der Waals magnets,” that was published today (May 1) in Nature Communications, an open access, multidisciplinary journal dedicated to publishing high-quality research in all areas of the biological, health, physical, chemical, Earth, social, mathematical, applied and engineering sciences.
    Van der Waals materials are made up of strongly bonded 2D layers that are bound in the third dimension through weaker van der Waals forces. For example, graphite is a van der Waals material that is broadly used in industry in electrodes, lubricants, fibers, heat exchangers and batteries. The weakness of the van der Waals forces between layers allows researchers to use Scotch tape to peel the layers down to atomic thickness.
    The team developed a device known as a magnetic tunnel junction, which uses chromium triiodide — a 2D insulating magnet only a few atoms thick — sandwiched between two layers of graphene. By sending a tiny electric current — called a tunneling current — through this sandwich, the orientation of the magnetic domains (around 100 nanometers in size) within the individual chromium triiodide layers can be controlled, Tian says.
    Specifically, “this tunneling current not only can control the switching direction between two stable spin states, but also induces and manipulates switching between metastable spin states, called stochastic switching,” says ZhuangEn Fu, a graduate student in Tian’s research lab and now a postdoctoral fellow at the University of Maryland.

    “This breakthrough is not just intriguing; it’s highly practical. It consumes three orders of magnitude less energy than traditional methods, akin to swapping an old lightbulb for an LED, making it a potential game-changer for future technology,” Tian says. “Our research could lead to the development of novel computing devices that are faster, smaller and more energy-efficient and powerful than ever before. Our research marks a significant advancement in magnetism at the 2D limit and sets the stage for new, powerful computing platforms, such as probabilistic computers.”
    Traditional computers use bits to store information as 0’s and 1’s. This binary code is the foundation of all classic computing processes. Quantum computers use quantum bits that can represent both “0” and “1” at the same time, increasing processing power exponentially.
    “In our work, we’ve developed what you might think of as a probabilistic bit, which can switch between ‘0’ and ‘1’ (two spin states) based on the tunneling current controlled probabilities,” Tian says. “These bits are based on the unique properties of ultrathin 2D magnets and can be linked together in a way that is similar to neurons in the brain to form a new kind of computer, known as a probabilistic computer.
    “What makes these new computers potentially revolutionary is their ability to handle tasks that are incredibly challenging for traditional and even quantum computers, such as certain types of complex machine learning tasks and data processing problems,” Tian continues. “They are naturally tolerant to errors, simple in design and take up less space, which could lead to more efficient and powerful computing technologies.”
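    The article does not give equations, but in the wider probabilistic-computing literature a p-bit is commonly modeled as a binary unit that flips at random with a bias set by its inputs. The sketch below shows that generic update rule; it is not a model of the chromium triiodide device or its measured switching statistics.

    ```python
    # Generic probabilistic-bit (p-bit) network sketch in the spirit of
    # probabilistic computing; a textbook-style model, not the authors' device.
    import numpy as np

    rng = np.random.default_rng(0)

    def pbit_sweep(m, J, h, beta=1.0):
        """One asynchronous update sweep of a p-bit network.

        m: current states in {-1, +1}
        J: symmetric coupling matrix (zero diagonal)
        h: per-bit bias, playing the role of a control input
        beta: sets how strongly the bias wins over the intrinsic randomness
        """
        for i in rng.permutation(len(m)):
            drive = beta * (J[i] @ m + h[i])
            # The bit lands on +1 with probability sigmoid(2 * drive).
            m[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * drive)) else -1
        return m

    # Tiny example: two coupled p-bits that prefer to agree.
    J = np.array([[0.0, 1.0], [1.0, 0.0]])
    h = np.zeros(2)
    m = rng.choice([-1, 1], size=2)
    samples = [pbit_sweep(m, J, h).copy() for _ in range(1000)]
    agree = np.mean([a == b for a, b in samples])
    print(f"fraction of sweeps where the two bits agree: {agree:.2f}")
    ```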
    Hua Chen, an associate professor of physics at Colorado State University, and Allan MacDonald, a professor of physics at the University of Texas at Austin, collaborated to develop a theoretical model that elucidates how tunneling currents influence spin states in the 2D magnetic tunnel junctions. Other contributors were from Penn State University, Northeastern University and the National Institute for Materials Science in Namiki, Tsukuba, Japan.
    The study was funded through grants from the U.S. Department of Energy; Wyoming NASA EPSCoR (Established Program to Stimulate Competitive Research); the National Science Foundation; and the World Premier International Research Center Initiative and the Ministry of Education, Culture, Sports, Science and Technology, both in Japan.

  • New computer algorithm supercharges climate models and could lead to better predictions of future climate change

    Earth System Models — complex computer models which describe Earth processes and how they interact — are critical for predicting future climate change. By simulating the response of our land, oceans and atmosphere to manmade greenhouse gas emissions, these models form the foundation for predictions of future extreme weather and climate event scenarios, including those issued by the UN Intergovernmental Panel on Climate Change (IPCC).
    However, climate modellers have long faced a major problem. Because Earth System Models integrate many complicated processes, they cannot immediately run a simulation; they must first ensure that it has reached a stable equilibrium representative of real-world conditions before the industrial revolution. Without this initial settling period — referred to as the “spin-up” phase — the model can “drift,” simulating changes that may be erroneously attributed to manmade factors.
    Unfortunately, this process is extremely slow as it requires running the model for many thousands of model years which, for IPCC simulations, can take as much as two years on some of the world’s most powerful supercomputers.
    However, a study in Science Advances by a University of Oxford scientist funded by the Agile Initiative describes a new computer algorithm which can be applied to Earth System Models to drastically reduce spin-up time. During tests on models used in IPCC simulations, the algorithm was on average 10 times faster at spinning up the model than currently-used approaches, reducing the time taken to achieve equilibrium from many months to under a week.
    Study author Samar Khatiwala, Professor of Earth Sciences at the University of Oxford’s Department of Earth Sciences, who devised the algorithm, said: ‘Minimising model drift at a much lower cost in time and energy is obviously critical for climate change simulations, but perhaps the greatest value of this research may ultimately be to policy makers who need to know how reliable climate projections are.’
    Currently, the lengthy spin-up time of many IPCC models prevents climate researchers from running their model at a higher resolution and defining uncertainty through carrying out repeat simulations. By drastically reducing the spin-up time, the new algorithm will enable researchers to investigate how subtle changes to the model parameters can alter the output — which is critical for defining the uncertainty of future emission scenarios.
    Professor Khatiwala’s new algorithm employs a mathematical approach known as sequence acceleration, which has its roots in the work of the famous mathematician Euler. In the 1960s the idea was applied by D. G. Anderson to speed up the solution of Schrödinger’s equation, which predicts how matter behaves at the microscopic level. So important is this problem that more than half the world’s supercomputing power is currently devoted to solving it, and ‘Anderson Acceleration’, as it is now known, is one of the algorithms most commonly employed for it.

    Professor Khatiwala realised that Anderson Acceleration might also be able to reduce model spin-up time, since both problems are of an iterative nature: an output is generated and then fed back into the model many times over. By retaining previous outputs and combining them into a single input using Anderson’s scheme, the algorithm reaches the final solution much more quickly.
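    A minimal sketch of the idea on a toy fixed-point problem (not Professor Khatiwala’s implementation, and not an Earth System Model): keep the last few iterates and residuals of the fixed-point map, and choose the combination whose residuals best cancel in a least-squares sense.

    ```python
    # Minimal Anderson acceleration sketch for a fixed-point problem x = g(x).
    # Toy example only; a real spin-up would use the climate model's end-of-run
    # state as g(x) rather than this simple map.
    import numpy as np

    def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        X, F = [], []                          # recent iterates and residuals
        for k in range(max_iter):
            f = g(x) - x                       # residual of the fixed-point map
            if np.linalg.norm(f) < tol:
                return x, k
            X.append(x.copy()); F.append(f.copy())
            X, F = X[-m:], F[-m:]              # keep only the last m entries
            if len(F) > 1:
                dF = np.column_stack([F[j + 1] - F[j] for j in range(len(F) - 1)])
                dX = np.column_stack([X[j + 1] - X[j] for j in range(len(X) - 1)])
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                x = x + f - (dX + dF) @ gamma  # Anderson (type-II) update
            else:
                x = x + f                      # plain fixed-point step to start
        return x, max_iter

    # Toy fixed-point problem: x = cos(x), whose solution is about 0.739085.
    root, iters = anderson(np.cos, x0=0.0)
    print(root, iters)
    ```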
    Not only does this make the spin-up process much faster and less computationally expensive, but the concept can be applied to the huge variety of models that are used to investigate, and inform policy on, issues ranging from ocean acidification to biodiversity loss. With research groups around the world beginning to spin up their models for the next IPCC report, due in 2029, Professor Khatiwala is working with a number of them, including the UK Met Office, to trial his approach and software in their models.
    Professor Helene Hewitt OBE, Co-chair for the Coupled Model Intercomparison Project (CMIP) Panel, which will inform the next IPCC report, commented: ‘Policymakers rely on climate projections to inform negotiations as the world tries to meet the Paris Agreement. This work is a step towards reducing the time it takes to produce those critical climate projections.’
    Professor Colin Jones, Head of the NERC/Met Office-sponsored UK Earth system modelling, commented on the findings: ‘Spin-up has always been prohibitively expensive in terms of computational cost and time. The new approaches developed by Professor Khatiwala have the promise to break this logjam and deliver a quantum leap in the efficiency of spinning up such complex models and, as a consequence, greatly increase our ability to deliver timely, robust estimates of global climate change.’

  • Virtual reality environment for teens may offer an accessible, affordable way to reduce stress

    Social media. The climate crisis. Political polarization. The tumult of a pandemic and online learning. Teens today are dealing with unprecedented stressors, and over the past decade their mental health has been in sustained decline. Levels of anxiety and depression rose after the onset of the COVID-19 pandemic. Compounding the problem is a shortage of mental health providers — for every 100,000 children in the U.S., there are only 14 child and adolescent psychiatrists.
    In response to this crisis, University of Washington researchers studied whether virtual reality might help reduce stress for teens and boost mental health. Working with adolescents, the team designed a snowy virtual world with six activities — such as stacking rocks and painting — based on practices shown to improve mental health.
    In a 3-week study of 44 Seattle teens, researchers found that teens used the technology an average of twice a week without being prompted and reported lower stress levels and improved mood while using it, though their levels of anxiety and depression didn’t decline overall.
    The researchers published their findings April 22 in the journal JMIR XR and Spatial Computing. The system is not publicly available.
    “We know what works to help support teens, but a lot of these techniques are inaccessible because they’re locked into counseling, which can be expensive, or the counselors just aren’t available,” said lead author Elin Björling, a UW senior research scientist in the human centered design and engineering department. “So we tried to take some of these evidence-based practices, but put them in a much more engaging environment, like VR, so the teens might want to do them on their own.”
    The world of Relaxation Environment for Stress in Teens, or RESeT, came from conversations the researchers had with groups of teens over two years at Seattle Public Library sites. From these discussions, the team built RESeT as an open winter world with a forest that users could explore by swinging their arms (a behavior known to boost mood) to move their avatar. A signpost with six arrows on it sent users to different activities, each based on methods shown to improve mental health, such as dialectical behavior therapy and mindfulness-based stress reduction.
    In one exercise, “Riverboat,” users put negative words in paper boats and send them down a river. Another, “Rabbit Hole,” has players stand by a stump; the longer they’re still, the more rabbits appear.

    “In the co-design process, we learned some teens were really afraid of squirrels, which I wouldn’t have thought of,” Björling said. “So we removed all the squirrels. I still have a Post-It in my office that says ‘delete squirrels.’ But all ages and genders loved rabbits, so we designed Rabbit Hole, where the reward for being calm and paying attention is a lot of rabbits surrounding you.”
    To test the potential effects of RESeT on teens’ mental health, the team enrolled 44 teens between ages 14 and 18 in the study. Each teen was given a Meta Quest 2 headset and asked to use RESeT three to five times a week. Because the researchers were trying to see if teens would use RESeT regularly on their own, they did not give prompts or incentives to use the headsets after the start of the study. Teens were asked to complete surveys gauging their stress and mood before and after each session.
    On average, the teens used RESeT twice a week for 11.5 minutes at a time. Overall, they reported feeling significantly less stressed while using RESeT, and also reported improvements in mood, though these were smaller. They said they liked using the headset in general. However, the study found no significant effects on anxiety and depression.
    “Reduced stress and improved mood are our key findings and exactly what we hoped for,” said co-author Jennifer Sonney, an associate professor in the UW School of Nursing who works with children and families. “We didn’t have a big enough participant group or a design to study long-term health impacts, but we have promising signals that teens liked using RESeT and could administer it themselves, so we absolutely want to move the project forward.”
    The researchers aim to conduct a larger, longer-term study with a control group to see if a VR system could impart lasting effects on mood and stress. They’re also interested in incorporating artificial intelligence to personalize the VR experience and in exploring offering VR headsets in schools or libraries to improve community access.
    Additional co-authors were Himanshu Zade, a UW lecturer and researcher at Microsoft; Sofia Rodriguez, a senior manager at Electronic Arts who completed this research as a UW master’s student in human centered design and engineering; Michael D. Pullmann, a research professor in psychiatry and behavioral sciences at the UW School of Medicine; and Soo Hyun Moon, a senior product designer at Statsig who completed this research as a UW master’s student in human centered design and engineering. This research was funded by the National Institute of Mental Health through the UW ALACRITY Center, which supports UW research on mental health.

  • Scientists show that there is indeed an ‘entropy’ of quantum entanglement

    Bartosz Regula from the RIKEN Center for Quantum Computing and Ludovico Lami from the University of Amsterdam have shown, through probabilistic calculations, that there is indeed, as had been hypothesized, a rule of “entropy” for the phenomenon of quantum entanglement. This finding could help drive a better understanding of quantum entanglement, which is a key resource that underlies much of the power of future quantum computers. Little is currently understood about the optimal ways to make effective use of it, despite it having been the focus of research in quantum information science for decades.
    The second law of thermodynamics, which says that an isolated system can never move to a state of lower “entropy,” or disorder, is one of the most fundamental laws of nature, and lies at the very heart of physics. It is what creates the “arrow of time,” and tells us the remarkable fact that the dynamics of a general physical system, even an extremely complex one such as a gas or a black hole, are encapsulated by a single function, its “entropy.”
    There is a complication, however. The principle of entropy is known to apply to all classical systems, but today we are increasingly exploring the quantum world. We are now going through a quantum revolution, and it becomes crucially important to understand how we can extract and transform the expensive and fragile quantum resources. In particular, quantum entanglement, which allows for significant advantages in communication, computation, and cryptography, is crucial, but due to its extremely complex structure, efficiently manipulating it and even understanding its basic properties is typically much more challenging than in the case of thermodynamics.
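    For context, these are the standard textbook statements rather than the paper’s new result: the second law says the entropy of an isolated system cannot decrease, and for a bipartite pure state the entropy of entanglement is the von Neumann entropy of either reduced state.

    ```latex
    % Second law of thermodynamics for an isolated system: entropy never decreases.
    \[
      \Delta S \ge 0 .
    \]
    % Entropy of entanglement of a bipartite pure state, defined as the
    % von Neumann entropy of either reduced density matrix:
    \[
      E\bigl(\lvert\psi\rangle_{AB}\bigr)
        = S(\rho_A)
        = -\operatorname{Tr}\bigl(\rho_A \log \rho_A\bigr) ,
      \qquad
      \rho_A = \operatorname{Tr}_B \lvert\psi\rangle\langle\psi\rvert .
    \]
    % The result described here concerns when an analogous single quantity can
    % govern entanglement transformations of general (mixed) states.
    ```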
    The difficulty lies in the fact that such a “second law” for quantum entanglement would require us to show that entanglement transformations can be made reversible, just like work and heat can be interconverted in thermodynamics. It is known that reversibility of entanglement is much more difficult to ensure than the reversibility of thermodynamic transformations, and all previous attempts at establishing any form of a reversible theory of entanglement have failed. It was even suspected that entanglement might actually be irreversible, making the quest an impossible one.
    In their new work, published in Nature Communications, the authors solve this long-standing conjecture by using “probabilistic” entanglement transformations, which are only guaranteed to be successful some of the time, but which, in return, provide an increased power in converting quantum systems. Under such processes, the authors show that it is indeed possible to establish a reversible framework for entanglement manipulation, thus identifying a setting in which a unique “entropy of entanglement” emerges and all entanglement transformations are governed by a single quantity. The methods they used could be applied more broadly, showing similar reversibility properties also for more general quantum resources.
    According to Regula, “Our findings mark significant progress in understanding the basic properties of entanglement, revealing fundamental connections between entanglement and thermodynamics, and crucially, providing a major simplification in the understanding of entanglement conversion processes. This not only has immediate and direct applications in the foundations of quantum theory, but it will also help with understanding the ultimate limitations on our ability to efficiently manipulate entanglement in practice.”
    Looking toward the future, he continues, “Our work serves as the very first evidence that reversibility is an achievable phenomenon in entanglement theory. However, even stronger forms of reversibility have been conjectured, and there is hope that entanglement can be made reversible even under weaker assumptions than we have made in our work — notably, without having to rely on probabilistic transformations. The issue is that answering these questions appears significantly more difficult, requiring the solution of mathematical and information-theoretic problems that have evaded all attempts at solving them thus far. Understanding the precise requirements for reversibility to hold thus remains a fascinating open problem.”

  • Improved AI process could better predict water supplies

    A new computer model uses a better artificial intelligence process to measure snow and water availability more accurately across vast distances in the West, information that could someday be used to better predict water availability for farmers and others.
    Publishing in the Proceedings of the AAAI Conference on Artificial Intelligence, the interdisciplinary group of Washington State University researchers predicts water availability for areas in the West where snow amounts aren’t being physically measured.
    Comparing their results to measurements from more than 300 snow measuring stations in the Western U.S., they showed that their model outperformed other models that use the AI process known as machine learning. Previous models focused on time-related measures, taking data at different time points from only a few locations. The improved model takes both time and space into account, resulting in more accurate predictions.
    The information is critically important for water planners throughout the West because “every drop of water” is appropriated for irrigation, hydropower, drinking water, and environmental needs, said Krishu Thapa, a Washington State University computer science graduate student who led the study.
    Every spring, water management agencies throughout the West make decisions on how to use water based on how much snow is in the mountains.
    “This is a problem that’s deeply related to our own way of life continuing in this region in the Western U.S.,” said co-author Kirti Rajagopalan, professor in WSU’s Department of Biological Systems Engineering. “Snow is definitely key in an area where more than half of the streamflow comes from snow melt. Understanding the dynamics of how that’s formed and how that changes, and how it varies spatially is really important for all decisions.”
    There are 822 snow measurement stations throughout the Western U.S. that provide daily information on the potential water availability at each site, a measurement called the snow-water equivalent (SWE). The stations also provide information on snow depth, temperature, precipitation and relative humidity.

    However, the stations are sparsely distributed, with approximately one station per 1,500 square miles. Even a short distance away from a station, the SWE can change dramatically depending on factors like the area’s topography.
    “Decision makers look at a few stations that are nearby and make a decision based on that, but how the snow melts and how the different topography or the other features are playing a role in between, that’s not accounted for, and that can lead to over predicting or under predicting water supplies,” said co-author Bhupinderjeet Singh, a WSU graduate student in biological systems engineering. “Using these machine learning models, we are trying to predict it in a better way.”
    The researchers developed a modeling framework for SWE prediction and adapted it to capture information in both space and time, aiming to predict the daily SWE for any location, whether or not there is a station there. Earlier machine learning models could focus only on the temporal variable, taking data from a single location over multiple days and using that data to make predictions for other days.
    “Using our new technique, we’re using both spatial and temporal models to make decisions, and we are using the additional information to make the actual prediction for the SWE value,” said Thapa. “With our work, we’re trying to transform that physically sparse network of stations into a dense set of points from which we can predict the value of SWE, even at points that don’t have any stations.”
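    As a generic illustration of the spatial-plus-temporal idea (synthetic data and an off-the-shelf regressor, not the WSU team’s model or dataset), features for a target point might combine where it is with what nearby stations measured recently:

    ```python
    # Generic illustration of combining spatial and temporal features to predict
    # snow-water equivalent (SWE) at unmonitored points. Synthetic data and an
    # off-the-shelf regressor; not the model or data from the WSU study.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)
    n = 2000

    # Spatial features: latitude, longitude, elevation (km).
    lat, lon = rng.uniform(42, 49, n), rng.uniform(-124, -110, n)
    elev = rng.uniform(0.3, 3.0, n)
    # Temporal features: day of water year and SWE measured at the nearest
    # station on the three previous days (mm).
    day = rng.integers(1, 366, n)
    nearby_prev = rng.uniform(0, 800, (n, 3))

    # Synthetic "true" SWE: grows with elevation, peaks mid-winter, tracks the
    # nearby-station history, plus noise.
    swe = (200 * elev
           + 300 * np.exp(-((day - 120) / 60.0) ** 2)
           + 0.5 * nearby_prev.mean(axis=1)
           + rng.normal(0, 20, n))

    X = np.column_stack([lat, lon, elev, day, nearby_prev])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[:1500], swe[:1500])            # train on most of the points
    print("R^2 on held-out points:", model.score(X[1500:], swe[1500:]))
    ```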
    While this work won’t be used for directly informing decisions yet, it is a step in helping with future forecasting and improving the inputs for models for predicting stream flows, said Rajagopalan. The researchers will be working to extend the model to make it spatially complete and eventually make it into a real-world forecasting model.
    The work was conducted through the AI Institute for Transforming Workforce and Decision Support (AgAID Institute) and supported by the USDA’s National Institute of Food and Agriculture.