More stories

  •

    Virtual reality environment for teens may offer an accessible, affordable way to reduce stress

    Social media. The climate crisis. Political polarization. The tumult of a pandemic and online learning. Teens today are dealing with unprecedented stressors, and over the past decade their mental health has been in sustained decline. Levels of anxiety and depression rose after the onset of the COVID-19 pandemic. Compounding the problem is a shortage of mental health providers — for every 100,000 children in the U.S., there are only 14 child and adolescent psychiatrists.
    In response to this crisis, University of Washington researchers studied whether virtual reality might help reduce stress for teens and boost mental health. Working with adolescents, the team designed a snowy virtual world with six activities — such as stacking rocks and painting — based on practices shown to improve mental health.
    In a 3-week study of 44 Seattle teens, researchers found that teens used the technology an average of twice a week without being prompted and reported lower stress levels and improved mood while using it, though their levels of anxiety and depression didn’t decline overall.
    The researchers published their findings April 22 in the journal JMIR XR and Spatial Computing. The system is not publicly available.
    “We know what works to help support teens, but a lot of these techniques are inaccessible because they’re locked into counseling, which can be expensive, or the counselors just aren’t available,” said lead author Elin Björling, a UW senior research scientist in the human centered design and engineering department. “So we tried to take some of these evidence-based practices, but put them in a much more engaging environment, like VR, so the teens might want to do them on their own.”
    The world of Relaxation Environment for Stress in Teens, or RESeT, came from conversations the researchers had with groups of teens over two years at Seattle Public Library sites. From these discussions, the team built RESeT as an open winter world with a forest that users could explore by swinging their arms (a behavior known to boost mood) to move their avatar. A signpost with six arrows on it sent users to different activities, each based on methods shown to improve mental health, such as dialectical behavior therapy and mindfulness-based stress reduction.
    In one exercise, “Riverboat,” users put negative words in paper boats and send them down a river. Another, “Rabbit Hole,” has players stand by a stump; the longer they’re still, the more rabbits appear.

    “In the co-design process, we learned some teens were really afraid of squirrels, which I wouldn’t have thought of,” Björling said. “So we removed all the squirrels. I still have a Post-It in my office that says ‘delete squirrels.’ But all ages and genders loved rabbits, so we designed Rabbit Hole, where the reward for being calm and paying attention is a lot of rabbits surrounding you.”
    To test the potential effects of RESeT on teens’ mental health, the team enrolled 44 teens between ages 14 and 18 in the study. Each teen was given a Meta Quest 2 headset and asked to use RESeT three to five times a week. Because the researchers were trying to see if teens would use RESeT regularly on their own, they did not give prompts or incentives to use the headsets after the start of the study. Teens were asked to complete surveys gauging their stress and mood before and after each session.
    On average, the teens used RESeT twice a week for 11.5 minutes at a time. Overall, they reported feeling significantly less stressed while using RESeT, and also reported smaller improvements in mood. They said they liked using the headset in general. However, the study found no significant effects on anxiety and depression.
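The pre/post survey design lends itself to a simple paired comparison. A minimal sketch of that kind of analysis, with made-up ratings (the paper's actual statistical method and data are not reproduced here):

```python
import statistics

def paired_t(pre, post):
    """Paired t-statistic for pre/post session scores.

    Returns (mean_difference, t). A negative mean difference means
    scores dropped after the session (e.g., lower reported stress).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    t = mean_d / (sd_d / n ** 0.5)
    return mean_d, t

# Illustrative (not the study's) 1-5 stress ratings across 8 sessions
pre = [4, 3, 4, 5, 3, 4, 4, 3]
post = [3, 2, 3, 3, 3, 3, 2, 2]
mean_d, t = paired_t(pre, post)  # negative t: stress fell after sessions
```

With repeated sessions per participant, a study like this would typically use a model that accounts for within-person correlation; the paired statistic above only illustrates the pre/post logic.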
    “Reduced stress and improved mood are our key findings and exactly what we hoped for,” said co-author Jennifer Sonney, an associate professor in the UW School of Nursing who works with children and families. “We didn’t have a big enough participant group or a design to study long-term health impacts, but we have promising signals that teens liked using RESeT and could administer it themselves, so we absolutely want to move the project forward.”
    The researchers aim to conduct a larger, longer-term study with a control group to see if a VR system could impart lasting effects on mood and stress. They’re also interested in incorporating artificial intelligence to personalize the VR experience and in exploring offering VR headsets in schools or libraries to improve community access.
    Additional co-authors were Himanshu Zade, a UW lecturer and researcher at Microsoft; Sofia Rodriguez, a senior manager at Electronic Arts who completed this research as a UW master’s student in human centered design and engineering; Michael D. Pullmann, a research professor in psychiatry and behavioral sciences at the UW School of Medicine; and Soo Hyun Moon, a senior product designer at Statsig who completed this research as a UW master’s student in human centered design and engineering. This research was funded by the National Institute of Mental Health through the UW ALACRITY Center, which supports UW research on mental health.

  •

    Scientists show that there is indeed an ‘entropy’ of quantum entanglement

    Bartosz Regula from the RIKEN Center for Quantum Computing and Ludovico Lami from the University of Amsterdam have shown, through probabilistic calculations, that there is indeed, as had been hypothesized, a rule of “entropy” for the phenomenon of quantum entanglement. This finding could help drive a better understanding of quantum entanglement, a key resource that underlies much of the power of future quantum computers. Despite entanglement having been the focus of research in quantum information science for decades, little is currently understood about the optimal ways to make effective use of it.
    The second law of thermodynamics, which says that an isolated system can never move to a state of lower “entropy,” or disorder, is one of the most fundamental laws of nature, and lies at the very heart of physics. It is what creates the “arrow of time,” and tells us the remarkable fact that the dynamics of general physical systems, even extremely complex ones such as gases or black holes, are encapsulated by a single function, their “entropy.”
    There is a complication, however. The principle of entropy is known to apply to all classical systems, but today we are increasingly exploring the quantum world. We are now going through a quantum revolution, and it becomes crucially important to understand how we can extract and transform the expensive and fragile quantum resources. In particular, quantum entanglement, which allows for significant advantages in communication, computation, and cryptography, is crucial, but due to its extremely complex structure, efficiently manipulating it and even understanding its basic properties is typically much more challenging than in the case of thermodynamics.
    The difficulty lies in the fact that such a “second law” for quantum entanglement would require us to show that entanglement transformations can be made reversible, just like work and heat can be interconverted in thermodynamics. It is known that reversibility of entanglement is much more difficult to ensure than the reversibility of thermodynamic transformations, and all previous attempts at establishing any form of a reversible theory of entanglement have failed. It was even suspected that entanglement might actually be irreversible, making the quest an impossible one.
    In their new work, published in Nature Communications, the authors solve this long-standing conjecture by using “probabilistic” entanglement transformations, which are only guaranteed to be successful some of the time, but which, in return, provide an increased power in converting quantum systems. Under such processes, the authors show that it is indeed possible to establish a reversible framework for entanglement manipulation, thus identifying a setting in which a unique “entropy of entanglement” emerges and all entanglement transformations are governed by a single quantity. The methods they used could be applied more broadly, showing similar reversibility properties also for more general quantum resources.
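For pure bipartite states, the “entropy of entanglement” in question has a standard closed form, given here for reference (the new result concerns the much harder mixed-state setting, where the probabilistic transformations described above are needed):

```latex
% Entropy of entanglement of a pure state |psi>_{AB}:
% trace out subsystem B, then take the von Neumann entropy.
\rho_A = \operatorname{Tr}_B \, |\psi\rangle\!\langle\psi|_{AB},
\qquad
E\bigl(|\psi\rangle_{AB}\bigr) = S(\rho_A)
  = -\operatorname{Tr}\bigl(\rho_A \log_2 \rho_A\bigr)
```

For a maximally entangled pair of qubits, \(\rho_A = I/2\) and \(E = 1\) ebit. The reversible framework identified by the authors is one in which a single such quantity again governs all entanglement transformations.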
    According to Regula, “Our findings mark significant progress in understanding the basic properties of entanglement, revealing fundamental connections between entanglement and thermodynamics, and crucially, providing a major simplification in the understanding of entanglement conversion processes. This not only has immediate and direct applications in the foundations of quantum theory, but it will also help with understanding the ultimate limitations on our ability to efficiently manipulate entanglement in practice.”
    Looking toward the future, he continues, “Our work serves as the very first evidence that reversibility is an achievable phenomenon in entanglement theory. However, even stronger forms of reversibility have been conjectured, and there is hope that entanglement can be made reversible even under weaker assumptions than we have made in our work — notably, without having to rely on probabilistic transformations. The issue is that answering these questions appears significantly more difficult, requiring the solution of mathematical and information-theoretic problems that have evaded all attempts at solving them thus far. Understanding the precise requirements for reversibility to hold thus remains a fascinating open problem.”

  •

    Improved AI process could better predict water supplies

    A new computer model uses a better artificial intelligence process to measure snow and water availability more accurately across vast distances in the West, information that could someday be used to better predict water availability for farmers and others.
    Publishing in the Proceedings of the AAAI Conference on Artificial Intelligence, the interdisciplinary group of Washington State University researchers predict water availability from areas in the West where snow amounts aren’t being physically measured.
    Comparing their results to measurements from more than 300 snow measuring stations in the Western U.S., they showed that their model outperformed other models that use the AI process known as machine learning. Previous models focused on time-related measures, taking data at different time points from only a few locations. The improved model takes both time and space into account, resulting in more accurate predictions.
    The information is critically important for water planners throughout the West because “every drop of water” is appropriated for irrigation, hydropower, drinking water, and environmental needs, said Krishu Thapa, a Washington State University computer science graduate student who led the study.
    Every spring, water management agencies throughout the West make decisions on how to use water based on how much snow is in the mountains.
    “This is a problem that’s deeply related to our own way of life continuing in this region in the Western U.S.,” said co-author Kirti Rajagopalan, professor in WSU’s Department of Biological Systems Engineering. “Snow is definitely key in an area where more than half of the streamflow comes from snow melt. Understanding the dynamics of how that’s formed and how that changes, and how it varies spatially is really important for all decisions.”
    There are 822 snow measurement stations throughout the Western U.S. that provide daily information on the potential water availability at each site, a measurement called the snow-water equivalent (SWE). The stations also provide information on snow depth, temperature, precipitation and relative humidity.

    However, the stations are sparsely distributed with approximately one every 1,500 square miles. Even a short distance away from a station, the SWE can change dramatically depending on factors like the area’s topography.
    “Decision makers look at a few stations that are nearby and make a decision based on that, but how the snow melts and how the different topography or the other features are playing a role in between, that’s not accounted for, and that can lead to over predicting or under predicting water supplies,” said co-author Bhupinderjeet Singh, a WSU graduate student in biological systems engineering. “Using these machine learning models, we are trying to predict it in a better way.”
    The researchers developed a modeling framework for SWE prediction and adapted it to capture information in space and time, aiming to predict the daily SWE for any location, whether or not there is a station there. Earlier machine learning models focused only on the temporal variable, taking data from one location across multiple days and using it to make predictions for other days.
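The spatial half of the idea can be illustrated with a classic baseline: estimating SWE at an unmeasured point from nearby stations, weighted by distance. This inverse-distance sketch is not the WSU model (which learns spatial and temporal structure jointly with machine learning); it only shows what "filling in the gaps between stations" means:

```python
import math

def idw_swe(target, stations, power=2):
    """Inverse-distance-weighted SWE estimate at an unmeasured point.

    `stations` maps (x, y) coordinates to a recent SWE reading
    (e.g., inches of water equivalent). Closer stations get more weight.
    """
    num = den = 0.0
    for (x, y), swe in stations.items():
        d = math.dist(target, (x, y))
        if d == 0:
            return swe  # exactly at a station: use its reading
        w = 1.0 / d ** power
        num += w * swe
        den += w
    return num / den

# Hypothetical station readings on an arbitrary coordinate grid
stations = {(0, 0): 20.0, (10, 0): 10.0, (0, 10): 28.0}
est = idw_swe((1, 1), stations)  # dominated by the nearest station
```

A purely spatial estimate like this ignores how snowpack evolves day to day; the paper's contribution is combining this spatial structure with the temporal dimension that earlier models used alone.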
    “Using our new technique, we’re using both spatial and temporal models to make decisions, and we are using the additional information to make the actual prediction for the SWE value,” said Thapa. “With our work, we’re trying to transform that physically sparse network of stations to dense points from which we can predict the value of SWE from those points that don’t have any stations.”
    While this work won’t be used for directly informing decisions yet, it is a step in helping with future forecasting and improving the inputs for models for predicting stream flows, said Rajagopalan. The researchers will be working to extend the model to make it spatially complete and eventually make it into a real-world forecasting model.
    The work was conducted through the AI Institute for Transforming Workforce and Decision Support (AgAID Institute) and supported by the USDA’s National Institute of Food and Agriculture.

  •

    Scientists solve chemical mystery at the interface of biology and technology

    Researchers who want to bridge the divide between biology and technology spend a lot of time thinking about translating between the two different “languages” of those realms.
    “Our digital technology operates through a series of electronic on-off switches that control the flow of current and voltage,” said Rajiv Giridharagopal, a research scientist at the University of Washington. “But our bodies operate on chemistry. In our brains, neurons propagate signals electrochemically, by moving ions — charged atoms or molecules — not electrons.”
    Implantable devices from pacemakers to glucose monitors rely on components that can speak both languages and bridge that gap. Among those components are OECTs — or organic electrochemical transistors — which allow current to flow in devices like implantable biosensors. But scientists have long known about a quirk of OECTs that no one could explain: When an OECT is switched on, there is a lag before current reaches the desired operational level. When switched off, there is no lag. Current drops almost immediately.
    A UW-led study has solved this lagging mystery, and in the process paved the way to custom-tailored OECTs for a growing list of applications in biosensing, brain-inspired computation and beyond.
    “How fast you can switch a transistor is important for almost any application,” said project leader David Ginger, a UW professor of chemistry, chief scientist at the UW Clean Energy Institute and faculty member in the UW Molecular Engineering and Sciences Institute. “Scientists have recognized the unusual switching behavior of OECTs, but we never knew its cause — until now.”
    In a paper published April 17 in Nature Materials, Ginger’s team at the UW — along with Professor Christine Luscombe at the Okinawa Institute of Science and Technology in Japan and Professor Chang-Zhi Li at Zhejiang University in China — report that OECTs turn on via a two-step process, which causes the lag. But they appear to turn off through a simpler one-step process.
    In principle, OECTs operate like transistors in electronics: When switched on, they allow the flow of electrical current. When switched off, they block it. But OECTs operate by coupling the flow of ions with the flow of electrons, which makes them interesting routes for interfacing with chemistry and biology.

    The new study illuminates the two steps OECTs go through when switched on. First, a wavefront of ions races across the transistor. Then, more charge-bearing particles invade the transistor’s flexible structure, causing it to swell slightly and bringing current up to operational levels. In contrast, the team discovered that deactivation is a one-step process: Levels of charged chemicals simply drop uniformly across the transistor, quickly interrupting the flow of current.
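The asymmetry can be caricatured with a toy model: turn-on as two processes with different time constants (a fast ion front plus slow swelling), turn-off as a single fast decay. The time constants and weights below are assumed for illustration, not fitted to the paper's data:

```python
import math

def oect_current(t, on=True):
    """Toy model of the observed OECT switching asymmetry.

    Turn-on combines a fast ion front (tau_fast) with slow swelling of
    the polymer (tau_slow), so current lags the switch. Turn-off is a
    single fast decay, so current drops almost immediately.
    """
    tau_fast, tau_slow = 0.01, 0.2  # seconds (illustrative)
    if on:
        fast = 1 - math.exp(-t / tau_fast)
        slow = 1 - math.exp(-t / tau_slow)
        return 0.4 * fast + 0.6 * slow  # approaches 1.0 only slowly
    return math.exp(-t / tau_fast)      # no slow component on the way down

# At t = 0.05 s the device is essentially off, but far from fully on
on_level = oect_current(0.05, on=True)
off_level = oect_current(0.05, on=False)
```

In this caricature, shrinking the slow component's weight or time constant shortens the lag, which mirrors the paper's point that material structure should tune switching speed.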
    Knowing the lag’s cause should help scientists design new generations of OECTs for a wider set of applications.
    “There’s always been this drive in technology development to make components faster, more reliable and more efficient,” Ginger said. “Yet, the ‘rules’ for how OECTs behave haven’t been well understood. A driving force in this work is to learn them and apply them to future research and development efforts.”
    Whether they reside within devices to measure blood glucose or brain activity, OECTs are largely made up of flexible, organic semiconducting polymers — repeating units of complex, carbon-rich compounds — and operate immersed in liquids containing salts and other chemicals. For this project, the team studied OECTs that change color in response to electrical charge. The polymer materials were synthesized by Luscombe’s team at the Okinawa Institute of Science and Technology and Li’s at Zhejiang University, and then fabricated into transistors by UW doctoral students Jiajie Guo and Shinya “Emerson” Chen, who are co-lead authors on the paper.
    “A challenge in the materials design for OECTs lies in creating a substance that facilitates effective ion transport and retains electronic conductivity,” said Luscombe, who is also a UW affiliate professor of chemistry and of materials science and engineering. “The ion transport requires a flexible material, whereas ensuring high electronic conductivity typically necessitates a more rigid structure, posing a dilemma in the development of such materials.”
    Guo and Chen observed under a microscope — and recorded with a smartphone camera — precisely what happens when the custom-built OECTs are switched on and off. The footage showed clearly that a two-step chemical process lies at the heart of the OECT activation lag.

    Past research, including by Ginger’s group at the UW, demonstrated that polymer structure, especially its flexibility, is important to how OECTs function. These devices operate in fluid-filled environments containing chemical salts and other biological compounds, which are more bulky compared to the electronic underpinnings of our digital devices.
    The new study goes further by more directly linking OECT structure and performance. The team found that the degree of activation lag should vary based on what material the OECT is made of, such as whether its polymers are more ordered or more randomly arranged, according to Giridharagopal. Future research could explore how to reduce or lengthen the lag times, which for OECTs in the current study were fractions of a second.
    “Depending on the type of device you’re trying to build, you could tailor composition, fluid, salts, charge carriers and other parameters to suit your needs,” said Giridharagopal.
    OECTs aren’t just used in biosensing. They are also used to study nerve impulses in muscles, as well as forms of computing to create artificial neural networks and understand how our brains store and retrieve information. These widely divergent applications necessitate building new generations of OECTs with specialized features, including ramp-up and ramp-down times, according to Ginger.
    “Now that we’re learning the steps needed to realize those applications, development can really accelerate,” said Ginger.
    Guo is now a postdoctoral researcher at the Lawrence Berkeley National Laboratory and Chen is now a scientist at Analog Devices. Other co-authors on the paper are Connor Bischak, a former UW postdoctoral researcher in chemistry who is now an assistant professor at the University of Utah; Jonathan Onorato, a UW doctoral alum and scientist at Exponent; and Kangrong Yan and Ziqui Shen of Zhejiang University. The research was funded by the U.S. National Science Foundation, and polymers developed at Zhejiang University were funded by the National Science Foundation of China.

  •

    Machine listening: Making speech recognition systems more inclusive

    Interactions with voice technology, such as Amazon’s Alexa, Apple’s Siri, and Google Assistant, can make life easier by increasing efficiency and productivity. However, errors in generating and understanding speech during interactions are common. When using these devices, speakers often style-shift their speech from their normal patterns into a louder and slower register, called technology-directed speech.
    Research on technology-directed speech typically focuses on mainstream varieties of U.S. English without considering speaker groups that are more consistently misunderstood by technology. In JASA Express Letters, published on behalf of the Acoustical Society of America by AIP Publishing, researchers from Google Research, the University of California, Davis, and Stanford University wanted to address this gap.
    One group commonly misunderstood by voice technology is individuals who speak African American English, or AAE. Since the rate of automatic speech recognition errors can be higher for AAE speakers, the technology can produce downstream effects of linguistic discrimination.
    “Across all automatic speech recognition systems, four out of every ten words spoken by Black men were being transcribed incorrectly,” said co-author Zion Mengesha. “This affects fairness for African American English speakers in every institution using voice technology, including health care and employment.”
    “We saw an opportunity to better understand this problem by talking to Black users and understanding their emotional, behavioral, and linguistic responses when engaging with voice technology,” said co-author Courtney Heldreth.
    The team designed an experiment to test how AAE speakers adapt their speech when imagining talking to a voice assistant, compared to talking to a friend, family member, or stranger. The study tested familiar human, unfamiliar human, and voice assistant-directed speech conditions by comparing speech rate and pitch variation. Study participants included 19 adults identifying as Black or African American who had experienced issues with voice technology. Each participant asked a series of questions to a voice assistant. The same questions were repeated as if speaking to a familiar person and, again, to a stranger. Each question was recorded for a total of 153 recordings.
    Analysis of the recordings showed that the speakers exhibited two consistent adjustments when talking to voice technology compared with talking to another person: a slower rate of speech and less pitch variation (more monotone speech).
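Both reported measures are simple to compute once speech has been segmented and pitch-tracked. A sketch with hypothetical values (not the study's measurements), assuming word counts, durations, and pitch samples are already extracted:

```python
import statistics

def style_features(n_words, duration_s, pitch_hz):
    """Speech rate (words/s) and pitch variability (std. dev. in Hz)."""
    return n_words / duration_s, statistics.stdev(pitch_hz)

# Illustrative values for one utterance said to a person vs. a device
human = style_features(24, 8.0, [110, 150, 95, 170, 120, 140])
device = style_features(24, 11.0, [118, 125, 112, 130, 121, 124])

slower = device[0] < human[0]         # slower rate toward the device
more_monotone = device[1] < human[1]  # flatter pitch toward the device
```

In practice pitch variability is often measured in semitones rather than Hz to normalize across speakers; the Hz standard deviation here is just the simplest proxy.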
    “These findings suggest that people have mental models of how to talk to technology,” said co-author Michelle Cohn. “A set ‘mode’ that they engage to be better understood, in light of disparities in speech recognition systems.”
    There are other groups misunderstood by voice technology, such as second-language speakers. The researchers hope to expand the language varieties explored in human-computer interaction experiments and address barriers in technology so that it can support everyone who wants to use it.

  •

    New technology makes 3D microscopes easier to use, less expensive to manufacture

    Researchers in Purdue University’s College of Engineering are developing patented and patent-pending innovations that make 3D microscopes faster to operate and less expensive to manufacture.
    Traditional, large depth-of-field 3D microscopes are used across academia and industry, with applications ranging from the life sciences to quality control processes used in semiconductor manufacturing. Song Zhang, professor in Purdue’s School of Mechanical Engineering, said these microscopes are too slow to capture 3D images and too expensive to build because they require a high-precision translation stage.
    “Such drawbacks in a microscope slow the measurement process, making it difficult to use for applications that require high speeds, such as in situ quality control,” Zhang said.
    Research about the Purdue 3D microscope and its innovations has been published in the peer-reviewed journals Optics Letters and Optics and Lasers in Engineering (the August 2023 and March 2024 issues). The research was supported by a National Science Foundation grant.
    The Purdue innovation
    Zhang said the Purdue 3D microscope automatically completes three steps: focusing on an object, determining the optimal capture process and creating a high-quality 3D image for the end user.
    “In contrast, a traditional microscope requires users to carefully follow instructions provided by the manufacturer to perform a high-quality capture,” Zhang said.

    Zhang and his colleagues use an electronically tunable lens, or ETL, that changes the focal plane of the imaging system without moving parts. He said using the lens makes the 3D microscope easier to use and less expensive to build.
    “Our suite of patents covers methods on how to calibrate the ETL, how to create all-in-focus 3D images quickly and how to speed up the data acquisition process by leveraging the ETL hardware information,” Zhang said. “The end result is the same as a traditional microscope: 3D surface images of a scene. Ours is different because of its high speed and relatively low cost.”
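The "all-in-focus" step mentioned above follows the general idea of focus stacking: from a stack of images captured at different ETL focal planes, keep each pixel from the slice where it is sharpest. The sketch below illustrates that generic technique with a local-contrast sharpness proxy; it is not Purdue's patented method:

```python
def all_in_focus(stack):
    """Fuse a focal stack into a single all-in-focus image.

    `stack` is a list of equally sized 2D grayscale images (lists of
    rows), each captured at a different focal plane. For each pixel,
    keep the value from the slice with the highest local contrast.
    """
    rows, cols = len(stack[0]), len(stack[0][0])

    def sharpness(img, r, c):
        # Discrete Laplacian magnitude; out-of-bounds neighbors skipped
        total = 0.0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                total += img[rr][cc] - img[r][c]
        return abs(total)

    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best = max(stack, key=lambda img: sharpness(img, r, c))
            fused[r][c] = best[r][c]
    return fused
```

A real pipeline would smooth the per-pixel slice selection to avoid seams and, as in the Purdue work, exploit the ETL's fast focal sweep to acquire the stack quickly.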
    The next developmental steps
    Zhang and his team have developed algorithms and created a prototype system in their lab. They are looking to translate their research into a commercial product.
    “This will require an industrial partner,” Zhang said. “We are certainly interested in helping this process, including sharing our know-how and research results to make the transition smooth.”
    Zhang disclosed the innovations to the Purdue Innovates Office of Technology Commercialization, which has applied for and received patents to protect the multiple pieces of intellectual property.

  •

    Trotting robots reveal emergence of animal gait transitions

    With the help of a form of machine learning called deep reinforcement learning (DRL), the EPFL robot notably learned to transition from trotting to pronking — a leaping, arch-backed gait used by animals like springbok and gazelles — to navigate a challenging terrain with gaps ranging from 14 to 30 cm. The study, led by the BioRobotics Laboratory in EPFL’s School of Engineering, offers new insights into why and how such gait transitions occur in animals.
    “Previous research has introduced energy efficiency and musculoskeletal injury avoidance as the two main explanations for gait transitions. More recently, biologists have argued that stability on flat terrain could be more important. But animal and robotic experiments have shown that these hypotheses are not always valid, especially on uneven ground,” says PhD student Milad Shafiee, first author on a paper published in Nature Communications.
    Shafiee and co-authors Guillaume Bellegarda and BioRobotics Lab head Auke Ijspeert were therefore interested in a new hypothesis for why gait transitions occur: viability, or fall avoidance. To test this hypothesis, they used DRL to train a quadruped robot to cross various terrains. On flat terrain, they found that different gaits showed different levels of robustness against random pushes, and that the robot switched from a walk to a trot to maintain viability, just as quadruped animals do when they accelerate. And when confronted with successive gaps in the experimental surface, the robot spontaneously switched from trotting to pronking to avoid falls. Moreover, viability was the only factor that was improved by such gait transitions.
    “We showed that on flat terrain and challenging discrete terrain, viability leads to the emergence of gait transitions, but that energy efficiency is not necessarily improved,” Shafiee explains. “It seems that energy efficiency, which was previously thought to be a driver of such transitions, may be more of a consequence. When an animal is navigating challenging terrain, it’s likely that its first priority is not falling, followed by energy efficiency.”
    A bio-inspired learning architecture
    To model locomotion control in their robot, the researchers considered the three interacting elements that drive animal movement: the brain, the spinal cord, and sensory feedback from the body. They used DRL to train a neural network to imitate the spinal cord’s transmission of brain signals to the body as the robot crossed an experimental terrain. Then, the team assigned different weights to three possible learning goals: energy efficiency, force reduction, and viability. A series of computer simulations revealed that of these three goals, viability was the only one that prompted the robot to automatically — without instruction from the scientists — change its gait.
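The weighted-goal setup described above can be sketched as a scalar reward that trades off the three candidate objectives. The function and weights below are illustrative assumptions, not the paper's actual reward formulation:

```python
def reward(fallen, energy, peak_force,
           w_viable=1.0, w_energy=0.0, w_force=0.0):
    """Weighted locomotion reward over the study's three candidate goals.

    viability  : binary bonus for not falling this step
    energy     : energy expended this step (penalized)
    peak_force : peak contact force this step (penalized)
    The EPFL result is that only the viability term, not the penalties,
    drove the emergence of gait transitions during learning.
    """
    viable = 0.0 if fallen else 1.0
    return w_viable * viable - w_energy * energy - w_force * peak_force

# Same step outcome scored under different weightings
r_viability_only = reward(False, energy=5.0, peak_force=2.0)
r_with_energy = reward(False, energy=5.0, peak_force=2.0, w_energy=0.1)
```

In a DRL training loop this reward would be accumulated per episode; comparing policies trained under different weight settings is what isolates which objective causes the gait switch.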
    The team emphasizes that these observations represent the first learning-based locomotion framework in which gait transitions emerge spontaneously during the learning process, as well as the most dynamic crossing of such large consecutive gaps for a quadrupedal robot.
    “Our bio-inspired learning architecture demonstrated state-of-the-art quadruped robot agility on the challenging terrain,” Shafiee says.
    The researchers aim to expand on their work with additional experiments that place different types of robots in a wider variety of challenging environments. In addition to further elucidating animal locomotion, they hope that ultimately, their work will enable the more widespread use of robots for biological research, reducing reliance on animal models and the associated ethics concerns.

  •

    Scientists harness the wind as a tool to move objects

    Researchers have developed a technique to move objects around with a jet of wind. The new approach makes it possible to manipulate objects at a distance and could be integrated into robots to give machines ethereal fingers.
    ‘Airflow or wind is everywhere in our living environment, moving around objects like pollen, pathogens, droplets, seeds and leaves. Wind has also been actively used in industry and in our everyday lives — for example, in leaf blowers to clean leaves. But so far, we can’t control the direction the leaves move — we can only blow them together into a pile,’ says Professor Quan Zhou from Aalto University, who led the study.
    The first step in manipulating objects with wind is understanding how objects move in the airflow. To that end, a research team at Aalto University recorded thousands of sample movements in an artificially generated airflow and used these to build templates of how objects move on a surface in a jet of air.
    The team’s analysis showed that even though the airflow is generally chaotic, it’s still regular enough to move objects in a controlled way in different directions — even back towards the nozzle blowing out the air.
    ‘We designed an algorithm that controls the direction of the air nozzle with two motors. The jet of air is blown onto the surface from several meters away and to the side of the object, so the generated airflow field moves the object in the desired direction. The control algorithm repeatedly adjusts the direction of the air nozzle so that the airflow moves the objects along the desired trajectory,’ explains Zhou.
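The repeated-adjustment loop Zhou describes is a closed-loop controller. A minimal proportional sketch of the idea, assuming the object's position is observed each iteration (this is an illustration of feedback steering, not Aalto's actual algorithm):

```python
import math

def nozzle_update(obj_pos, target, nozzle_angle, gain=0.5):
    """One iteration of a feedback controller for the air nozzle.

    Steers the nozzle heading toward the bearing that would push the
    object from its observed position toward the target waypoint.
    """
    desired = math.atan2(target[1] - obj_pos[1], target[0] - obj_pos[0])
    # Angle error wrapped to [-pi, pi] so the nozzle turns the short way
    error = math.atan2(math.sin(desired - nozzle_angle),
                       math.cos(desired - nozzle_angle))
    return nozzle_angle + gain * error

angle = 0.0
for _ in range(20):  # heading converges toward the target bearing
    angle = nozzle_update((0.0, 0.0), (1.0, 1.0), angle)
```

The real problem is harder because the airflow field, not the nozzle heading, moves the object, which is why the Aalto team first built empirical templates of object motion in the jet.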
    ‘Our observations allowed us to use airflow to move objects along different paths, like circles or even complex letter-like paths. Our method is versatile in terms of the object’s shape and material — we can control the movement of objects of almost any shape,’ he continues.
    The technology still needs to be refined, but the researchers are optimistic about the untapped potential of their nature-inspired approach. It could be used to collect items that are scattered on a surface, such as pushing debris and waste to collection points. It could also be useful in complex processing tasks where physical contact is impossible, such as handling electrical circuits.
    ‘We believe that this technique could get even better with a deeper understanding of the characteristics of the airflow field, which is what we’re working on next,’ says Zhou.