More stories

  • Joyful music could be a game changer for virtual reality headaches

    Listening to music could reduce the dizziness, nausea and headaches virtual reality users might experience after using digital devices, research suggests.
    Cybersickness — a type of motion sickness from virtual reality experiences such as computer games — significantly reduces when joyful music is part of the immersive experience, the study found.
    The intensity of the nausea-related symptoms of cybersickness was also found to substantially decrease with both joyful and calming music.
    Researchers from the University of Edinburgh assessed the effects of music in a virtual reality environment among 39 people aged between 22 and 36.
    They conducted a series of tests to assess the effect cybersickness had on a participant’s memory skills, reading speed and reaction times.
    Participants were immersed in a virtual environment, where they experienced three roller coaster rides aimed at inducing cybersickness.

    Two of the three rides were accompanied by instrumental electronic music, of the kind people might hear from artists or streaming services, which a previous study had rated as either calming or joyful.
    One ride was completed in silence and the order of the rides was randomised across participants.
    After each ride, participants rated their cybersickness symptoms and performed some memory and reaction time tests.
    Eye-tracking tests were also conducted to measure their reading speed and pupil size.
    For comparison, the participants had completed the same tests before the rides.

    The study found that joyful music significantly decreased the overall cybersickness intensity. Joyful and calming music substantially decreased the intensity of nausea-related symptoms.
    Cybersickness among the participants was associated with a temporary reduction in verbal working memory test scores, and a decrease in pupil size. It also significantly slowed reaction times and reading speed.
    The researchers also found higher levels of gaming experience were associated with lower cybersickness. There was no difference in the intensity of the cybersickness between female and male participants with comparable gaming experience.
    Researchers say the findings demonstrate the potential of music to lessen cybersickness, shed light on how gaming experience is linked to cybersickness levels, and highlight the significant effects of cybersickness on thinking skills, reaction times, reading ability and pupil size.
    Dr Sarah E MacPherson, of the University of Edinburgh’s School of Philosophy, Psychology & Language Sciences, said: “Our study suggests calming or joyful music as a solution for cybersickness in immersive virtual reality. Virtual reality has been used in educational and clinical settings but the experience of cybersickness can temporarily impair someone’s thinking skills as well as slowing down their reaction times. The development of music as an intervention could encourage virtual reality to be used more extensively within educational and clinical settings.”
    The study was made possible through a collaboration between Psychology at the University of Edinburgh and the Inria Centre at the University of Rennes in France.

  • Self-folding origami machines powered by chemical reaction

    A Cornell-led collaboration harnessed chemical reactions to make microscale origami machines self-fold — freeing them from the liquids in which they usually function, so they can operate in dry environments and at room temperature.
    The approach could one day lead to the creation of a new fleet of tiny autonomous devices that can rapidly respond to their chemical environment.
    The group’s paper, “Gas-Phase Microactuation Using Kinetically Controlled Surface States of Ultrathin Catalytic Sheets,” published May 1 in Proceedings of the National Academy of Sciences. The paper’s co-lead authors are Nanqi Bao, Ph.D. ’22, and former postdoctoral researcher Qingkun Liu, Ph.D. ’22.
    The project was led by senior author Nicholas Abbott, a Tisch University Professor in the Robert F. Smith School of Chemical and Biomolecular Engineering in Cornell Engineering, along with Itai Cohen, professor of physics, and Paul McEuen, the John A. Newman Professor of Physical Science, both in the College of Arts and Sciences; and David Muller, the Samuel B. Eckert Professor of Engineering in Cornell Engineering.
    “There are quite good technologies for electrical to mechanical energy transduction, such as the electric motor, and the McEuen and Cohen groups have shown a strategy for doing that on the microscale, with their robots,” Abbott said. “But if you look for direct chemical to mechanical transductions, actually there are very few options.”
    Prior efforts depended on chemical reactions that could only occur in extreme conditions, such as at temperatures of several hundred degrees Celsius, and the reactions were often tediously slow — sometimes taking as long as 10 minutes — making the approach impractical for everyday technological applications.

    However, Abbott’s group found a loophole of sorts while reviewing data from a catalysis experiment: a small section of the chemical reaction pathway contained both slow and fast steps.
    “If you look at the response of the chemical actuator, it’s not that it goes from one state directly to the other state. It actually goes through an excursion into a bent state, a curvature, which is more extreme than either of the two end states,” Abbott said. “If you understand the elementary reaction steps in a catalytic pathway, you can go in and sort of surgically extract out the rapid steps. You can operate your chemical actuator around those rapid steps, and just ignore the rest of it.”
    The researchers needed the right material platform to leverage that rapid kinetic moment, so they turned to McEuen and Cohen, who had worked with Muller to develop ultrathin platinum sheets capped with titanium.
    The group also collaborated with theorists, led by professor Manos Mavrikakis at the University of Wisconsin, Madison, who used electronic structure calculations to dissect the chemical reaction that occurs when hydrogen — adsorbed to the material — is exposed to oxygen.
    The researchers were then able to exploit the crucial moment that the oxygen quickly strips the hydrogen, causing the atomically thin material to deform and bend, like a hinge.

    The system actuates at 600 milliseconds per cycle and can operate at 20 degrees Celsius — i.e., room temperature — in dry environments.
    “The result is quite generalizable,” Abbott said. “There are a lot of catalytic reactions which have been developed based on all sorts of species. So carbon monoxide, nitrogen oxides, ammonia: they’re all candidates to use as fuels for chemically driven actuators.”
    The team anticipates applying the technique to other catalytic metals, such as palladium and palladium-gold alloys. Eventually this work could lead to autonomous material systems in which the controlling circuitry and onboard computation are handled by the material’s response — for example, an autonomous chemical system that regulates flows based on chemical composition.
    “We are really excited because this work paves the way to microscale origami machines that work in gaseous environments,” Cohen said.
    Co-authors include postdoctoral researcher Michael Reynolds, M.S. ’17, Ph.D. ’21; doctoral student Wei Wang; Michael Cao ’14; and researchers at the University of Wisconsin, Madison.
    The research was supported by the Cornell Center for Materials Research (funded through the National Science Foundation’s MRSEC program), the Army Research Office, the NSF, the Air Force Office of Scientific Research and the Kavli Institute at Cornell for Nanoscale Science.
    The researchers made use of the Cornell NanoScale Facility, a member of the National Nanotechnology Coordinated Infrastructure supported by the NSF, and of National Energy Research Scientific Computing Center (NERSC) resources, which are supported by the U.S. Department of Energy’s Office of Science.
    The project is part of the Nanoscale Science and Microsystems Engineering (NEXT Nano) program, which is designed to push nanoscale science and microsystems engineering to the next level of design, function and integration.

  • Quantum entanglement of photons doubles microscope resolution

    Using a “spooky” phenomenon of quantum physics, Caltech researchers have discovered a way to double the resolution of light microscopes.
    In a paper appearing in the journal Nature Communications, a team led by Lihong Wang, Bren Professor of Medical Engineering and Electrical Engineering, demonstrates a leap forward in microscopy through what is known as quantum entanglement. Quantum entanglement is a phenomenon in which two particles are linked such that the state of one particle is tied to the state of the other, no matter how far apart they are. Albert Einstein famously referred to quantum entanglement as “spooky action at a distance” because it could not be explained by his theory of relativity.
    According to quantum theory, any type of particle can be entangled. In the case of Wang’s new microscopy technique, dubbed quantum microscopy by coincidence (QMC), the entangled particles are photons. Collectively, two entangled photons are known as a biphoton, and, importantly for Wang’s microscopy, they behave in some ways as a single particle that has double the momentum of a single photon.
    Since quantum mechanics says that all particles are also waves, and that the wavelength of a wave is inversely related to the momentum of the particle, particles with larger momenta have smaller wavelengths. So, because a biphoton has double the momentum of a photon, its wavelength is half that of the individual photons.
    This is key to how QMC works. A microscope can only image the features of an object whose minimum size is half the wavelength of light used by the microscope. Reducing the wavelength of that light means the microscope can see even smaller things, which results in increased resolution.
    Quantum entanglement is not the only way to reduce the wavelength of light being used in a microscope. Green light has a shorter wavelength than red light, for example, and purple light has a shorter wavelength than green light. But due to another quirk of quantum physics, light with shorter wavelengths carries more energy. So, once you get down to light with a wavelength small enough to image tiny things, the light carries so much energy that it will damage the items being imaged, especially living things such as cells. This is why ultraviolet (UV) light, which has a very short wavelength, gives you a sunburn.

    QMC gets around this limit by using biphotons that carry the lower energy of longer-wavelength photons while having the shorter wavelength of higher-energy photons.
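    Stated compactly, the reasoning of the last few paragraphs is as follows, with $h$ Planck’s constant, $c$ the speed of light, $p$ the photon momentum and $\lambda$ its wavelength:

$$
\lambda = \frac{h}{p}, \qquad E = \frac{hc}{\lambda}, \qquad p_{\mathrm{biphoton}} = 2p \;\Rightarrow\; \lambda_{\mathrm{biphoton}} = \frac{\lambda}{2}.
$$

    Because a microscope resolves features down to roughly half the illumination wavelength, the limit improves from $d_{\min} \approx \lambda/2$ to $\lambda_{\mathrm{biphoton}}/2 = \lambda/4$, while the energy delivered to the sample remains that of the two longer-wavelength photons rather than that of genuinely short-wavelength (UV) light.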
    “Cells don’t like UV light,” Wang says. “But if we can use 400-nanometer light to image the cell and achieve the effect of 200-nm light, which is UV, the cells will be happy, and we’re getting the resolution of UV.”
    To achieve that, Wang’s team built an optical apparatus that shines laser light into a special kind of crystal that converts some of the photons passing through it into biphotons. Even using this special crystal, the conversion is very rare and occurs in about one in a million photons. Using a series of mirrors, lenses, and prisms, each biphoton — which actually consists of two discrete photons — is split up and shuttled along two paths, so that one of the paired photons passes through the object being imaged and the other does not. The photon passing through the object is called the signal photon, and the one that does not is called the idler photon. These photons then continue along through more optics until they reach a detector connected to a computer that builds an image of the cell based on the information carried by the signal photon. Amazingly, the paired photons remain entangled as a biphoton behaving at half the wavelength despite the presence of the object and their separate pathways.
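    The article does not give implementation details, but the “coincidence” in QMC refers to registering the signal and idler photons of a pair together at the detector. As a loose illustration of that bookkeeping only, and not the researchers’ apparatus or code, the sketch below pairs detection timestamps that fall within an assumed coincidence window; all numbers and names are made up for the example.

```python
# Illustrative sketch only: pair signal- and idler-photon detection timestamps that
# arrive within a short coincidence window, ignoring unpaired background detections.
# The window size and the toy timestamps are assumptions, not values from the study.
import numpy as np

def coincidence_count(signal_times, idler_times, window=1e-9):
    """Count signal detections that have an idler detection within `window` seconds."""
    idler_times = np.sort(idler_times)
    idx = np.searchsorted(idler_times, signal_times)   # nearest idler index for each signal photon
    count = 0
    for t, i in zip(signal_times, idx):
        neighbors = idler_times[max(i - 1, 0): i + 1]  # idler just before and just after t
        if neighbors.size and np.min(np.abs(neighbors - t)) <= window:
            count += 1
    return count

# Usage with toy data: 200 paired detections with sub-nanosecond timing jitter.
rng = np.random.default_rng(0)
signal = np.sort(rng.uniform(0.0, 1e-3, 200))          # signal-photon timestamps (seconds)
idler = signal + rng.normal(0.0, 2e-10, signal.size)   # idler timestamps, jittered around the pairs
print(coincidence_count(signal, idler))
```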
    Wang’s lab was not the first to work on this kind of biphoton imaging, but it was the first to create a viable system using the concept. “We developed what we believe is a rigorous theory as well as a faster and more accurate entanglement-measurement method. We reached microscopic resolution and imaged cells,” Wang says.
    While there is no theoretical limit to the number of photons that can be entangled with each other, each additional photon would further increase the momentum of the resulting multiphoton while further decreasing its wavelength.
    Wang says future research could enable entanglement of even more photons, although he notes that each extra photon further reduces the probability of a successful entanglement, which, as mentioned above, is already as low as a one-in-a-million chance.
    The paper describing the work, “Quantum Microscopy of Cells at the Heisenberg Limit,” appears in the April 28 issue of Nature Communications. Co-authors are Zhe He and Yide Zhang, both postdoctoral scholar research associates in medical engineering; medical engineering graduate student Xin Tong (MS ’21); and Lei Li (PhD ’19), formerly a medical engineering postdoctoral scholar and now an assistant professor of electrical and computer engineering at Rice University.
    Funding for the research was provided by the Chan Zuckerberg Initiative and the National Institutes of Health.

  • Could wearables capture well-being?

    Applying machine learning models, a type of artificial intelligence (AI), to data collected passively from wearable devices can identify a patient’s degree of resilience and well-being, according to investigators at the Icahn School of Medicine at Mount Sinai in New York.
    The findings, reported in the May 2nd issue of JAMIA Open, support wearable devices, such as the Apple Watch®, as a way to monitor and assess psychological states remotely without requiring the completion of mental health questionnaires.
    The paper points out that resilience, or an individual’s ability to overcome difficulty, is an important stress mitigator, reduces morbidity, and improves chronic disease management.
    “Wearables provide a means to continually collect information about an individual’s physical state. Our results provide insight into the feasibility of assessing psychological characteristics from this passively collected data,” said first author Robert P. Hirten, MD, Clinical Director, Hasso Plattner Institute for Digital Health at Mount Sinai. “To our knowledge, this is the first study to evaluate whether resilience, a key mental health feature, can be evaluated from devices such as the Apple Watch.”
    Mental health disorders are common, accounting for 13 percent of the global burden of disease, and a quarter of the population experiences psychological illness at some point. Yet resources for evaluating these conditions remain limited, say the researchers.
    “There are wide disparities in access across geography and socioeconomic status, and the need for in-person assessment or the completion of validated mental health surveys is further limiting,” said senior author Zahi Fayad, PhD, Director of the BioMedical Engineering and Imaging Institute at Icahn Mount Sinai. “A better understanding of who is at psychological risk and an improved means of tracking the impact of psychological interventions is needed. The growth of digital technology presents an opportunity to improve access to mental health services for all people.”
    To determine if machine learning models could be trained to distinguish an individual’s degree of resilience and psychological well-being using the data from wearable devices, the Icahn Mount Sinai researchers analyzed data from the Warrior Watch Study. Leveraged for the current digital observational study, the data set comprised 329 health care workers enrolled at seven hospitals in New York City.

    Subjects wore an Apple Watch® Series 4 or 5 for the duration of their participation, which measured heart rate variability and resting heart rate throughout the follow-up period. Surveys measuring resilience, optimism and emotional support were collected at baseline. The collected metrics were found to be predictive in identifying resilience or well-being states. Although the Warrior Watch Study was not designed to evaluate this endpoint, the findings support further assessment of psychological characteristics from passively collected wearable data.
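    As a loose illustration of the kind of modeling described above, and not the study’s actual pipeline, the sketch below fits an off-the-shelf classifier to stand-in wearable features (summaries of heart rate variability and resting heart rate) against a binary resilience label; the features, labels and model choice are all assumptions.

```python
# Illustrative sketch only (not the Warrior Watch analysis): predict a binary resilience
# label from passively collected wearable summaries such as heart rate variability (HRV)
# and resting heart rate (RHR). Real data would replace the synthetic stand-ins below.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 329                          # cohort size reported in the article
X = rng.normal(size=(n_subjects, 4))      # stand-ins: mean HRV, HRV variability, mean RHR, RHR trend
y = rng.integers(0, 2, size=n_subjects)   # stand-in labels: high vs. low resilience from baseline surveys

clf = GradientBoostingClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # with real data, report a metric such as AUC
```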
    “We hope that this approach will enable us to bring psychological assessment and care to a larger population, who may not have access at this time,” said Micol Zweig, MPH, co-author of the paper and Associate Director of Clinical Research, Hasso Plattner Institute for Digital Health at Mount Sinai. “We also intend to evaluate this technique in other patient populations to further refine the algorithm and improve its applicability.”
    To that end, the research team plans to continue using wearable data to observe a range of physical and psychological disorders and diseases. The simultaneous development of sophisticated analytical tools, including artificial intelligence, say the investigators, can facilitate the analysis of data collected from these devices and apps to identify patterns associated with a given mental or physical disease condition.
    The paper is titled “A machine learning approach to determine resilience utilizing wearable device data: analysis of an observational cohort.”
    Additional co-authors are Matteo Danieletto, PhD, Maria Suprun, PhD, Eddye Golden, MPH, Sparshdeep Kaur, BBA, Drew Helmus, MPH, Anthony Biello, BA, Dennis Charney, MD, Laurie Keefer, PhD, Mayte Suarez-Farinas, PhD, and Girish N. Nadkarni, MD, all from the Icahn School of Medicine at Mount Sinai.
    Support for this study was provided by the Ehrenkranz Lab for Human Resilience, the BioMedical Engineering and Imaging Institute, the Hasso Plattner Institute for Digital Health, the Mount Sinai Clinical Intelligence Center, and the Dr. Henry D. Janowitz Division of Gastroenterology, all at Icahn Mount Sinai, and from the National Institutes of Health, grant number K23DK129835.

  • Sensor enables high-fidelity input from everyday objects, human body

    Couches, tables, sleeves and more can turn into a high-fidelity input device for computers using a new sensing system developed at the University of Michigan.
    The system repurposes technology from new bone-conduction microphones, known as Voice Pickup Units (VPUs), which detect only those acoustic waves that travel along the surface of objects. It works in noisy environments, along odd geometries such as toys and arms, and on soft fabrics such as clothing and furniture.
    Called SAWSense, for the surface acoustic waves it relies on, the system recognizes different inputs, such as taps, scratches and swipes, with 97% accuracy. In one demonstration, the team used a normal table to replace a laptop’s trackpad.
    “This technology will enable you to treat, for example, the whole surface of your body like an interactive surface,” said Yasha Iravantchi, U-M doctoral candidate in computer science and engineering. “If you put the device on your wrist, you can do gestures on your own skin. We have preliminary findings that demonstrate this is entirely feasible.”
    Taps, swipes and other gestures send acoustic waves along the surfaces of materials. The system then classifies these waves with machine learning to turn all touch into a robust set of inputs. The system was presented last week at the 2023 Conference on Human Factors in Computing Systems, where it received a best paper award.
    As more objects incorporate smart or connected technology, designers face a number of challenges in giving them intuitive input mechanisms. The result is often a clunky mix of input methods such as touch screens and mechanical or capacitive buttons, Iravantchi says. Touch screens may be too costly to enable gesture inputs across large surfaces like counters and refrigerators, while buttons only allow one kind of input at predefined locations.

    Past approaches to overcome these limitations have included the use of microphones and cameras for audio- and gesture-based inputs, but the authors say techniques like these have limited practicality in the real world.
    “When there’s a lot of background noise, or something comes between the user and the camera, audio and visual gesture inputs don’t work well,” Iravantchi said.
    To overcome these limitations, the sensors powering SAWSense are housed in a hermetically sealed chamber that completely blocks even very loud ambient noise. The only entryway is through a mass-spring system that conducts the surface-acoustic waves inside the housing without ever coming in contact with sounds in the surrounding environment. When combined with the team’s signal processing software, which generates features from the data before feeding it into the machine learning model, the system can record and classify the events along an object’s surface.
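    As a rough sketch of this kind of pipeline, and not the authors’ implementation, the example below turns a short window of contact-microphone signal into banded spectral features and feeds them to an off-the-shelf classifier; the sampling rate, window sizes, band count and event labels are assumptions.

```python
# Illustrative sketch only: classify surface-acoustic-wave events (e.g. tap, swipe, scratch)
# from a contact-microphone recording. Feature choices and parameters are assumptions,
# not the SAWSense authors' implementation.
import numpy as np
from scipy.signal import spectrogram
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 16_000  # assumed sampling rate of the contact microphone, in Hz

def saw_features(x, fs=FS, n_bands=32):
    """Summarize one event window as average log energy in n_bands frequency bands."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    bands = np.array_split(Sxx, n_bands, axis=0)              # group frequency bins into bands
    return np.array([np.log(b.mean() + 1e-12) for b in bands])

def train_event_classifier(windows, labels):
    """windows: list of 1-D signal snippets; labels: strings such as 'tap', 'swipe', 'scratch'."""
    X = np.stack([saw_features(w) for w in windows])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf

# Usage sketch:
# clf = train_event_classifier(training_windows, training_labels)
# print(clf.predict([saw_features(new_window)]))
```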
    “There are other ways you could detect vibrations or surface-acoustic waves, like piezoelectric sensors or accelerometers,” said Alanson Sample, U-M associate professor of electrical engineering and computer science, “but they can’t capture the broad range of frequencies that we need to tell the difference between a swipe and a scratch, for instance.”
    The high fidelity of the VPUs allows SAWSense to identify a wide range of activities on a surface beyond user touch events. For instance, a VPU on a kitchen countertop can detect chopping, stirring, blending or whisking, as well as identify electronic devices in use, such as a blender or microwave.

    “VPUs do a good job of sensing activities and events happening in a well-defined area,” Iravantchi said. “This allows the functionality that comes with a smart object without the privacy concerns of a standard microphone that senses the whole room, for example.”
    When multiple VPUs are used in combination, SAWSense could enable more specific and sensitive inputs, especially those that require a sense of space and distance like the keys on a keyboard or buttons on a remote.
    In addition, the researchers are exploring the use of VPUs for medical sensing, including picking up delicate noises such as the sounds of joints and connective tissues as they move. The high-fidelity audio data VPUs provide could enable real-time analytics about a person’s health, Sample says.
    The research is partially funded by Meta Platforms Inc.
    The team has applied for patent protection with the assistance of U-M Innovation Partnerships and is seeking partners to bring the technology to market.

  • Lithography-free photonic chip offers speed and accuracy for artificial intelligence

    Photonic chips have revolutionized data-heavy technologies. On their own or in concert with traditional electronic circuits, these laser-powered devices send and process information at the speed of light, making them a promising solution for artificial intelligence’s data-hungry applications.
    In addition to their incomparable speed, photonic circuits use significantly less energy than electronic ones. Electrons move relatively slowly through hardware, colliding with other particles and generating heat, while photons flow without losing energy, generating no heat at all. Unburdened by the energy loss inherent in electronics, integrated photonics are poised to play a leading role in sustainable computing.
    Photonics and electronics draw on separate areas of science and use distinct architectural structures. Both, however, rely on lithography to define their circuit elements and connect them sequentially. While photonic chips don’t make use of the transistors that populate electronic chips’ ever-shrinking and increasingly layered grooves, their complex lithographic patterning guides laser beams through a coherent circuit to form a photonic network that can perform computational algorithms.
    But now, for the first time, researchers at the University of Pennsylvania School of Engineering and Applied Science have created a photonic device that provides programmable on-chip information processing without lithography, offering the speed of photonics augmented by superior accuracy and flexibility for AI applications.
    Achieving unparalleled control of light, this device consists of spatially distributed optical gain and loss. Lasers cast light directly on a semiconductor wafer, without the need for defined lithographic pathways.
    Liang Feng, Professor in the Departments of Materials Science and Engineering (MSE) and Electrical Systems and Engineering (ESE), along with Ph.D. student Tianwei Wu (MSE) and postdoctoral fellows Zihe Gao and Marco Menarini (ESE), introduced the microchip in a recent study published in Nature Photonics.

    Silicon-based electronic systems have transformed the computational landscape, but they have clear limitations: they are slow at processing signals, they work through data serially rather than in parallel, and they can be miniaturized only to a certain extent. Photonics is one of the most promising alternatives because it can overcome all these shortcomings.
    “But photonic chips intended for machine learning applications face the obstacles of an intricate fabrication process where lithographic patterning is fixed, limited in reprogrammability, subject to error or damage and expensive,” says Feng. “By removing the need for lithography, we are creating a new paradigm. Our chip overcomes those obstacles and offers improved accuracy and ultimate reconfigurability given the elimination of all kinds of constraints from predefined features.”
    Without lithography, these chips become adaptable data-processing powerhouses. Because patterns are not pre-defined and etched in, the device is intrinsically free of defects. Perhaps more impressively, the lack of lithography renders the microchip impressively reprogrammable, able to tailor its laser-cast patterns for optimal performance, be the task simple (few inputs, small datasets) or complex (many inputs, large datasets).
    In other words, the intricacy or minimalism of the device is a sort of living thing, adaptable in ways no etched microchip could be.
    “What we have here is something incredibly simple,” says Wu. “We can build and use it very quickly. We can integrate it easily with classical electronics. And we can reprogram it, changing the laser patterns on the fly to achieve real-time reconfigurable computing for on-chip training of an AI network.”
    An unassuming slab of semiconductor, the device couldn’t be simpler. The manipulation of this slab’s material properties is the key to the research team’s breakthrough: projecting lasers into dynamically programmable patterns that reconfigure the computing functions of the photonic information processor.

    This ultimate reconfigurability is critical for real-time machine learning and AI.
    “The interesting part,” says Menarini, “is how we are controlling the light. Conventional photonic chips are technologies based on passive material, meaning its material scatters light, bouncing it back and forth. Our material is active. The beam of pumping light modifies the material such that when the signal beam arrives, it can release energy and increase the amplitude of signals.”
    “This active nature is the key to this science, and the solution required to achieve our lithography-free technology,” adds Gao. “We can use it to reroute optical signals and program optical information processing on-chip.”
    Feng compares the technology to an artistic tool, a pen for drawing pictures on a blank page.
    “What we have achieved is exactly the same: pumping light is our pen to draw the photonic computational network (the picture) on a piece of unpatterned semiconductor wafer (the blank page).”
    But unlike indelible lines of ink, these beams of light can be drawn and redrawn, their patterns tracing innumerable paths to the future.

  • Realistic simulated driving environment based on ‘crash-prone’ Michigan intersection

    The first statistically realistic roadway simulation has been developed by researchers at the University of Michigan. While it currently represents a particularly perilous roundabout, future work will expand it to include other driving situations for testing autonomous vehicle software.
    The simulation is a machine-learning model that trained on data collected at a roundabout on the south side of Ann Arbor, recognized as one of the most crash-prone intersections in the state of Michigan and conveniently just a few miles from the offices of the research team.
    Known as the Neural Naturalistic Driving Environment, or NeuralNDE, it turned that data into a simulation of what drivers experience every day. Virtual roadways like this are needed to ensure the safety of autonomous vehicle software before other cars, cyclists and pedestrians ever cross its path.
    “The NeuralNDE reproduces the driving environment and, more importantly, realistically simulates these safety-critical situations so we can evaluate the safety performance of autonomous vehicles,” said Henry Liu, U-M professor of civil engineering and director of Mcity, a U-M-led public-private mobility research partnership.
    Liu is also director of the Center for Connected and Automated Transportation and corresponding author of the study in Nature Communications.
    Safety-critical events, which require a driver to make split-second decisions and take action, don’t happen that often. Drivers can go many hours between events that force them to slam on the brakes or swerve to avoid a collision, and each event has its own unique circumstances.

    Together, these represent two bottlenecks in the effort to simulate our roadways, known as the “curse of rarity” and the “curse of dimensionality” respectively. The curse of dimensionality is caused by the complexity of the driving environment, which includes factors like pavement quality, the current weather conditions, and the different types of road users including pedestrians and bicyclists.
    To model it all, the team tried to see it all. They installed sensor systems on light poles at the State Street/Ellsworth Road roundabout that continuously collect data.
    “The reason that we chose that location is that roundabouts are a very challenging, urban driving scenario for autonomous vehicles. In a roundabout, drivers are required to spontaneously negotiate and cooperate with other drivers moving through the intersection. In addition, this particular roundabout experiences high traffic volume and is two lanes, which adds to its complexity,” said Xintao Yan, a Ph.D. student in civil and environmental engineering and first author of the study, who is advised by Liu.
    The NeuralNDE serves as a key component of the CCAT Safe AI Framework for Trustworthy Edge Scenario Tests, or SAFE TEST, a system developed by Liu’s team that uses artificial intelligence to reduce the testing miles required to ensure the safety of autonomous vehicles by 99.99%. It essentially breaks the “curse of rarity,” introducing safety-critical incidents a thousand times more frequently than they occur in real driving. The NeuralNDE is also critical to a project designed to enable the Mcity Test Facility to be used for remote testing of AV software.
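    A back-of-the-envelope illustration of what that reduction means (the per-mile event rate and the number of events needed below are assumed figures, not values from the study):

```python
# Rough illustration of a 99.99% reduction in required testing miles. The event rate and
# event count are assumptions chosen for the example, not figures from the study.
events_per_real_mile = 1 / 100_000        # assumed: one safety-critical event per 100,000 real miles
events_needed = 100                       # assumed: events needed for a meaningful safety estimate
real_miles_needed = events_needed / events_per_real_mile      # 10,000,000 miles of naturalistic driving
accelerated_miles_needed = real_miles_needed * (1 - 0.9999)   # 99.99% fewer miles in accelerated testing
print(f"{real_miles_needed:,.0f} real-world miles vs. {accelerated_miles_needed:,.0f} accelerated miles")
```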
    But unlike a fully virtual environment, these tests take place in mixed reality on closed test tracks such as the Mcity Test Facility and the American Center for Mobility in Ypsilanti, Michigan. In addition to the real conditions of the track, the autonomous vehicles also experience virtual drivers, cyclists and pedestrians behaving in both safe and dangerous ways. By testing these scenarios in a controlled environment, AV developers can fine-tune their systems to better handle all driving situations.
    The NeuralNDE is not only beneficial for AV developers but also for researchers studying human driver behavior. The simulation can interpret data on how drivers respond to different scenarios, which can help develop more functional road infrastructure.
    In 2021, the U-M Transportation Research Institute was awarded $9.95 million in funding by the U.S. Department of Transportation to expand the number of intersections equipped with these sensors to 21. This implementation will expand the capabilities of the NeuralNDE and provide real-time alerts to drivers with connected vehicles.
    The research was funded by Mcity, CCAT and the U-M Transportation Research Institute. Founded in 1965, UMTRI is a global leader in multidisciplinary research and a partner of choice for industry leaders, foundations and government agencies to advance safe, equitable and efficient transportation and mobility. CCAT is a regional university transportation research center that was recently awarded a $15 million, five-year renewal by the USDOT.

  • Brain activity decoder can reveal stories in people’s minds

    A new artificial intelligence system called a semantic decoder can translate a person’s brain activity — while listening to a story or silently imagining telling a story — into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.
    The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.
    Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is open to having their thoughts decoded, their listening to a new story or imagining telling a story allows the machine to generate corresponding text from brain activity alone.
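    The article does not spell out the decoding algorithm, but one way to picture generating continuous text from brain recordings is a search that keeps the candidate word sequences whose predicted brain responses best match the measured ones. The sketch below is purely conceptual: the toy vocabulary and the stand-in “language model,” “encoding model” and similarity function are placeholders so the example runs, not the researchers’ methods.

```python
# Conceptual sketch only: decode text by proposing candidate continuations and keeping
# those whose *predicted* brain response best matches the measured response. Every
# component below is a toy stand-in, not the researchers' models or data.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["she", "said", "leave", "me", "alone", "drive", "not", "yet"]   # toy vocabulary

def propose_continuations(text, k=3):
    return list(rng.choice(VOCAB, size=k, replace=False))    # stand-in for a language model

def predict_brain_response(text, dim=16):
    local = np.random.default_rng(abs(hash(text)) % (2**32))
    return local.normal(size=dim)                            # stand-in for an encoding model

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def decode(measured_responses, beam_width=3):
    beams = [("", 0.0)]                                      # (decoded text so far, cumulative score)
    for step in range(len(measured_responses)):
        candidates = []
        for text, score in beams:
            for word in propose_continuations(text):
                new_text = (text + " " + word).strip()
                s = score + similarity(predict_brain_response(new_text), measured_responses[step])
                candidates.append((new_text, s))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]

measured = rng.normal(size=(5, 16))    # stand-in for fMRI responses over 5 time windows
print(decode(measured))
```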
    “For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
    The result is not a word-for-word transcript. Instead, researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, when the decoder has been trained to monitor a participant’s brain activity, the machine produces text that closely (and sometimes precisely) matches the intended meanings of the original words.
    For example, in experiments, a participant listening to a speaker say, “I don’t have my driver’s license yet” had their thoughts translated as, “She has not even started to learn to drive yet.” Listening to the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!'” was decoded as, “Started to scream and cry, and then she just said, ‘I told you to leave me alone.'”
    Beginning with an earlier version of the paper that appeared as a preprint online, the researchers addressed questions about potential misuse of the technology. The paper describes how decoding worked only with cooperative participants who had participated willingly in training the decoder. Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.

    “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
    In addition to having participants listen or think about stories, the researchers asked subjects to watch four short, silent videos while in the scanner. The semantic decoder was able to use their brain activity to accurately describe certain events from the videos.
    The system currently is not practical for use outside the laboratory because of its reliance on the time needed on an fMRI machine. But the researchers think this work could transfer to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy (fNIRS).
    “fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” Huth said. “So, our exact kind of approach should translate to fNIRS,” although, he noted, the resolution with fNIRS would be lower.
    This work was supported by the Whitehall Foundation, the Alfred P. Sloan Foundation and the Burroughs Wellcome Fund.
    The study’s other co-authors are Amanda LeBel, a former research assistant in the Huth lab, and Shailee Jain, a computer science graduate student at UT Austin.
    Alexander Huth and Jerry Tang have filed a PCT patent application related to this work.