More stories

  •

    Cornell researchers build first ‘microwave brain’ on a chip

    Cornell University researchers have developed a low-power microchip they call a “microwave brain,” the first processor to compute on both ultrafast data signals and wireless communication signals by harnessing the physics of microwaves.
    Detailed today in the journal Nature Electronics, the processor is the first true microwave neural network and is fully integrated on a silicon microchip. It performs real-time frequency-domain computation for tasks like radio signal decoding, radar target tracking and digital data processing, all while consuming less than 200 milliwatts of power.
    “Because it’s able to distort in a programmable way across a wide band of frequencies instantaneously, it can be repurposed for several computing tasks,” said lead author Bal Govind, a doctoral student who conducted the research with Maxwell Anderson, also a doctoral student. “It bypasses a large number of signal processing steps that digital computers normally have to do.”
    That capability is enabled by the chip’s design as a neural network, a computer system modeled on the brain, using interconnected modes produced in tunable waveguides. This allows it to recognize patterns and learn from data. But unlike traditional neural networks that rely on digital operations and step-by-step instructions timed by a clock, this network uses analog, nonlinear behavior in the microwave regime, allowing it to handle data streams in the tens of gigahertz – much faster than most digital chips.
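    The article gives no implementation details, but the idea of a fixed, nonlinear frequency-domain transform feeding a simple trained readout can be sketched in software. The toy below is a reservoir-computing-style analogy, not the Cornell chip: the signal classes, the random mixing matrix and every parameter are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(label, n=256):
    # Two toy signal classes: a low-frequency vs a high-frequency tone,
    # both buried in noise (purely synthetic stand-ins for radio signals)
    f = 10.0 if label == 0 else 45.0
    t = np.arange(n)
    return np.sin(2 * np.pi * f * t / n) + 0.3 * rng.standard_normal(n)

def features(x, W):
    # A fixed, random, nonlinear mix of the signal's spectrum -- a crude
    # software analogy for a programmable spread of frequency behaviors
    spec = np.abs(np.fft.rfft(x))          # 129 frequency bins for n=256
    return np.tanh(W @ spec)

W = 0.1 * rng.standard_normal((64, 129))   # fixed transform, never trained
X = np.array([features(make_signal(i % 2), W) for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# Train only a linear readout on the first 300 examples (least squares)
w = np.linalg.lstsq(X[:300], 2.0 * y[:300] - 1.0, rcond=None)[0]
accuracy = ((X[300:] @ w > 0).astype(int) == y[300:]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

    Only the final linear readout is trained here; the random nonlinear mixing plays the role of the chip's fixed analog frequency response.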
    “Bal threw away a lot of conventional circuit design to achieve this,” said Alyssa Apsel, professor of engineering, who was co-senior author with Peter McMahon, associate professor of applied and engineering physics. “Instead of trying to mimic the structure of digital neural networks exactly, he created something that looks more like a controlled mush of frequency behaviors that can ultimately give you high-performance computation.”
    The chip can perform both low-level logic functions and complex tasks like identifying bit sequences or counting binary values in high-speed data. It achieved accuracy at or above 88% on multiple classification tasks involving wireless signal types, comparable to digital neural networks but with a fraction of the power and size.
    “In traditional digital systems, as tasks get more complex, you need more circuitry, more power and more error correction to maintain accuracy,” Govind said. “But with our probabilistic approach, we’re able to maintain high accuracy on both simple and complex computations, without that added overhead.”
    The chip’s extreme sensitivity to inputs makes it well-suited for hardware security applications like sensing anomalies in wireless communications across multiple bands of microwave frequencies, according to the researchers.

    “We also think that if we reduce the power consumption more, we can deploy it to applications like edge computing,” Apsel said. “You could deploy it on a smartwatch or a cellphone and build native models on your smart device instead of having to depend on a cloud server for everything.”
    Though the chip is still experimental, the researchers are optimistic about its scalability. They are experimenting with ways to improve its accuracy and integrate it into existing microwave and digital processing platforms.
    The work emerged from an exploratory effort within a larger project supported by the Defense Advanced Research Projects Agency and the Cornell NanoScale Science and Technology Facility, which is funded in part by the National Science Foundation.

  •

    AI finds hidden safe zones inside a fusion reactor

    A public-private partnership between Commonwealth Fusion Systems (CFS), the U.S. Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL) and Oak Ridge National Laboratory has led to a new artificial intelligence (AI) approach that is faster at finding what’s known as “magnetic shadows” in a fusion vessel: safe havens protected from the intense heat of the plasma.
    Known as HEAT-ML, the new AI could lay the foundation for software that significantly speeds up the design of future fusion systems. Such software could also enable good decision-making during fusion operations by adjusting the plasma so that potential problems are thwarted before they start.
    “This research shows that you can take an existing code and create an AI surrogate that will speed up your ability to get useful answers, and it opens up interesting avenues in terms of control and scenario planning,” said Michael Churchill, co-author of a paper in Fusion Engineering and Design about HEAT-ML and head of digital engineering at PPPL.
    Fusion, the reaction that fuels the sun and stars, could provide potentially limitless amounts of electricity on Earth. To harness it, researchers need to overcome key scientific and engineering challenges. One such challenge is handling the intense heat coming from the plasma, which reaches temperatures hotter than the sun’s core when confined using magnetic fields in a fusion vessel known as a tokamak. Speeding up the calculations that predict where this heat will hit and what parts of the tokamak will be safe in the shadows of other parts is key to bringing fusion power to the grid.
    “The plasma-facing components of the tokamak might come in contact with the plasma, which is very hot and can melt or damage these elements,” said Doménica Corona Rivera, an associate research physicist at PPPL and first author on the paper on HEAT-ML. “The worst thing that can happen is that you would have to stop operations.”
    HEAT-ML was specifically made to simulate a small part of SPARC: a tokamak currently under construction by CFS. The Massachusetts company hopes to demonstrate net energy gain by 2027, meaning SPARC would generate more energy than it consumes.

    Simulating how heat impacts SPARC’s interior is central to this goal and a big computing challenge. To break the challenge down into something manageable, the team focused on a section of SPARC where the most intense plasma heat exhaust intersects with the material wall. This section, comprising 15 tiles near the bottom of the machine, is the part of the exhaust system that will be subjected to the most heat.
    To create such a simulation, researchers generate what they call shadow masks. Shadow masks are 3D maps of magnetic shadows, which are specific areas on the surfaces of a fusion system’s internal components that are shielded from direct heat. The location of these shadows depends on the shape of the parts inside the tokamak and how they interact with the magnetic field lines that confine the plasma.
    Creating simulations to optimize the way fusion systems operate
    Originally, an open-source computer program called HEAT, or the Heat flux Engineering Analysis Toolkit, calculated these shadow masks. HEAT was created by CFS Manager Tom Looby during his doctoral work with Matt Reinke, now leader of the SPARC Diagnostic Team, and was first applied on the exhaust system for PPPL’s National Spherical Torus Experiment-Upgrade machine.
    HEAT calculates a shadow mask by tracing magnetic field lines from the surface of a component to see if a line intersects other internal parts of the tokamak. If it does, that region is marked as “shadowed.” However, tracing these lines and finding where they intersect the detailed 3D machine geometry was a significant bottleneck in the process: a single simulation could take around 30 minutes, and even longer for some complex geometries.
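    The field-line tracing and intersection test described above can be illustrated with a deliberately simplified 2D sketch: straight lines stand in for curved field lines and a single segment stands in for SPARC's 3D geometry. All names and numbers here are invented for illustration.

```python
import numpy as np

def shadow_mask(points, direction, occluder):
    """Mark each surface point as shadowed if the straight 'field line'
    leaving it in `direction` hits the occluding segment. (Real field
    lines curve and the geometry is 3D; this 2D toy keeps only the idea.)"""
    (x0, y0), (x1, y1) = occluder
    dx, dy = direction
    ex, ey = x1 - x0, y1 - y0
    mask = []
    for px, py in points:
        # Solve p + t*d = q0 + s*e for ray parameter t and segment
        # parameter s; shadowed when t >= 0 and 0 <= s <= 1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:        # ray parallel to the segment
            mask.append(False)
            continue
        t = ((x0 - px) * ey - (y0 - py) * ex) / denom
        s = ((x0 - px) * dy - (y0 - py) * dx) / denom
        mask.append(t >= 0 and 0.0 <= s <= 1.0)
    return np.array(mask)

# Surface points along the floor; field lines go straight up
points = [(x, 0.0) for x in np.linspace(0.0, 10.0, 21)]
occluder = ((3.0, 1.0), (6.0, 1.0))   # a "tile" hanging above x in [3, 6]
mask = shadow_mask(points, (0.0, 1.0), occluder)
print(mask.sum(), "of", len(points), "points shadowed")
```

    The original code performs this kind of geometric test at scale over detailed 3D machine geometry, which is what made the calculation slow; the AI surrogate learns to predict the resulting mask directly.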
    HEAT-ML overcomes this bottleneck, accelerating the calculations to a few milliseconds. It uses a deep neural network: a type of AI that has hidden layers of mathematical operations and parameters that it applies to the data to learn how to do a specific task by looking for patterns. HEAT-ML’s deep neural network was trained using a database of approximately 1,000 SPARC simulations from HEAT to learn how to calculate shadow masks.
    HEAT-ML is currently tied to the specific design of SPARC’s exhaust system; it only works for that small part of that particular tokamak and is an optional setting in the HEAT code. However, the research team hopes to expand its capabilities to generalize the calculation of shadow masks for exhaust systems of any shape and size, as well as the rest of the plasma-facing components inside a tokamak.
    DOE supported this work under contracts DE-AC02-09CH11466 and DE-AC05-00OR22725, and it also received support from CFS.

  •

    Tiny “talking” robots form shape-shifting swarms that heal themselves

    Animals like bats, whales and insects have long used acoustic signals for communication and navigation. Now, an international team of scientists has taken a page from nature’s playbook to model micro-sized robots that use sound waves to coordinate into large swarms that exhibit intelligent-like behavior. The robot groups could one day carry out complex tasks like exploring disaster zones, cleaning up pollution, or performing medical treatments from inside the body, according to team lead Igor Aronson, Huck Chair Professor of Biomedical Engineering, Chemistry, and Mathematics at Penn State.
    “Picture swarms of bees or midges,” Aronson said. “They move, that creates sound, and the sound keeps them cohesive, many individuals acting as one.”
    The researchers published their work on August 12 in the journal Physical Review X.
    Since the miniature, sound-broadcasting swarms of micromachines are self-organizing, they can navigate tight spaces and even re-form themselves if deformed. The swarms’ collective — or emergent — intelligence could one day be harnessed to carry out tasks like cleaning up pollution in contaminated environments, Aronson explained.
    Beyond the environment, the robot swarms could potentially work inside the body, delivering drugs directly to a problem area, for example. Their collective sensing also helps in detecting changes in surroundings, and their ability to “self-heal” means they can keep functioning as a collective unit even after breaking apart, which could be especially useful for threat detection and sensor applications, Aronson said.
    “This represents a significant leap toward creating smarter, more resilient and, ultimately, more useful microrobots with minimal complexity that could tackle some of our world’s toughest problems,” he said. “The insights from this research are crucial for designing the next generation of microrobots, capable of performing complex tasks and responding to external cues in challenging environments.”
    For the study, the team developed a computer model to track the movements of tiny robots, each equipped with an acoustic emitter and a detector. They found that acoustic communication allowed the individual robotic agents to work together seamlessly, adapting their shape and behavior to their environment, much like a school of fish or a flock of birds.

    While the robots in the paper were computational agents within a theoretical, agent-based model rather than manufactured physical devices, the simulations showed an emergence of collective intelligence that would likely appear in any experimental study with the same design, Aronson said.
    “We never expected our models to show such a high level of cohesion and intelligence from such simple robots,” Aronson said. “These are very simple electronic circuits. Each robot can move along in some direction, has a motor, a tiny microphone, speaker and an oscillator. That’s it, but nonetheless it’s capable of collective intelligence. It synchronizes its own oscillator to the frequency of the swarm’s acoustic field and migrates toward the strongest signal.”
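    A minimal agent-based sketch of the "migrate toward the strongest signal" rule can reproduce the cohesion Aronson describes. Everything below, including the 1/(1+r) amplitude decay, the step sizes and the agent count, is an invented toy, not the published model, which also includes oscillator synchronization.

```python
import numpy as np

rng = np.random.default_rng(1)

def field(pos, at):
    # Summed acoustic amplitude at point `at` from all emitters, with a
    # simple 1/(1+r) distance decay (an assumption of this toy model)
    r = np.linalg.norm(pos - at, axis=1)
    r = r[r > 1e-9]                     # exclude a coincident listener
    return np.sum(1.0 / (1.0 + r))

def step(pos, eps=0.1, speed=0.05):
    new = pos.copy()
    for i in range(len(pos)):
        # Finite-difference gradient of the acoustic field, then move a
        # fixed step toward the strongest signal
        g = np.array([
            field(pos, pos[i] + [eps, 0]) - field(pos, pos[i] - [eps, 0]),
            field(pos, pos[i] + [0, eps]) - field(pos, pos[i] - [0, eps]),
        ])
        n = np.linalg.norm(g)
        if n > 1e-9:
            new[i] = pos[i] + speed * g / n
    return new

pos = rng.uniform(-1, 1, size=(30, 2))  # 30 agents scattered at random
spread0 = pos.std()
for _ in range(200):
    pos = step(pos)
spread = pos.std()
print(f"spread: {spread0:.3f} -> {spread:.3f}")
```

    Because each agent simply climbs the summed-amplitude gradient, the initially scattered group contracts into one cohesive cluster, a crude version of the swarming the model exhibits.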
    The discovery marks a new milestone for a budding field called active matter, the study of the collective behavior of self-propelled microscopic biological and synthetic agents, from swarms of bacteria or living cells to microrobots. It shows for the first time that sound waves can function as a means of controlling the micro-sized robots, Aronson explained. Up until now, active matter particles have been controlled predominantly through chemical signaling.
    “Acoustic waves work much better for communication than chemical signaling,” Aronson said. “Sound waves propagate faster and farther almost without loss of energy — and the design is much simpler. The robots effectively ‘hear’ and ‘find’ each other, leading to collective self-organization. Each element is very simple. The collective intelligence and functionality arise from minimal ingredients and simple acoustic communication.”
    The other authors on the paper are Alexander Ziepke, Ivan Maryshev and Erwin Frey of the Ludwig Maximilian University of Munich. The John Templeton Foundation funded the research.

  •

    Why AI emails can quietly destroy trust at work

    With over 75% of professionals using AI in their daily work, writing and editing messages with tools like ChatGPT, Gemini, Copilot or Claude has become a commonplace practice. While generative AI tools are seen to make writing easier, are they effective for communicating between managers and employees?
    A new study of 1,100 professionals reveals a critical paradox in workplace communications: AI tools can make managers’ emails more professional, but regular use can undermine trust between them and their employees.
    “We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., a researcher at the University of Florida’s Warrington College of Business and study co-author. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance.”
    In the study published in the International Journal of Business Communication, Coman and his co-author, Peter Cardon, Ph.D., of the University of Southern California, surveyed professionals about how they viewed emails that they were told were written with low, medium and high AI assistance. Survey participants were asked to evaluate different AI-written versions of a congratulatory message on both their perception of the message content and their perception of the sender.
    While AI-assisted writing was generally seen as efficient, effective, and professional, Coman and Cardon found a “perception gap” in messages that were written by managers versus those written by employees.
    “When people evaluate their own use of AI, they tend to rate their use similarly across low, medium and high levels of assistance,” Coman explained. “However, when rating others’ use, magnitude becomes important. Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors.”
    While low levels of AI help, like grammar or editing, were generally acceptable, higher levels of assistance triggered negative perceptions. The perception gap is especially significant when employees perceive higher levels of AI writing, bringing into question the authorship, integrity, caring and competency of their manager.

    The impact on trust was substantial: Only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages. Similarly, while 95% found low-AI supervisor messages professional, this dropped to 69% to 73% when supervisors relied heavily on AI tools.
    The findings reveal employees can often detect AI-generated content and interpret its use as laziness or lack of caring. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities.
    “In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust.
    The study suggests managers should carefully consider message type, level of AI assistance and relational context before using AI in their writing. While AI may be appropriate and professionally received for informational or routine communications, like meeting reminders or factual announcements, relationship-oriented messages requiring empathy, praise, congratulations, motivation or personal feedback are better handled with minimal technological intervention.

  •

    Tiny gold “super atoms” could spark a quantum revolution

    The efficiency of quantum computers, sensors and other applications often relies on the properties of electrons, including how they are spinning. One of the most accurate systems for high performance quantum applications relies on tapping into the spin properties of electrons of atoms trapped in a gas, but these systems are difficult to scale up for use in larger quantum devices like quantum computers. Now, a team of researchers from Penn State and Colorado State has demonstrated how a gold cluster can mimic these gaseous, trapped atoms, allowing scientists to take advantage of these spin properties in a system that can be easily scaled up.
    “For the first time, we show that gold nanoclusters have the same key spin properties as the current state-of-the-art methods for quantum information systems,” said Ken Knappenberger, department head and professor of chemistry in the Penn State Eberly College of Science and leader of the research team. “Excitingly, we can also manipulate an important property called spin polarization in these clusters, which is usually fixed in a material. These clusters can be easily synthesized in relatively large quantities, making this work a promising proof-of-concept that gold clusters could be used to support a variety of quantum applications.”
    Two papers describing the gold clusters and confirming their spin properties appeared in ACS Central Science and The Journal of Physical Chemistry Letters.
    “An electron’s spin not only influences important chemical reactions, but also quantum applications like computation and sensing,” said Nate Smith, graduate student in chemistry in the Penn State Eberly College of Science and first author of one of the papers. “The direction an electron spins and its alignment with respect to other electrons in the system can directly impact the accuracy and longevity of quantum information systems.”
    Much like the Earth spins around its axis, which is tilted with respect to the sun, an electron can spin around its axis, which can be tilted with respect to its nucleus. But unlike Earth, an electron can spin clockwise or counterclockwise. When many electrons in a material are spinning in the same direction and their tilts are aligned, the electrons are considered correlated, and the material is said to have a high degree of spin polarization.
    “Materials with electrons that are highly correlated, with a high degree of spin polarization, can maintain this correlation for a much longer time, and thus remain accurate for much longer,” Smith said.
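    The degrees of spin polarization quoted later in the article are percentages; the standard definition, which the text does not spell out, is the normalized imbalance between spin-up and spin-down electron populations:

```latex
P = \frac{N_{\uparrow} - N_{\downarrow}}{N_{\uparrow} + N_{\downarrow}}
```

    On this convention, the 7% and 40% values reported for the two clusters correspond to a small versus a substantial excess of one spin orientation.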
    The current state-of-the-art system for high accuracy and low error in quantum information systems involves trapped atomic ions — atoms with an electric charge — in a gaseous state. This system allows electrons to be excited to different energy levels, called Rydberg states, which have very specific spin polarizations that can last for a long period of time. It also allows for the superposition of electrons, with electrons existing in multiple states simultaneously until they are measured, which is a key property for quantum systems.

    “These trapped gaseous ions are by nature dilute, which makes them very difficult to scale up,” Knappenberger said. “The condensed phase required for a solid material, by definition, packs atoms together, losing that dilute nature. So, scaling up provides all the right electronic ingredients, but these systems become very sensitive to interference from the environment. The environment basically scrambles all the information that you encoded into the system, so the rate of error becomes very high. In this study, we found that gold clusters can mimic all the best properties of the trapped gaseous ions with the benefit of scalability.”
    Scientists have heavily studied gold nanostructures for their potential use in optical technology, sensing, therapeutics and to speed up chemical reactions, but less is known about their magnetic and spin-dependent properties. In the current studies, the researchers specifically explored monolayer-protected clusters, which have a core of gold and are surrounded by other molecules called ligands. The researchers can precisely control the construction of these clusters and can synthesize relatively large amounts at one time.
    “These clusters are referred to as super atoms, because their electronic character is like that of an atom, and now we know their spin properties are also similar,” Smith said. “We identified 19 distinguishable and unique Rydberg-like spin-polarized states that mimic the superpositions that we could do in the trapped, gas-phase dilute ions. This means the clusters have the key properties needed to carry out spin-based operations.”
    The researchers determined the spin polarization of the gold clusters using a method similar to that used with traditional atoms. While one type of gold cluster had 7% spin polarization, a cluster with a different ligand approached 40% spin polarization, which Knappenberger said is competitive with some of the leading two-dimensional quantum materials.
    “This tells us that the spin properties of the electron are intimately related to the vibrations of the ligands,” Knappenberger said. “Traditionally, quantum materials have a fixed value of spin polarization that cannot be significantly changed, but our results suggest we can modify the ligand of these gold clusters to tune this property widely.”
    The research team plans to explore how different structures within the ligands impact spin polarization and how they could be manipulated to fine tune spin properties.
    “The quantum field is generally dominated by researchers in physics and materials science, and here we see the opportunity for chemists to use our synthesis skills to design materials with tunable results,” Knappenberger said. “This is a new frontier in quantum information science.”
    In addition to Smith and Knappenberger, the research team includes Juniper Foxley, graduate student in chemistry at Penn State; Patrick Herbert, who earned a doctoral degree in chemistry at Penn State in 2019; Jane Knappenberger, researcher in the Penn State Eberly College of Science; as well as Marcus Tofanelli and Christopher Ackerson at Colorado State.
    Funding from the Air Force Office of Scientific Research and the U.S. National Science Foundation supported this research.

  •

    New “evolution engine” creates super-proteins 100,000x faster

    In medicine and biotechnology, the ability to evolve proteins with new or improved functions is crucial, but current methods are often slow and laborious. Now, Scripps Research scientists have developed a synthetic biology platform that accelerates evolution itself — enabling researchers to evolve proteins with useful, new properties thousands of times faster than nature. The system, named T7-ORACLE, was described in Science on August 7, 2025, and represents a breakthrough in how researchers can engineer therapeutic proteins for cancer, neurodegeneration and essentially any other disease area.
    “This is like giving evolution a fast-forward button,” says co-senior author Pete Schultz, the President and CEO of Scripps Research, where he also holds the L.S. “Sam” Skaggs Presidential Chair. “You can now evolve proteins continuously and precisely inside cells without damaging the cell’s genome or requiring labor-intensive steps.”
    Directed evolution is a laboratory process that involves introducing mutations and selecting variants with improved function over multiple cycles. It’s used to tailor proteins with desired properties, such as highly selective, high-affinity antibodies, enzymes with new specificities or catalytic properties, or to investigate the emergence of resistance mutations in drug targets. However, traditional methods often require repeated rounds of DNA manipulation and testing with each round taking a week or more. Systems for continuous evolution — where proteins evolve inside living cells without manual intervention — aim to streamline this process by enabling simultaneous mutation and selection with each round of cell division (roughly 20 minutes for bacteria). But existing approaches have been limited by technical complexity or modest mutation rates.
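    The round-by-round logic of directed evolution, and why mutation rate matters, can be caricatured in a few lines. In the toy below, bitstrings stand in for genes and truncation selection stands in for screening; the rates, sizes and round counts are arbitrary choices for illustration, not T7-ORACLE's actual parameters.

```python
import random

random.seed(0)
LENGTH = 100
TARGET = [1] * LENGTH                  # toy "optimal" gene

def fitness(g):
    # Number of positions matching the target sequence
    return sum(a == b for a, b in zip(g, TARGET))

def evolve(mut_rate, rounds=30, pop_size=50):
    # One round = mutate every genome, then keep (and duplicate) the
    # best half -- a cartoon of repeated mutate-and-select cycles
    pop = [[0] * LENGTH for _ in range(pop_size)]
    for _ in range(rounds):
        pop = [[b ^ (random.random() < mut_rate) for b in g] for g in pop]
        pop.sort(key=fitness, reverse=True)
        pop = pop[: pop_size // 2] * 2
    return fitness(pop[0])

slow = evolve(mut_rate=0.001)   # roughly "natural" per-site error rates
fast = evolve(mut_rate=0.05)    # a hypermutating replicator
print(f"best fitness after 30 rounds: slow={slow}, fast={fast}")
```

    The hypermutating population reaches a good variant in far fewer rounds, which is the practical payoff of raising the mutation rate on the target gene alone.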
    T7-ORACLE circumvents these bottlenecks by engineering E. coli bacteria — a standard model organism in molecular biology — to host a second, artificial DNA replication system derived from bacteriophage T7, a virus that infects bacteria and has been widely studied for its simple, efficient replication system. T7-ORACLE enables continuous hypermutation and accelerated evolution of biomacromolecules, and is designed to be broadly applicable to many protein targets and biological challenges. Conceptually, T7-ORACLE builds on and extends efforts on existing orthogonal replication systems — meaning they operate separately from the cell’s own machinery — such as OrthoRep in Saccharomyces cerevisiae (baker’s yeast) and EcORep in E. coli. In comparison to these systems, T7-ORACLE benefits from the combination of high mutagenesis, fast growth, high transformation efficiency, and the ease with which both the E. coli host and the circular replicon plasmid can be integrated into standard molecular biology workflows.
    The T7-ORACLE orthogonal system targets only plasmid DNA (small, circular pieces of genetic material), leaving the host cell’s genome untouched. By engineering T7 DNA polymerase (a viral enzyme that replicates DNA) to be error-prone, the researchers introduced mutations into target genes at a rate 100,000 times higher than normal without damaging the host cells.
    “This system represents a major advance in continuous evolution,” says co-senior author Christian Diercks, an assistant professor of chemistry at Scripps Research. “Instead of one round of evolution per week, you get a round each time the cell divides — so it really accelerates the process.”
    To demonstrate the power of T7-ORACLE, the research team inserted a common antibiotic resistance gene, TEM-1 β-lactamase, into the system and exposed the E. coli cells to escalating doses of various antibiotics. In less than a week, the system evolved versions of the enzyme that could resist antibiotic levels up to 5,000 times higher than the original. This proof-of-concept demonstrated not only T7-ORACLE’s speed and precision, but also its real-world relevance by replicating how resistance develops in response to antibiotics.

    “The surprising part was how closely the mutations we saw matched real-world resistance mutations found in clinical settings,” notes Diercks. “In some cases, we saw new combinations that worked even better than those you would see in a clinic.”
    But Diercks emphasizes that the study isn’t focused on antibiotic resistance per se.
    “This isn’t a paper about TEM-1 β-lactamase,” he explains. “That gene was just a well-characterized benchmark to show how the system works. What matters is that we can now evolve virtually any protein, like cancer drug targets and therapeutic enzymes, in days instead of months.”
    The broader potential of T7-ORACLE lies in its adaptability as a platform for protein engineering. Although the system is built into E. coli, the bacterium serves primarily as a vessel for continuous evolution. Scientists can insert genes from humans, viruses or other sources into plasmids, which are then introduced into E. coli cells. T7-ORACLE mutates these genes, generating variant proteins that can be screened or selected for improved function. Because E. coli is easy to grow and widely used in labs, it provides a convenient, scalable system for evolving virtually any protein of interest.
    This could help scientists more rapidly evolve antibodies to target specific cancers, evolve more effective therapeutic enzymes, and design proteases that target proteins involved in cancer and neurodegenerative disease.
    “What’s exciting is that it’s not limited to one disease or one kind of protein,” says Diercks. “Because the system is customizable, you can drop in any gene and evolve it toward whatever function you need.”
    Moreover, T7-ORACLE works with standard E. coli cultures and widely used lab workflows, avoiding the complex protocols required by other continuous evolution systems.

    “The main thing that sets this apart is how easy it is to implement,” adds Diercks. “There’s no specialized equipment or expertise required. If you already work with E. coli, you can probably use this system with minimal adjustments.”
    T7-ORACLE reflects Schultz’s broader goal: to rebuild key biological processes — such as DNA replication, RNA transcription and protein translation — so they function independently of the host cell. This separation allows scientists to reprogram these processes without disrupting normal cellular activity. By decoupling fundamental processes from the genome, tools like T7-ORACLE help advance synthetic biology.
    “In the future, we’re interested in using this system to evolve polymerases that can replicate entirely unnatural nucleic acids: synthetic molecules that resemble DNA and RNA but with novel chemical properties,” says Diercks. “That would open up possibilities in synthetic genomics that we’re just beginning to explore.”
    Currently, the research team is focused on evolving human-derived enzymes for therapeutic use, and on tailoring proteases to recognize specific cancer-related protein sequences.
    “The T7-ORACLE approach merges the best of both worlds,” says Schultz. “We can now combine rational protein design with continuous evolution to discover functional molecules more efficiently than ever.”
    In addition to Diercks and Schultz, authors of the study, “An orthogonal T7 replisome for continuous hypermutation and accelerated evolution in E. coli,” are Philipp Sondermann, Cynthia Rong, Thomas G. Gillis, Yahui Ban, Celine Wang and David A. Dik of Scripps Research.
    This work was supported by funding from the National Institutes of Health (grant RGM145323A).

  •

    Pain relief without pills? VR nature scenes trigger the brain’s healing switch

    Immersion in virtual reality (VR) nature scenes helped relieve symptoms that are often seen in people living with long-term pain, with those who felt more present experiencing the strongest effects.
    A new study led by the University of Exeter, published in the journal Pain, tested the impact of immersive 360-degree nature films delivered using VR compared with 2D video images in reducing the experience of pain, finding VR almost twice as effective.
    Long-term (chronic) pain typically lasts more than three months and is particularly difficult to treat. The researchers simulated this type of pain in healthy participants, finding that nature VR had an effect similar to that of painkillers, which endured for at least five minutes after the VR experience had ended.
    Dr Sam Hughes, Senior Lecturer in Pain Neuroscience at the University of Exeter, led the study. He said: “We’ve seen a growing body of evidence showing that exposure to nature can help reduce short-term, everyday pain, but there has been less research into how this might work for people living with chronic or longer-term pain. Also, not everyone is able to get out for walks in nature, particularly those living with long-term health conditions like chronic pain. Our study is the first to look at the effect of prolonged exposure to a virtual reality nature scene on symptoms seen during long-term pain sensitivity. Our results suggest that immersive nature experiences can reduce the development of this pain sensitivity through an enhanced sense of presence and through harnessing the brain’s in-built pain suppression systems.”
    The study, which was funded by the Academy of Medical Sciences, involved 29 healthy participants who were shown two types of nature scene after having pain delivered to the forearm using electric shocks. On the first visit, the researchers measured the changes in pain that occurred over a 50-minute period following the electric shocks, showing how the healthy participants developed sensitivity to sharp pricking stimuli in the absence of any nature scenes. The results showed that the participants developed a type of sensitivity that closely resembles that seen in people living with nerve pain — which occurs due to changes in how pain signals are processed in the brain and spinal cord.
    On the second visit, the researchers immersed the same participants in a 45-minute 360-degree virtual reality experience of the waterfalls of Oregon to see how this could change the development of pain sensitivity. The scene was specially chosen to maximize therapeutic effects.
    On a further visit, participants explored the same scene, but on a 2D screen.

    After watching the scenes in each case, participants completed questionnaires on their experience of pain, on how present they felt in each experience, and on the extent to which they found the nature scenes restorative.
    On a separate visit, participants underwent MRI brain scans at the University of Exeter’s Mireille Gillings Neuroimaging Centre. Researchers administered a cold gel to elicit a type of ongoing pain and then scanned participants to study how their brains responded.
    The researchers found that the immersive VR experience significantly reduced the development and spread of feelings of pain sensitivity to pricking stimuli, and these pain-reducing effects were still there even at the end of the 45-minute experience.
    The more present a person felt during the VR experience, the stronger the pain-relieving effect was. The fMRI brain scans also revealed that people with stronger connectivity in brain regions involved in modulating pain responses experienced less pain. The results suggest that nature scenes delivered using VR can help to change how pain signals are transmitted in the brain and spinal cord during long-term pain conditions.
    Dr Sonia Medina, of the University of Exeter Medical School and one of the authors on the study, said: “We think VR has a particularly strong effect on reducing experience of pain because it’s so immersive. It really created that feeling of being present in nature – and we found the pain-reducing effect was greatest in people for whom that perception was strongest. We hope our study leads to more research to investigate further how exposure to nature affects our pain responses, so we could one day see nature scenes incorporated into ways of reducing pain for people in settings like care homes or hospitals.”
    The paper is titled ‘Immersion in nature through virtual reality attenuates the development and spread of mechanical secondary hyperalgesia: a role for insulo-thalamic effective connectivity’ and is published in the journal Pain.

  • in

    This spectrometer is smaller than a pixel, and it sees what we can’t

    Researchers have successfully demonstrated a spectrometer that is orders of magnitude smaller than current technologies and can accurately measure wavelengths of light from ultraviolet to the near-infrared. The technology makes it possible to create hand-held spectroscopy devices and holds promise for the development of devices that incorporate an array of the new sensors to serve as next-generation imaging spectrometers.
    “Spectrometers are critical tools for helping us understand the chemical and physical properties of various materials based on how light changes when it interacts with those materials,” says Brendan O’Connor, corresponding author of a paper on the work and a professor of mechanical and aerospace engineering at North Carolina State University. “They are used in applications that range from manufacturing to biomedical diagnostics. However, the smallest spectrometers on the market are still fairly bulky.
    “We’ve created a spectrometer that operates quickly, at low voltage, and that is sensitive to a wide spectrum of light,” O’Connor says. “Our demonstration prototype is only a few square millimeters in size – it could fit on your phone. You could make it as small as a pixel, if you wanted to.”
    The technology makes use of a tiny photodetector capable of sensing wavelengths of light after the light interacts with a target material. By applying different voltages to the photodetector, you can manipulate which wavelengths of light the photodetector is most sensitive to.
    “If you rapidly apply a range of voltages to the photodetector, and measure all of the wavelengths of light being captured at each voltage, you have enough data that a simple computational program can recreate an accurate signature of the light that is passing through or reflecting off of the target material,” O’Connor says. “The range of voltages is less than one volt, and the entire process can take place in less than a millisecond.”
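    The recovery step O’Connor describes is, at its core, a linear inverse problem: each bias voltage gives the detector a different wavelength sensitivity, so a sweep of voltages yields a set of photocurrent readings that jointly encode the incoming spectrum. The sketch below is a minimal illustration of that idea, not the team’s actual algorithm — the responsivity matrix here is hypothetical (randomly generated rather than measured by calibration), and the spectrum is recovered with ridge-regularized least squares.

    ```python
    import numpy as np

    # Hypothetical calibration: R[v, w] is the photodetector's sensitivity
    # to wavelength bin w at bias voltage v. In a real device this matrix
    # would be measured once using known light sources.
    rng = np.random.default_rng(0)
    n_voltages, n_wavelengths = 64, 32
    R = rng.random((n_voltages, n_wavelengths))

    # A "true" spectrum to recover (unknown in a real measurement):
    # a Gaussian peak across the wavelength bins.
    true_spectrum = np.exp(-0.5 * ((np.arange(n_wavelengths) - 12) / 3.0) ** 2)

    # Sweeping the bias voltage yields one photocurrent reading per voltage.
    measurements = R @ true_spectrum

    # Reconstruct the spectrum with ridge-regularized least squares,
    # which keeps the inversion stable if R is ill-conditioned or noisy.
    lam = 1e-6
    spectrum_hat = np.linalg.solve(
        R.T @ R + lam * np.eye(n_wavelengths), R.T @ measurements
    )

    print(np.allclose(spectrum_hat, true_spectrum, atol=1e-3))  # prints True
    ```

    With more voltage steps than wavelength bins, the system is overdetermined, which is what lets a “simple computational program” tolerate measurement noise while still recovering an accurate signature of the light.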
    Previous attempts to create miniaturized spectrometers have relied on complex optics, used high voltages, or have not been sensitive to such a broad range of wavelengths.
    In proof-of-concept testing, the researchers found their pixel-sized spectrometer was as accurate as a conventional spectrometer and had sensitivity comparable to commercial photodetection devices.
    “In the long term, our goal is to bring spectrometers to the consumer market,” O’Connor says. “The size and energy demand of the technology make it feasible to incorporate into a smartphone, and we think this makes some exciting applications possible. From a research standpoint, this also paves the way for improved access to imaging spectroscopy, microscopic spectroscopy, and other applications that would be useful in the lab.”
    The paper, “Single pixel spectrometer based on a bias-tunable tandem organic photodetector,” is published in the journal Device. First author of the paper is Harry Schrickx, a former Ph.D. student at NC State. The paper was co-authored by Abdullah Al Shafe, a former Ph.D. student at NC State; Caleb Moore, a former undergraduate at NC State; Yusen Pei, a Ph.D. student at NC State; Franky So, the Walter and Ida Freeman Distinguished Professor of Materials Science and Engineering at NC State; and Michael Kudenov, the John and Catherine Amein Family Distinguished Professor of Electrical and Computer Engineering at NC State.
    The work was done with support from the National Science Foundation under grants 1809753 and 2324190, and from the Office of Naval Research under grant N000142412101.