More stories

  • Artificial intelligence systems excel at imitation, but not innovation

    Artificial intelligence (AI) systems are often depicted as sentient agents poised to overshadow the human mind. But AI lacks the crucial human ability of innovation, researchers at the University of California, Berkeley have found.
    While children and adults alike can solve problems by finding novel uses for everyday objects, AI systems often lack the ability to view tools in a new way, according to findings published in Perspectives on Psychological Science, a journal of the Association for Psychological Science.
    AI language models like ChatGPT are passively trained on data sets containing billions of words and images produced by humans. This allows AI systems to function as a “cultural technology” similar to writing that can summarize existing knowledge, Eunice Yiu, a co-author of the article, explained in an interview. But unlike humans, they struggle when it comes to innovating on these ideas, she said.
    “Even young human children can produce intelligent responses to certain questions that [language learning models] cannot,” Yiu said. “Instead of viewing these AI systems as intelligent agents like ourselves, we can think of them as a new form of library or search engine. They effectively summarize and communicate the existing culture and knowledge base to us.”
    Yiu and Eliza Kosoy, along with their doctoral advisor and senior author on the paper, developmental psychologist Alison Gopnik, tested how the AI systems’ ability to imitate and innovate differs from that of children and adults. They presented 42 children ages 3 to 7 and 30 adults with text descriptions of everyday objects. In the first part of the experiment, 88% of children and 84% of adults were able to correctly identify which objects would “go best” with one another. For example, they paired a compass with a ruler instead of a teapot.
    In the next stage of the experiment, 85% of children and 95% of adults were also able to innovate on the expected use of everyday objects to solve problems. In one task, for example, participants were asked how they could draw a circle without using a typical tool such as a compass. Given the choice between a similar tool like a ruler, a dissimilar tool such as a teapot with a round bottom, and an irrelevant tool such as a stove, the majority of participants chose the teapot, a conceptually dissimilar tool that could nonetheless fulfill the same function as the compass by allowing them to trace the shape of a circle.
    When Yiu and colleagues provided the same text descriptions to five large language models, the models performed similarly to humans on the imitation task, with scores ranging from 59% for the worst-performing model to 83% for the best-performing model. The AIs’ answers to the innovation task were far less accurate, however. Effective tools were selected anywhere from 8% of the time by the worst-performing model to 75% by the best-performing model.

    “Children can imagine completely novel uses for objects that they have not witnessed or heard of before, such as using the bottom of a teapot to draw a circle,” Yiu said. “Large models have a much harder time generating such responses.”
    In a related experiment, the researchers noted, children were able to discover how a new machine worked just by experimenting and exploring. But when the researchers gave several large language models text descriptions of the evidence that the children produced, the models struggled to make the same inferences, likely because the answers were not explicitly included in their training data, Yiu and colleagues wrote.
    These experiments demonstrate that AI’s reliance on statistically predicting linguistic patterns is not enough to discover new information about the world, Yiu and colleagues wrote.
    “AI can help transmit information that is already known, but it is not an innovator,” Yiu said. “These models can summarize conventional wisdom but they cannot expand, create, change, abandon, evaluate, and improve on conventional wisdom in the way a young human can.”
    The development of AI is still in its early days, though, and much remains to be learned about how to expand the learning capacity of AI, Yiu said. Taking inspiration from children’s curious, active, and intrinsically motivated approach to learning could help researchers design new AI systems that are better prepared to explore the real world, she said.

  • COP28 is making headlines. Here’s why the focus on methane matters

    This year’s annual United Nations climate summit, dubbed COP28, is making a lot of headlines — not something I would have found myself writing a few years ago.

    One reason for COP’s higher profile is a growing sense of urgency to take stronger action to reduce humans’ fossil fuel emissions and mitigate the looming climate crisis. The world is nowhere near on track to meet the goals of the 2015 Paris Agreement — that is, reducing greenhouse gas emissions sufficiently to limit global warming to “well below” 2 degrees Celsius above preindustrial averages by the year 2100 (SN: 12/12/15). Meanwhile, 2023 has been the hottest year on record, people have been suffering through a barrage of extreme weather events, including heat waves, droughts and floods, and 2024 is likely to break more temperature records (SN: 12/6/23; SN: 7/19/23).

    The headlines emerging from COP28 have been a mix of pleasing, frustrating and bewildering. For example: It’s good news that 198 nations have ratified the Loss and Damage Fund, a formal acknowledgment by wealthy, high-polluting nations that they should help mitigate the rising costs of climate change faced by developing nations. But it’s frustrating that the pledges by the wealthy nations so far amount to just about $725 million, less than 0.2 percent of the annual climate change–linked losses faced by developing nations.

    For me, one of the biggest questions related to those headlines pertains to methane. It feels unclear whether, on balance, there’s more good or bad news when it comes to emissions of that second most important human-caused greenhouse gas.

    Methane is a powerhouse climate-warming gas, with about 80 times the atmosphere-warming potential of carbon dioxide over a 20-year span. However, methane has a saving grace: It mercifully lingers for only about a decade in the atmosphere (SN: 4/22/20). Carbon dioxide can stick around for up to 1,000 years. Cutting methane emissions can mean its atmospheric concentration drops relatively rapidly.
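    That difference in lifetimes is easy to see with a toy calculation. Below is a minimal sketch of a one-box decay model, assuming a roughly 12-year atmospheric lifetime for methane and, purely for contrast, treating the 1,000-year figure above as a single lifetime for carbon dioxide (a simplification; CO2 removal actually involves several overlapping processes):

```python
import math

# One-box toy model: an emitted pulse of gas decays exponentially with a
# fixed atmospheric lifetime. Assumed values: ~12 years for methane
# ("about a decade"); CO2 removal is not a single exponential, so the
# 1,000-year figure is used purely for contrast.
METHANE_LIFETIME_YEARS = 12.0
CO2_LIFETIME_YEARS = 1000.0

def fraction_remaining(years: float, lifetime: float) -> float:
    """Fraction of an emitted pulse still airborne after `years`."""
    return math.exp(-years / lifetime)

for years in (10, 20, 50, 100):
    ch4 = fraction_remaining(years, METHANE_LIFETIME_YEARS)
    co2 = fraction_remaining(years, CO2_LIFETIME_YEARS)
    print(f"after {years:>3} yr: methane {ch4:6.1%} left, CO2 {co2:6.1%} left")
```

    Within a couple of decades most of a methane pulse is gone while nearly all of the carbon dioxide remains, which is why methane cuts pay off so quickly.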

    The Global Methane Pledge, launched two years ago at COP26, may be gaining some momentum, but it still lacks the sign-on of key big-emitting nations. Then there’s the December 1 announcement by 49 oil and gas companies that they would reduce methane leaks from their infrastructure to “near zero” by 2030, which seems like a good thing on the face of it but has also been called greenwashing (SN: 11/24/21).

    And all of this policy wrangling is happening against a bizarre backdrop: a startling, puzzling, worrisome sharp increase in methane emissions over the last decade — not from humans, but from natural sources, particularly wetlands.

    To help me sift through the headlines and better understand all the news that’s seeping out, I talked with Euan Nisbet, a geochemist at Royal Holloway, University of London in Egham.

    Methane “is rising very fast,” Nisbet says. “So fast it looks like the Paris Agreement is going to fail.”

    Countries are promising to cut methane emissions

    While the rise in natural methane emissions is worrisome, about 60 percent of current methane emissions into the atmosphere still comes from human activities. Methane doesn’t just seep out of leaky oil and gas pipelines or escape into the air during coal mining. Agriculture, including ruminant animals, is a big source (SN: 5/5/22). Landfills are another (SN: 11/14/19).

    That’s where the Global Methane Pledge comes in, promising a 30 percent cut in humans’ emissions by 2030. The pledge was spearheaded in 2021 by the United States and the European Union, and so far, 150 nations have signed on. Most recently, Turkmenistan, which has sizable methane emissions, joined. So there’s hope: If everyone were to follow suit, it really is possible to cut global methane emissions deeply, bringing us much closer to meeting the Paris Agreement’s goals, Nisbet argues in a Dec. 8 editorial in Science.

    Still, many of the world’s biggest methane emitters, including China, India, Russia, Iran and South Africa, have not signed on to the pledge. China’s methane comes in large part from its coal mining; India’s, from coal as well as waste heaps and biomass fires. And China alone currently releases an estimated 65 million metric tons of methane per year, more than double the emissions of the United States or India, the next two biggest emitters.

    With only seven years left before the 2030 deadline, the cuts needed to meet the global pledge’s methane reduction goals would be steep — but, Nisbet says, not impossible.

    There’s precedent for successfully making such steep cuts to methane in such a short time, he adds. During the 2000s, “there was a seven-year period where [the U.K. government] brought methane emissions down by 30 percent,” in large part by reducing emissions from landfills and gas leaks.

    China released its own Methane Emissions Control Action Plan in November, alongside a joint commitment between China and the United States to take action on methane. That news sounds potentially promising, if not wholly reassuring, as the plan does not include a lot of concrete numbers, Nisbet says.

    So, what about the oil and gas industry’s recent promise to address its leaky infrastructure? Such a promise also sounds positive on the face of it — leaky infrastructure is definitely the low-hanging fruit when it comes to reducing humans’ methane emissions to the atmosphere (SN: 2/3/22).

    On the other hand, hundreds of scientific and environmental organizations have signed an open letter in response. The letter suggests that the oil and gas industry’s promise is just greenwashing, “a smokescreen to hide the reality that we need to phase out oil, gas and coal.” Furthermore, many oil and gas companies may routinely abandon old, still-leaking wells — effectively eliminating those leaks from their companies’ emissions rosters without actually stopping them.

    That said, addressing the leaks does have to be done, Nisbet says. “I’d love to shut down the coal industry quickly, but I’m aware of the enormous social problems that brings. It’s a very difficult thing to nuance. You can’t go cold turkey. We’ve got to wind it down in an intelligent and collaborative way. The best thing to do is to stop the crazy leaks and venting.”

    Natural methane emissions have been surging

    Plugging the leaks as soon as possible has taken on an increasing urgency, Nisbet says, because of a stark rise in natural methane being emitted to the atmosphere. Why this rise is happening isn’t clear, but it seems to be some sort of climate change–related feedback, perhaps linked to changes in both temperature and precipitation.

    That natural methane emissions bump was also not something that the architects of the Paris Agreement saw coming. Most of that rise has happened since the agreement was signed. From 1999 to 2006, atmospheric methane concentrations were in near-equilibrium — elevated due to human activities, but relatively stable. Then, in 2007, atmospheric methane concentrations began to increase. In 2013, there was a particularly sharp rise, and then again in 2020.

    Much of that increase seems to have come from tropical wetlands. Over the past decade, researchers have tracked shifts in methane sources by measuring carbon-12 and carbon-13 in the gas. The ratio of those two forms of carbon in the methane varies significantly depending on the source of the gas. Fossil fuel-derived methane tends to have higher concentrations of carbon-13 relative to carbon-12; methane from wetlands or agriculture tends to be more enriched in carbon-12.
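    The bookkeeping behind those measurements is the δ13C value, which expresses how far a sample’s carbon-13 to carbon-12 ratio deviates from a reference standard, in parts per thousand. Here is a minimal sketch of that calculation; the reference ratio is the standard VPDB value, while the two sample ratios are illustrative rather than measured data:

```python
# delta-13C: per-mil deviation of a sample's 13C/12C ratio from the
# VPDB reference standard. Sample values below are illustrative only.
R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta_13c(r_sample: float) -> float:
    """Return delta-13C in per mil for a given 13C/12C ratio."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

# Biogenic methane (wetlands, agriculture) is typically more depleted in
# carbon-13 (more negative delta-13C) than fossil-fuel methane.
print(f"wetland-like sample: {delta_13c(0.010515):6.1f} per mil")
print(f"fossil-like sample:  {delta_13c(0.010705):6.1f} per mil")
```

    A global drift toward more negative δ13C in atmospheric methane is the kind of signal that points to growing biological sources rather than fossil ones.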

    The recent spikes in natural methane are eerily reminiscent of ice core records of “glacial termination” events, times in Earth’s deep past when the world abruptly shifted from a glacial period to a period of rapid warming, Nisbet and others reported in June in Global Biogeochemical Cycles. Such glacial termination events are large-scale reorganizations of the ocean-atmosphere system, involving dramatic changes to the circulation of the global ocean, as well as to large climate patterns like the Indian Ocean Dipole (SN: 1/9/20).

    “Is this comparable to the start of a termination event? It looks horribly like that,” Nisbet says. But “it may not be. It might be totally innocent.”

    Right now, scientists are racing to understand what’s happening with the natural methane bump, and how exactly the increased emissions might be linked to climate change. But as we search for those answers, there is something that humans can and must do in the meantime, he says: Cut human emissions of the gas as much as possible, as fast as possible. “It’s very simple. When you’re in a hole, stop digging.”

  • Made-to-order diagnostic tests may be on the horizon

    McGill University researchers have made a breakthrough in diagnostic technology, inventing a ‘lab on a chip’ that can be 3D-printed in just 30 minutes. The chip has the potential to make on-the-spot testing widely accessible.
    As part of a recent study, the results of which were published in the journal Advanced Materials, the McGill team developed capillaric chips that act as miniature laboratories. Unlike computer microprocessors, these chips are single-use and require no external power source — a simple paper strip suffices. They function through capillary action — the same phenomenon by which a liquid spilled on the kitchen table spontaneously wicks into the paper towel used to wipe it up.
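    For readers who want the physics, that wicking behavior is often approximated by the textbook Lucas-Washburn relation, which gives the distance a liquid front penetrates a capillary over time (an idealization for illustration, not the team’s stated design equation):

```latex
% Lucas-Washburn relation (textbook idealization of capillary wicking):
% penetration distance L after time t in a capillary of radius r
L(t) = \sqrt{\frac{\gamma \, r \, t \, \cos\theta}{2\eta}}
% gamma = surface tension, theta = contact angle, eta = liquid viscosity
```

    The square-root dependence on time means capillary flow starts fast and slows as the liquid front advances, behavior that lets a chip meter liquids without any pump.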
    “Traditional diagnostics require peripherals, while ours can circumvent them. Our diagnostics are a bit like what the cell phone was to traditional desktop computers that required a separate monitor, keyboard and power supply to operate,” explains Prof. David Juncker, Chair of the Department of Biomedical Engineering at McGill and senior author on the study.
    At-home testing became crucial during the COVID-19 pandemic. But rapid tests have limited availability and can only drive one liquid across the strip, meaning most diagnostics are still done in central labs. Notably, the capillaric chips can be 3D-printed for various tests, including COVID-19 antibody quantification.
    The study brings 3D-printed home diagnostics one step closer to reality, though some challenges remain, such as regulatory approvals and securing necessary test materials. The team is actively working to make their technology more accessible, adapting it for use with affordable 3D printers. The innovation aims to speed up diagnoses, enhance patient care, and usher in a new era of accessible testing.
    “This advancement has the capacity to empower individuals, researchers, and industries to explore new possibilities and applications in a more cost-effective and user-friendly manner,” says Prof. Juncker. “This innovation also holds the potential to eventually empower health professionals with the ability to rapidly create tailored solutions for specific needs right at the point-of-care.”

  • New conductive, cotton-based fiber developed for smart textiles

    A single strand of fiber developed at Washington State University has the flexibility of cotton and the electrical conductivity of a polymer called polyaniline.
    The newly developed material showed good potential for wearable e-textiles. The WSU researchers tested the fibers with a system that powered an LED light and another that sensed ammonia gas, detailing their findings in the journal Carbohydrate Polymers.
    “We have one fiber in two sections: one section is the conventional cotton: flexible and strong enough for everyday use, and the other side is the conductive material,” said Hang Liu, WSU textile researcher and the study’s corresponding author. “The cotton can support the conductive material which can provide the needed function.”
    While more development is needed, the idea is to integrate fibers like these into apparel as sensor patches with flexible circuits. These patches could be part of uniforms for firefighters, soldiers or workers who handle chemicals, to detect hazardous exposures. Other applications include health monitoring or exercise shirts that can do more than current fitness monitors.
    “We have some smart wearables, like smart watches, that can track your movement and human vital signs, but we hope that in the future your everyday clothing can do these functions as well,” said Liu. “Fashion is not just color and style, as a lot of people think about it: fashion is science.”
    In this study, the WSU team worked to overcome the challenges of mixing the conductive polymer with cotton cellulose. Polymers are substances with very large molecules that have repeating patterns. In this case, the researchers used polyaniline, also known as PANI, a synthetic polymer with conductive properties already used in applications such as printed circuit board manufacturing.
    While intrinsically conductive, polyaniline is brittle and, by itself, cannot be made into a fiber for textiles. To solve this, the WSU researchers dissolved cotton cellulose from recycled t-shirts into one solution and the conductive polymer into a separate solution. The two solutions were then merged side by side, and the material was extruded to make a single fiber.

    The result showed good interfacial bonding, meaning the molecules from the different materials would stay together through stretching and bending.
    Achieving the right mixture at the interface of cotton cellulose and polyaniline was a delicate balance, Liu said.
    “We wanted these two solutions to work so that when the cotton and the conductive polymer contact each other they mix to a certain degree to kind of glue together, but we didn’t want them to mix too much, otherwise the conductivity would be reduced,” she said.
    Additional WSU authors on this study included first author Wangcheng Liu as well as Zihui Zhao, Dan Liang, Wei-Hong Zhong and Jinwen Zhang. This research received support from the National Science Foundation and the Walmart Foundation Project.

  • AI chatbot shows potential as diagnostic partner

    Physician-investigators at Beth Israel Deaconess Medical Center (BIDMC) compared a chatbot’s probabilistic reasoning to that of human clinicians. The findings, published in JAMA Network Open, suggest that artificial intelligence could serve as a useful clinical decision support tool for physicians.
    “Humans struggle with probabilistic reasoning, the practice of making decisions based on calculating odds,” said the study’s corresponding author Adam Rodman, MD, an internal medicine physician and investigator in the department of Medicine at BIDMC. “Probabilistic reasoning is one of several components of making a diagnosis, which is an incredibly complex process that uses a variety of different cognitive strategies. We chose to evaluate probabilistic reasoning in isolation because it is a well-known area where humans could use support.”
    Basing their study on a previously published national survey of more than 550 practitioners performing probabilistic reasoning on five medical cases, Rodman and colleagues fed the publicly available large language model (LLM) GPT-4 the same series of cases and ran an identical prompt 100 times to generate a range of responses.
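    In outline, that repeated-sampling procedure looks like the sketch below. The query_llm function is a hypothetical stand-in for a GPT-4 API call; the article does not give the actual prompt or API wrapper:

```python
import statistics

def query_llm(prompt: str) -> float:
    """Hypothetical stand-in for one GPT-4 API call that returns the
    model's probability estimate (0-100) parsed from its reply."""
    raise NotImplementedError("replace with a real API call and parser")

def sample_estimates(prompt: str, n: int = 100) -> dict:
    """Run the identical prompt n times and summarize the spread of answers."""
    estimates = sorted(query_llm(prompt) for _ in range(n))
    return {
        "median": statistics.median(estimates),
        "iqr": (estimates[n // 4], estimates[(3 * n) // 4]),
    }
```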
    The chatbot — just like the practitioners before it — was tasked with estimating the likelihood of a given diagnosis based on patients’ presentation. Then, given test results such as chest radiography for pneumonia, mammography for breast cancer, a stress test for coronary artery disease and a urine culture for urinary tract infection, the chatbot program updated its estimates.
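    In textbook form, that updating step is Bayes’ rule: a pretest probability is revised using the test’s sensitivity and specificity. A minimal sketch with illustrative numbers (not values from the study):

```python
def posttest_probability(pretest: float, sensitivity: float,
                         specificity: float, positive: bool) -> float:
    """Update a pretest disease probability with a test result via Bayes' rule."""
    if positive:
        true_pos = sensitivity * pretest
        false_pos = (1.0 - specificity) * (1.0 - pretest)
        return true_pos / (true_pos + false_pos)
    false_neg = (1.0 - sensitivity) * pretest
    true_neg = specificity * (1.0 - pretest)
    return false_neg / (false_neg + true_neg)

# Illustrative only: pretest 30%, sensitivity 90%, specificity 80%.
print(posttest_probability(0.30, 0.90, 0.80, positive=True))   # ~0.66
print(posttest_probability(0.30, 0.90, 0.80, positive=False))  # ~0.05
```

    The negative branch is the one at issue here: done correctly, a negative result should pull the probability down sharply, which is where the clinicians tended to overshoot.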
    When test results were positive, it was something of a draw; the chatbot was more accurate in making diagnoses than the humans in two cases, similarly accurate in two cases and less accurate in one case. But when tests came back negative, the chatbot shone, demonstrating more accuracy in making diagnoses than humans in all five cases.
    “Humans sometimes feel the risk is higher than it is after a negative test result, which can lead to overtreatment, more tests and too many medications,” said Rodman.
    But Rodman is less interested in how chatbots and humans perform toe-to-toe than in how highly skilled physicians’ performance might change in response to having these new supportive technologies available to them in the clinic. He and colleagues are looking into it.
    “LLMs can’t access the outside world — they aren’t calculating probabilities the way that epidemiologists, or even poker players, do. What they’re doing has a lot more in common with how humans make spot probabilistic decisions,” he said. “But that’s what is exciting. Even if imperfect, their ease of use and ability to be integrated into clinical workflows could theoretically make humans make better decisions,” he said. “Future research into collective human and artificial intelligence is sorely needed.”
    Co-authors included Thomas A. Buckley, University of Massachusetts Amherst; Arun K. Manrai, PhD, Harvard Medical School; Daniel J. Morgan, MD, MS, University of Maryland School of Medicine.
    Rodman reported receiving grants from the Gordon and Betty Moore Foundation. Morgan reported receiving grants from the Department of Veterans Affairs, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, and the National Institutes of Health, and receiving travel reimbursement from the Infectious Diseases Society of America, the Society for Healthcare Epidemiology of America, the American College of Physicians and the World Heart Health Organization, all outside the submitted work.

  • Battle of the AIs in medical research: ChatGPT vs Elicit

    Generative AI may make it possible to collect vast amounts of medical information efficiently in literature searches, provided users are well aware that its performance is still in its infancy and that not all information it presents is reliable. It is also advisable to use different generative AIs depending on the type of information needed.
    Can AI save us from the arduous and time-consuming task of academic research collection? An international team of researchers investigated the credibility and efficiency of generative AI as an information-gathering tool in the medical field.
    The research team, led by Professor Masaru Enomoto of the Graduate School of Medicine at Osaka Metropolitan University, fed identical clinical questions and literature selection criteria to two generative AIs: ChatGPT and Elicit. The results showed that while ChatGPT suggested fictitious articles, Elicit was efficient, suggesting multiple references within a few minutes with the same level of accuracy as the researchers.
    “This research was conceived out of our experience with managing vast amounts of medical literature over long periods of time. Access to information using generative AI is still in its infancy, so we need to exercise caution as the current information is not accurate or up-to-date,” said Dr. Enomoto. “However, ChatGPT and other generative AIs are constantly evolving and are expected to revolutionize the field of medical research in the future.”
    Their findings were published in Hepatology Communications.

  • Researchers safely integrate fragile 2D materials into devices

    Two-dimensional materials, which are only a few atoms thick, can exhibit some incredible properties, such as the ability to carry electric charge extremely efficiently, which could boost the performance of next-generation electronic devices.
    But integrating 2D materials into devices and systems like computer chips is notoriously difficult. These ultrathin structures can be damaged by conventional fabrication techniques, which often rely on the use of chemicals, high temperatures, or destructive processes like etching.
    To overcome this challenge, researchers from MIT and elsewhere have developed a new technique to integrate 2D materials into devices in a single step while keeping the surfaces of the materials and the resulting interfaces pristine and free from defects.
    Their method relies on engineering surface forces available at the nanoscale to allow the 2D material to be physically stacked onto other prebuilt device layers. Because the 2D material remains undamaged, the researchers can take full advantage of its unique optical and electrical properties.
    They used this approach to fabricate arrays of 2D transistors that achieved new functionalities compared to devices produced using conventional fabrication techniques. Their method, which is versatile enough to be used with many materials, could have diverse applications in high-performance computing, sensing, and flexible electronics.
    Core to unlocking these new functionalities is the ability to form clean interfaces, held together by special forces that exist between all matter, called van der Waals forces.
    However, such van der Waals integration of materials into fully functional devices is not always easy, says Farnaz Niroui, assistant professor of electrical engineering and computer science (EECS), a member of the Research Laboratory of Electronics (RLE), and senior author of a new paper describing the work.

    “Van der Waals integration has a fundamental limit,” she explains. “Since these forces depend on the intrinsic properties of the materials, they cannot be readily tuned. As a result, there are some materials that cannot be directly integrated with each other using their van der Waals interactions alone. We have come up with a platform to address this limit to help make van der Waals integration more versatile, to promote the development of 2D-materials-based devices with new and improved functionalities.”
    Niroui wrote the paper with lead author Peter Satterthwaite, an electrical engineering and computer science graduate student; Jing Kong, professor of EECS and a member of RLE; and others at MIT, Boston University, National Tsing Hua University in Taiwan, the National Science and Technology Council of Taiwan, and National Cheng Kung University in Taiwan. The research will be published in Nature Electronics.
    Advantageous attraction
    Making complex systems such as a computer chip with conventional fabrication techniques can get messy. Typically, a rigid material like silicon is chiseled down to the nanoscale, then interfaced with other components like metal electrodes and insulating layers to form an active device. Such processing can cause damage to the materials.
    Recently, researchers have focused on building devices and systems from the bottom up, using 2D materials and a process that requires sequential physical stacking. In this approach, rather than using chemical glues or high temperatures to bond a fragile 2D material to a conventional surface like silicon, researchers leverage van der Waals forces to physically integrate a layer of 2D material onto a device.
    Van der Waals forces are natural forces of attraction that exist between all matter. For example, a gecko’s feet can stick to the wall temporarily due to van der Waals forces. Though all materials exhibit a van der Waals interaction, depending on the material, the forces are not always strong enough to hold them together. For instance, a popular semiconducting 2D material known as molybdenum disulfide will stick to gold, a metal, but won’t directly transfer to insulators like silicon dioxide by just coming into physical contact with that surface.

    However, heterostructures made by integrating semiconductor and insulating layers are key building blocks of an electronic device. Previously, this integration has been enabled by bonding the 2D material to an intermediate layer like gold, then using this intermediate layer to transfer the 2D material onto the insulator, before removing the intermediate layer using chemicals or high temperatures.
    Instead of using this sacrificial layer, the MIT researchers embed the low-adhesion insulator in a high-adhesion matrix. This adhesive matrix is what makes the 2D material stick to the embedded low-adhesion surface, providing the forces needed to create a van der Waals interface between the 2D material and the insulator.
    Making the matrix
    To make electronic devices, they form a hybrid surface of metals and insulators on a carrier substrate. This surface is then peeled off and flipped over to reveal a completely smooth top surface that contains the building blocks of the desired device.
    This smoothness is important, since gaps between the surface and 2D material can hamper van der Waals interactions. Then, the researchers prepare the 2D material separately, in a completely clean environment, and bring it into direct contact with the prepared device stack.
    “Once the hybrid surface is brought into contact with the 2D layer, without needing any high-temperatures, solvents, or sacrificial layers, it can pick up the 2D layer and integrate it with the surface. This way, we are allowing a van der Waals integration that would be traditionally forbidden, but now is possible and allows formation of fully functioning devices in a single step,” Satterthwaite explains.
    This single-step process keeps the 2D material interface completely clean, which enables the material to reach its fundamental limits of performance without being held back by defects or contamination.
    And because the surfaces also remain pristine, researchers can engineer the surface of the 2D material to form features or connections to other components. For example, they used this technique to create p-type transistors, which are generally challenging to make with 2D materials. Their transistors have improved on previous studies, and can provide a platform toward studying and achieving the performance needed for practical electronics.
    Their approach can be done at scale to make larger arrays of devices. The adhesive matrix technique can also be used with a range of materials, and even with other forces to enhance the versatility of this platform. For instance, the researchers integrated graphene onto a device, forming the desired van der Waals interfaces using a matrix made with a polymer. In this case, adhesion relies on chemical interactions rather than van der Waals forces alone.
    In the future, the researchers want to build on this platform to enable integration of a diverse library of 2D materials to study their intrinsic properties without the influence of processing damage, and develop new device platforms that leverage these superior functionalities.
    This research is funded, in part, by the U.S. National Science Foundation, the U.S. Department of Energy, the BUnano Cross-Disciplinary Fellowship at Boston University, and the U.S. Army Research Office. The fabrication and characterization procedures were carried out, largely, in the MIT.nano shared facilities.

  • Immersive VR goggles for mice unlock new potential for brain science

    Northwestern University researchers have developed new virtual reality (VR) goggles for mice.
    Besides just being cute, these miniature goggles provide more immersive experiences for mice living in laboratory settings. By more faithfully simulating natural environments, the researchers can more accurately and precisely study the neural circuitry that underlies behavior.
    Compared to current state-of-the-art systems, which simply surround mice with computer or projection screens, the new goggles are a leap forward. In current systems, mice can still see the lab environment peeking out from behind the screens, and the screens’ flat nature cannot convey three-dimensional (3D) depth. Another disadvantage is that researchers have been unable to easily mount screens above mice’s heads to simulate overhead threats, such as looming birds of prey.
    The new VR goggles bypass all those issues. And, as VR grows in popularity, the goggles also could help researchers glean new insights into how the human brain adapts and reacts to repeated VR exposure — an area that is currently little understood.
    The research will be published on Friday (Dec. 8) in the journal Neuron. It marks the first time researchers have used a VR system to simulate an overhead threat.
    “For the past 15 years, we have been using VR systems for mice,” said Northwestern’s Daniel Dombeck, the study’s senior author. “So far, labs have been using big computer or projection screens to surround an animal. For humans, this is like watching a TV in your living room. You still see your couch and your walls. There are cues around you, telling you that you aren’t inside the scene. Now think about putting on VR goggles, like Oculus Rift, that take up your full vision. You don’t see anything but the projected scene, and a different scene is projected into each eye to create depth information. That’s been missing for mice.”
    Dombeck is a professor of neurobiology at Northwestern’s Weinberg College of Arts and Sciences. His laboratory is a leader in developing VR-based systems and high-resolution, laser-based imaging systems for animal research.

    The value of VR
    Although researchers can observe animals in nature, it is incredibly difficult to image patterns of real-time brain activity while animals engage with the real world. To overcome this challenge, researchers have integrated VR into laboratory settings. In these experimental setups, an animal uses a treadmill to navigate scenes, such as a virtual maze, projected onto surrounding screens.
    By keeping the mouse in place on the treadmill — rather than allowing it to run through a natural environment or physical maze — neurobiologists can use tools to view and map the brain as the mouse traverses a virtual space. Ultimately, this helps researchers grasp general principles of how activated neural circuits encode information during various behaviors.
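    The closed loop these setups implement is conceptually simple: the treadmill reports how far the animal has run, and the renderer advances the virtual scene by the same amount. A stripped-down sketch of that loop follows; the two hardware-facing functions are hypothetical placeholders, not part of any published rig:

```python
import time

def read_treadmill_displacement() -> float:
    """Hypothetical encoder read: distance run (cm) since the last call.
    A real rig would read a rotary encoder on the treadmill here."""
    return 0.5  # placeholder value

def render_scene(position_cm: float) -> None:
    """Hypothetical renderer; a real rig would redraw the virtual maze."""
    print(f"corridor position: {position_cm:6.1f} cm")

def closed_loop(track_cm: float = 200.0, fps: float = 60.0, frames: int = 10) -> None:
    """Advance the virtual corridor in lockstep with the animal's running."""
    position = 0.0
    for _ in range(frames):
        position = (position + read_treadmill_displacement()) % track_cm
        render_scene(position)
        time.sleep(1.0 / fps)  # hold a fixed display frame rate

closed_loop()
```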
    “VR basically reproduces real environments,” Dombeck said. “We’ve had a lot of success with this VR system, but it’s possible the animals aren’t as immersed as they would be in a real environment. It takes a lot of training just to get the mice to pay attention to the screens and ignore the lab around them.”
    Introducing iMRSIV
    With recent advances in hardware miniaturization, Dombeck and his team wondered if they could develop VR goggles to more faithfully replicate a real environment. Using custom-designed lenses and miniature organic light-emitting diode (OLED) displays, they created compact goggles.

    Called Miniature Rodent Stereo Illumination VR (iMRSIV), the system comprises two lenses and two screens — one for each side of the head to separately illuminate each eye for 3D vision. This provides each eye with a 180-degree field-of-view that fully immerses the mouse and excludes the surrounding environment.
    Unlike VR goggles for a human, the iMRSIV (pronounced “immersive”) system does not wrap around the mouse’s head. Instead, the goggles are attached to the experimental setup and perch directly in front of the mouse’s face. Because the mouse runs in place on a treadmill, the goggles still cover the mouse’s field of view.
    “We designed and built a custom holder for the goggles,” said John Issa, a postdoctoral fellow in Dombeck’s laboratory and study co-first author. “The whole optical display — the screens and the lenses — go all the way around the mouse.”
    Reduced training times
    By mapping the mice’s brains, Dombeck and his team found that the brains of goggle-wearing mice were activated in much the same way as those of freely moving animals. And, in side-by-side comparisons, the researchers noticed that goggle-wearing mice engaged with the scene much more quickly than mice using traditional VR systems.
    “We went through the same kind of training paradigms that we have done in the past, but mice with the goggles learned more quickly,” Dombeck said. “After the first session, they could already complete the task. They knew where to run and looked to the right places for rewards. We think they actually might not need as much training because they can engage with the environment in a more natural way.”
    Simulating overhead threats for the first time
    Next, the researchers used the goggles to simulate an overhead threat — something that had not been possible with existing systems. Because hardware for imaging technology already sits above the mouse, there is nowhere to mount a computer screen. The sky above a mouse, however, is an area where animals often look for vital — sometimes life-or-death — information.
    “The top of a mouse’s field of view is very sensitive to detect predators from above, like a bird,” said co-first author Dom Pinke, a research specialist in Dombeck’s lab. “It’s not a learned behavior; it’s an imprinted behavior. It’s wired inside the mouse’s brain.”
    To create a looming threat, the researchers projected a dark, expanding disk into the top of the goggles — and the top of the mice’s fields of view. In experiments, mice — upon noticing the disk — either ran faster or froze. Both behaviors are common responses to overhead threats. Researchers were able to record neural activity to study these reactions in detail.
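    Looming stimuli like this are conventionally described by the visual angle of an object approaching at constant speed, which stays small for most of the approach and then balloons just before impact. A minimal sketch of that geometry (radius, speed and timing here are illustrative, not the study’s parameters):

```python
import math

def looming_angle_deg(t: float, t_collision: float,
                      radius_cm: float = 5.0, speed_cm_s: float = 100.0) -> float:
    """Visual angle (degrees) of a disk approaching at constant speed.

    The angle diverges as time-to-collision (t_collision - t) nears zero,
    the hallmark of a looming stimulus."""
    distance = speed_cm_s * max(t_collision - t, 1e-6)  # avoid divide-by-zero
    return 2.0 * math.degrees(math.atan(radius_cm / distance))

# The disk's apparent size explodes in the final fraction of a second.
for t in (0.0, 0.5, 0.9, 0.99):
    print(f"t={t:4.2f} s  angle={looming_angle_deg(t, t_collision=1.0):6.1f} deg")
```

    An expanding dark disk rendered to match this angular profile mimics that final, rapidly growing silhouette, the cue the mice’s hard-wired escape responses are tuned to.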
    “In the future, we’d like to look at situations where the mouse isn’t prey but is the predator,” Issa said. “We could watch brain activity while it chases a fly, for example. That activity involves a lot of depth perception and estimating distances. Those are things that we can start to capture.”
    Making neurobiology accessible
    In addition to opening the door for more research, Dombeck hopes the goggles open the door to new researchers. Because the goggles are relatively inexpensive and require less intensive laboratory setups, he thinks they could make neurobiology research more accessible.
    “Traditional VR systems are pretty complicated,” Dombeck said. “They’re expensive, and they’re big. They require a big lab with a lot of space. And, on top of that, if it takes a long time to train a mouse to do a task, that limits how many experiments you can do. We’re still working on improvements, but our goggles are small, relatively cheap and pretty user friendly as well. This could make VR technology more available to other labs.”
    The study, “Full field-of-view virtual reality goggles for mice,” was supported by the National Institutes of Health (award number R01-MH101297), the National Science Foundation (award number ECCS-1835389), the Hartwell Foundation and the Brain and Behavior Research Foundation.