More stories

  • Breakthrough AI model could transform how we prepare for natural disasters

    As climate-related disasters grow more intense and frequent, an international team of researchers has introduced Aurora — a groundbreaking AI model designed to deliver faster, more accurate, and more affordable forecasts for air quality, ocean waves, and extreme weather events. Trained on more than a million hours of data, the model could, according to the researchers, revolutionize the way we prepare for natural disasters and respond to climate change.
    From deadly floods in Europe to intensifying tropical cyclones around the world, the climate crisis has made timely and precise forecasting more essential than ever. Yet traditional forecasting methods rely on highly complex numerical models developed over decades, requiring powerful supercomputers and large teams of experts. According to its developers, Aurora offers a powerful and efficient alternative using artificial intelligence.
    Machine learning at the core
    ‘Aurora uses state-of-the-art machine learning techniques to deliver superior forecasts for key environmental systems — air quality, weather, ocean waves, and tropical cyclones,’ explains Max Welling, machine learning expert at the University of Amsterdam and one of the researchers behind the model. Unlike conventional methods, Aurora requires far less computational power, making high-quality forecasting more accessible and scalable — especially in regions that lack expensive infrastructure.
    Trained on a million hours of earth data
    Aurora is built on a 1.3 billion parameter foundation model, trained on more than one million hours of Earth system data. It has been fine-tuned to excel in a range of forecasting tasks:
    Air quality: Outperforms traditional models in 74% of cases
    Ocean waves: Exceeds numerical simulations on 86% of targets
    Tropical cyclones: Beats seven operational forecasting centres in 100% of tests
    High-resolution weather: Surpasses leading models in 92% of scenarios, especially during extreme events
    Forecasting that’s fast, accurate, and inclusive
    As climate volatility increases, rapid and reliable forecasts are crucial for disaster preparedness, emergency response, and climate adaptation. The researchers believe Aurora can help by making advanced forecasting more accessible.

    ‘Development cycles that once took years can now be completed in just weeks by small engineering teams,’ notes AI researcher Ana Lucic, also of the University of Amsterdam. ‘This could be especially valuable for countries in the Global South, smaller weather services, and research groups focused on localised climate risks.’ ‘Importantly, this acceleration builds on decades of foundational research and the vast datasets made available through traditional forecasting methods,’ Welling adds.
    Aurora is available freely online for anyone to use. If someone wants to fine-tune it for a specific task, they will need to provide data for that task. ‘But the “initial” training is done, we don’t need these vast datasets anymore, all the information from them is baked into Aurora already’, Lucic explains.
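    A minimal sketch of what that fine-tuning pattern can look like in practice, assuming a generic PyTorch setup rather than Aurora's actual API: load a pretrained backbone, keep its weights frozen, and train only a small task-specific head on the user's own data.

```python
# Minimal sketch (not Aurora's actual API) of the fine-tuning pattern described
# above: a frozen pretrained backbone plus a small trainable task head.
import torch
import torch.nn as nn

class PretrainedBackbone(nn.Module):
    """Stand-in for a large pretrained Earth-system foundation model."""
    def __init__(self, channels=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(4, channels, 3, padding=1), nn.GELU())
    def forward(self, x):
        return self.encoder(x)

backbone = PretrainedBackbone()            # weights would come from the released checkpoint
for p in backbone.parameters():
    p.requires_grad = False                # keep the "initial" training frozen

head = nn.Conv2d(64, 1, 1)                 # new head for the user's target variable
opt = torch.optim.AdamW(head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Toy fine-tuning step on made-up gridded data (batch, channels, lat, lon).
inputs = torch.randn(2, 4, 32, 32)
targets = torch.randn(2, 1, 32, 32)
pred = head(backbone(inputs))
loss = loss_fn(pred, targets)
loss.backward()
opt.step()
print("fine-tuning loss:", float(loss))
```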
    A future-proof forecasting tool
    Although current research focuses on the four applications mentioned above, the researchers say Aurora is flexible and can be used for a wide range of future scenarios. These could include forecasting flood risks, wildfire spread, seasonal weather trends, agricultural yields, and renewable energy output. ‘Its ability to process diverse data types makes it a powerful and future-ready tool’, states Welling.
    As the world faces more extreme weather — from heatwaves to hurricanes — innovative models like Aurora could shift the global approach from reactive crisis response to proactive climate resilience, the study concludes.

  • Could AI understand emotions better than we do?

    Is artificial intelligence (AI) capable of suggesting appropriate behaviour in emotionally charged situations? A team from the University of Geneva (UNIGE) and the University of Bern (UniBE) put six generative AIs — including ChatGPT — to the test using emotional intelligence (EI) assessments typically designed for humans. The outcome: these AIs outperformed average human performance and were even able to generate new tests in record time. These findings open up new possibilities for AI in education, coaching, and conflict management. The study is published in Communications Psychology.
    Large Language Models (LLMs) are artificial intelligence (AI) systems capable of processing, interpreting and generating human language. The ChatGPT generative AI, for example, is based on this type of model. LLMs can answer questions and solve complex problems. But can they also suggest emotionally intelligent behaviour?
    Emotionally charged scenarios
    To find out, a team from UniBE, Institute of Psychology, and UNIGE’s Swiss Center for Affective Sciences (CISA) subjected six LLMs (ChatGPT-4, ChatGPT-o1, Gemini 1.5 Flash, Copilot 365, Claude 3.5 Haiku and DeepSeek V3) to emotional intelligence tests. “We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions,” says Katja Schlegel, lecturer and principal investigator at the Division of Personality Psychology, Differential Psychology, and Assessment at the Institute of Psychology at UniBE, and lead author of the study.
    For example: One of Michael’s colleagues has stolen his idea and is being unfairly congratulated. What would be Michael’s most effective reaction?
    a) Argue with the colleague involved
    b) Talk to his superior about the situation
    c) Silently resent his colleague
    d) Steal an idea back
    Here, option b) was considered the most appropriate.
    In parallel, the same five tests were administered to human participants. “In the end, the LLMs achieved significantly higher scores — 82% correct answers versus 56% for humans. This suggests that these AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence,” explains Marcello Mortillaro, senior scientist at the UNIGE’s Swiss Center for Affective Sciences (CISA), who was involved in the research.
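    A minimal sketch of how such a multiple-choice item can be administered to a language model and scored against the keyed answer, assuming a hypothetical ask_model() helper in place of any particular LLM API.

```python
# Sketch of administering and scoring a multiple-choice emotional-intelligence
# item; ask_model() is a placeholder for a real LLM call.
def ask_model(prompt: str) -> str:
    """Placeholder for a call to any LLM API; here it just returns 'b'."""
    return "b"

items = [
    {
        "scenario": "One of Michael's colleagues has stolen his idea and is "
                    "being unfairly congratulated. What would be Michael's "
                    "most effective reaction?",
        "options": {"a": "Argue with the colleague involved",
                    "b": "Talk to his superior about the situation",
                    "c": "Silently resent his colleague",
                    "d": "Steal an idea back"},
        "key": "b",
    },
]

correct = 0
for item in items:
    prompt = item["scenario"] + "\n" + "\n".join(
        f"{k}) {v}" for k, v in item["options"].items()
    ) + "\nAnswer with a single letter."
    answer = ask_model(prompt).strip().lower()[:1]
    correct += answer == item["key"]

print(f"score: {correct}/{len(items)} correct")
```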
    New tests in record time
    In a second stage, the scientists asked ChatGPT-4 to create new emotional intelligence tests, with new scenarios. These automatically generated tests were then taken by over 400 participants. “They proved to be as reliable, clear and realistic as the original tests, which had taken years to develop,” explains Katja Schlegel. “LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context. This reinforces the idea that LLMs, such as ChatGPT, have emotional knowledge and can reason about emotions,” adds Marcello Mortillaro.
    These results pave the way for AI to be used in contexts thought to be reserved for humans, such as education, coaching or conflict management, provided it is used and supervised by experts.

  • 3D printers leave hidden ‘fingerprints’ that reveal part origins

    A new artificial intelligence system pinpoints the origin of 3D printed parts down to the specific machine that made them. The technology could allow manufacturers to monitor their suppliers and manage their supply chains, detecting early problems and verifying that suppliers are following agreed upon processes.
    A team of researchers led by Bill King, a professor of mechanical science and engineering at the University of Illinois Urbana-Champaign, has discovered that parts made by additive manufacturing, also known as 3D printing, carry a unique signature from the specific machine that fabricated them. This inspired the development of an AI system which detects the signature, or “fingerprint,” from a photograph of the part and identifies its origin.
    “We are still amazed that this works: we can print the same part design on two identical machines — same model, same process settings, same material — and each machine leaves a unique fingerprint that the AI model can trace back to the machine,” King said. “It’s possible to determine exactly where and how something was made. You don’t have to take your supplier’s word on anything.”
    The results of this study were recently published in the Nature partner journal Advanced Manufacturing.
    The technology has major implications for supplier management and quality control, according to King. When a manufacturer contracts a supplier to produce parts for a product, the supplier typically agrees to adhere to a specific set of machines, processes, and factory procedures and not to make any changes without permission. However, this provision is difficult to enforce. Suppliers often make changes without notice, from the fabrication process to the materials used. They are normally benign, but they can also cause major issues in the final product.
    “Modern supply chains are based on trust,” King said. “There’s due diligence in the form of audits and site tours at the start of the relationship. But, for most companies, it’s not feasible to continuously monitor their suppliers. Changes to the manufacturing process can go unnoticed for a long time, and you don’t find out until a bad batch of products is made. Everyone who works in manufacturing has a story about a supplier that changed something without permission and caused a serious problem.”
    While studying the repeatability of 3D printers, King’s research group noticed that the tolerances of part dimensions were correlated with individual machines. This inspired the researchers to examine photographs of the parts. It turned out that it is possible to determine the specific machine that made the part, the fabrication process, and the materials used — the production “fingerprint.”
    “These manufacturing fingerprints have been hiding in plain sight,” King said. “There are thousands of 3D printers in the world, and tens of millions of 3D printed parts used in airplanes, automobiles, medical devices, consumer products, and a host of other applications. Each one of these parts has a unique signature that can be detected using AI.”

    King’s research group developed an AI model to identify production fingerprints from photographs taken with smartphone cameras. The AI model was developed on a large data set, comprising photographs of 9,192 parts made on 21 machines from six companies and with four different fabrication processes. When calibrating their model, the researchers found that a fingerprint could be obtained with 98% accuracy from just 1 square millimeter of the part’s surface.
    “Our results suggest that the AI model can make accurate predictions when trained with as few as 10 parts,” King said. “Using just a few samples from a supplier, it’s possible to verify everything that they deliver after.”
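    A hypothetical sketch of the general approach described above (not the Illinois team's released code): crop small surface patches from part photographs and train an image classifier to predict which machine printed each part. The PatchClassifier below and its toy training data are stand-ins for illustration.

```python
# Toy patch-level classifier: small image crops in, predicted machine ID out.
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN mapping a surface patch to one of `n_machines` labels."""
    def __init__(self, n_machines=21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_machines)
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy training step on random "patches"; real inputs would be small crops
# from smartphone photos, labeled by the machine that printed the part.
patches = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 21, (8,))
loss = loss_fn(model(patches), labels)
loss.backward()
opt.step()
print("training loss:", float(loss))
```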
    King believes this technology has the potential to overhaul supply chain management. With it, manufacturers can detect problems at early stages of production and save the time and resources needed to pinpoint the origins of errors. The technology could also be used to track the origins of illicit goods.
    Miles Bimrose, Davis McGregor, Charlie Wood and Sameh Tawfick also contributed to this work.

  • AI is good at weather forecasting. Can it predict freak weather events?

    Increasingly powerful AI models can make short-term weather forecasts with surprising accuracy. But neural networks only predict based on patterns from the past — what happens when the weather does something that’s unprecedented in recorded history? A new study led by scientists from the University of Chicago, in collaboration with New York University and the University of California Santa Cruz, is testing the limits of AI-powered weather prediction. In research published May 21 in Proceedings of the National Academy of Sciences, they found that neural networks cannot forecast weather events beyond the scope of existing training data — which might leave out events like 200-year floods, unprecedented heat waves or massive hurricanes.
    This limitation is particularly important as researchers incorporate neural networks into operational weather forecasting, early warning systems, and long-term risk assessments, the authors said. But they also said there are ways to address the problem by integrating more math and physics into the AI tools.
    “AI weather models are one of the biggest achievements in AI in science. What we found is that they are remarkable, but not magical,” said Pedram Hassanzadeh, an associate professor of geophysical sciences at UChicago and a corresponding author on the study. “We’ve only had these models for a few years, so there’s a lot of room for innovation.”
    Gray swan events
    Weather forecasting AIs work in a similar way to other neural networks that many people now interact with, such as ChatGPT.
    Essentially, the model is “trained” by feeding it a large set of text or images and asking it to look for patterns. Then, when a user presents the model with a question, it looks back at what it’s previously seen and uses that to predict an answer.
    In the case of weather forecasts, scientists train neural networks by feeding them decades’ worth of weather data. Then a user can input data about the current weather conditions and ask the model to predict the weather for the next several days.

    The AI models are very good at this. Generally, they can achieve the same accuracy as a top-of-the-line, supercomputer-based weather model that uses 10,000 to 100,000 times more time and energy, Hassanzadeh said.
    “These models do really, really well for day-to-day weather,” he said. “But what if next week there’s a freak weather event?”
    The concern is that the neural network is only working off the weather data we currently have, which goes back about 40 years. But that’s not the full range of possible weather.
    “The floods caused by Hurricane Harvey in 2017 were considered a once-in-a-2,000-year event, for example,” Hassanzadeh said. “They can happen.”
    Scientists sometimes refer to these events as “gray swan” events. They’re not quite all the way to a black swan event — something like the asteroid that killed the dinosaurs — but they are locally devastating.
    The team decided to test the limits of the AI models using hurricanes as an example. They trained a neural network using decades of weather data, but removed all the hurricanes stronger than a Category 2. Then they fed it an atmospheric condition that leads to a Category 5 hurricane in a few days. Could the model extrapolate to predict the strength of the hurricane?

    The answer was no.
    “It always underestimated the event. The model knows something is coming, but it always predicts it’ll only be a Category 2 hurricane,” said Yongqiang Sun, research scientist at UChicago and the other corresponding author on the study.
    This kind of error, known as a false negative, is a big deal in weather forecasting. If a forecast tells you a storm will be a Category 5 hurricane and it only turns out to be a Category 2, that means people evacuated who may not have needed to, which is not ideal. But if a forecast underestimates a hurricane that turns out to be a Category 5, the consequences would be far worse.
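    A toy illustration of that evaluation protocol (not the study's actual models or data), using synthetic data and a small scikit-learn regressor: hold every event above a "Category 2"-like threshold out of training, then check how the model predicts events stronger than anything it has seen.

```python
# Toy holdout-extrapolation test: train only on "weak" events, evaluate on
# synthetic extremes, and compare predicted vs. true intensity.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "atmospheric conditions": 5 predictor variables whose weighted sum
# (plus noise) determines storm intensity in arbitrary units.
X = rng.normal(size=(5000, 5))
intensity = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.5, size=5000)

# Hold out every event above a "Category 2"-like threshold from training.
threshold = np.quantile(intensity, 0.90)
train = intensity <= threshold
test_extreme = intensity > np.quantile(intensity, 0.99)  # unprecedented events

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X[train], intensity[train])

pred = model.predict(X[test_extreme])
print("mean true intensity of extremes:", intensity[test_extreme].mean())
print("mean predicted intensity:       ", pred.mean())
# Comparing the two numbers shows whether the data-driven model underestimates
# events outside the range of its training targets.
```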
    Hurricane warnings and why physics matters
    The big difference between neural networks and traditional weather models is that traditional models “understand” physics. Scientists design them to incorporate our understanding of the math and physics that govern atmospheric dynamics, jet streams and other phenomena.
    The neural networks aren’t doing any of that. Like ChatGPT, which is essentially a predictive text machine, they simply look at weather patterns and suggest what comes next, based on what has happened in the past.
    No major service is currently using only AI models for forecasting. But as their use expands, this tendency will need to be factored in, Hassanzadeh said.
    Researchers, from meteorologists to economists, are beginning to use AI for long-term risk assessments. For example, they might ask an AI to generate many examples of weather patterns, so that we can see the most extreme events that might happen in each region in the future. But if an AI cannot predict anything stronger than what it’s seen before, its usefulness would be limited for this critical task.
    However, they found the model could predict stronger hurricanes if there was any precedent, even elsewhere in the world, in its training data. For example, if the researchers deleted all the evidence of Atlantic hurricanes but left in Pacific hurricanes, the model could extrapolate to predict Atlantic hurricanes.
    “This was a surprising and encouraging finding: it means that the models can forecast an event that was unprecedented in one region but occurred once in a while in another region,” Hassanzadeh said.
    Merging approaches
    The solution, the researchers suggested, is to begin incorporating mathematical tools and the principles of atmospheric physics into AI-based models.
    “The hope is that if AI models can really learn atmospheric dynamics, they will be able to figure out how to forecast gray swans,” Hassanzadeh said.
    How to do this is a hot area of research. One promising approach the team is pursuing is called active learning — where AI helps guide traditional physics-based weather models to create more examples of extreme events, which can then be used to improve the AI’s training.
    “Longer simulated or observed datasets aren’t going to work. We need to think about smarter ways to generate data,” said Jonathan Weare, professor at the Courant Institute of Mathematical Sciences at New York University and study co-author. “In this case, that means answering the question ‘where should I place my training data to achieve better performance on extremes?’ Fortunately, we think AI weather models themselves, when paired with the right mathematical tools, can help answer this question.”
    University of Chicago Prof. Dorian Abbot and computational scientist Mohsen Zand were also co-authors on the study, as well as Ashesh Chattopadhyay of the University of California Santa Cruz.
    The study used resources maintained by the University of Chicago Research Computing Center.

  • Infrared contact lenses allow people to see in the dark, even with their eyes closed

    Neuroscientists and materials scientists have created contact lenses that enable infrared vision in both humans and mice by converting infrared light into visible light. Unlike infrared night vision goggles, the contact lenses, described in the Cell Press journal Cell on May 22, do not require a power source — and they enable the wearer to perceive multiple infrared wavelengths. Because they’re transparent, users can see both infrared and visible light simultaneously, though infrared vision was enhanced when participants had their eyes closed.
    “Our research opens up the potential for non-invasive wearable devices to give people super-vision,” says senior author Tian Xue, a neuroscientist at the University of Science and Technology of China. “There are many potential applications right away for this material. For example, flickering infrared light could be used to transmit information in security, rescue, encryption or anti-counterfeiting settings.”
    The contact lens technology uses nanoparticles that absorb infrared light and convert it into wavelengths that are visible to mammalian eyes (e.g., electromagnetic radiation in the 400-700 nm range). The nanoparticles specifically enable detection of “near-infrared light,” which is infrared light in the 800-1600 nm range, just beyond what humans can already see. The team previously showed that these nanoparticles enable infrared vision in mice when injected into the retina, but they wanted to design a less invasive option.
    To create the contact lenses, the team combined the nanoparticles with flexible, non-toxic polymers that are used in standard soft contact lenses. After showing that the contact lenses were non-toxic, they tested their function in both humans and mice.
    They found that contact lens-wearing mice displayed behaviors suggesting that they could see infrared wavelengths. For example, when the mice were given the choice of a dark box and an infrared-illuminated box, contact-wearing mice chose the dark box whereas contact-less mice showed no preference. The mice also showed physiological signals of infrared vision: the pupils of contact-wearing mice constricted in the presence of infrared light, and brain imaging revealed that infrared light caused their visual processing centers to light up.
    In humans, the infrared contact lenses enabled participants to accurately detect flashing morse code-like signals and to perceive the direction of incoming infrared light. “It’s totally clear cut: without the contact lenses, the subject cannot see anything, but when they put them on, they can clearly see the flickering of the infrared light,” said Xue. “We also found that when the subject closes their eyes, they’re even better able to receive this flickering information, because near-infrared light penetrates the eyelid more effectively than visible light, so there is less interference from visible light.”
    An additional tweak to the contact lenses allows users to differentiate between different spectra of infrared light by engineering the nanoparticles to color-code different infrared wavelengths. For example, infrared wavelengths of 980 nm were converted to blue light, wavelengths of 808 nm were converted to green light, and wavelengths of 1,532 nm were converted to red light. In addition to enabling wearers to perceive more detail within the infrared spectrum, these color-coding nanoparticles could be modified to help color blind people see wavelengths that they would otherwise be unable to detect.

    “By converting red visible light into something like green visible light, this technology could make the invisible visible for color blind people,” says Xue.
    Because the contact lenses have limited ability to capture fine details (due to their close proximity to the retina, which causes the converted light particles to scatter), the team also developed a wearable glass system using the same nanoparticle technology, which enabled participants to perceive higher-resolution infrared information.
    Currently, the contact lenses are only able to detect infrared radiation projected from an LED light source, but the researchers are working to increase the nanoparticles’ sensitivity so that they can detect lower levels of infrared light.
    “In the future, by working together with materials scientists and optical experts, we hope to make a contact lens with more precise spatial resolution and higher sensitivity,” says Xue.

  • ‘Fast-fail’ AI blood test could steer patients with pancreatic cancer away from ineffective therapies

    An artificial intelligence technique for detecting DNA fragments shed by tumors and circulating in a patient’s blood, developed by Johns Hopkins Kimmel Cancer Center investigators, could help clinicians more quickly identify and determine if pancreatic cancer therapies are working.
    After testing the method, called ARTEMIS-DELFI, in blood samples from patients participating in two large clinical trials of pancreatic cancer treatments, researchers found that it could be used to identify therapeutic responses. ARTEMIS-DELFI and WGMAF, another method the investigators developed to study mutations, were found to be better predictors of outcome than imaging or other existing clinical and molecular markers two months after treatment initiation. However, ARTEMIS-DELFI was determined to be the superior test, as it was simpler and potentially more broadly applicable.
    A description of the work was published May 21 in Science Advances. It was partly supported by grants from the National Institutes of Health.
    Time is of the essence when treating patients with pancreatic cancer, explains senior study author Victor E. Velculescu, M.D., Ph.D., co-director of the cancer genetics and epigenetics program at the cancer center. Many patients with pancreatic cancer receive a diagnosis at a late stage, when cancer may progress rapidly.
    “Providing patients with more potential treatment options is especially vital as a growing number of experimental therapies for pancreatic cancer have become available,” Velculescu says. “We want to know as quickly as we can if the therapy is helping the patient or not. If it is not working, we want to be able to switch to another therapy.”
    Currently, clinicians use imaging tools to monitor cancer treatment response and tumor progression. However, these tools produce results that may not be timely and are less accurate for patients receiving immunotherapies, which can make the results more complicated to interpret. In the study, Velculescu and his colleagues tested two alternate approaches to monitoring treatment response in patients participating in the phase 2 CheckPAC trial of immunotherapy for pancreatic cancer.
    One approach, called WGMAF (tumor-informed plasma whole-genome sequencing), analyzed DNA from tumor biopsies as well as cell-free DNA in blood samples to detect a treatment response. The other, called ARTEMIS-DELFI (tumor-independent genome-wide cfDNA fragmentation profiles and repeat landscapes), used machine learning, a form of artificial intelligence, to scan millions of cell-free DNA fragments only in the patient’s blood samples. Both approaches were able to detect which patients were benefiting from the therapies. However, not all patients had tumor samples, and many patients’ tumor samples had only a small fraction of cancer cells compared to the overall tissue, which also contained normal pancreatic and other cells, thereby confounding the WGMAF test.
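    A loose, hypothetical sketch of the general idea of fragmentation profiling (not the published ARTEMIS-DELFI pipeline): summarize cell-free DNA fragment lengths per genomic bin, then score the resulting profiles with an ordinary classifier. The fragment_profile and toy_sample helpers below, and all the numbers in them, are invented for illustration.

```python
# Toy fragmentation-profile scoring on simulated plasma samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fragment_profile(lengths, bins, n_bins=100):
    """Short (<150 bp) to long (>=150 bp) fragment ratio per genomic bin."""
    short = np.bincount(bins[lengths < 150], minlength=n_bins)
    long_ = np.bincount(bins[lengths >= 150], minlength=n_bins)
    return short / np.maximum(long_, 1)

rng = np.random.default_rng(1)

def toy_sample(shift):
    """Simulate one sample's fragments; `shift` nudges lengths shorter."""
    lengths = rng.normal(167 - shift, 20, size=20000)
    bins = rng.integers(0, 100, size=20000)
    return fragment_profile(lengths, bins)

# 20 "responding" samples (shift 0) vs 20 "non-responding" samples (shift 8).
X = np.array([toy_sample(0) for _ in range(20)] + [toy_sample(8) for _ in range(20)])
y = np.repeat([0, 1], 20)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy on toy profiles:", clf.score(X, y))
```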

    The ARTEMIS-DELFI approach worked with more patients and was simpler logistically, Velculescu says. The team then validated that ARTEMIS-DELFI was an effective treatment response monitoring tool in a second clinical trial called the PACTO trial. The study confirmed that ARTEMIS-DELFI could identify which patients were responding as soon as four weeks after therapy started.
    “The ‘fast-fail’ ARTEMIS-DELFI approach may be particularly useful in pancreatic cancer where changing therapies quickly could be helpful in patients who do not respond to the initial therapy,” says lead study author Carolyn Hruban, who was a graduate student at Johns Hopkins during the study and is now a postdoctoral researcher at the Dana-Farber Cancer Institute. “It’s simpler, likely less expensive, and more broadly applicable than using tumor samples.”
    The next step for the team will be prospective studies that test whether the information provided by ARTEMIS-DELFI helps clinicians more efficiently find an effective therapy and improve patient outcomes. A similar approach could also be used to monitor other cancers. Earlier this year, members of the team published a study in Nature Communications showing that a variation of the cell-free fragmentation monitoring approach called DELFI-TF was helpful in assessing colon cancer therapy response.
    “Our cell-free DNA fragmentation analyses provide a real-time assessment of a patient’s therapy response that can be used to personalize care and improve patient outcomes,” Velculescu says.
    Other co-authors include Daniel C. Bruhm, Shashikant Koul, Akshaya V. Annapragada, Nicholas A. Vulpescu, Sarah Short, Kavya Boyapati, Alessandro Leal, Stephen Cristiano, Vilmos Adleff, Robert B. Scharpf, Zachariah H. Foda, and Jillian Phallen of Johns Hopkins; Inna M. Chen, Susann Theile, and Julia S. Johannsen of Copenhagen University Hospital Herlev and Gentofte, and the University of Copenhagen; and Bahar Alipanahi and Zachary L. Skidmore of Delfi Diagnostics.
    The study was supported by the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation, SU2C Lung Cancer Interception Dream Team Grant; Stand Up to Cancer-Dutch Cancer Society International Translational Cancer Research Dream Team Grant, the Gray Foundation, the Honorable Tina Brozman Foundation, the Commonwealth Foundation, the Cole Foundation, a research grant from Delfi Diagnostics and National Institutes of Health grants CA121113,1T32GM136577, CA006973, CA233259, CA062924 and CA271896.
    Annapragada, Scharpf, and Velculescu are inventors on a patent submitted by Johns Hopkins University for genome-wide repeat and cell-free DNA in cancer (US patent application number 63/532,642). Annapragada, Bruhm, Adleff, Foda, Phallen and Scharpf are inventors on patent applications submitted by the university on related technology and licensed to Delfi Diagnostics. Phallen, Adleff, and Scharpf are founders of Delfi Diagnostics. Adleff and Scharpf are consultants for the company and Skidmore and Alipanahi are employees of the company. Velculescu is a founder of Delfi Diagnostics, member of its Board of Directors, and owns stock in the company. Johns Hopkins University owns equity in the company as well. Velculescu is an inventor on patent applications submitted by The Johns Hopkins University related to cancer genomic analyses and cell-free DNA that have been licensed to one or more entities, including Delfi Diagnostics, LabCorp, Qiagen, Sysmex, Agios, Genzyme, Esoterix, Ventana and ManaT Bio that result in royalties to the inventors and the University. These relationships are managed by Johns Hopkins in accordance with its conflict-of-interest policies.

  • Scientists discover class of crystals with properties that may prove revolutionary

    Rutgers University-New Brunswick researchers have discovered a new class of materials — called intercrystals — with unique electronic properties that could power future technologies.
    Intercrystals exhibit newly discovered forms of electronic properties that could pave the way for advancements in more efficient electronic components, quantum computing and environmentally friendly materials, the scientists said.
    As described in a report in the science journal Nature Materials, the scientists stacked two ultrathin layers of graphene, each a one-atom-thick sheet of carbon atoms arranged in a hexagonal grid. They twisted them slightly atop a layer of hexagonal boron nitride, a hexagonal crystal made of boron and nitrogen. A subtle misalignment between the layers formed moiré patterns — patterns similar to those seen when two fine mesh screens are overlaid — and significantly altered how electrons moved through the material, they found.
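    The scale of such a moiré pattern follows a standard geometric relation (not given in the article): for two identical lattices twisted by a small angle, the moiré period is roughly the lattice constant divided by twice the sine of half the twist angle. A small sketch, taking graphene's lattice constant as an assumed input:

```python
# Moiré superlattice period for a small twist angle theta:
# period = a / (2 * sin(theta / 2)).
import math

a = 0.246          # graphene lattice constant in nanometres
for theta_deg in (0.5, 1.1, 2.0):
    theta = math.radians(theta_deg)
    period = a / (2 * math.sin(theta / 2))
    print(f"twist {theta_deg:>3} deg -> moire period of about {period:.1f} nm")
```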
    “Our discovery opens a new path for material design,” said Eva Andrei, Board of Governors Professor in the Department of Physics and Astronomy in the Rutgers School of Arts and Sciences and lead author of the study. “Intercrystals give us a new handle to control electronic behavior using geometry alone, without having to change the material’s chemical composition.”
    By understanding and controlling the unique properties of electrons in intercrystals, scientists can use them to develop technologies such as more efficient transistors and sensors that previously required a more complex mix of materials and processing, the researchers said.
    “You can imagine designing an entire electronic circuit where every function — switching, sensing, signal propagation — is controlled by tuning geometry at the atomic level,” said Jedediah Pixley, an associate professor of physics and a co-author of the study. “Intercrystals could be the building blocks of such future technologies.”
    The discovery hinges on a rising technique in modern physics called “twistronics,” where layers of materials are contorted at specific angles to create moiré patterns. These configurations significantly alter the behavior of electrons within the substance, leading to properties that aren’t found in regular crystals.

    The foundational idea was first demonstrated by Andrei and her team in 2009, when they showed that moiré patterns in twisted graphene dramatically reshape its electronic structure. That discovery helped seed the field of twistronics.
    Electrons are tiny particles that move around in materials and are responsible for conducting electricity. In regular crystals, which possess a repeating pattern of atoms forming a perfectly arranged grid, the way electrons move is well understood and predictable. If a crystal is rotated or shifted by certain angles or distances, it looks the same because of an intrinsic characteristic known as symmetry.
    The researchers found the electronic properties of intercrystals, however, can vary significantly with small changes in their structure. This variability can lead to new and unusual behaviors, such as superconductivity and magnetism, which aren’t typically found in regular crystals. Superconducting materials offer the promise of continuously flowing electrical current because they conduct electricity with zero resistance.
    Intercrystals could be a part of the new circuitry for low loss electronics and atomic sensors that could play a part in the making of quantum computers and power new forms of consumer technologies, the scientists said.
    The materials also offer the prospect of functioning as the basis of more environmentally friendly electronic technologies.
    “Because these structures can be made out of abundant, non-toxic elements such as carbon, boron and nitrogen, rather than rare earth elements, they also offer a more sustainable and scalable pathway for future technologies,” Andrei said.

    Intercrystals aren’t only distinct from conventional crystals. They also are different from quasicrystals, a special type of crystal discovered in 1982 with an ordered structure but without the repeating pattern found in regular crystals.
    Research team members named their discovery “intercrystals” because they are a mix between crystals and quasicrystals: they have non-repeating patterns like quasicrystals but share symmetries in common with regular crystals.
    “The discovery of quasicrystals in the 1980s challenged the old rules about atomic order,” Andrei said. “With intercrystals, we go a step further, showing that materials can be engineered to access new phases of matter by exploiting geometric frustration at the smallest scale.”
    Rutgers researchers are optimistic about the future applications of intercrystals, opening new possibilities for exploring and manipulating the properties of materials at the atomic level.
    “This is just the beginning,” Pixley said. “We are excited to see where this discovery will lead us and how it will impact technology and science in the years to come.”
    Other Rutgers researchers who contributed to the study included research associates Xinyuan Lai, Guohong Li and Angela Coe of the Department of Physics and Astronomy.
    Scientists from the National Institute for Materials Science in Japan also contributed to the study.

  • Imaging technique removes the effect of water in underwater scenes

    The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.
    Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.
    The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.
    “With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produces better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.
    The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles, in various locations including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer and more vivid and varied in color, compared to previous methods.
    The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color 3D representation that scientists could then virtually “fly” through at their own pace and path to inspect the underwater scene, for instance for signs of coral bleaching.
    “Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”
    Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

    Aquatic optics
    In the ocean, the color and clarity of objects is distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce the true colors in the ocean. These efforts involved adapting tools that were developed originally for environments out of water, for instance to reveal the true color of features in foggy conditions. One recent work accurately reproduces true colors in the ocean, with an algorithm named “Sea-Thru,” though this method requires a huge amount of computational power, which makes its use in producing 3D scene models challenging.
    In parallel, others have made advances in 3D gaussian splatting, with tools that seamlessly stitch images of a scene together, and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated 3D scene, not just from the perspective of the original images, but from any angle and distance.
    But 3DGS has only successfully been applied to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered, mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off of tiny particles in the ocean, creating a veil-like haze. Attenuation is the phenomenon by which light of certain wavelengths attenuates, or fades with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.
    Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.
    “One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

    A model swim
    In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects, and computes what the pixel’s true color must be.
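    A simplified sketch of this kind of per-pixel correction, based on the widely used underwater image-formation model (observed color equals an attenuated direct signal plus range-dependent backscatter) rather than SeaSplat's exact formulation; the restore function and all parameter values below are illustrative assumptions.

```python
# Per-pixel inversion of a simplified underwater image-formation model:
# subtract the backscatter veil, then undo range-dependent attenuation.
import numpy as np

def restore(image, ranges, beta_d, beta_b, backscatter_inf):
    """image: HxWx3 floats in [0,1]; ranges: HxW distances in metres.
    beta_d, beta_b, backscatter_inf: length-3 per-channel parameters."""
    z = ranges[..., None]                                  # HxWx1
    backscatter = backscatter_inf * (1 - np.exp(-beta_b * z))
    direct = image - backscatter                           # remove the haze veil
    restored = direct * np.exp(beta_d * z)                 # undo attenuation
    return np.clip(restored, 0.0, 1.0)

# Toy usage with made-up parameters (red attenuates fastest underwater).
h, w = 4, 4
image = np.full((h, w, 3), 0.35)
ranges = np.full((h, w), 3.0)
beta_d = np.array([0.25, 0.12, 0.08])      # attenuation per channel (R, G, B)
beta_b = np.array([0.30, 0.20, 0.15])      # backscatter growth per channel
backscatter_inf = np.array([0.05, 0.20, 0.30])
print(restore(image, ranges, beta_d, beta_b, backscatter_inf)[0, 0])
```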
    Yang then worked the color-correcting algorithm into a 3D gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.
    The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team took from a pre-existing dataset, represent a range of ocean locations and water conditions. They also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.
    From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance zooming in and out of a scene and viewing certain features from different perspectives. Even when viewing from different angles and distances, they found objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.
    “Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.
    For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle, tied to a ship, can explore and take images that can be sent up to a ship’s computer.
    “This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”
    This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.