More stories

  • Infrared contact lenses allow people to see in the dark, even with their eyes closed

    Neuroscientists and materials scientists have created contact lenses that enable infrared vision in both humans and mice by converting infrared light into visible light. Unlike infrared night vision goggles, the contact lenses, described in the Cell Press journal Cell on May 22, do not require a power source — and they enable the wearer to perceive multiple infrared wavelengths. Because they’re transparent, users can see both infrared and visible light simultaneously, though infrared vision was enhanced when participants had their eyes closed.
    “Our research opens up the potential for non-invasive wearable devices to give people super-vision,” says senior author Tian Xue, a neuroscientist at the University of Science and Technology of China. “There are many potential applications right away for this material. For example, flickering infrared light could be used to transmit information in security, rescue, encryption or anti-counterfeiting settings.”
    The contact lens technology uses nanoparticles that absorb infrared light and convert it into wavelengths that are visible to mammalian eyes (i.e., electromagnetic radiation in the 400-700 nm range). The nanoparticles specifically enable detection of “near-infrared light,” which is infrared light in the 800-1600 nm range, just beyond what humans can see. The team previously showed that these nanoparticles enable infrared vision in mice when injected into the retina, but they wanted to design a less invasive option.
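    A quick energy check shows why this conversion must pool photons. The article does not spell out the mechanism, but power-free conversion of this kind is photon upconversion, so the following is a hedged back-of-the-envelope aside using E = hc/λ with hc ≈ 1240 eV·nm:

```latex
% Photon-energy bookkeeping (E = hc/lambda, hc ~ 1240 eV nm).
% A single near-infrared photon carries too little energy to appear visible:
E_{980\,\mathrm{nm}} \approx \frac{1240\ \mathrm{eV\,nm}}{980\ \mathrm{nm}} \approx 1.27\ \mathrm{eV},
\qquad
E_{535\,\mathrm{nm}} \approx \frac{1240\ \mathrm{eV\,nm}}{535\ \mathrm{nm}} \approx 2.32\ \mathrm{eV}
% At least two NIR photons must be pooled per visible photon, which is why
% no power source is needed: the incoming light itself supplies the energy.
```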
    To create the contact lenses, the team combined the nanoparticles with flexible, non-toxic polymers that are used in standard soft contact lenses. After showing that the contact lenses were non-toxic, they tested their function in both humans and mice.
    They found that contact lens-wearing mice displayed behaviors suggesting that they could see infrared wavelengths. For example, when the mice were given the choice of a dark box and an infrared-illuminated box, contact-wearing mice chose the dark box whereas contact-less mice showed no preference. The mice also showed physiological signals of infrared vision: the pupils of contact-wearing mice constricted in the presence of infrared light, and brain imaging revealed that infrared light caused their visual processing centers to light up.
    In humans, the infrared contact lenses enabled participants to accurately detect flashing morse code-like signals and to perceive the direction of incoming infrared light. “It’s totally clear cut: without the contact lenses, the subject cannot see anything, but when they put them on, they can clearly see the flickering of the infrared light,” said Xue. “We also found that when the subject closes their eyes, they’re even better able to receive this flickering information, because near-infrared light penetrates the eyelid more effectively than visible light, so there is less interference from visible light.”
    An additional tweak to the contact lenses allows users to differentiate between spectra of infrared light: the nanoparticles were engineered to color-code different infrared wavelengths. For example, infrared wavelengths of 980 nm were converted to blue light, wavelengths of 808 nm were converted to green light, and wavelengths of 1,532 nm were converted to red light. In addition to enabling wearers to perceive more detail within the infrared spectrum, these color-coding nanoparticles could be modified to help color-blind people see wavelengths that they would otherwise be unable to detect.

    “By converting red visible light into something like green visible light, this technology could make the invisible visible for color blind people,” says Xue.
    Because the contact lenses have limited ability to capture fine details (their close proximity to the retina causes the converted light particles to scatter), the team also developed a wearable glasses system using the same nanoparticle technology, which enabled participants to perceive higher-resolution infrared information.
    Currently, the contact lenses are only able to detect infrared radiation projected from an LED light source, but the researchers are working to increase the nanoparticles’ sensitivity so that they can detect lower levels of infrared light.
    “In the future, by working together with materials scientists and optical experts, we hope to make a contact lens with more precise spatial resolution and higher sensitivity,” says Xue.

  • ‘Fast-fail’ AI blood test could steer patients with pancreatic cancer away from ineffective therapies

    An artificial intelligence technique for detecting DNA fragments shed by tumors and circulating in a patient’s blood, developed by Johns Hopkins Kimmel Cancer Center investigators, could help clinicians more quickly determine whether pancreatic cancer therapies are working.
    After testing the method, called ARTEMIS-DELFI, in blood samples from patients participating in two large clinical trials of pancreatic cancer treatments, researchers found that it could be used to identify therapeutic responses. Two months after treatment initiation, ARTEMIS-DELFI and WGMAF, another method the investigators developed to study mutations, were better predictors of outcome than imaging or other existing clinical and molecular markers. However, ARTEMIS-DELFI proved the superior test, as it was simpler and potentially more broadly applicable.
    A description of the work was published May 21 in Science Advances. It was partly supported by grants from the National Institutes of Health.
    Time is of the essence when treating patients with pancreatic cancer, explains senior study author Victor E. Velculescu, M.D., Ph.D., co-director of the cancer genetics and epigenetics program at the cancer center. Many patients with pancreatic cancer receive a diagnosis at a late stage, when cancer may progress rapidly.
    “Providing patients with more potential treatment options is especially vital as a growing number of experimental therapies for pancreatic cancer have become available,” Velculescu says. “We want to know as quickly as we can if the therapy is helping the patient or not. If it is not working, we want to be able to switch to another therapy.”
    Currently, clinicians use imaging tools to monitor cancer treatment response and tumor progression. However, these tools can be slow to show change, and they are less accurate for patients receiving immunotherapies, whose results can be more complicated to interpret. In the study, Velculescu and his colleagues tested two alternate approaches to monitoring treatment response in patients participating in the phase 2 CheckPAC trial of immunotherapy for pancreatic cancer.
    One approach, called WGMAF (tumor-informed plasma whole-genome sequencing), analyzed DNA from tumor biopsies as well as cell-free DNA in blood samples to detect a treatment response. The other, called ARTEMIS-DELFI (tumor-independent genome-wide cfDNA fragmentation profiles and repeat landscapes), used machine learning, a form of artificial intelligence, to scan millions of cell-free DNA fragments only in the patient’s blood samples. Both approaches were able to detect which patients were benefiting from the therapies. However, not all patients had tumor samples, and many patients’ tumor samples had only a small fraction of cancer cells compared to the overall tissue, which also contained normal pancreatic and other cells, thereby confounding the WGMAF test.
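    To make the fragmentation idea concrete, here is a minimal sketch of a fragmentation-based response score. It is not the ARTEMIS-DELFI code; the bin count, the 150 bp cutoff, and the simulated cohort are all illustrative assumptions. The underlying pattern is real, though: tumor-derived cfDNA fragments tend to be shorter than those from healthy cells, so genome-wide fragment-length profiles can be summarized as features and scored by a classifier.

```python
# Minimal sketch of a fragmentation-profile response score -- NOT the
# ARTEMIS-DELFI pipeline. Bin count, the 150 bp cutoff, and the simulated
# cohort are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_BINS = 100

def fragment_features(lengths, bin_ids):
    """Per-genomic-bin ratio of short (<150 bp) to long fragments."""
    feats = np.zeros(N_BINS)
    for b in range(N_BINS):
        in_bin = lengths[bin_ids == b]
        if in_bin.size:
            feats[b] = (in_bin < 150).sum() / max((in_bin >= 150).sum(), 1)
    return feats

# Synthetic cohort: responders (label 1) shed less tumor DNA, so their
# cfDNA skews longer; non-responders skew shorter.
labels = rng.integers(0, 2, size=60)
X = np.vstack([
    fragment_features(rng.normal(155 + 12 * y, 20, size=5000),
                      rng.integers(0, N_BINS, size=5000))
    for y in labels
])

model = LogisticRegression(max_iter=1000).fit(X, labels)
print("response probabilities:", model.predict_proba(X[:3])[:, 1].round(2))
```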

    The ARTEMIS-DELFI approach worked with more patients and was simpler logistically, Velculescu says. The team then validated that ARTEMIS-DELFI was an effective treatment response monitoring tool in a second clinical trial called the PACTO trial. The study confirmed that ARTEMIS-DELFI could identify which patients were responding as soon as four weeks after therapy started.
    “The ‘fast-fail’ ARTEMIS-DELFI approach may be particularly useful in pancreatic cancer where changing therapies quickly could be helpful in patients who do not respond to the initial therapy,” says lead study author Carolyn Hruban, who was a graduate student at Johns Hopkins during the study and is now a postdoctoral researcher at the Dana-Farber Cancer Institute. “It’s simpler, likely less expensive, and more broadly applicable than using tumor samples.”
    The next step for the team will be prospective studies that test whether the information provided by ARTEMIS-DELFI helps clinicians more efficiently find an effective therapy and improve patient outcomes. A similar approach could also be used to monitor other cancers. Earlier this year, members of the team published a study in Nature Communications showing that a variation of the cell-free fragmentation monitoring approach called DELFI-TF was helpful in assessing colon cancer therapy response.
    “Our cell-free DNA fragmentation analyses provide a real-time assessment of a patient’s therapy response that can be used to personalize care and improve patient outcomes,” Velculescu says.
    Other co-authors include Daniel C. Bruhm, Shashikant Koul, Akshaya V. Annapragada, Nicholas A. Vulpescu, Sarah Short, Kavya Boyapati, Alessandro Leal, Stephen Cristiano, Vilmos Adleff, Robert B. Scharpf, Zachariah H. Foda, and Jillian Phallen of Johns Hopkins; Inna M. Chen, Susann Theile, and Julia S. Johannsen of Copenhagen University Hospital Herlev and Gentofte, and the University of Copenhagen; and Bahar Alipanahi and Zachary L. Skidmore of Delfi Diagnostics.
    The study was supported by the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation, the SU2C Lung Cancer Interception Dream Team Grant, the Stand Up to Cancer-Dutch Cancer Society International Translational Cancer Research Dream Team Grant, the Gray Foundation, the Honorable Tina Brozman Foundation, the Commonwealth Foundation, the Cole Foundation, a research grant from Delfi Diagnostics, and National Institutes of Health grants CA121113, 1T32GM136577, CA006973, CA233259, CA062924 and CA271896.
    Annapragada, Scharpf, and Velculescu are inventors on a patent submitted by Johns Hopkins University for genome-wide repeat and cell-free DNA in cancer (US patent application number 63/532,642). Annapragada, Bruhm, Adleff, Foda, Phallen and Scharpf are inventors on patent applications submitted by the university on related technology and licensed to Delfi Diagnostics. Phallen, Adleff, and Scharpf are founders of Delfi Diagnostics. Adleff and Scharpf are consultants for the company and Skidmore and Alipanahi are employees of the company. Velculescu is a founder of Delfi Diagnostics, member of its Board of Directors, and owns stock in the company. Johns Hopkins University owns equity in the company as well. Velculescu is an inventor on patent applications submitted by The Johns Hopkins University related to cancer genomic analyses and cell-free DNA that have been licensed to one or more entities, including Delfi Diagnostics, LabCorp, Qiagen, Sysmex, Agios, Genzyme, Esoterix, Ventana and ManaT Bio that result in royalties to the inventors and the University. These relationships are managed by Johns Hopkins in accordance with its conflict-of-interest policies.

  • Scientists discover class of crystals with properties that may prove revolutionary

    Rutgers University-New Brunswick researchers have discovered a new class of materials — called intercrystals — with unique electronic properties that could power future technologies.
    Intercrystals exhibit newly discovered electronic properties that could pave the way for more efficient electronic components, quantum computing and environmentally friendly materials, the scientists said.
    As described in a report in the journal Nature Materials, the scientists stacked two ultrathin layers of graphene, each a one-atom-thick sheet of carbon atoms arranged in a hexagonal grid, twisting them slightly atop a layer of hexagonal boron nitride, a similar crystal made of boron and nitrogen. The subtle misalignment between the layers formed moiré patterns — patterns like those seen when two fine mesh screens are overlaid — and significantly altered how electrons moved through the material, they found.
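    The geometry at work follows a standard relation (a general formula, not a value from this paper): two identical lattices with lattice constant a, twisted by a small angle θ, produce a moiré superlattice with period:

```latex
\lambda_{\text{moir\'e}} = \frac{a}{2\sin(\theta/2)}
% Example with graphene's lattice constant a ~ 0.246 nm and a 1.1-degree
% twist: lambda ~ 0.246 / (2 sin 0.55°) ~ 13 nm -- a superlattice roughly
% fifty times larger than the atomic spacing, which is what reshapes the
% electronic structure.
```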
    “Our discovery opens a new path for material design,” said Eva Andrei, Board of Governors Professor in the Department of Physics and Astronomy in the Rutgers School of Arts and Sciences and lead author of the study. “Intercrystals give us a new handle to control electronic behavior using geometry alone, without having to change the material’s chemical composition.”
    By understanding and controlling the unique properties of electrons in intercrystals, scientists can use them to develop technologies such as more efficient transistors and sensors that previously required a more complex mix of materials and processing, the researchers said.
    “You can imagine designing an entire electronic circuit where every function — switching, sensing, signal propagation — is controlled by tuning geometry at the atomic level,” said Jedediah Pixley, an associate professor of physics and a co-author of the study. “Intercrystals could be the building blocks of such future technologies.”
    The discovery hinges on a rising technique in modern physics called “twistronics,” in which layers of materials are contorted at specific angles to create moiré patterns. These configurations significantly alter the behavior of electrons within the substance, leading to properties that aren’t found in regular crystals.

    The foundational idea was first demonstrated by Andrei and her team in 2009, when they showed that moiré patterns in twisted graphene dramatically reshape its electronic structure. That discovery helped seed the field of twistronics.
    Electrons are tiny particles that move around in materials and are responsible for conducting electricity. In regular crystals, which possess a repeating pattern of atoms forming a perfectly arranged grid, the way electrons move is well understood and predictable. If a crystal is rotated or shifted by certain angles or distances, it looks the same because of an intrinsic characteristic known as symmetry.
    The researchers found the electronic properties of intercrystals, however, can vary significantly with small changes in their structure. This variability can lead to new and unusual behaviors, such as superconductivity and magnetism, which aren’t typically found in regular crystals. Superconducting materials offer the promise of continuously flowing electrical current because they conduct electricity with zero resistance.
    Intercrystals could be part of new circuitry for low-loss electronics and atomic sensors that could play a role in making quantum computers and powering new forms of consumer technologies, the scientists said.
    The materials also offer the prospect of functioning as the basis of more environmentally friendly electronic technologies.
    “Because these structures can be made out of abundant, non-toxic elements such as carbon, boron and nitrogen, rather than rare earth elements, they also offer a more sustainable and scalable pathway for future technologies,” Andrei said.

    Intercrystals aren’t only distinct from conventional crystals. They also are different from quasicrystals, a special type of crystal discovered in 1982 with an ordered structure but without the repeating pattern found in regular crystals.
    Research team members named their discovery “intercrystals” because they are a mix between crystals and quasicrystals: they have non-repeating patterns like quasicrystals but share symmetries in common with regular crystals.
    “The discovery of quasicrystals in the 1980s challenged the old rules about atomic order,” Andrei said. “With intercrystals, we go a step further, showing that materials can be engineered to access new phases of matter by exploiting geometric frustration at the smallest scale.”
    Rutgers researchers are optimistic about the future applications of intercrystals, opening new possibilities for exploring and manipulating the properties of materials at the atomic level.
    “This is just the beginning,” Pixley said. “We are excited to see where this discovery will lead us and how it will impact technology and science in the years to come.”
    Other Rutgers researchers who contributed to the study included research associates Xinyuan Lai, Guohong Li and Angela Coe of the Department of Physics and Astronomy.
    Scientists from the National Institute for Materials Science in Japan also contributed to the study.

  • Imaging technique removes the effect of water in underwater scenes

    The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.
    Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.
    The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.
    “With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produces better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.
    The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles, in various locations including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer and more vivid and varied in color, compared to previous methods.
    The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color, 3D representation that scientists could then virtually “fly” through, at their own pace and path, to inspect the underwater scene, for instance for signs of coral bleaching.
    “Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”
    Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

    Aquatic optics
    In the ocean, the color and clarity of objects is distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce the true colors in the ocean. These efforts involved adapting tools that were developed originally for environments out of water, for instance to reveal the true color of features in foggy conditions. One recent work accurately reproduces true colors in the ocean, with an algorithm named “Sea-Thru,” though this method requires a huge amount of computational power, which makes its use in producing 3D scene models challenging.
    In parallel, others have made advances in 3D Gaussian splatting, with tools that seamlessly stitch images of a scene together and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated scene not just from the perspective of the original images, but from any angle and distance.
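    As a rough mental model of what those tools optimize, here is a bare-bones sketch of the primitive behind Gaussian splatting. Real 3DGS renderers (and SeaSplat) add camera projection, tiled rasterization, and gradient-based optimization on top of this, so treat it purely as an illustration.

```python
# Bare-bones sketch of the 3D Gaussian splatting primitive: a scene is a
# set of anisotropic 3D Gaussians, each with a position, covariance, color
# and opacity, composited front-to-back when rendering a view.
import numpy as np

class Gaussian3D:
    def __init__(self, mean, cov, color, opacity):
        self.mean, self.cov = np.asarray(mean), np.asarray(cov)
        self.color, self.opacity = np.asarray(color), opacity

    def density(self, p):
        """Unnormalized Gaussian falloff at point p."""
        d = np.asarray(p) - self.mean
        return np.exp(-0.5 * d @ np.linalg.inv(self.cov) @ d)

def composite(gaussians, p):
    """Front-to-back alpha compositing of per-Gaussian contributions
    (gaussians are assumed already sorted by depth)."""
    color, transmittance = np.zeros(3), 1.0
    for g in gaussians:
        alpha = g.opacity * g.density(p)
        color += transmittance * alpha * g.color
        transmittance *= 1.0 - alpha
    return color

# Toy usage: one reddish blob, sampled at its center.
g = Gaussian3D([0, 0, 0], np.eye(3) * 0.1, [0.9, 0.2, 0.1], 0.8)
print(composite([g], [0.0, 0.0, 0.0]))
```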
    But 3DGS has only successfully been applied to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off of tiny particles in the ocean, creating a veil-like haze. Attenuation is the fading of light of certain wavelengths with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.
    Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.
    “One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

    A model swim
    In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects, and computes what the pixel’s true color must be.
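    A minimal sketch of the kind of per-pixel model such an algorithm inverts appears below. It uses the standard underwater image-formation equation (a direct signal attenuated with range, plus range-dependent backscatter); SeaSplat’s exact parameterization may differ, and every coefficient here is an illustrative assumption.

```python
# Sketch of the standard underwater image-formation model that methods
# like SeaSplat invert. All parameter values are illustrative assumptions.
import numpy as np

def restore(I, z, beta_d, beta_b, B_inf):
    """Invert I = J*exp(-beta_d*z) + B_inf*(1 - exp(-beta_b*z)) per pixel.

    I      : (H, W, 3) observed underwater image in [0, 1]
    z      : (H, W)    range from camera to scene, in meters
    beta_d : (3,)      per-channel attenuation coefficients (1/m)
    beta_b : (3,)      per-channel backscatter coefficients (1/m)
    B_inf  : (3,)      veiling light (backscatter at infinite range)
    """
    z = z[..., None]                                   # broadcast over RGB
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))  # haze added by water
    direct = I - backscatter                           # remove the veil
    J = direct * np.exp(beta_d * z)                    # undo attenuation
    return np.clip(J, 0.0, 1.0)

# Toy example: red attenuates fastest underwater, so beta_d is largest in R.
I = np.random.rand(4, 4, 3) * 0.5
z = np.full((4, 4), 5.0)  # everything 5 m away
J = restore(I, z,
            beta_d=np.array([0.40, 0.15, 0.10]),
            beta_b=np.array([0.30, 0.20, 0.15]),
            B_inf=np.array([0.05, 0.20, 0.30]))
```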
    Yang then worked the color-correcting algorithm into a 3D Gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.
    The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team took from a pre-existing dataset, represent a range of ocean locations and water conditions. They also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.
    From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance zooming in and out of a scene and viewing certain features from different perspectives. Even when viewing from different angles and distances, they found objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.
    “Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.
    For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle, tied to a ship, can explore and take images that can be sent up to a ship’s computer.
    “This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”
    This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.

  • A one-pixel camera for recording holographic movies

    A new camera setup can record three-dimensional movies with a single pixel. Moreover, the technique can obtain images outside the visible spectrum and even through tissues. The Kobe University development thus opens the door to holographic video microscopy.
    Holograms are not only used as fun-to-look-at safety stickers on credit cards, electronic products or banknotes; they have scientific applications in sensors and in microscopy as well. Traditionally, holograms require a laser for recording, but more recently, techniques have been developed that can record holograms with ambient light or light emanating from a sample. Two main techniques can achieve this. One, called “FINCH,” uses a 2D image sensor that is fast enough to record movies, but it is limited to visible light and an unobstructed view. The other, called “OSH,” uses a one-pixel sensor and can record through scattering media and with light outside the visible spectrum, but it can only practically record images of motionless objects.
    Kobe University applied optics researcher YONEDA Naru wanted to create a holographic recording technique that combines the best of both worlds. To tackle the speed-limiting weak point of OSH, he and his team constructed a setup that uses a high-speed “digital micromirror device” to project onto the object the patterns that are required for recording the hologram. “This device operates at 22 kHz, whereas previously used devices have a refresh rate of 60 Hz. This is a speed difference that’s equivalent to the difference between an old person taking a relaxed stroll and a Japanese bullet train,” Yoneda explains.
    In the journal Optics Express, the Kobe University team now publishes the results of its proof-of-concept experiments. The researchers show that their setup can not only record 3D images of moving objects, but can also serve as a microscope that records a holographic movie through a light-scattering object — a mouse skull, to be precise.
    Admittedly, the frame rate of just over one frame per second was still fairly low. But Yoneda and his team showed in calculations that they could in theory get that frame rate up to 30 Hz, which is a standard screen frame rate. This would be achieved through a compression technique called “sparse sampling,” which works by not recording every portion of the picture all the time.
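    For readers who want the measurement principle spelled out, below is a hedged sketch of generic single-pixel imaging with projected Hadamard patterns, including the sparse-sampling shortcut. It is not the Kobe team’s holographic (OSH-style) pipeline; the pattern choice and subset-selection rule are assumptions. (For scale, the 22 kHz micromirror device refreshes roughly 367 times faster than a 60 Hz one.)

```python
# Generic single-pixel imaging sketch -- NOT Kobe University's holographic
# pipeline. Project one pattern at a time, record total reflected light
# with a single detector, then reconstruct from the scalar readings.
import numpy as np
from scipy.linalg import hadamard

N = 32                        # reconstruct an N x N image
H = hadamard(N * N)           # +/-1 orthogonal patterns, one per row

scene = np.zeros((N, N))
scene[8:24, 12:20] = 1.0      # toy object
x = scene.ravel()

y = H @ x                     # one detector reading per projected pattern
x_rec = (H.T @ y) / (N * N)   # orthogonality: H^T H = (N^2) I
assert np.allclose(x_rec, x)

# "Sparse sampling" idea from the article: use only a subset of patterns
# and accept an approximate image in exchange for a higher frame rate.
# (Here we keep the strongest readings after the fact, purely to show
# that a quarter of the data still yields a recognizable image.)
k = (N * N) // 4
keep = np.argsort(np.abs(y))[-k:]
x_approx = (H[keep].T @ y[keep]) / (N * N)
```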
    So, where will we be able to see such a hologram? Yoneda says: “We expect this to be applied to minimally invasive, three-dimensional biological observation, because it can visualize objects moving behind a scattering medium. But there are still obstacles to overcome. We need to increase the number of sampling points, and also the image quality. For that, we are now trying to optimize the patterns we project onto the samples and to use deep-learning algorithms for transforming the raw data into an image.”
    This research was funded by the Kawanishi Memorial ShinMaywa Education Foundation, the Japan Society for the Promotion of Science (grants 20H05886, 23K13680), the Agencia Estatal de Investigación (grant PID2022-142907OB-I00) and the European Regional Development Fund, and the Generalitat Valenciana (grant CIPROM/2023/44). It was conducted in collaboration with researchers from Universitat Jaume I.

  • High-quality OLED displays now enabling integrated thin and multichannel audio

    A research team led by Professor Su Seok Choi of the Department of Electrical Engineering at POSTECH (Pohang University of Science and Technology) and PhD candidate Inpyo Hong of the Graduate Program in Semiconductor Materials and Devices has developed the world’s first Pixel-Based Local Sound OLED technology. This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds, essentially allowing the display to function as a multichannel speaker array. The team successfully demonstrated the technology on a 13-inch OLED panel, equivalent to those used in laptops and tablets. The research has been published online in the journal Advanced Science.
    Visuals Meet Audio: Toward a Multisensory Display Era
    While display technologies have evolved with significant advances in resolution, high dynamic range, and color accuracy — particularly with OLEDs and QD-enhanced displays — the industry now faces the need for breakthroughs that enhance not only image quality but also the realism and immersion of user experience.
    As visual technologies approach maturity, integrating multisensory inputs — such as sight, hearing, and touch — into displays has become a new frontier. Displays are therefore no longer passive panels that simply show images; they are evolving into immersive interfaces that engage multiple human senses. Among these, sound plays a critical role: research indicates that audiovisual synchronization accounts for nearly 90% of perceived immersion.
    However, most current displays still require external soundbars or multi-channel speakers, which add bulk and create design challenges — especially in compact environments like vehicle interiors, where integrating multiple speakers is difficult.
    OLEDs with Built-in Pixel-Based Sound: A Game Changer
    To address this, researchers have focused on integrating advanced sound capabilities directly into OLED panels, known for their slim, flexible form factors. While companies have explored attaching exciters to the back of TVs or bending OLEDs around speakers — as in Samsung’s demonstration at MWC 2024 and LG’s OLED panel speaker — these methods still rely on bulky hardware and face challenges in accurate sound localization.

    The core issue is that traditional exciters — devices that vibrate to produce sound — are large and heavy, making it difficult to deploy multiple units without interference or compromising the OLED’s thin design. Additionally, sound crosstalk between multiple speakers leads to a lack of precise control over localized audio.
    Crosstalk-Free Pixel-Based Local Sound Control Integrated with a Real Working OLED
    The POSTECH team overcame these challenges by embedding ultra-thin piezoelectric exciters within the OLED display frame. These piezo exciters, arranged similarly to pixels, convert electrical signals into sound vibrations without occupying external space. Crucially, they are fully compatible with the thin form factor of OLED panels.
    As a result, each pixel can act as an independent sound source, enabling Pixel-Based Local Sound technology. The researchers also developed a method to completely eliminate sound crosstalk, ensuring that multiple sounds from different regions of the display do not interfere with each other — something previously unattainable in multichannel setups.
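    The article does not describe the team’s measurement protocol, but channel-to-channel crosstalk is conventionally quantified as the leakage, in decibels, picked up in front of a silent neighboring channel while one channel is driven. The sketch below illustrates that convention with synthetic signals; all values are assumptions.

```python
# Generic sketch of how acoustic channel-to-channel crosstalk is quantified
# (leakage in dB). The POSTECH paper's protocol is not described in this
# article, so the signals and levels here are synthetic placeholders.
import numpy as np

fs = 48_000                       # sample rate, Hz
t = np.arange(fs) / fs            # one second of samples

def crosstalk_db(target, leaked):
    """Leakage of a silent neighbor relative to the driven channel."""
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    return 20 * np.log10(rms(leaked) / rms(target))

# Drive channel A with a 1 kHz tone; record at mic A (on-axis) and at
# mic B, placed in front of the neighboring, undriven channel.
mic_a = np.sin(2 * np.pi * 1000 * t)
mic_b = 0.01 * np.sin(2 * np.pi * 1000 * t)   # ~ -40 dB of leakage
print(f"crosstalk: {crosstalk_db(mic_a, mic_b):.1f} dB")
```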
    Real Applications: Mobile, Tablet, Laptop, TV, Automotive, VR, and Beyond
    This innovation allows for truly localized sound experiences. For instance, in a car, the driver could hear navigation instructions while the passenger listens to music — all from the same screen. In virtual reality or smartphones, spatial sound can dynamically adapt to the user’s head or hand movements, enhancing realism and immersion.

    Most notably, the technology was successfully implemented on a 13-inch OLED panel, proving its practical scalability and commercial viability. The display delivers high-quality audio directly from the screen, without the need for external speakers, all while preserving the slim and lightweight benefits of OLED.
    A Word from the Researcher
    “Displays are evolving beyond visual output devices into comprehensive interfaces that engage both sight and sound,” said Professor Su Seok Choi. “This technology has the potential to become a core feature of next-generation devices, enabling sleek, lightweight designs in smartphones, laptops, and automotive displays — while delivering immersive, high-fidelity audio.”
    Funding Acknowledgment
    This research was supported by the Ministry of Trade, Industry and Energy under the Electronic Components Technology Innovation Program and the Graduate Program in Semiconductor Materials and Devices at POSTECH.
    Key Highlights: POSTECH developed the world’s first Pixel-Based Local Sound OLED, in which each pixel emits a distinct sound. The display integrates ultra-thin piezo exciters to achieve crosstalk-free multichannel audio. Proven on a real 13-inch OLED panel, the technology enables immersive sound directly from the screen — no external speakers needed — and opens new possibilities for automotive, VR, and mobile displays with slim, high-quality built-in sound.

  • Nano-engineered thermoelectrics enable scalable, compressor-free cooling

    Researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, have developed a new, easily manufacturable solid-state thermoelectric refrigeration technology with nano-engineered materials that is twice as efficient as devices made with commercially available bulk thermoelectric materials. As global demand grows for more energy-efficient, reliable and compact cooling solutions, this advancement offers a scalable alternative to traditional compressor-based refrigeration.
    In a paper published in Nature Communications on May 21, a team of researchers from APL and refrigeration engineers from Samsung Electronics demonstrated improved heat-pumping efficiency and capacity in refrigeration systems attributable to high-performance nano-engineered thermoelectric materials invented at APL known as controlled hierarchically engineered superlattice structures (CHESS).
    The CHESS technology is the result of 10 years of APL research in advanced nano-engineered thermoelectric materials and applications development. Initially developed for national security applications, the material has also been used for noninvasive cooling therapies for prosthetics and won an R&D 100 award in 2023.
    “This real-world demonstration of refrigeration using new thermoelectric materials showcases the capabilities of nano-engineered CHESS thin films,” said Rama Venkatasubramanian, principal investigator of the joint project and chief technologist for thermoelectrics at APL. “It marks a significant leap in cooling technology and sets the stage for translating advances in thermoelectric materials into practical, large-scale, energy-efficient refrigeration applications.”
    A New Benchmark for Solid-State Cooling
    The push for more efficient and compact cooling technologies is fueled by a variety of factors, including population growth, urbanization and an increasing reliance on advanced electronics and data infrastructure. Conventional cooling systems, while effective, are often bulky, energy intensive and reliant on chemical refrigerants that can be harmful to the environment.
    Thermoelectric refrigeration is widely regarded as a potential solution. This method cools by using electrons to move heat through specialized semiconductor materials, eliminating the need for moving parts or harmful chemicals, making these next-generation refrigerators quiet, compact, reliable and sustainable. Bulk thermoelectric materials are used in small devices like mini-fridges, but their limited efficiency, low heat-pumping capacity and incompatibility with scalable semiconductor chip fabrication have historically prevented their wider use in high-performance systems.
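    The standard way to connect material quality to refrigeration efficiency is the dimensionless figure of merit ZT and the cooler’s coefficient of performance (COP). The article reports relative gains rather than ZT values, so the numbers in this sketch are illustrative assumptions; the formula itself is the textbook relation for a Peltier cooler.

```python
# Textbook relation between a thermoelectric cooler's maximum COP and the
# material figure of merit ZT. The ZT values below are illustrative only;
# the article reports relative efficiency gains, not ZT.
import math

def cop_max(T_c, T_h, ZT_m):
    """Maximum COP pumping heat from T_c to T_h (in kelvin), given the
    dimensionless ZT evaluated at the mean temperature."""
    m = math.sqrt(1.0 + ZT_m)
    return (T_c / (T_h - T_c)) * (m - T_h / T_c) / (m + 1.0)

# A fridge-like 20 K lift near room temperature: doubling the effective ZT
# raises the efficiency ceiling by roughly 70% in this toy case, the same
# order as the device-level gains reported for CHESS modules.
for ZT in (1.0, 2.0):
    print(f"ZT={ZT}: COP_max={cop_max(278, 298, ZT):.2f}")
```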

    In the study, researchers compared refrigeration modules using traditional bulk thermoelectric materials with those using CHESS thin-film materials in standardized refrigeration tests, measuring and comparing the electrical power needed to achieve various cooling levels in the same commercial refrigerator test systems. The refrigeration team from Samsung Electronics, led by materials engineer Sungjin Jung, collaborated with APL to validate the results through detailed thermal modeling, quantifying heat loads and thermal resistance parameters to ensure accurate performance evaluation under real-world conditions.
    The results were striking: Using CHESS materials, the APL team achieved nearly 100% improvement in efficiency over traditional thermoelectric materials at room temperature (around 80 degrees Fahrenheit, or 25 C). They then translated these material-level gains into a near 75% improvement in efficiency at the device level in thermoelectric modules built with CHESS materials and a 70% improvement in efficiency in a fully integrated refrigeration system, each representing a significant improvement over state-of-the-art bulk thermoelectric devices. These tests were completed under conditions that involved significant amounts of heat pumping to replicate practical operation.
    Built to Scale
    Beyond improving efficiency, the CHESS thin-film technology uses remarkably less material — just 0.003 cubic centimeters, or about the size of a grain of sand, per refrigeration unit. This reduction in material means APL’s thermoelectric materials could be mass-produced using semiconductor chip production tools, driving cost efficiency and enabling widespread market adoption.
    “This thin-film technology has the potential to grow from powering small-scale refrigeration systems to supporting large building HVAC applications, similar to the way that lithium-ion batteries have been scaled to power devices as small as mobile phones and as large as electric vehicles,” Venkatasubramanian said.
    Additionally, the CHESS materials were created using a well-established process commonly used to manufacture high-efficiency solar cells that power satellites and commercial LED lights.

    “We used metal-organic chemical vapor deposition (MOCVD) to produce the CHESS materials, a method well known for its scalability, cost-effectiveness and ability to support large-volume manufacturing,” said Jon Pierce, a senior research engineer who leads the MOCVD growth capability at APL. “MOCVD is already widely used commercially, making it ideal for scaling up CHESS thin-film thermoelectric materials production.”
    These materials and devices continue to show promise for a broad range of energy harvesting and electronics applications in addition to the recent advances in refrigeration. APL plans to continue to partner with organizations to refine the CHESS thermoelectric materials with a focus on boosting efficiency to approach that of conventional mechanical systems. Future efforts include demonstrating larger-scale refrigeration systems, including freezers, and integrating artificial intelligence-driven methods to optimize energy efficiency in compartmentalized or distributed cooling in refrigeration and HVAC equipment.
    “Beyond refrigeration, CHESS materials are also able to convert temperature differences, like body heat, into usable power,” said Jeff Maranchi, Exploration Program Area manager in APL’s Research and Exploratory Development Mission Area. “In addition to advancing next-generation tactile systems, prosthetics and human-machine interfaces, this opens the door to scalable energy-harvesting technologies for applications ranging from computers to spacecraft — capabilities that weren’t feasible with older, bulkier thermoelectric devices.”
    “The success of this collaborative effort demonstrates that high-efficiency solid-state refrigeration is not only scientifically viable but manufacturable at scale,” said Susan Ehrlich, an APL technology commercialization manager. “We’re looking forward to continued research and technology transfer opportunities with companies as we work toward translating these innovations into practical, real-world applications.”

  • Major step for flat and adjustable optics

    By carefully placing nanostructures on a flat surface, researchers at Linköping University, Sweden, have significantly improved the performance of so-called optical metasurfaces in conductive plastics. This is a major step for controllable flat optics, with future applications such as video holograms, invisibility materials, and sensors, as well as in biomedical imaging. The study has been published in the journal Nature Communications.
    Today, light is controlled with curved lenses, often made of glass, that are either concave or convex and refract the light in different ways. These types of lenses can be found in everything from high-tech equipment such as space telescopes and radar systems to everyday items including camera lenses and spectacles. But glass lenses take up space, and it is difficult to make them smaller without compromising their function.
    With flat lenses, however, it may be possible to make very small optics and also find new areas of application. They are known as metalenses and are examples of optical metasurfaces that form a rapidly growing field of research with great potential, though at present the technology has its limitations.
    “Metasurfaces work by placing nanostructures in patterns on a flat surface, where they become receivers for light. Each receiver, or antenna, captures the light in a certain way, and together these nanostructures can allow the light to be controlled as you desire,” says Magnus Jonsson, professor of applied physics at Linköping University.
    Today there are optical metasurfaces made of, for example, gold or titanium dioxide. But a major challenge has been that the function of the metasurfaces cannot be adjusted after manufacture. Both researchers and industry have requested features such as being able to turn metasurfaces on and off or dynamically change the focal point of a metalens.
    But in 2019, Magnus Jonsson’s research group at the Laboratory of Organic Electronics showed that conductive plastics (conducting polymers) can crack that nut. They showed that the plastic could function optically as a metal and thus be used as a material for the antennas that make up a metasurface. Thanks to the polymers’ ability to oxidize and reduce, the nanoantennas could be switched on and off. However, the performance of metasurfaces built from conductive polymers has been limited, not comparable to metasurfaces made from traditional materials.
    Now, the same research team has managed to improve performance up to tenfold. By precisely controlling the distance between the antennas, the researchers let them reinforce one another through a kind of resonance that amplifies the light interaction, called collective lattice resonance.
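    The design rule at work is standard for periodic antenna arrays (the article gives no dimensions, so the example numbers are assumptions): at normal incidence, the collective lattice resonance of an array with period P in a surrounding medium of refractive index n sits near the first-order Rayleigh anomaly,

```latex
\lambda_{\mathrm{RA}} \approx n\,P
% e.g. a period of P = 1.0 um in a medium with n = 1.5 places the
% collective resonance near 1.5 um -- in the infrared, consistent with
% where the polymer antennas currently operate.
```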
    “We show that metasurfaces made of conducting polymers seem to be able to provide sufficiently high performance to be relevant for practical applications,” says Dongqing Lin, principal author of the study and a postdoc in the research group.
    So far, the researchers have been able to manufacture controllable antennas from conducting polymers for infrared light, but not for visible light. The next step is to develop the material to be functional also in the visible light spectrum.