More stories

  •

    Imaging technique removes the effect of water in underwater scenes

    The ocean is teeming with life. But unless you get up close, much of the marine world can easily remain unseen. That’s because water itself can act as an effective cloak: Light that shines through the ocean can bend, scatter, and quickly fade as it travels through the dense medium of water and reflects off the persistent haze of ocean particles. This makes it extremely challenging to capture the true color of objects in the ocean without imaging them at close range.
    Now a team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.
    The researchers have dubbed the new tool “SeaSplat,” in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail from any perspective.
    “With SeaSplat, it can model explicitly what the water is doing, and as a result it can in some ways remove the water, and produces better 3D models of an underwater scene,” says MIT graduate student Daniel Yang.
    The researchers applied SeaSplat to images of the sea floor taken by divers and underwater vehicles, in various locations including the U.S. Virgin Islands. The method generated 3D “worlds” from the images that were truer and more vivid and varied in color, compared to previous methods.
    The team says SeaSplat could help marine biologists monitor the health of certain ocean communities. For instance, as an underwater robot explores and takes pictures of a coral reef, SeaSplat would simultaneously process the images and render a true-color 3D representation that scientists could then virtually “fly” through, at their own pace and along their own path, to inspect the underwater scene for problems such as coral bleaching.
    “Bleaching looks white from close up, but could appear blue and hazy from far away, and you might not be able to detect it,” says Yogesh Girdhar, an associate scientist at WHOI. “Coral bleaching, and different coral species, could be easier to detect with SeaSplat imagery, to get the true colors in the ocean.”
    Girdhar and Yang will present a paper detailing SeaSplat at the IEEE International Conference on Robotics and Automation (ICRA). Their study co-author is John Leonard, professor of mechanical engineering at MIT.

    Aquatic optics
    In the ocean, the color and clarity of objects are distorted by the effects of light traveling through water. In recent years, researchers have developed color-correcting tools that aim to reproduce true colors in the ocean. These efforts adapted tools originally developed for environments out of water, for instance to reveal the true color of features in foggy conditions. One recent method, an algorithm named “Sea-Thru,” accurately reproduces true colors in the ocean, though it requires a huge amount of computational power, which makes it challenging to use for producing 3D scene models.
    In parallel, others have made advances in 3D Gaussian splatting, with tools that seamlessly stitch images of a scene together and intelligently fill in any gaps to create a whole, 3D version of the scene. These 3D worlds enable “novel view synthesis,” meaning that someone can view the generated 3D scene not just from the perspective of the original images, but from any angle and distance.
    But 3DGS has so far been applied successfully only to environments out of water. Efforts to adapt 3D reconstruction to underwater imagery have been hampered mainly by two optical underwater effects: backscatter and attenuation. Backscatter occurs when light reflects off tiny particles in the ocean, creating a veil-like haze. Attenuation is the phenomenon by which light of certain wavelengths fades with distance. In the ocean, for instance, red objects appear to fade more than blue objects when viewed from farther away.
    Out of water, the color of objects appears more or less the same regardless of the angle or distance from which they are viewed. In water, however, color can quickly change and fade depending on one’s perspective. When 3DGS methods attempt to stitch underwater images into a cohesive 3D whole, they are unable to resolve objects due to aquatic backscatter and attenuation effects that distort the color of objects at different angles.
    “One dream of underwater robotic vision that we have is: Imagine if you could remove all the water in the ocean. What would you see?” Leonard says.

    A model swim
    In their new work, Yang and his colleagues developed a color-correcting algorithm that accounts for the optical effects of backscatter and attenuation. The algorithm determines the degree to which every pixel in an image must have been distorted by backscatter and attenuation effects, and then essentially takes away those aquatic effects, and computes what the pixel’s true color must be.
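    The underwater image-formation model that such corrections typically invert is well established in the field (it underpins Sea-Thru as well). A minimal per-pixel sketch in Python, with illustrative coefficients rather than SeaSplat’s actual fitted parameters:

    ```python
    import numpy as np

    def restore_color(observed, depth, beta_attn, beta_scat, veil_color):
        """Invert a standard underwater image-formation model.

        Model: I = J * exp(-beta_attn * z) + B_inf * (1 - exp(-beta_scat * z))
        where J is the true color, z the distance, and B_inf the veiling light.
        Subtracting the backscatter veil removes the haze; dividing out the
        exponential undoes the wavelength-dependent fading (attenuation).
        """
        z = depth[..., None]  # broadcast per-pixel range over the RGB channels
        backscatter = veil_color * (1.0 - np.exp(-beta_scat * z))
        true_color = (observed - backscatter) * np.exp(beta_attn * z)
        return np.clip(true_color, 0.0, 1.0)

    # Illustrative coefficients: red attenuates fastest, blue scatters most.
    image = np.random.rand(4, 4, 3)   # stand-in for a captured frame
    ranges = np.full((4, 4), 3.0)     # 3 m from camera to scene
    restored = restore_color(image, ranges,
                             beta_attn=np.array([0.40, 0.15, 0.10]),
                             beta_scat=np.array([0.10, 0.15, 0.20]),
                             veil_color=np.array([0.10, 0.30, 0.40]))
    ```

    In SeaSplat, water parameters like these are estimated from the images themselves rather than assumed, which is what lets the water be removed consistently from every viewpoint.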
    Yang then worked the color-correcting algorithm into a 3D gaussian splatting model to create SeaSplat, which can quickly analyze underwater images of a scene and generate a true-color, 3D virtual version of the same scene that can be explored in detail from any angle and distance.
    The team applied SeaSplat to multiple underwater scenes, including images taken in the Red Sea, in the Caribbean off the coast of Curaçao, and in the Pacific Ocean near Panama. These images, which the team took from a pre-existing dataset, represent a range of ocean locations and water conditions. They also tested SeaSplat on images taken by a remote-controlled underwater robot in the U.S. Virgin Islands.
    From the images of each ocean scene, SeaSplat generated a true-color 3D world that the researchers were able to virtually explore, for instance zooming in and out of a scene and viewing certain features from different perspectives. Even when viewing from different angles and distances, they found that objects in every scene retained their true color, rather than fading as they would if viewed through the actual ocean.
    “Once it generates a 3D model, a scientist can just ‘swim’ through the model as though they are scuba-diving, and look at things in high detail, with real color,” Yang says.
    For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, in which a vehicle tied to a ship explores and takes images that are sent up to the ship’s computer.
    “This is the first approach that can very quickly build high-quality 3D models with accurate colors, underwater, and it can create them and render them fast,” Girdhar says. “That will help to quantify biodiversity, and assess the health of coral reef and other marine communities.”
    This work was supported, in part, by the Investment in Science Fund at WHOI, and by the U.S. National Science Foundation.

  •

    A one-pixel camera for recording holographic movies

    A new camera setup can record three-dimensional movies with a single pixel. Moreover, the technique can obtain images outside the visible spectrum and even through tissues. The Kobe University development thus opens the door to holographic video microscopy.
    Holograms are not only fun-to-look-at safety stickers on credit cards, electronic products, and banknotes; they also have scientific applications in sensors and microscopy. Traditionally, recording a hologram requires a laser, but techniques that can record holograms with ambient light, or with light emanating from a sample, have recently been developed. Two main techniques can achieve this. One, called “FINCH,” uses a 2D image sensor that is fast enough to record movies but is limited to visible light and an unobstructed view. The other, called “OSH,” uses a one-pixel sensor and can record through scattering media and with light outside the visible spectrum, but can practically record images only of motionless objects.
    Kobe University applied optics researcher YONEDA Naru wanted to create a holographic recording technique that combines the best of both worlds. To tackle the speed-limiting weak point of OSH, he and his team constructed a setup that uses a high-speed “digital micromirror device” to project onto the object the patterns that are required for recording the hologram. “This device operates at 22 kHz, whereas previously used devices have a refresh rate of 60 Hz. This is a speed difference that’s equivalent to the difference between an old person taking a relaxed stroll and a Japanese bullet train,” Yoneda explains.
    In the journal Optics Express, the Kobe University team now publishes the results of its proof-of-concept experiments. The researchers show that their setup can not only record 3D images of moving objects, but can also serve as a microscope that records a holographic movie through a light-scattering object — a mouse skull, to be precise.
    Admittedly, the frame rate of just over one frame per second was still fairly low. But Yoneda and his team showed in calculations that they could in theory get that frame rate up to 30 Hz, which is a standard screen frame rate. This would be achieved through a compression technique called “sparse sampling,” which works by not recording every portion of the picture all the time.
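    The arithmetic behind that claim is simple: a single-pixel camera needs roughly one projected pattern per reconstructed pixel, so at a fixed 22 kHz pattern rate the frame rate depends on how many patterns can be skipped. A back-of-the-envelope sketch (the grid size and sampling ratio are illustrative assumptions, not figures from the paper):

    ```python
    # Single-pixel imaging: frame rate = pattern rate / patterns per frame.
    PATTERN_RATE_HZ = 22_000  # DMD refresh rate reported in the article

    def frame_rate(num_pixels: int, sampling_ratio: float) -> float:
        """Frames per second when only a fraction of the patterns is projected."""
        patterns_per_frame = int(num_pixels * sampling_ratio)
        return PATTERN_RATE_HZ / patterns_per_frame

    print(frame_rate(128 * 128, 1.0))    # full sampling: ~1.3 fps
    print(frame_rate(128 * 128, 0.045))  # ~4.5% sparse sampling: ~30 fps
    ```

    At full sampling of a 128 × 128 grid this comes out to just over one frame per second, matching the reported rate, while sampling only a few percent of the patterns would reach the 30 Hz target.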
    So, where will we be able to see such a hologram? Yoneda says: “We expect this to be applied to minimally invasive, three-dimensional biological observation, because it can visualize objects moving behind a scattering medium. But there are still obstacles to overcome. We need to increase the number of sampling points, and also the image quality. For that, we are now trying to optimize the patterns we project onto the samples and to use deep-learning algorithms for transforming the raw data into an image.”
    This research was funded by the Kawanishi Memorial ShinMaywa Education Foundation, the Japan Society for the Promotion of Science (grants 20H05886, 23K13680), the Agencia Estatal de Investigación (grant PID2022-142907OB-I00) and the European Regional Development Fund, and the Generalitat Valenciana (grant CIPROM/2023/44). It was conducted in collaboration with researchers from Universitat Jaume I.

  •

    High-quality OLED displays now enabling integrated thin and multichannel audio

    A research team led by Professor Su Seok Choi of the Department of Electrical Engineering at POSTECH (Pohang University of Science and Technology) and PhD candidate Inpyo Hong of the Graduate Program in Semiconductor Materials and Devices has developed the world’s first Pixel-Based Local Sound OLED technology. This breakthrough enables each pixel of an OLED display to simultaneously emit different sounds, essentially allowing the display to function as a multichannel speaker array. The team successfully demonstrated the technology on a 13-inch OLED panel, equivalent to those used in laptops and tablets. The research has been published online in the journal Advanced Science.
    Visuals Meet Audio: Toward a Multisensory Display Era
    While display technologies have evolved with significant advances in resolution, high dynamic range, and color accuracy — particularly with OLEDs and QD-enhanced displays — the industry now faces the need for breakthroughs that enhance not only image quality but also the realism and immersion of user experience.
    As visual technologies approach maturity, integrating multisensory inputs — such as sight, hearing, and touch — into displays has become a new frontier. Displays are therefore no longer passive panels that simply show images; they are evolving into immersive interfaces that engage multiple human senses. Among these, sound plays a critical role: research indicates that audiovisual synchronization accounts for nearly 90% of perceived immersion.
    However, most current displays still require external soundbars or multi-channel speakers, which add bulk and create design challenges — especially in compact environments like vehicle interiors, where integrating multiple speakers is difficult.
    OLEDs with Built-in Pixel-Based Sound: A Game Changer
    To address this, researchers have focused on integrating advanced sound capabilities directly into OLED panels, which are known for their slim, flexible form factors. While companies have explored attaching exciters to the back of TVs or bending OLEDs around speakers — as seen in Samsung’s demonstration at MWC 2024 and in LG’s OLED panel speakers — these methods still rely on bulky hardware and face challenges in accurate sound localization.

    The core issue is that traditional exciters — devices that vibrate to produce sound — are large and heavy, making it difficult to deploy multiple units without interference or compromising the OLED’s thin design. Additionally, sound crosstalk between multiple speakers leads to a lack of precise control over localized audio.
    Crosstalk-Free Pixel-Based Local Sound Control in a Working OLED
    The POSTECH team overcame these challenges by embedding ultra-thin piezoelectric exciters within the OLED display frame. These piezo exciters, arranged similarly to pixels, convert electrical signals into sound vibrations without occupying external space. Crucially, they are fully compatible with the thin form factor of OLED panels.
    As a result, each pixel can act as an independent sound source, enabling Pixel-Based Local Sound technology. The researchers also developed a method to completely eliminate sound crosstalk, ensuring that multiple sounds from different regions of the display do not interfere with each other — something previously unattainable in multichannel setups.
    Real Applications: Mobile, Tablet, Laptop, TV, Automotive, VR, and Beyond
    This innovation allows for truly localized sound experiences. For instance, in a car, the driver could hear navigation instructions while the passenger listens to music — all from the same screen. In virtual reality or smartphones, spatial sound can dynamically adapt to the user’s head or hand movements, enhancing realism and immersion.
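    In software, such zone-based routing could look as simple as assigning each audio stream to the exciter channels behind its screen region and leaving the rest silent. A minimal sketch (the channel grid and interface are hypothetical stand-ins, not POSTECH’s actual driver):

    ```python
    import numpy as np

    GRID_ROWS, GRID_COLS = 4, 8  # hypothetical grid of piezo exciter channels

    def route_streams(streams, num_samples):
        """Build one output buffer per exciter channel.

        `streams` maps (row, col) zones to mono sample buffers. Channels
        outside any assigned zone stay silent, so listeners in front of
        different zones each hear only their own stream.
        """
        out = np.zeros((GRID_ROWS, GRID_COLS, num_samples))
        for (row, col), samples in streams.items():
            out[row, col] = samples
        return out

    t = np.linspace(0.0, 1.0, 48_000, endpoint=False)
    buffers = route_streams({
        (1, 1): np.sin(2 * np.pi * 440 * t),  # driver-side zone: navigation tone
        (1, 6): np.sin(2 * np.pi * 220 * t),  # passenger-side zone: music tone
    }, num_samples=48_000)
    ```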

    Most notably, the technology was successfully implemented on a 13-inch OLED panel, proving its practical scalability and commercial viability. The display delivers high-quality audio directly from the screen, without the need for external speakers, all while preserving the slim and lightweight benefits of OLED.
    A Word from the Researcher
    “Displays are evolving beyond visual output devices into comprehensive interfaces that engage both sight and sound,” said Professor Su Seok Choi. “This technology has the potential to become a core feature of next-generation devices, enabling sleek, lightweight designs in smartphones, laptops, and automotive displays — while delivering immersive, high-fidelity audio.”
    Funding Acknowledgment
    This research was supported by the Ministry of Trade, Industry and Energy under the Electronic Components Technology Innovation Program and by the Graduate Program in Semiconductor Materials and Devices at POSTECH.
    Key Highlights: POSTECH developed the world’s first Pixel-Based Local Sound OLED, in which each pixel emits a distinct sound. The display integrates ultra-thin piezo exciters to achieve crosstalk-free multichannel audio. Proven on a real 13-inch OLED panel, the technology enables immersive sound directly from the screen, with no external speakers needed, and opens new possibilities for automotive, VR, and mobile displays with slim, high-quality built-in sound.

  •

    Nano-engineered thermoelectrics enable scalable, compressor-free cooling

    Researchers at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, have developed a new, easily manufacturable solid-state thermoelectric refrigeration technology with nano-engineered materials that is twice as efficient as devices made with commercially available bulk thermoelectric materials. As global demand grows for more energy-efficient, reliable and compact cooling solutions, this advancement offers a scalable alternative to traditional compressor-based refrigeration.
    In a paper published in Nature Communications on May 21, a team of researchers from APL and refrigeration engineers from Samsung Electronics demonstrated improved heat-pumping efficiency and capacity in refrigeration systems attributable to high-performance nano-engineered thermoelectric materials invented at APL known as controlled hierarchically engineered superlattice structures (CHESS).
    The CHESS technology is the result of 10 years of APL research in advanced nano-engineered thermoelectric materials and applications development. Initially developed for national security applications, the material has also been used for noninvasive cooling therapies for prosthetics and won an R&D 100 award in 2023.
    “This real-world demonstration of refrigeration using new thermoelectric materials showcases the capabilities of nano-engineered CHESS thin films,” said Rama Venkatasubramanian, principal investigator of the joint project and chief technologist for thermoelectrics at APL. “It marks a significant leap in cooling technology and sets the stage for translating advances in thermoelectric materials into practical, large-scale, energy-efficient refrigeration applications.”
    A New Benchmark for Solid-State Cooling
    The push for more efficient and compact cooling technologies is fueled by a variety of factors, including population growth, urbanization and an increasing reliance on advanced electronics and data infrastructure. Conventional cooling systems, while effective, are often bulky, energy intensive and reliant on chemical refrigerants that can be harmful to the environment.
    Thermoelectric refrigeration is widely regarded as a potential solution. This method cools by using electrons to move heat through specialized semiconductor materials, eliminating the need for moving parts or harmful chemicals, making these next-generation refrigerators quiet, compact, reliable and sustainable. Bulk thermoelectric materials are used in small devices like mini-fridges, but their limited efficiency, low heat-pumping capacity and incompatibility with scalable semiconductor chip fabrication have historically prevented their wider use in high-performance systems.

    In the study, researchers compared refrigeration modules using traditional bulk thermoelectric materials with those using CHESS thin-film materials in standardized refrigeration tests, measuring and comparing the electrical power needed to achieve various cooling levels in the same commercial refrigerator test systems. The refrigeration team from Samsung Electronics, led by materials engineer Sungjin Jung, collaborated with APL to validate the results through detailed thermal modeling, quantifying heat loads and thermal resistance parameters to ensure accurate performance evaluation under real-world conditions.
    The results were striking: Using CHESS materials, the APL team achieved nearly 100% improvement in efficiency over traditional thermoelectric materials at room temperature (around 80 degrees Fahrenheit, or 25 C). They then translated these material-level gains into a near 75% improvement in efficiency at the device level in thermoelectric modules built with CHESS materials and a 70% improvement in efficiency in a fully integrated refrigeration system, each representing a significant improvement over state-of-the-art bulk thermoelectric devices. These tests were completed under conditions that involved significant amounts of heat pumping to replicate practical operation.
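    To see how material-level gains translate into cooling efficiency, the standard yardstick is the thermoelectric figure of merit ZT, which bounds a Peltier stage’s maximum coefficient of performance (COP). A sketch using the textbook COP formula with illustrative ZT values (the article does not report the CHESS ZT):

    ```python
    import math

    def cop_max(t_cold_k: float, t_hot_k: float, zt: float) -> float:
        """Maximum COP of a thermoelectric cooler (textbook formula),
        with ZT evaluated at the mean of the hot and cold temperatures."""
        root = math.sqrt(1.0 + zt)
        carnot = t_cold_k / (t_hot_k - t_cold_k)  # ideal (Carnot) limit
        return carnot * (root - t_hot_k / t_cold_k) / (root + 1.0)

    # Illustrative: a 10 K lift around room temperature, material ZT doubled.
    for zt in (1.0, 2.0):
        print(f"ZT = {zt}: COP_max = {cop_max(288.0, 298.0, zt):.2f}")
    ```

    Doubling ZT in this toy case raises the maximum COP by roughly 60 percent, the same order as the device-level gains reported above.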
    Built to Scale
    Beyond improving efficiency, the CHESS thin-film technology uses remarkably less material — just 0.003 cubic centimeters, or about the size of a grain of sand, per refrigeration unit. This reduction in material means APL’s thermoelectric materials could be mass-produced using semiconductor chip production tools, driving cost efficiency and enabling widespread market adoption.
    “This thin-film technology has the potential to grow from powering small-scale refrigeration systems to supporting large building HVAC applications, similar to the way that lithium-ion batteries have been scaled to power devices as small as mobile phones and as large as electric vehicles,” Venkatasubramanian said.
    Additionally, the CHESS materials were created using a well-established process commonly used to manufacture high-efficiency solar cells that power satellites and commercial LED lights.

    “We used metal-organic chemical vapor deposition (MOCVD) to produce the CHESS materials, a method well known for its scalability, cost-effectiveness and ability to support large-volume manufacturing,” said Jon Pierce, a senior research engineer who leads the MOCVD growth capability at APL. “MOCVD is already widely used commercially, making it ideal for scaling up CHESS thin-film thermoelectric materials production.”
    These materials and devices continue to show promise for a broad range of energy harvesting and electronics applications in addition to the recent advances in refrigeration. APL plans to continue to partner with organizations to refine the CHESS thermoelectric materials with a focus on boosting efficiency to approach that of conventional mechanical systems. Future efforts include demonstrating larger-scale refrigeration systems, including freezers, and integrating artificial intelligence-driven methods to optimize energy efficiency in compartmentalized or distributed cooling in refrigeration and HVAC equipment.
    “Beyond refrigeration, CHESS materials are also able to convert temperature differences, like body heat, into usable power,” said Jeff Maranchi, Exploration Program Area manager in APL’s Research and Exploratory Development Mission Area. “In addition to advancing next-generation tactile systems, prosthetics and human-machine interfaces, this opens the door to scalable energy-harvesting technologies for applications ranging from computers to spacecraft — capabilities that weren’t feasible with older, bulkier thermoelectric devices.”
    “The success of this collaborative effort demonstrates that high-efficiency solid-state refrigeration is not only scientifically viable but manufacturable at scale,” said Susan Ehrlich, an APL technology commercialization manager. “We’re looking forward to continued research and technology transfer opportunities with companies as we work toward translating these innovations into practical, real-world applications.”

  •

    Major step for flat and adjustable optics

    By carefully placing nanostructures on a flat surface, researchers at Linköping University, Sweden, have significantly improved the performance of so-called optical metasurfaces in conductive plastics. This is a major step for controllable flat optics, with future applications such as video holograms, invisibility materials, and sensors, as well as in biomedical imaging. The study has been published in the journal Nature Communications.
    Today, light is controlled using curved lenses, often made of glass, that are either concave or convex and refract light in different ways. These types of lenses can be found in everything from high-tech equipment, such as space telescopes and radar systems, to everyday items, including camera lenses and spectacles. But glass lenses take up space, and it is difficult to make them smaller without compromising their function.
    With flat lenses, however, it may be possible to make very small optics and also find new areas of application. They are known as metalenses and are examples of optical metasurfaces that form a rapidly growing field of research with great potential, though at present the technology has its limitations.
    “Metasurfaces work in a way that nanostructures are placed in patterns on a flat surface and become receivers for light. Each receiver, or antenna, captures the light in a certain way and together these nanostructures can allow the light to be controlled as you desire,” says Magnus Jonsson, professor of applied physics at Linköping University.
    Today there are optical metasurfaces made of, for example, gold or titanium dioxide. But a major challenge has been that the function of the metasurfaces cannot be adjusted after manufacture. Both researchers and industry have requested features such as being able to turn metasurfaces on and off or dynamically change the focal point of a metalens.
    But in 2019, Magnus Jonsson’s research group at the Laboratory of Organic Electronics showed that conductive plastics (conducting polymers) could crack that nut. They showed that the plastic could function optically as a metal and thus be used as a material for the antennas that make up a metasurface. Thanks to the ability of the polymers to oxidize and reduce, the nanoantennas could be switched on and off. However, the performance of metasurfaces built from conductive polymers has been limited, not comparable to that of metasurfaces made from traditional materials.
    Now, the same research team has managed to improve performance up to tenfold. By precisely controlling the distance between the antennas, the researchers let the antennas reinforce one another through a kind of resonance that amplifies their interaction with light, called collective lattice resonance.
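    The spacing condition behind a collective lattice resonance is simple at normal incidence: the array period should match the wavelength inside the surrounding medium (the Rayleigh anomaly), so that light scattered by one antenna arrives in phase at its neighbors. A sketch with illustrative numbers, not the actual geometry of the Linköping devices:

    ```python
    # Collective lattice resonance (normal incidence): the in-plane diffraction
    # condition is  period = wavelength / n_medium  (the Rayleigh anomaly).

    def lattice_period_nm(wavelength_nm: float, n_medium: float) -> float:
        """Array period placing the lattice resonance at `wavelength_nm`."""
        return wavelength_nm / n_medium

    # Illustrative: a 3 um infrared resonance in a glass-like medium (n = 1.5)
    # calls for antennas spaced about 2 um apart.
    print(f"{lattice_period_nm(3000.0, 1.5):.0f} nm")
    ```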
    “We show that metasurfaces made of conducting polymers seem to be able to provide sufficiently high performance to be relevant for practical applications,” says Dongqing Lin, the principal author of the study and a postdoc in the research group.
    So far, the researchers have been able to manufacture controllable antennas from conducting polymers for infrared light, but not for visible light. The next step is to develop the material to be functional in the visible part of the spectrum as well.

  •

    Achieving a record-high Curie temperature in ferromagnetic semiconductor

    Ferromagnetic semiconductors (FMSs) combine the unique properties of semiconductors and magnetism, making them ideal candidates for developing spintronic devices that integrate both semiconductor and magnetic functionalities. However, one of the key challenges in FMSs has been achieving Curie temperatures (TC) high enough for stable operation at room temperature. Though previous studies achieved a TC of 420 K, above room temperature, this was insufficient for effectively operating spin-functional devices, highlighting the demand for higher TC in FMSs. The challenge was featured among the 125 unsolved questions selected by the journal Science in 2005. Materials such as (Ga,Mn)As exhibit low TC, limiting their practical use in spintronic devices. While adding Fe to narrow-bandgap semiconductors like GaSb seemed promising, incorporating high concentrations of Fe while maintaining crystallinity proved difficult, restricting the attainable TC.
    To overcome these limitations, a team of researchers led by Professor Pham Nam Hai from the Institute of Science Tokyo, Japan, developed a high-quality (Ga,Fe)Sb FMS using the step-flow growth method on vicinal GaAs (100) substrates with a high off-angle of 10°. Their findings were published in Volume 126, Issue 16 of Applied Physics Letters on April 24, 2025. The step-flow growth approach allowed them to incorporate a high concentration of Fe while maintaining excellent crystallinity, resulting in a TC of up to 530 K — the highest reported so far for FMSs.
    The team used magnetic circular dichroism spectroscopy to confirm intrinsic ferromagnetism in the (Ga0.76,Fe0.24)Sb layer, based on the spin-polarized band structure of the FMS. In addition, the team employed Arrott plots, a standard technique for extrapolating TC from magnetization data. This method helped identify the magnetic transition points, offering a more precise understanding of the material’s ferromagnetic behavior at varying temperatures.
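    Arrott analysis works by plotting M² against H/M for isotherms at several temperatures: in mean-field theory these are straight lines, and the isotherm whose extrapolation passes through the origin marks TC (a positive M² intercept means T < TC, a negative one T > TC). A minimal sketch on synthetic mean-field data (illustrative, not the paper’s measurements):

    ```python
    import numpy as np

    def arrott_intercept(h: np.ndarray, m: np.ndarray) -> float:
        """Fit M^2 vs H/M to a line and return the M^2 intercept.
        Positive intercept: T < Tc. Negative: T > Tc. Zero crossing: Tc."""
        slope, intercept = np.polyfit(h / m, m**2, 1)
        return intercept

    # Synthetic mean-field equation of state H = a*(T - Tc)*M + b*M^3, Tc = 530 K.
    a, b, tc = 0.01, 1.0, 530.0
    h_grid = np.linspace(0.1, 1.0, 20)
    for temp in (510.0, 530.0, 550.0):
        # solve the cubic for M at each field, keep the physical (largest) root
        m = np.array([np.roots([b, 0.0, a * (temp - tc), -h]).real.max()
                      for h in h_grid])
        print(f"T = {temp} K: intercept = {arrott_intercept(h_grid, m):+.3f}")
    ```

    The intercept changes sign as the temperature crosses the 530 K transition, which is how the zero crossing pins down TC.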
    “In the conventional (Ga,Fe)Sb samples, maintaining crystallinity at high Fe doping levels was a persistent issue. By applying the step-flow growth technique on vicinal substrates, we successfully addressed this challenge and achieved the world’s highest TC in FMSs,” says Prof. Hai.
    The researchers also investigated the long-term stability of their sample by measuring the magnetic properties of a thinner (9.8 nm) (Ga,Fe)Sb layer stored in open air for 1.5 years. Despite a reduction in TC from 530 K to 470 K, the material retained significant ferromagnetic properties, showing its potential for practical applications. Additionally, the material exhibited a large magnetic moment per Fe atom (4.5 μB/atom), close to the ideal value for Fe3+ ions in a zinc blende crystal structure (5 μB/atom) and about twice that of α-Fe metal, highlighting the material’s superior magnetic properties.
    “Our results demonstrate the feasibility of fabricating high-TC FMSs that are compatible with room temperature operations, which is a crucial step towards the realization of spintronic devices,” adds Prof. Hai.
    Overall, the study highlights the effectiveness of step-flow growth on vicinal substrates in producing high-quality, high-performance FMSs with higher Fe concentrations. By overcoming the bottleneck of low TC, the study represents a significant step toward the realization of spin-functional semiconductor devices that can operate at room temperature.

  •

    How to use AI to listen to the ‘heartbeat’ of a city

    When Jayedi Aman looks at a city, he notices more than just its buildings and streets — he considers how people move through and connect with those spaces. Aman, an assistant professor of architectural studies at the University of Missouri, suggests that the future design of cities may be guided as much by human experience as by physical materials.
    In a recent study, Aman and Tim Matisziw, a professor of geography and engineering at Mizzou, took a fresh approach to urban research by using artificial intelligence to explore the emotional side of city life. Their goal was to better understand the link between a city’s physical features and how people feel in those environments.
    Using public Instagram posts with location tags, the researchers trained an AI tool to read the emotional tone of the images and text of the posts, identifying whether people were happy, frustrated or relaxed. Then, using Google Street View and a second AI tool, they analyzed what those places looked like in real life and linked those features to how people felt in the moment they posted to social media.
    As a result, Aman and Matisziw created a digital “sentiment map” that shows what people are feeling across a city. Next, they plan to use this information to create a digital version of a city — called an urban digital twin — that can show how people are feeling in real time.
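    The aggregation step behind such a sentiment map is straightforward to sketch: score each geotagged post, then average the scores within each cell of a map grid. A minimal sketch (the coordinates and scores are hypothetical stand-ins; the study derived them with AI models from Instagram posts and Street View imagery):

    ```python
    from collections import defaultdict

    # Hypothetical geotagged posts: (latitude, longitude, sentiment in [-1, 1]).
    posts = [
        (38.951, -92.334, 0.8),   # happy post near a park
        (38.952, -92.335, 0.6),
        (38.940, -92.320, -0.4),  # frustrated post near a busy intersection
    ]

    CELL_DEG = 0.005  # grid resolution in degrees (roughly 500 m)

    def sentiment_map(posts):
        """Average sentiment per grid cell -> {(cell_lat, cell_lon): score}."""
        totals = defaultdict(lambda: [0.0, 0])
        for lat, lon, score in posts:
            key = (round(lat // CELL_DEG * CELL_DEG, 3),
                   round(lon // CELL_DEG * CELL_DEG, 3))
            totals[key][0] += score
            totals[key][1] += 1
        return {cell: s / n for cell, (s, n) in totals.items()}

    print(sentiment_map(posts))  # two cells: one happy (~0.7), one frustrated
    ```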
    This kind of emotional mapping gives city leaders a powerful new tool. Instead of relying solely on surveys — which take time and may not reach everyone — this AI-powered method uses data people already share online.
    “For example, if a new park gets lots of happy posts, we can start to understand why,” Aman, who leads the newly established Spatial Intelligence Lab at Mizzou, said. “It might be the green space, the quiet nature or the sense of community. We can now connect those feelings to what people are seeing and experiencing in these places.”
    Beyond parks, this tool could help officials improve services, identify areas where people feel unsafe, plan for emergencies or check in on public well-being after disasters.
    “AI doesn’t replace human input,” Matisziw said. “But it gives us another way to spot patterns and trends that we might otherwise miss, and that can lead to smarter decisions.”
    The researchers believe this information about how people feel could one day be shown next to traffic and weather updates on digital tools used by leaders to make decisions about city operations.
    “We envision a future where data on how people feel becomes a core part of city dashboards,” Aman said. “This opens the door to designing cities that not only work well but also feel right to the people who live in them.”

  •

    Demonstration of spin-torque heat-assisted magnetic recording

    National Institute for Materials Science, Japan. “Demonstration of spin-torque heat-assisted magnetic recording.” ScienceDaily, 21 May 2025. www.sciencedaily.com/releases/2025/05/250521124447.htm